Re: [Gluster-users] Gluster client version vs gluster server

2023-08-18 Thread Strahil Nikolov
Hi,
In Gluster, the servers can run with a newer version in a backward-compatibility
mode, a.k.a. the op-version. Check this article and ensure that the client
op-version is not smaller than the cluster's:
https://docs.gluster.org/en/v3/Upgrade-Guide/op_version/
In the best scenario, just download the packages from Gluster's repo and ensure
all clients and servers have the same version.
Also, you can build your own RPMs by following
https://docs.gluster.org/en/main/Developer-guide/Building-GlusterFS/
if you don't want the precompiled binaries:
https://download.gluster.org/pub/gluster/glusterfs/LATEST/
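
As a quick check, the cluster's current and maximum supported op-versions can be
queried on any server node. The commands below are a sketch; the version number
in the commented-out last line is only an example:

```shell
# Cluster-wide operating version currently in effect
gluster volume get all cluster.op-version

# Highest op-version the binaries installed on this node support
gluster volume get all cluster.max-op-version

# Once every node runs the new version, the cluster op-version can be raised:
# gluster volume set all cluster.op-version 70200   # example number only
```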

Best Regards,
Strahil Nikolov 

On Monday, August 14, 2023, 8:31 PM, Roy Sigurd Karlsbakk  
wrote:

Hi all

I have a RHEL machine with Gluster 7.9 installed, which is the version from EPEL.
Also, I have a set of Debian machines running the glusterfs server/cluster at
version 9.3. Is this combination likely to work well, or should everything be
the same version? That might be a bit hard across distros. Also, RHEL just sells
Gluster; since it's such a nice feature, they find it hard not to charge us USD
4500 per year per node for it, plus the price difference between an edu license
and a full license, per node. Well, we could probably use that money for
something else, but we're not quite ready to leave RHEL yet (not my fault). So:
would these different versions be compatible, and what would the potential
problems be of mixing them as described?

roy




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users









Re: [Gluster-users] Gluster client 4.1.5 with Gluster server 6.7

2020-02-02 Thread Laurent Dumont
That's unfortunately not possible in my case, as the clients are K8s worker
nodes. The Gluster client version is tied to the Kubernetes release/deployer,
Rancher in my case.

Disabling ctime "gluster volume set $volname ctime off" fixed the mounting
issue.
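
Spelled out, the workaround is a single volume option set on the server side
(volume name taken from the thread below; verify the option name against your
release):

```shell
# Disable the consistent-time (ctime) feature so older clients can mount
gluster volume set kube_vol ctime off

# Verify the current value
gluster volume get kube_vol ctime
```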




Re: [Gluster-users] Gluster client 4.1.5 with Gluster server 6.7

2020-01-30 Thread Mahdi Adnan
Hello,

 We had a similar issue when we upgraded one of our clusters to 6.5 while the
clients were running 4.1.5 and 4.1.9; both crashed after a few seconds of
mounting. We did not dig into the issue; instead, we upgraded the clients to
6.5 and it worked fine.


Re: [Gluster-users] Gluster client 4.1.5 with Gluster server 6.7

2020-01-27 Thread Laurent Dumont
After some more digging, it seems I'm hitting the following bug -
https://github.com/gluster/glusterfs/issues/658

On Mon, Jan 27, 2020 at 4:23 PM Laurent Dumont 
wrote:

> Hi everyone,
>
> Small question. I'm trying to mount a Gluster volume (server is at 6.7)
> and the client is at 4.1.5. I'm seeing the mount start on the client but it
> looks like the client crashes and is left in a strange state. Is there any
> inherent compatibility issues between the two versions? Is 4.1.5 too old
> when talking to a 6.7 server?
>
> Client :
> sh-4.4# glusterfs --version
> glusterfs 4.1.5
> Repository revision: git://git.gluster.org/glusterfs.git
> Copyright (c) 2006-2016 Red Hat, Inc. 
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> It is licensed to you under your choice of the GNU Lesser
> General Public License, version 3 or any later version (LGPLv3
> or later), or the GNU General Public License, version 2 (GPLv2),
> in all cases as published by the Free Software Foundation.
>
> Server :
> root@gluster01:~# glusterd --version
> glusterfs 6.7
> Repository revision: git://git.gluster.org/glusterfs.git
> Copyright (c) 2006-2016 Red Hat, Inc. 
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> It is licensed to you under your choice of the GNU Lesser
> General Public License, version 3 or any later version (LGPLv3
> or later), or the GNU General Public License, version 2 (GPLv2),
> in all cases as published by the Free Software Foundation.
>
> Steps :
> sh-4.4# mount -t glusterfs 10.10.99.29:kube_vol /test
> Mount failed. Please check the log file for more details.
>
>
> sh-4.4# cat /var/log/glusterfs/test.log
> [2020-01-27 21:03:26.770751] I [MSGID: 100030] [glusterfsd.c:2741:main]
> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 4.1.5
> (args: /usr/sbin/glusterfs --process-name fuse --volfile-server=10.10.99.29
> --volfile-id=kube_vol /test)
> [2020-01-27 21:03:26.775717] I [MSGID: 101190]
> [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 1
> pending frames:
> frame : type(0) op(0)
> patchset: git://git.gluster.org/glusterfs.git
> signal received: 11
> time of crash:
> 2020-01-27 21:03:26
> configuration details:
> argp 1
> backtrace 1
> dlfcn 1
> libpthread 1
> llistxattr 1
> setfsid 1
> spinlock 1
> epoll.h 1
> xattr.h 1
> st_atim.tv_nsec 1
> package-string: glusterfs 4.1.5
> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x237aa)[0x7f2e1061d7aa]
>
> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_print_trace+0x2f7)[0x7f2e10627517]
> /lib/x86_64-linux-gnu/libc.so.6(+0x33060)[0x7f2e0ec6f060]
> /lib/x86_64-linux-gnu/libc.so.6(+0xbf944)[0x7f2e0ecfb944]
> /lib/x86_64-linux-gnu/libc.so.6(fnmatch+0x61)[0x7f2e0ecfce41]
>
> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(xlator_volume_option_get_list+0x35)[0x7f2e10670905]
> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x7698e)[0x7f2e1067098e]
>
> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_foreach_match+0x87)[0x7f2e10614b97]
>
> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_foreach+0x18)[0x7f2e10614d78]
>
> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(xlator_options_validate_list+0x3f)[0x7f2e10670b4f]
>
> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(xlator_options_validate+0x39)[0x7f2e10670bc9]
> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x5ab99)[0x7f2e10654b99]
>
> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(glusterfs_graph_activate+0x24)[0x7f2e106554d4]
> /usr/sbin/glusterfs(glusterfs_process_volfp+0x100)[0x5643b3bc1f40]
> /usr/sbin/glusterfs(mgmt_getspec_cbk+0x6bd)[0x5643b3bc889d]
> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xe110)[0x7f2e103ec110]
> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xe44f)[0x7f2e103ec44f]
>
> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7f2e103e8863]
>
> /usr/lib/x86_64-linux-gnu/glusterfs/4.1.5/rpc-transport/socket.so(+0x6cfb)[0x7f2e0b4e9cfb]
>
> /usr/lib/x86_64-linux-gnu/glusterfs/4.1.5/rpc-transport/socket.so(+0x9605)[0x7f2e0b4ec605]
> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x8052e)[0x7f2e1067a52e]
> /lib/x86_64-linux-gnu/libpthread.so.0(+0x74a4)[0x7f2e0f47c4a4]
> /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f2e0ed24d0f]
> -
> [2020-01-27 21:22:35.074154] I [MSGID: 100030] [glusterfsd.c:2741:main]
> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 4.1.5
> (args: /usr/sbin/glusterfs --process-name fuse --volfile-server=10.10.99.29
> --volfile-id=kube_vol /test)
> [2020-01-27 21:22:35.078542] I [MSGID: 101190]
> [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 1
> pending frames:
> frame : type(0) op(0)
> patchset: git://git.gluster.org/glusterfs.git
> signal received: 11
> time of crash:
> 2020-01-27 21:22:35
> configuration details:
> argp 1
> backtrace 1
> dlfcn 1
> libpthread 1
> llistxattr 1
> setfsid 1
> spinlock 1
> epoll.h 1
> xattr.h 1
> st_atim.tv_nsec 1
> package-string: glusterfs 4.1.5
> 

Re: [Gluster-users] Gluster client

2018-10-18 Thread Vlad Kopylov
It sets the maximum number of attempts to connect to the server and fetch the
volume file at mount time.


Re: [Gluster-users] Gluster client

2018-10-17 Thread Alfredo De Luca
What does fetch-attempts=5 do?


-- 
*Alfredo*

Re: [Gluster-users] Gluster client

2018-10-16 Thread Vlad Kopylov
You can add fetch-attempts=5 to fstab, so it will try to connect more,
never had an issue after this

Problem might be as it might connect to the other server not the local one,
starting to push all reads through the network - so close client ports on
other nodes but to local

v




Re: [Gluster-users] Gluster client

2018-10-16 Thread Alfredo De Luca
The client was already connected to the volume, so it had the info about the
nodes. I think I need to add backup-volfile-servers in the fstab anyway, so
it can check at boot time.

Cheers





-- 
*Alfredo*

Re: [Gluster-users] Gluster client

2018-10-16 Thread jring
> What's the fstab equivalent?

Hi,

with the fuse client you can try the mount option 
backup-volfile-servers=server1:server2
This gives alternate points to ask for the volume info.
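
A sketch of a manual fuse mount using that option (server names are
placeholders):

```shell
# Ask server1 for the volfile; fall back to server2, then server3, if it is down
mount -t glusterfs -o backup-volfile-servers=server2:server3 server1:/gv1 /glusterfs
```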

Joachim
-
FreeMail powered by mail.de - MORE SECURITY, RELIABILITY AND COMFORT

Re: [Gluster-users] Gluster client

2018-10-16 Thread Dave Sherohman
On Tue, Oct 16, 2018 at 02:45:49PM +0200, Stefan Kania wrote:
> Am 15.10.18 um 21:33 schrieb Alfredo De Luca:
> > But what happened when NODE1 is unavailable?

> The client will get a list of all hosts in the cluster; if one node is down,
> the client will use another node.

You have to connect to the volume first before you can get that list
from the server.  What happens if, when you make the initial connection,
you try to connect to a node that's down?  I would certainly expect it
to fail, since the client doesn't have that list yet, so it has no idea
what other nodes it might attempt to connect to.

I primarily use gluster for VM disk images, so, in my case, I list all
the gluster nodes in the VM definition and, if the first one isn't
reachable, then it tries the second and so on until it finds one that's
available to connect to.
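
The VM-definition approach described above can be sketched as a libvirt disk
stanza with several gluster hosts (volume, image, and host names here are made
up; multi-host gluster sources require a reasonably recent libvirt/QEMU):

```xml
<!-- Hypothetical example: the client tries each listed host for the volfile -->
<disk type='network' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source protocol='gluster' name='gv1/vm1.qcow2'>
    <host name='server1' port='24007'/>
    <host name='server2' port='24007'/>
    <host name='server3' port='24007'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```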

What's the fstab equivalent?

-- 
Dave Sherohman


Re: [Gluster-users] Gluster client

2018-10-16 Thread Stefan Kania
I wrote my own systemd mount unit:
--
[Unit]
Description = Data dir
After=network.target glusterfs-server.service
Requires=network-online.target

[Mount]
What=knoten-1:/gv1
Where=/glusterfs
Type=glusterfs
Options=defaults,acl

[Install]
WantedBy=multi-user.target
--
This unit must be placed in /etc/systemd/system, and the file name must match
the Where= path (here, Where=/glusterfs means the unit is named glusterfs.mount).



On 15.10.18 at 21:33, Alfredo De Luca wrote:
> Hi all. 
> I have 3 nodes glusterfs servers and multiple client and as I am a bit
> newbie on this not sure how to setup correctly the clients.
> 1. The clients mounts the glusterfs in fstab but when I reboot them they
> don't  mount it automatically
> 2. Not sure what to exactly put in the fastab as right now someone had
> :/vol1 /volume1 glusterfs default,netdev 0 0
> 
> But what happened when NODE1 is unavailable?
> The client will get a list of all hosts in the cluster; if one node is down,
> the client will use another node.
> 
> Clients are centos 7.5 so the servers
> 
> Thanks
> 
> -- 
> /*Alfredo*/
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
> 





Re: [Gluster-users] Gluster client

2018-10-16 Thread Alfredo De Luca
thanks Dmitry... maybe I wasn't clear enough, but I was asking about both.
Anyway, it should be fine now.

Cheers



-- 
*Alfredo*

Re: [Gluster-users] Gluster client

2018-10-16 Thread Dmitry Melekhov

16.10.2018 16:38, Alfredo De Luca wrote:
ok.. found out that backupvolfile-server is correct on fstab in case 
at mounting time the primary server is not responding. so 
the backupvolfile-server will fail to the next server.
Also it seems that if you put  in your fstab and the 
server1 fails during normal operation the client will fail to the next 
one in the cluster.


Cheers


I thought you were asking about the initial mount if server one is not available :-)




Re: [Gluster-users] Gluster client

2018-10-16 Thread Alfredo De Luca
ok.. found out that backupvolfile-server is correct in fstab in case the
primary server is not responding at mounting time, so backupvolfile-server
will fail over to the next server.
Also it seems that if you put  in your fstab and the server1 fails during
normal operation, the client will fail over to the next one in the cluster.

Cheers
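
Putting the thread's findings together, a sketch of an fstab entry (server and
volume names are placeholders; option spellings vary between releases, e.g.
backupvolfile-server vs. backup-volfile-servers, so check `man mount.glusterfs`
for your client version):

```
# /etc/fstab: mount after the network is up, with volfile fallback servers
server1:/gv1  /glusterfs  glusterfs  defaults,_netdev,backup-volfile-servers=server2:server3,fetch-attempts=5  0 0
```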





-- 
*Alfredo*

Re: [Gluster-users] Gluster client

2018-10-15 Thread Dmitry Melekhov

15.10.2018 23:33, Alfredo De Luca wrote:

Hi all.
I have 3 nodes glusterfs servers and multiple client and as I am a bit 
newbie on this not sure how to setup correctly the clients.
1. The clients mounts the glusterfs in fstab but when I reboot them 
they don't  mount it automatically
2. Not sure what to exactly put in the fastab as right now someone had 
:/vol1 /volume1 glusterfs default,netdev 0 0




Dunno, we run gluster on the same nodes as VM, so we put localhost in 
domain definitions.

In your situation I'd use something like VRRP ( keepalived , for instance).


But what happened when NODE1 is unavailable?

Clients are centos 7.5 so the servers

Thanks

--
/*Alfredo*/




Re: [Gluster-users] Gluster client

2018-10-15 Thread Alfredo De Luca
Hi Diego... sorry, it's a typo here in the email... but I've put _netdev in
the fstab.

Thanks


On Mon, Oct 15, 2018 at 9:33 PM Alfredo De Luca 
wrote:

> Hi all.
> I have 3 nodes glusterfs servers and multiple client and as I am a bit
> newbie on this not sure how to setup correctly the clients.
> 1. The clients mounts the glusterfs in fstab but when I reboot them they
> don't  mount it automatically
> 2. Not sure what to exactly put in the fastab as right now someone had
> :/vol1 /volume1 glusterfs default,netdev 0 0
>
> But what happens when NODE1 is unavailable?
>
> Clients are CentOS 7.5, as are the servers.
>
> Thanks
>
> --
> *Alfredo*
>
>

-- 
*Alfredo*

Re: [Gluster-users] Gluster client

2018-10-15 Thread Diego Remolina
You may have a typo: it should be "_netdev"; you are missing the "_".

Give that a try.
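For reference, a corrected entry could look like the sketch below (the server names are placeholders; `backup-volfile-servers` is the glusterfs-fuse mount option for falling back to another server if the first one is down at mount time, which also addresses the "what if NODE1 is unavailable" question):

```shell
# /etc/fstab -- note "_netdev" (with the underscore) so the mount
# waits for the network, plus a fallback list of volfile servers:
node1:/vol1  /volume1  glusterfs  defaults,_netdev,backup-volfile-servers=node2:node3  0 0
```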

Diego


On Mon, Oct 15, 2018, 15:33 Alfredo De Luca 
wrote:

> Hi all.
> I have 3 glusterfs server nodes and multiple clients, and as I am a bit of a
> newbie at this I am not sure how to set up the clients correctly.
> 1. The clients mount the glusterfs volume via fstab, but when I reboot them
> they don't mount it automatically.
> 2. I am not sure what exactly to put in the fstab; right now someone had
> :/vol1 /volume1 glusterfs default,netdev 0 0
>
> But what happens when NODE1 is unavailable?
>
> Clients are CentOS 7.5, as are the servers.
>
> Thanks
>
> --
> *Alfredo*
>

Re: [Gluster-users] Gluster client mount fails in mid flight with signum 15

2017-06-01 Thread Niels de Vos
On Thu, Jun 01, 2017 at 01:52:23PM +, Gabriel Lindeborg wrote:
> This has been solved, as far as we can tell.
> 
> Problem was KillUserProcesses=1 in logind.conf. This has been shown to
> kill mounts made using mount -a, both by root and by any user with
> sudo, at session logout.

Ah, yes, that could well be the cause of the problem.

> Hope this will help anybody else who runs into this.

Care to share how you solved it? Just disabling the option might not be
the most suitable approach. Did you convert it to systemd.mount units,
or maybe set up automounting with x-systemd.automount or autofs? Were
there considerations that made you choose one solution over another?

Thanks!
Niels
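For anyone hitting the same logind issue, the alternatives mentioned above could be sketched like this (hostname, volume name, and mount point are placeholders):

```shell
# Option 1: automount via systemd instead of a login-session mount:
# /etc/fstab
server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,x-systemd.automount  0 0

# Option 2: keep "mount -a", but stop logind from reaping processes
# (and thus FUSE mounts started from a session) at logout:
# /etc/systemd/logind.conf
KillUserProcesses=no
```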


> Thanks 4 all your help and
> cheers
> Gabbe
> 
> 1 juni 2017 kl. 09:24 skrev Gabriel Lindeborg 
> >:
> 
> All four clients did run 3.10.2 as well
> 
> The volumes had been running fine until we upgraded to 3.10, when we hit some 
> issues with port mismatches. We restarted all the volumes, the servers and 
> the clients, and now hit this issue.
> We’ve since backed up the files, removed the volumes, removed the bricks, 
> removed gluster, installed glusterfs 3.7.20, created new volumes on new 
> bricks, restored the files, and still hit the same issue on clients on the 
> nodes that also run the servers. We’ve got two clients connected to one of 
> the volumes that have been working fine all the time.
> 
> This is the debug logs from one of the mount as the client gets disconnected:
> The message "D [MSGID: 0] [dht-common.c:979:dht_revalidate_cbk] 0-mule-dht: 
> revalidate lookup of / returned with op_ret 0 [Structure needs cleaning]" 
> repeated 26 times between [2017-05-31 13:48:51.680757] and [2017-05-31 
> 13:50:46.325368]
> /DAEMON/DEBUG [2017-05-31T15:50:50.589272+02:00] [] [] 
> [logging.c:1830:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed 
> out. About to flush outstanding messages if present
> /DAEMON/DEBUG [2017-05-31T15:50:50.589520+02:00] [] [] 
> [logging.c:1792:__gf_log_inject_timer_event] 0-logging-infra: Starting timer 
> now. Timeout = 120, current buf size = 5
> [2017-05-31 13:50:51.908797] D [MSGID: 0] 
> [dht-common.c:979:dht_revalidate_cbk] 0-mule-dht: revalidate lookup of / 
> returned with op_ret 0 [Structure needs cleaning]
> /DAEMON/DEBUG [2017-05-31T15:51:24.592190+02:00] [] [] 
> [rpc-clnt-ping.c:300:rpc_clnt_start_ping] 0-mule-client-0: returning as 
> transport is already disconnected OR there are no frames (0 || 0)
> /DAEMON/DEBUG [2017-05-31T15:51:24.592469+02:00] [] [] 
> [rpc-clnt-ping.c:300:rpc_clnt_start_ping] 0-mule-client-1: returning as 
> transport is already disconnected OR there are no frames (0 || 0)
> /DAEMON/DEBUG [2017-05-31T15:51:26.324867+02:00] [] [] 
> [rpc-clnt-ping.c:98:rpc_clnt_remove_ping_timer_locked] (--> 
> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f36b3260192] (--> 
> /lib64/libgfrpc.so.0(rpc_clnt_remove_ping_timer_locked+0x8b)[0x7f36b302f9db] 
> (--> /lib
> 64/libgfrpc.so.0(+0x13fd4)[0x7f36b302ffd4] (--> 
> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x451)[0x7f36b302cf01] (--> 
> /usr/lib64/glusterfs/3.7.20/xlator/protocol/client.so(client_submit_request+0x1fc)[0x7f36a599c33c]
>  ) 0-: 10.3.48.179:49155: ping timer event already remove
> d
> /DAEMON/DEBUG [2017-05-31T15:51:26.325230+02:00] [] [] 
> [rpc-clnt-ping.c:98:rpc_clnt_remove_ping_timer_locked] (--> 
> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f36b3260192] (--> 
> /lib64/libgfrpc.so.0(rpc_clnt_remove_ping_timer_locked+0x8b)[0x7f36b302f9db] 
> (--> /lib
> 64/libgfrpc.so.0(+0x13fd4)[0x7f36b302ffd4] (--> 
> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x451)[0x7f36b302cf01] (--> 
> /usr/lib64/glusterfs/3.7.20/xlator/protocol/client.so(client_submit_request+0x1fc)[0x7f36a599c33c]
>  ) 0-: 10.3.48.180:49155: ping timer event already remove
> d
> /DAEMON/DEBUG [2017-05-31T15:52:08.595536+02:00] [] [] 
> [rpc-clnt-ping.c:300:rpc_clnt_start_ping] 0-mule-client-0: returning as 
> transport is already disconnected OR there are no frames (0 || 0)
> /DAEMON/DEBUG [2017-05-31T15:52:08.595735+02:00] [] [] 
> [rpc-clnt-ping.c:300:rpc_clnt_start_ping] 0-mule-client-1: returning as 
> transport is already disconnected OR there are no frames (0 || 0)
> /DAEMON/DEBUG [2017-05-31T15:52:12.059895+02:00] [] [] 
> [rpc-clnt-ping.c:98:rpc_clnt_remove_ping_timer_locked] (--> 
> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f36b3260192] (--> 
> /lib64/libgfrpc.so.0(rpc_clnt_remove_ping_timer_locked+0x8b)[0x7f36b302f9db] 
> (--> /lib
> 64/libgfrpc.so.0(+0x13fd4)[0x7f36b302ffd4] (--> 
> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x451)[0x7f36b302cf01] (--> 
> /usr/lib64/glusterfs/3.7.20/xlator/protocol/client.so(client_submit_request+0x1fc)[0x7f36a599c33c]
>  ) 0-: 10.3.48.179:49155: ping timer event already remove
> d
> /DAEMON/DEBUG [2017-05-31T15:52:12.060170+02:00] [] [] 
> [rpc-clnt-ping.c:98:rpc_clnt_remove_ping_timer_locked] 

Re: [Gluster-users] Gluster client mount fails in mid flight with signum 15

2017-05-31 Thread Sunil Kumar Heggodu Gopala Acharya
Hi Gabriel,

What is the version of gluster you are running on client? Also, please
share the steps you followed to hit the issue.

Regards,

Sunil kumar Acharya

Senior Software Engineer

Red Hat



T: +91-8067935170 


TRIED. TESTED. TRUSTED. 


On Wed, May 31, 2017 at 12:10 PM, Gabriel Lindeborg <
gabriel.lindeb...@svenskaspel.se> wrote:

> Hello again
>
> The volumes are old, since version 3.6 if I remember correctly…
> These are the last 40 rows of the mount logs for all the mounts:
>
> ==> /var/log/glusterfs/mnt-gluster-alfresco.log <==
> /DAEMON/INFO [2017-05-31T07:55:12.787035+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:55:18.153779+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:55:18.160995+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:55:18.167075+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:55:18.171872+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:55:46.427057+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:55:46.432239+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:55:46.439572+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:55:46.444286+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:55:52.035710+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:55:52.042366+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:55:52.049401+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:55:52.054483+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:56:13.524103+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:56:13.540314+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:56:13.543748+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:56:13.558459+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:56:18.623109+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:56:18.643979+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:56:18.648717+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:56:18.662941+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:56:26.268446+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:56:26.284765+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:56:26.287503+02:00] [] []
> /DAEMON/INFO [2017-05-31T07:56:26.302461+02:00] [] []
> /DAEMON/INFO [2017-05-31T08:09:49.408757+02:00] [] []
> /DAEMON/INFO [2017-05-31T08:09:49.415831+02:00] [] []
> /DAEMON/INFO [2017-05-31T08:09:49.421736+02:00] [] []
> /DAEMON/INFO [2017-05-31T08:09:49.421887+02:00] [] []
> /DAEMON/INFO [2017-05-31T08:09:49.431820+02:00] [] []
> [2017-05-31 06:09:54.514841] I [MSGID: 108031]
> [afr-common.c:2340:afr_local_discovery_cbk] 0-alfresco-replicate-0:
> selecting local read_child alfresco-client-2
> /DAEMON/INFO [2017-05-31T08:11:10.179031+02:00] [] []
> /DAEMON/INFO [2017-05-31T08:11:10.186811+02:00] [] []
> /DAEMON/INFO [2017-05-31T08:11:10.194886+02:00] [] []
> /DAEMON/INFO [2017-05-31T08:11:10.195062+02:00] [] []
> /DAEMON/INFO [2017-05-31T08:11:10.205582+02:00] [] []
> [2017-05-31 06:11:14.513620] I [MSGID: 108031]
> [afr-common.c:2340:afr_local_discovery_cbk] 0-alfresco-replicate-0:
> selecting local read_child alfresco-client-2
> [2017-05-31 06:21:21.579748] W [glusterfsd.c:1332:cleanup_and_exit]
> (-->/lib64/libpthread.so.0(+0x7dc5) [0x7fa1d2e4cdc5]
> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7fa1d44e4fd5]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7fa1d44e4dfb] ) 0-:
> received signum (15), shutting down
> /DAEMON/INFO [2017-05-31T08:21:21.580626+02:00] [] []
> /DAEMON/INFO [2017-05-31T08:21:21.581440+02:00] [] []
>
> ==> /var/log/glusterfs/mnt-gluster-c1_2.log <==
>  82: type meta
>  83: subvolumes c1
>  84: end-volume
>  85:
> +---
> ---+
> /DAEMON/ERR [2017-05-24T10:39:11.130038+02:00] [] []
> [2017-05-24 08:39:11.130638] E [MSGID: 114058] 
> [client-handshake.c:1538:client_query_portmap_cbk]
> 2-c1-client-1: failed to get the port number for remote subvolume. Please
> run 'gluster volume status' on server to see if brick process is running.
> [2017-05-24 08:39:11.130693] I [MSGID: 114018] 
> [client.c:2276:client_rpc_notify]
> 2-c1-client-1: disconnected from c1-client-1. Client process will keep
> trying to connect to glusterd until brick's port is available
> [2017-05-24 08:39:11.130711] E [MSGID: 108006]
> [afr-common.c:4781:afr_notify] 2-c1-replicate-0: All subvolumes are down.
> Going offline until atleast one of them comes back up.
> /DAEMON/INFO [2017-05-24T10:39:11.471603+02:00] [] []
> /DAEMON/ERR [2017-05-24T10:39:11.473783+02:00] [] []
> /DAEMON/INFO [2017-05-24T10:39:14.479153+02:00] [] []
> /DAEMON/ERR [2017-05-24T10:39:14.481430+02:00] [] []
> [2017-05-24 08:39:14.513688] I [MSGID: 108006]
> [afr-common.c:4923:afr_local_init] 2-c1-replicate-0: no subvolumes up
> /DAEMON/INFO [2017-05-24T10:39:14.513712+02:00] [] []
> [2017-05-24 08:39:14.513858] I [MSGID: 114021] [client.c:2361:notify]
> 0-c1-client-2: current graph is no longer active, destroying rpc_client
> [2017-05-24 08:39:14.513879] I [MSGID: 114021] [client.c:2361:notify]
> 0-c1-client-3: current graph is no longer active, destroying rpc_client
> 

Re: [Gluster-users] Gluster client mount fails in mid flight with signum 15

2017-05-30 Thread Sunil Kumar Heggodu Gopala Acharya
Hi Gabriel,

I am not able to hit the issue mentioned on my setup.

Please share the log files(both brick and client log files) from your
setup. It would be great if you can share the details about steps you
followed to hit the issue.


Regards,

Sunil kumar Acharya

Senior Software Engineer

Red Hat



T: +91-8067935170 


TRIED. TESTED. TRUSTED. 


On Tue, May 30, 2017 at 3:30 PM, Gabriel Lindeborg <
gabriel.lindeb...@svenskaspel.se> wrote:

> Hello
>
> A manual mount failed the same way
>
> Cheers
> Gabbe
>
> 30 maj 2017 kl. 10:24 skrev Sunil Kumar Heggodu Gopala Acharya <
> shegg...@redhat.com>:
>
> Hi Gabriel,
>
> Which gluster version are your running? Are you able to fuse mount the
> volume?
>
> Please share the failure logs.
>
> Regards,
> Sunil kumar Acharya
>
> Senior Software Engineer
> Red Hat
>
> 
>
> T: +91-8067935170 
>
> 
> TRIED. TESTED. TRUSTED. 
>
>
> On Tue, May 30, 2017 at 1:04 PM, Gabriel Lindeborg  svenskaspel.se> wrote:
>
>> Hello All
>>
>> We have a problem where gluster client mounts fail in mid-run, with this
>> in the log:
>> glusterfsd.c:1332:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5)
>> [0x7f640c8b3dc5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
>> [0x7f640df4bfd5] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b)
>> [0x7f640df4bdfb] ) 0-: received signum (15), shutting down.
>>
>> We’ve tried running debug but have not found anything suspicious
>> happening at the time of the failures.
>> We’ve searched the internet but cannot find anyone else having the same
>> problem in mid-flight.
>>
>> The clients have four mounts of volumes from the same server, all mounts
>> fail simultaneously
>> Peer status looks ok
>> Volume status looks ok
>> Volume info looks like this:
>> Volume Name: GLUSTERVOLUME
>> Type: Replicate
>> Volume ID: ca7af017-4f0f-44cc-baf6-43168eed0748
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: GLUSTERSERVER1:/gluster/GLUSTERVOLUME/brick
>> Brick2: GLUSTERSERVER2:/gluster/GLUSTERVOLUME/brick
>> Options Reconfigured:
>> transport.address-family: inet
>> cluster.self-heal-daemon: enable
>> nfs.disable: on
>> server.allow-insecure: on
>> client.bind-insecure: on
>> network.ping-timeout: 5
>> features.bitrot: on
>> features.scrub: Active
>> features.scrub-freq: weekly
>>
>> Any ideas?
>>
>> Cheers
>> Gabbe
>>
>>
>>
>> AB SVENSKA SPEL
>> 621 80 Visby
>> Norra Hansegatan 17, Visby
>> Växel: +4610-120 00 00
>> https://svenskaspel.se
>>
>> Please consider the environment before printing this email

Re: [Gluster-users] Gluster client mount fails in mid flight with signum 15

2017-05-30 Thread Gabriel Lindeborg
Hello

A manual mount failed the same way

Cheers
Gabbe

30 maj 2017 kl. 10:24 skrev Sunil Kumar Heggodu Gopala Acharya 
>:

Hi Gabriel,

Which gluster version are your running? Are you able to fuse mount the volume?

Please share the failure logs.

Regards,
Sunil kumar Acharya

Senior Software Engineer

Red Hat



T: +91-8067935170

TRIED. TESTED. TRUSTED.


On Tue, May 30, 2017 at 1:04 PM, Gabriel Lindeborg 
> 
wrote:
Hello All

We have a problem where gluster client mounts fail in mid-run, with this in 
the log:
glusterfsd.c:1332:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) 
[0x7f640c8b3dc5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) 
[0x7f640df4bfd5] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f640df4bdfb] 
) 0-: received signum (15), shutting down.

We’ve tried running debug but have not found anything suspicious happening at 
the time of the failures.
We’ve searched the internet but cannot find anyone else having the same 
problem in mid-flight.

The clients have four mounts of volumes from the same server, all mounts fail 
simultaneously
Peer status looks ok
Volume status looks ok
Volume info looks like this:
Volume Name: GLUSTERVOLUME
Type: Replicate
Volume ID: ca7af017-4f0f-44cc-baf6-43168eed0748
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: GLUSTERSERVER1:/gluster/GLUSTERVOLUME/brick
Brick2: GLUSTERSERVER2:/gluster/GLUSTERVOLUME/brick
Options Reconfigured:
transport.address-family: inet
cluster.self-heal-daemon: enable
nfs.disable: on
server.allow-insecure: on
client.bind-insecure: on
network.ping-timeout: 5
features.bitrot: on
features.scrub: Active
features.scrub-freq: weekly

Any ideas?

Cheers
Gabbe



AB SVENSKA SPEL
621 80 Visby
Norra Hansegatan 17, Visby
Växel: +4610-120 00 00
https://svenskaspel.se

Please consider the environment before printing this email

Re: [Gluster-users] Gluster client mount fails in mid flight with signum 15

2017-05-30 Thread Gabriel Lindeborg
Hello,

3.10.2

Initial mounting works fine; the failure comes a while after mounting.

This is the mnt.log for one of the mounts just before the fail:
/DAEMON/DEBUG [2017-05-30T09:17:45.371949+02:00] [] []
/DAEMON/INFO [2017-05-30T09:17:45.373441+02:00] [] []
/DAEMON/DEBUG [2017-05-30T09:17:45.373620+02:00] [] []
/DAEMON/INFO [2017-05-30T09:17:45.374734+02:00] [] []
/DAEMON/INFO [2017-05-30T09:17:45.374892+02:00] [] []
/DAEMON/DEBUG [2017-05-30T09:17:45.375301+02:00] [] []
/DAEMON/INFO [2017-05-30T09:17:45.407628+02:00] [] []
[2017-05-30 07:17:48.520770] I [MSGID: 108031] 
[afr-common.c:2340:afr_local_discovery_cbk] 0-alfresco-replicate-0: selecting 
local read_child alfresco-client-2
/DAEMON/INFO [2017-05-30T09:17:54.642644+02:00] [] []
/DAEMON/INFO [2017-05-30T09:17:54.651476+02:00] [] []
/DAEMON/INFO [2017-05-30T09:17:54.656808+02:00] [] []
[2017-05-30 07:17:45.371169] D [MSGID: 0] 
[options.c:1237:xlator_option_reconf_bool] 0-alfresco-dht: option 
lock-migration using set value off
[2017-05-30 07:17:45.371218] D [MSGID: 0] [dht-shared.c:363:dht_init_regex] 
0-alfresco-dht: using regex rsync-hash-regex = ^\.(.+)\.[^.]+$
[2017-05-30 07:17:45.371225] D [MSGID: 0] 
[options.c:1100:xlator_reconfigure_rec] 0-alfresco-dht: reconfigured
[2017-05-30 08:00:47.932460] W [glusterfsd.c:1332:cleanup_and_exit] 
(-->/lib64/libpthread.so.0(+0x7dc5) [0x7f1159819dc5] 
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f115aeb1fd5] 
-->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f115aeb1dfb] ) 0-: received 
signum
(15), shutting down



Cheers
Gabbe


30 maj 2017 kl. 10:24 skrev Sunil Kumar Heggodu Gopala Acharya 
>:

Hi Gabriel,

Which gluster version are your running? Are you able to fuse mount the volume?

Please share the failure logs.

Regards,
Sunil kumar Acharya

Senior Software Engineer

Red Hat



T: +91-8067935170

TRIED. TESTED. TRUSTED.


On Tue, May 30, 2017 at 1:04 PM, Gabriel Lindeborg 
> 
wrote:
Hello All

We have a problem where gluster client mounts fail in mid-run, with this in 
the log:
glusterfsd.c:1332:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) 
[0x7f640c8b3dc5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) 
[0x7f640df4bfd5] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f640df4bdfb] 
) 0-: received signum (15), shutting down.

We’ve tried running debug but have not found anything suspicious happening at 
the time of the failures.
We’ve searched the internet but cannot find anyone else having the same 
problem in mid-flight.

The clients have four mounts of volumes from the same server, all mounts fail 
simultaneously
Peer status looks ok
Volume status looks ok
Volume info looks like this:
Volume Name: GLUSTERVOLUME
Type: Replicate
Volume ID: ca7af017-4f0f-44cc-baf6-43168eed0748
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: GLUSTERSERVER1:/gluster/GLUSTERVOLUME/brick
Brick2: GLUSTERSERVER2:/gluster/GLUSTERVOLUME/brick
Options Reconfigured:
transport.address-family: inet
cluster.self-heal-daemon: enable
nfs.disable: on
server.allow-insecure: on
client.bind-insecure: on
network.ping-timeout: 5
features.bitrot: on
features.scrub: Active
features.scrub-freq: weekly

Any ideas?

Cheers
Gabbe



AB SVENSKA SPEL
621 80 Visby
Norra Hansegatan 17, Visby
Växel: +4610-120 00 00
https://svenskaspel.se

Please consider the environment before printing this email

Re: [Gluster-users] Gluster client mount fails in mid flight with signum 15

2017-05-30 Thread Sunil Kumar Heggodu Gopala Acharya
Hi Gabriel,

Which gluster version are your running? Are you able to fuse mount the
volume?

Please share the failure logs.

Regards,

Sunil kumar Acharya

Senior Software Engineer

Red Hat



T: +91-8067935170 


TRIED. TESTED. TRUSTED. 


On Tue, May 30, 2017 at 1:04 PM, Gabriel Lindeborg <
gabriel.lindeb...@svenskaspel.se> wrote:

> Hello All
>
> We have a problem where gluster client mounts fail in mid-run, with this
> in the log:
> glusterfsd.c:1332:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5)
> [0x7f640c8b3dc5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
> [0x7f640df4bfd5] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b)
> [0x7f640df4bdfb] ) 0-: received signum (15), shutting down.
>
> We’ve tried running debug but have not found anything suspicious happening
> at the time of the failures.
> We’ve searched the internet but cannot find anyone else having the same
> problem in mid-flight.
>
> The clients have four mounts of volumes from the same server, all mounts
> fail simultaneously
> Peer status looks ok
> Volume status looks ok
> Volume info looks like this:
> Volume Name: GLUSTERVOLUME
> Type: Replicate
> Volume ID: ca7af017-4f0f-44cc-baf6-43168eed0748
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: GLUSTERSERVER1:/gluster/GLUSTERVOLUME/brick
> Brick2: GLUSTERSERVER2:/gluster/GLUSTERVOLUME/brick
> Options Reconfigured:
> transport.address-family: inet
> cluster.self-heal-daemon: enable
> nfs.disable: on
> server.allow-insecure: on
> client.bind-insecure: on
> network.ping-timeout: 5
> features.bitrot: on
> features.scrub: Active
> features.scrub-freq: weekly
>
> Any ideas?
>
> Cheers
> Gabbe
>
>
>
> AB SVENSKA SPEL
> 621 80 Visby
> Norra Hansegatan 17, Visby
> Växel: +4610-120 00 00
> https://svenskaspel.se
>
> Please consider the environment before printing this email

Re: [Gluster-users] Gluster client for windows??

2016-09-26 Thread Gilberto Nunes
Well

The fact is that I have 3.7 TB of data, and access to that data via CIFS
seems very slow to me. So I just wanted to try accessing it via NFS.
Anyway, I am giving up for now.

Thanks

2016-09-26 2:36 GMT-03:00 Ric Wheeler :

> Hi Gilberto,
>
> I am curious as to why you need NFS instead of CIFS for a windows client.
> Are you sharing gluster between clients with different operating systems?
>
> Regards,
> Ric
>
>
> On 09/23/2016 03:05 PM, Gilberto Nunes wrote:
>
>> Yeah!
>> I am well aware of that.
>> But I need NFS or native gluster access... It seems to me there is no
>> such thing.
>>
>> 2016-09-23 8:59 GMT-03:00 Ravishankar N  ravishan...@redhat.com>>:
>>
>> On 09/23/2016 05:20 PM, Gilberto Nunes wrote:
>>
>>> Hello folks
>>>
>>> After searching Google, I wonder whether there really is no gluster client
>>> for Windows.
>>> Is that correct?
>>> If somebody knows of a client, I would be thankful if you could provide a
>>> link to download it.
>>>
>>
>> Access via CIFS is possible:
>> http://gluster.readthedocs.io/en/latest/Administrator%20Guid
>> e/Setting%20Up%20Clients/?highlight=samba#cifs
>> > de/Setting%20Up%20Clients/?highlight=samba#cifs>
>>
>>
>>> Thanks
>>>
>>> --
>>> Gilberto Ferreira
>>> +55 (47) 9676-7530 
>>> Skype: gilberto.nunes36
>>>
>>>
>>>
>>>
>>
>> --
>> Gilberto Ferreira +55 (47) 9676-7530 Skype: gilberto.nunes36
>>
>


-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36

Re: [Gluster-users] Gluster client for windows??

2016-09-25 Thread Bipin Kunal
NFS works fine. You just need to enable the NFS client service. I have used
it once with Windows 2012.
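On Windows Server 2012 the steps could look roughly like this (the server and volume names are placeholders; feature names and mount syntax may differ on other Windows versions):

```shell
# Elevated PowerShell: install the built-in NFS client feature,
# then mount the gluster NFS export as a drive letter:
Install-WindowsFeature NFS-Client
mount \\GLUSTERSERVER\VOLNAME Z:
```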

Thanks,
Bipin Kunal

On Mon, Sep 26, 2016 at 11:06 AM, Ric Wheeler  wrote:

> Hi Gilberto,
>
> I am curious as to why you need NFS instead of CIFS for a windows client.
> Are you sharing gluster between clients with different operating systems?
>
> Regards,
> Ric
>
>
> On 09/23/2016 03:05 PM, Gilberto Nunes wrote:
>
>> Yeah!
>> I am well aware of that.
>> But I need NFS or native gluster access... It seems to me there is no
>> such thing.
>>
>> 2016-09-23 8:59 GMT-03:00 Ravishankar N  ravishan...@redhat.com>>:
>>
>> On 09/23/2016 05:20 PM, Gilberto Nunes wrote:
>>
>>> Hello folks
>>>
>>> After searching Google, I wonder whether there really is no gluster client
>>> for Windows.
>>> Is that correct?
>>> If somebody knows of a client, I would be thankful if you could provide a
>>> link to download it.
>>>
>>
>> Access via CIFS is possible:
>> http://gluster.readthedocs.io/en/latest/Administrator%20Guid
>> e/Setting%20Up%20Clients/?highlight=samba#cifs
>> > de/Setting%20Up%20Clients/?highlight=samba#cifs>
>>
>>
>>> Thanks
>>>
>>> --
>>> Gilberto Ferreira
>>> +55 (47) 9676-7530 
>>> Skype: gilberto.nunes36
>>>
>>>
>>>
>>>
>>
>> --
>> Gilberto Ferreira +55 (47) 9676-7530 Skype: gilberto.nunes36
>>

Re: [Gluster-users] Gluster client for windows??

2016-09-25 Thread Ric Wheeler

Hi Gilberto,

I am curious as to why you need NFS instead of CIFS for a windows client. Are 
you sharing gluster between clients with different operating systems?


Regards,
Ric


On 09/23/2016 03:05 PM, Gilberto Nunes wrote:

Yeah!
I am well aware of that.
But I need NFS or native gluster access... It seems to me there is no such 
thing.

2016-09-23 8:59 GMT-03:00 Ravishankar N >:


On 09/23/2016 05:20 PM, Gilberto Nunes wrote:

Hello folks

After searching Google, I wonder whether there really is no gluster client 
for Windows.
Is that correct?
If somebody knows of a client, I would be thankful if you could provide a 
link to download it.


Access via CIFS is possible:

http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Clients/?highlight=samba#cifs





Thanks

-- 


Gilberto Ferreira
+55 (47) 9676-7530 
Skype: gilberto.nunes36






--
Gilberto Ferreira +55 (47) 9676-7530 Skype: gilberto.nunes36 



Re: [Gluster-users] Gluster client for windows??

2016-09-23 Thread Jiffin Thottan
Forgot to CC the gluster-users list.

- Original Message -
From: "Jiffin Thottan" <jthot...@redhat.com>
To: "Gilberto Nunes" <gilberto.nune...@gmail.com>
Sent: Friday, September 23, 2016 5:44:50 PM
Subject: Re: [Gluster-users] Gluster client for windows??

Hi Gilberto,

The Windows NFS client should work with both gluster nfs and NFS-Ganesha;
I guess there is no special requirement for that.

If it is not working with gluster nfs, try setting the option "nfs.mount-udp"
for the volume and check again.
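The suggested change can be sketched as follows (VOLNAME is a placeholder; run on one of the gluster servers):

```shell
# Let the MOUNT protocol answer over UDP, which some NFS clients
# (including the Windows one) expect:
gluster volume set VOLNAME nfs.mount-udp on

# Verify it shows up under "Options Reconfigured":
gluster volume info VOLNAME
```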

--
Jiffin  
- Original Message -
From: "Gilberto Nunes" <gilberto.nune...@gmail.com>
To: gluster-users@gluster.org
Sent: Friday, September 23, 2016 5:35:24 PM
Subject: Re: [Gluster-users] Gluster client for windows??

Yeah! 
I am well aware of that.
But I need NFS or native gluster access... It seems to me there is no such 
thing.

2016-09-23 8:59 GMT-03:00 Ravishankar N < ravishan...@redhat.com > : 



On 09/23/2016 05:20 PM, Gilberto Nunes wrote: 



Hello folks 

After searching Google, I wonder whether there really is no gluster client for Windows. 
Is that correct? 
If somebody knows of a client, I would be thankful if you could provide a link to 
download it. 

Access via CIFS is possible: 
http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Clients/?highlight=samba#cifs
 





Thanks 

-- 

Gilberto Ferreira 
+55 (47) 9676-7530 
Skype: gilberto.nunes36 









-- 

Gilberto Ferreira 
+55 (47) 9676-7530 
Skype: gilberto.nunes36 




Re: [Gluster-users] Gluster client for windows??

2016-09-23 Thread Gilberto Nunes
Yeah!
I am well aware of that.
But I need NFS or native gluster access... It seems to me there is no such
thing.

2016-09-23 8:59 GMT-03:00 Ravishankar N :

> On 09/23/2016 05:20 PM, Gilberto Nunes wrote:
>
> Hello folks
>
> After search in google, I wonder if there is none gluster client for
> windows.
> Is that correct??
> If somebody know any client, I will thankful if can provide same link to
> dowload.
>
>
> Access via CIFS is possible: http://gluster.readthedocs.io/
> en/latest/Administrator%20Guide/Setting%20Up%20Clients/?highlight=samba#
> cifs
>
>
> Thanks
>
> --
>
> Gilberto Ferreira
> +55 (47) 9676-7530
> Skype: gilberto.nunes36
>
>
>
>
>
>


-- 

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36

Re: [Gluster-users] Gluster client for windows??

2016-09-23 Thread Ravishankar N

On 09/23/2016 05:20 PM, Gilberto Nunes wrote:

Hello folks

After searching Google, I wonder whether there really is no gluster client 
for Windows.

Is that correct?
If somebody knows of a client, I would be thankful if you could provide a 
link to download it.


Access via CIFS is possible: 
http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Clients/?highlight=samba#cifs




Thanks

--

Gilberto Ferreira
+55 (47) 9676-7530
Skype: gilberto.nunes36




Re: [Gluster-users] gluster client mount fails on fedora 23

2015-12-24 Thread Arno A. Karner
So what do I need to do to use gluster clients newer than
glusterfs:3.7.2:3.fc23:x86_64:(none):(none)?
Every version I have built on Fedora 23 newer than this has failed to
mount the gluster volume.

Are there any best practices for upgrading the server side, or do you just
start from scratch and copy terabytes, petabytes of data :-/ to the
new, improved gluster server?
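When mixing client and server versions like this, the cluster's operating version (op-version) is the main thing to check; a hedged sketch of how to read it (the first command exists only on reasonably recent gluster releases, so on the 3.6-era servers here the state file is the fallback):

```shell
# On newer releases, ask glusterd directly:
gluster volume get all cluster.op-version

# On older servers (e.g. 3.6.x), read glusterd's state file instead:
grep operating-version /var/lib/glusterd/glusterd.info
```

Clients whose supported op-version is below the cluster's operating version will be refused the mount, which is one way a newer server cluster can reject an old client (or vice versa).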

gluster server
node 1 fedora 22
# rpmq ^gluster
glusterfs:3.6.7:1.fc22:x86_64:Fedora Project:Fedora Project
glusterfs-api:3.6.7:1.fc22:x86_64:Fedora Project:Fedora Project
glusterfs-cli:3.6.7:1.fc22:x86_64:Fedora Project:Fedora Project
glusterfs-fuse:3.6.7:1.fc22:x86_64:Fedora Project:Fedora Project
glusterfs-libs:3.6.7:1.fc22:x86_64:Fedora Project:Fedora Project
glusterfs-server:3.6.7:1.fc22:x86_64:Fedora Project:Fedora Project

nodes 2, 3, 4 centos 7.2
# rpmq ^gluster
glusterfs:3.6.6:1.el7.centos:x86_64:(none):(none)
glusterfs-api:3.6.6:1.el7.centos:x86_64:(none):(none)
glusterfs-cli:3.6.6:1.el7.centos:x86_64:(none):(none)
glusterfs-fuse:3.6.6:1.el7.centos:x86_64:(none):(none)
glusterfs-libs:3.6.6:1.el7.centos:x86_64:(none):(none)
glusterfs-server:3.6.6:1.el7.centos:x86_64:(none):(none)

working clients
all servers mount at least one volume from the server group

centos 6.7
# rpmq ^gluster
glusterfs:3.6.0.54:1.el6:x86_64:(none):CentOS BuildSystem <
http://bugs.centos.org>
glusterfs-api:3.6.0.54:1.el6:x86_64:(none):CentOS BuildSystem <
http://bugs.centos.org>
glusterfs-cli:3.6.0.54:1.el6:x86_64:(none):CentOS BuildSystem <
http://bugs.centos.org>
glusterfs-fuse:3.6.0.54:1.el6:x86_64:(none):CentOS BuildSystem <
http://bugs.centos.org>
glusterfs-libs:3.6.0.54:1.el6:x86_64:(none):CentOS BuildSystem <
http://bugs.centos.org>
glusterfs-rdma:3.6.0.54:1.el6:x86_64:(none):CentOS BuildSystem <
http://bugs.centos.org>

centos 7.2
# rpmq ^gluster
glusterfs:3.7.1:16.el7:x86_64:(none):CentOS BuildSystem <
http://bugs.centos.org>
glusterfs-api:3.7.1:16.el7:x86_64:(none):CentOS BuildSystem <
http://bugs.centos.org>
glusterfs-cli:3.7.1:16.el7:x86_64:(none):CentOS BuildSystem <
http://bugs.centos.org>
glusterfs-client-xlators:3.7.1:16.el7:x86_64:(none):CentOS BuildSystem

glusterfs-fuse:3.7.1:16.el7:x86_64:(none):CentOS BuildSystem <
http://bugs.centos.org>
glusterfs-libs:3.7.1:16.el7:x86_64:(none):CentOS BuildSystem <
http://bugs.centos.org>

fedora 23
# rpmq ^gluster
glusterfs:3.7.2:3.fc23:x86_64:(none):(none)
glusterfs-api:3.7.2:3.fc23:x86_64:(none):(none)
glusterfs-client-xlators:3.7.2:3.fc23:x86_64:(none):(none)
glusterfs-fuse:3.7.2:3.fc23:x86_64:(none):(none)
glusterfs-libs:3.7.2:3.fc23:x86_64:(none):(none)
On Sun, 2015-12-06 at 03:06 -0600, Arno A. Karner wrote:
> the client side
> 
> /var/log/messages
> Dec  6 03:02:19 cli.dom.com: gvs.mount: Unit entered failed state.
> 
> # mount -a
> Mount failed. Please check the log file for more details.
> 
> # grep /gvs /etc/fstab
> srv.dom.com:/gvs /gvs glusterfs defaults 0 0
> 
> # tail /var/log/glusterfs/gvs.log
> [2015-12-06 08:40:34.245021] I [MSGID: 100030]
> [glusterfsd.c:2318:main]
> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version
> 3.7.6 (args: /usr/sbin/glusterfs --volfile-server=srv.dom.com -
> -volfile
> -id=/gvs /gvs)
> [2015-12-06 08:40:34.264452] I [MSGID: 101190] [event
> -epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread
> with
> index 1
> [2015-12-06 08:40:34.264919] W [socket.c:588:__socket_rwv] 0
> -glusterfs:
> readv on 10.255.255.4:24007 failed (No data available)
> [2015-12-06 08:40:34.265376] E [rpc-clnt.c:362:saved_frames_unwind] (
> -
> -> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x1a3)[0x7f481a752183]
> (-
> -> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1cf)[0x7f481a51d41f] (-
> ->
> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f481a51d53e] (-->
> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e)[0x7f481a51ed0e
> ]
> (--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x88)[0x7f481a51f528] )
> 0
> -glusterfs: forced unwinding frame type(GlusterFS Handshake)
> op(GETSPEC(2)) called at 2015-12-06 08:40:34.264590 (xid=0x1)
> [2015-12-06 08:40:34.265399] E [glusterfsd
> -mgmt.c:1603:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file
> (key:/gvs)
> [2015-12-06 08:40:34.265427] W [glusterfsd.c:1236:cleanup_and_exit] (
> -
> ->/lib64/libgfrpc.so.0(saved_frames_unwind+0x1fa) [0x7f481a51d44a] -
> ->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x498) [0x55e8a25b8068] -
> ->/usr/sbin/glusterfs(cleanup_and_exit+0x56) [0x55e8a25b2416] ) 0-:
> received signum (0), shutting down
> [2015-12-06 08:40:34.265446] I [fuse-bridge.c:5683:fini] 0-fuse:
> Unmounting '/gvs'.
> 
> On Sat, 2015-12-05 at 12:53 +0530, Atin Mukherjee wrote:
> > 
> > On 12/05/2015 06:10 AM, Arno A. Karner wrote:
> > > is there a trick when clients and servers are not on the same
> > > version
> > > of glusterfs? I have clients and servers on glusterfs 3.6.6-1, both
> > > centos7 and fedora 22, and they work fine. Now that fedora 23 has
> > > been
> > > released I'm trying to 

Re: [Gluster-users] gluster client mount fails on fedora 23

2015-12-06 Thread Arno A. Karner
the client side

/var/log/messages
Dec  6 03:02:19 cli.dom.com: gvs.mount: Unit entered failed state.

# mount -a
Mount failed. Please check the log file for more details.

# grep /gvs /etc/fstab
srv.dom.com:/gvs /gvs glusterfs defaults 0 0

# tail /var/log/glusterfs/gvs.log
[2015-12-06 08:40:34.245021] I [MSGID: 100030] [glusterfsd.c:2318:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version
3.7.6 (args: /usr/sbin/glusterfs --volfile-server=srv.dom.com --volfile
-id=/gvs /gvs)
[2015-12-06 08:40:34.264452] I [MSGID: 101190] [event
-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with
index 1
[2015-12-06 08:40:34.264919] W [socket.c:588:__socket_rwv] 0-glusterfs:
readv on 10.255.255.4:24007 failed (No data available)
[2015-12-06 08:40:34.265376] E [rpc-clnt.c:362:saved_frames_unwind] (-
-> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x1a3)[0x7f481a752183] (-
-> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1cf)[0x7f481a51d41f] (-->
/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f481a51d53e] (-->
/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e)[0x7f481a51ed0e]
(--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x88)[0x7f481a51f528] ) 0
-glusterfs: forced unwinding frame type(GlusterFS Handshake)
op(GETSPEC(2)) called at 2015-12-06 08:40:34.264590 (xid=0x1)
[2015-12-06 08:40:34.265399] E [glusterfsd
-mgmt.c:1603:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file
(key:/gvs)
[2015-12-06 08:40:34.265427] W [glusterfsd.c:1236:cleanup_and_exit] (-
->/lib64/libgfrpc.so.0(saved_frames_unwind+0x1fa) [0x7f481a51d44a] -
->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x498) [0x55e8a25b8068] -
->/usr/sbin/glusterfs(cleanup_and_exit+0x56) [0x55e8a25b2416] ) 0-:
received signum (0), shutting down
[2015-12-06 08:40:34.265446] I [fuse-bridge.c:5683:fini] 0-fuse:
Unmounting '/gvs'.

On Sat, 2015-12-05 at 12:53 +0530, Atin Mukherjee wrote:
> 
> On 12/05/2015 06:10 AM, Arno A. Karner wrote:
> > is there a trick when clients and servers are not on the same
> > version
> > of glusterfs? I have clients and servers on glusterfs 3.6.6-1, both
> > centos7 and fedora 22, and they work fine. Now that fedora 23 has been
> > released I'm trying to use it. The client I am having trouble with is
> > on
> > fedora 23, which comes with glusterfs 3.7.6-1. With nfs I can use
> > options
> > to control which protocol version is used; does this exist for
> > glusterfs as well?
mount log & glusterd log please; are you seeing un-privileged port
request entries in the glusterd log file?
> 
> ~Atin
> > 
> > Thanks in advance, Arno.


Re: [Gluster-users] gluster client mount fails on fedora 23

2015-12-05 Thread Niels de Vos
On Fri, Dec 04, 2015 at 06:40:28PM -0600, Arno A. Karner wrote:
> is there a trick when clients and servers are not on the same version
> of glusterfs? I have clients and servers on glusterfs 3.6.6-1, both
> centos7 and fedora 22, and they work fine. Now that fedora 23 has been
> released I'm trying to use it. The client I am having trouble with is on
> fedora 23, which comes with glusterfs 3.7.6-1. With nfs I can use options
> to control which protocol version is used; does this exist for glusterfs
> as well?

I think this has been asked a few weeks back too. Could you check whether
this answers your questions?

  
http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/12955/focus=12958

HTH,
Niels



Re: [Gluster-users] gluster client mount fails on fedora 23

2015-12-04 Thread Atin Mukherjee


On 12/05/2015 06:10 AM, Arno A. Karner wrote:
> is there a trick when clients and servers are not on the same version
> of glusterfs? I have clients and servers on glusterfs 3.6.6-1, both
> centos7 and fedora 22, and they work fine. Now that fedora 23 has been
> released I'm trying to use it. The client I am having trouble with is on
> fedora 23, which comes with glusterfs 3.7.6-1. With nfs I can use options
> to control which protocol version is used; does this exist for glusterfs
> as well?
mount log & glusterd log please; are you seeing un-privileged port
request entries in the glusterd log file?

~Atin
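For reference, a hedged sketch of how to check for the entries Atin mentions; the log path is a common default and the option names are from the 3.x series, so verify both against your release ("myvol" is a placeholder volume name):

```shell
# search glusterd's log for rejected connections from unprivileged ports
grep -i "insecure\|unprivileged" /var/log/glusterfs/etc-glusterfs-glusterd.vol.log

# if such entries appear, connections from non-reserved ports can be allowed:
gluster volume set myvol server.allow-insecure on   # "myvol" is a placeholder
# and on each server, in /etc/glusterfs/glusterd.vol:
#   option rpc-auth-allow-insecure on
# then restart glusterd.
```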
> 
> Thanks in advance, Arno.
> 


Re: [Gluster-users] gluster client crash

2015-06-17 Thread Krutika Dhananjay
Hi, 

Looks like the process crashed. 
Could you provide the logs associated with this process along with the volume 
configuration? 
The process must have dumped a core file. Could you attach the core to gdb and 
provide its backtrace as well? 

-Krutika 
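A minimal sketch of grabbing that backtrace, assuming a core file was produced and the matching debuginfo packages are installed; the core path shown is hypothetical:

```shell
# find where the kernel writes core dumps
sysctl kernel.core_pattern

# assuming the core landed at /core.1615 (hypothetical path):
gdb /usr/sbin/glusterfs /core.1615 \
    -ex "thread apply all bt full" -ex "quit" > /tmp/glusterfs-backtrace.txt
```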

- Original Message -

 From: Mathieu Chateau mathieu.chat...@lotp.fr
 To: gluster-users gluster-users@gluster.org
 Sent: Wednesday, June 17, 2015 7:16:03 PM
 Subject: [Gluster-users] gluster client crash

 Hello,

 One of my gluster mounts crashed this morning on one of my gluster clients
 for this share.
 I am using fuse.

 The Gluster server was updated to 3.7.1 and this client too, but it was not
 rebooted. Trying to mount this share again failed. After a reboot,
 everything is OK again.

 Any clue?

 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: pending frames:
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: frame : type(0) op(0)
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: frame : type(0) op(0)
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: frame : type(0) op(0)
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: frame : type(0) op(0)
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: frame : type(0) op(0)
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: frame : type(0) op(0)
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: frame : type(0) op(0)
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: frame : type(0) op(0)
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: patchset: git://
 git.gluster.com/glusterfs.git
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: signal received: 11
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: time of crash:
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: 2015-06-17 05:57:32
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: configuration details:
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: argp 1
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: backtrace 1
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: dlfcn 1
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: libpthread 1
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: llistxattr 1
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: setfsid 1
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: spinlock 1
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: epoll.h 1
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: xattr.h 1
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: st_atim.tv_nsec 1
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: package-string:
 glusterfs 3.6.3
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: -

 Best regards,
 Mathieu CHATEAU
 http://www.lotp.fr


Re: [Gluster-users] gluster client crash

2015-06-17 Thread Mathieu Chateau
Where should the dump file be?


in the log file (nothing this same day before):

[2015-06-17 05:57:32.034533] I [glusterfsd-mgmt.c:56:mgmt_cbk_spec] 0-mgmt:
Volume file changed
[2015-06-17 05:57:32.085366] I [glusterfsd-mgmt.c:56:mgmt_cbk_spec] 0-mgmt:
Volume file changed
[2015-06-17 05:57:32.085966] I [glusterfsd-mgmt.c:56:mgmt_cbk_spec] 0-mgmt:
Volume file changed
[2015-06-17 05:57:32.086848] I [glusterfsd-mgmt.c:56:mgmt_cbk_spec] 0-mgmt:
Volume file changed
pending frames:
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash:
2015-06-17 05:57:32
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.6.3
/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xb2)[0x7f66b17a4362]
/lib64/libglusterfs.so.0(gf_print_trace+0x32d)[0x7f66b17bb85d]
/lib64/libc.so.6(+0x35650)[0x7f66b07bd650]
/lib64/libc.so.6(_IO_vfprintf+0x1564)[0x7f66b07d0a94]
/lib64/libc.so.6(__vasprintf_chk+0xb5)[0x7f66b0895425]
/lib64/libglusterfs.so.0(_gf_log+0x48c)[0x7f66b17a4e4c]
/lib64/libglusterfs.so.0(graphyyerror+0xbf)[0x7f66b17fa41f]
/lib64/libglusterfs.so.0(graphyyparse+0x337)[0x7f66b17fa867]
/lib64/libglusterfs.so.0(glusterfs_graph_construct+0x404)[0x7f66b17fb604]
/lib64/libglusterfs.so.0(glusterfs_volfile_reconfigure+0x4a)[0x7f66b17dac6a]
/usr/sbin/glusterfs(mgmt_getspec_cbk+0x317)[0x7f66b1c574b7]
/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)[0x7f66b1578100]
/lib64/libgfrpc.so.0(rpc_clnt_notify+0x174)[0x7f66b1578374]
/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7f66b15742c3]
/usr/lib64/glusterfs/3.6.3/rpc-transport/socket.so(+0x8500)[0x7f66a687b500]
/usr/lib64/glusterfs/3.6.3/rpc-transport/socket.so(+0xacf4)[0x7f66a687dcf4]
/lib64/libglusterfs.so.0(+0x766c2)[0x7f66b17f96c2]
/usr/sbin/glusterfs(main+0x502)[0x7f66b1c4efb2]
/lib64/libc.so.6(__libc_start_main+0xf5)[0x7f66b07a9af5]
/usr/sbin/glusterfs(+0x6351)[0x7f66b1c4f351]
-

Best regards,
Mathieu CHATEAU
http://www.lotp.fr

2015-06-17 16:53 GMT+02:00 Krutika Dhananjay kdhan...@redhat.com:

 Hi,

 Looks like the process crashed.
 Could you provide the logs associated with this process along with the
 volume configuration?
 The process must have dumped a core file. Could you attach the core to gdb
 and provide its backtrace as well?

 -Krutika

 --

 *From: *Mathieu Chateau mathieu.chat...@lotp.fr
 *To: *gluster-users gluster-users@gluster.org
 *Sent: *Wednesday, June 17, 2015 7:16:03 PM
 *Subject: *[Gluster-users] gluster client crash


 Hello,

 One of my gluster mounts crashed this morning on one of my gluster clients
 for this share.
 I am using fuse.

 The Gluster server was updated to 3.7.1 and this client too, but it was not
 rebooted. Trying to mount this share again failed. After a reboot,
 everything is OK again.

 Any clue?

 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: pending frames:
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: frame : type(0) op(0)
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: frame : type(0) op(0)
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: frame : type(0) op(0)
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: frame : type(0) op(0)
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: frame : type(0) op(0)
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: frame : type(0) op(0)
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: frame : type(0) op(0)
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: frame : type(0) op(0)
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: patchset: git://
 git.gluster.com/glusterfs.git
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: signal received: 11
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: time of crash:
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: 2015-06-17 05:57:32
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: configuration details:
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: argp 1
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: backtrace 1
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: dlfcn 1
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: libpthread 1
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: llistxattr 1
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: setfsid 1
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: spinlock 1
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: epoll.h 1
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: xattr.h 1
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: st_atim.tv_nsec 1
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: package-string:
 glusterfs 3.6.3
 Jun 17 07:57:32 myclientLinux mnt-gluster-xxx[1615]: -

 Best regards,
 Mathieu CHATEAU
 http://www.lotp.fr

 

Re: [Gluster-users] [gluster client] what's different between mount.glusterfs and glusterfs

2014-12-11 Thread Niels de Vos
On Thu, Dec 11, 2014 at 11:59:18AM +, Jifeng Li wrote:
 Hi,
 
 When mounting Gluster volumes to access data, I find that there are two ways, 
 listed below:
 
 
 1)  glusterfs -p /var/run/glusterfs.pid --volfile-server=vol_server 
 --volfile-id=volfile_id  mount_point
 
 2)  mount -t glusterfs -o direct-io-mode=disable,_netdev  
 backupvolfile-server=backup_vol_server   vol_server: volume_name   
 mount_point
 
 
 so,
 
 1.   what's the difference between the two ways above?
 
 2.   If using way 1, is there an option similar to backupvolfile-server?

The glusterfs binary is the low-level tool that functions as a GlusterFS
client. The NFS server, self-heal and other processes actually run
the glusterfs binary too, just with a different set of options.

The /sbin/mount.glusterfs script is a mount helper. When executing a
standard mount -t fs-type command, the /sbin/mount.fs-type helper
gets executed (if one exists). The script parses the options
that were passed on the mount command line, re-formats/arranges
them, and executes the glusterfs binary with those options.

You can pass multiple --volfile-server=... options to the glusterfs
binary. The client should then try mounting from the next server if the
current one fails.

HTH,
Niels
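Putting the two together, a sketch of roughly equivalent invocations; the server and volume names are the placeholders from the question, and exact option spellings should be checked against your release:

```shell
# way 1: run the client binary directly; repeat --volfile-server for fallback
glusterfs --volfile-server=vol_server --volfile-server=backup_vol_server \
          --volfile-id=volfile_id /mnt/point

# way 2: the mount helper builds a similar command line from the -o options
mount -t glusterfs -o backupvolfile-server=backup_vol_server \
      vol_server:/volume_name /mnt/point
```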



Re: [Gluster-users] gluster client crash

2014-01-27 Thread Mingfan Lu
the volume is distributed (replication = 1)


On Mon, Jan 27, 2014 at 4:01 PM, Mingfan Lu mingfan...@gmail.com wrote:

 One of our client (3.3.0.5) crashed when writing data, the log is:

 pending frames:
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(LOOKUP)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(READ)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)
 frame : type(1) op(WRITE)

 patchset: git://git.gluster.com/glusterfs.git
 signal received: 6
 time of crash: 2014-01-27 15:36:32
 configuration details:
 argp 1
 backtrace 1
 dlfcn 1
 fdatasync 1
 libpthread 1
 llistxattr 1
 setfsid 1
 spinlock 1
 epoll.h 1
 xattr.h 1
 st_atim.tv_nsec 1
 package-string: glusterfs 3.3.0.5rhs
 /lib64/libc.so.6[0x32c5a32920]
 /lib64/libc.so.6(gsignal+0x35)[0x32c5a328a5]
 /lib64/libc.so.6(abort+0x175)[0x32c5a34085]
 /lib64/libc.so.6[0x32c5a707b7]
 /lib64/libc.so.6[0x32c5a760e6]

 /usr/lib64/glusterfs/3.3.0.5rhs/xlator/performance/write-behind.so(+0x42be)[0x7f79a63012be]

 /usr/lib64/glusterfs/3.3.0.5rhs/xlator/performance/write-behind.so(wb_sync_cbk+0xa0)[0x7f79a6307ab0]

 /usr/lib64/glusterfs/3.3.0.5rhs/xlator/features/quota.so(quota_writev_cbk+0xed)[0x7f79a651864d]

 /usr/lib64/glusterfs/3.3.0.5rhs/xlator/cluster/distribute.so(dht_writev_cbk+0x14f)[0x7f79a6753aaf]

 /usr/lib64/glusterfs/3.3.0.5rhs/xlator/protocol/client.so(client3_1_writev_cbk+0x600)[0x7f79a6995340]
 /usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5)[0x31b020f4f5]
 /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x120)[0x31b020fdb0]
 /usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x28)[0x31b020aeb8]

 /usr/lib64/glusterfs/3.3.0.5rhs/rpc-transport/socket.so(socket_event_poll_in+0x34)[0x7f79a79d4784]

 /usr/lib64/glusterfs/3.3.0.5rhs/rpc-transport/socket.so(socket_event_handler+0xc7)[0x7f79a79d4867]
 /usr/lib64/libglusterfs.so.0[0x31afe3e4e4]
 /usr/sbin/glusterfs(main+0x590)[0x407420]
 /lib64/libc.so.6(__libc_start_main+0xfd)[0x32c5a1ecdd]
 /usr/sbin/glusterfs[0x404289]


Re: [Gluster-users] gluster client crash

2014-01-27 Thread Vijay Bellur

On 01/27/2014 01:34 PM, Mingfan Lu wrote:

the volume is distributed (replication = 1)


Is it possible to obtain a full backtrace using gdb?

Also, what is the complete version string of this glusterfs release?

Thanks,
Vijay




On Mon, Jan 27, 2014 at 4:01 PM, Mingfan Lu mingfan...@gmail.com
mailto:mingfan...@gmail.com wrote:

One of our client (3.3.0.5) crashed when writing data, the log is:

pending frames:
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(LOOKUP)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
frame : type(1) op(WRITE)
patchset: git://git.gluster.com/glusterfs.git
http://git.gluster.com/glusterfs.git
signal received: 6
time of crash: 2014-01-27 15:36:32
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.3.0.5rhs
/lib64/libc.so.6[0x32c5a32920]
/lib64/libc.so.6(gsignal+0x35)[0x32c5a328a5]
/lib64/libc.so.6(abort+0x175)[0x32c5a34085]
/lib64/libc.so.6[0x32c5a707b7]
/lib64/libc.so.6[0x32c5a760e6]

/usr/lib64/glusterfs/3.3.0.5rhs/xlator/performance/write-behind.so(+0x42be)[0x7f79a63012be]

/usr/lib64/glusterfs/3.3.0.5rhs/xlator/performance/write-behind.so(wb_sync_cbk+0xa0)[0x7f79a6307ab0]

/usr/lib64/glusterfs/3.3.0.5rhs/xlator/features/quota.so(quota_writev_cbk+0xed)[0x7f79a651864d]

/usr/lib64/glusterfs/3.3.0.5rhs/xlator/cluster/distribute.so(dht_writev_cbk+0x14f)[0x7f79a6753aaf]

/usr/lib64/glusterfs/3.3.0.5rhs/xlator/protocol/client.so(client3_1_writev_cbk+0x600)[0x7f79a6995340]
/usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5)[0x31b020f4f5]
/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x120)[0x31b020fdb0]
/usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x28)[0x31b020aeb8]

/usr/lib64/glusterfs/3.3.0.5rhs/rpc-transport/socket.so(socket_event_poll_in+0x34)[0x7f79a79d4784]

/usr/lib64/glusterfs/3.3.0.5rhs/rpc-transport/socket.so(socket_event_handler+0xc7)[0x7f79a79d4867]
/usr/lib64/libglusterfs.so.0[0x31afe3e4e4]
/usr/sbin/glusterfs(main+0x590)[0x407420]
/lib64/libc.so.6(__libc_start_main+0xfd)[0x32c5a1ecdd]
/usr/sbin/glusterfs[0x404289]









Re: [Gluster-users] Gluster client can't connect to Gluster volume

2012-05-05 Thread David Coulson
Do you have any firewall rules enabled? I'd start by disabling iptables 
(or at least setting everything to ACCEPT) and, as someone else suggested, 
setting selinux to permissive/disabled.


Why are your nodes and client using different versions of Gluster? Why 
not just use the 3.2.6 version for everything? Also, I'm not sure where 
port 6996 comes from - Gluster uses 24007 for its core communications 
and ports above that for individual bricks.


David
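A quick, hedged way to verify the point above from the client; the hostnames are the ones from the thread, and brick port ranges vary by release:

```shell
# is glusterd's management port reachable from the client?
nc -zv node1.example.com 24007

# on a server node, list the ports glusterd and the bricks actually use
ss -tlnp | grep -i gluster    # or: netstat -tlnp | grep -i gluster
```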

On 5/5/12 12:27 AM, Eric wrote:

Hi, All:

I've built a Gluster-based storage cluster on a pair of CentOS 5.7 
(i386) VM's. The nodes are using Gluster 3.2.6 (from source) and the 
host is using Gluster 3.0.0 (from the Mageia package repositories):


[eric@node1 ~]$ sudo /usr/local/sbin/gluster --version
glusterfs 3.2.6 built on May  3 2012 15:53:02

[eric@localhost ~]$ rpm -qa | grep glusterfs
glusterfs-common-3.0.0-2.mga1
glusterfs-client-3.0.0-2.mga1
glusterfs-server-3.0.0-2.mga1
libglusterfs0-3.0.0-2.mga1

None of the systems (i.e., neither the two storage nodes nor the 
client) can connect to Port 6996 of the cluster (node1.example.com & 
node2.example.com) but the two storage nodes can mount the shared 
volume using the Gluster helper and/or NFS:


[eric@node1 ~]$ sudo /sbin/lsmod | grep fuse

[eric@node1 ~]$ sudo /sbin/modprobe fuse

[eric@node1 ~]$ sudo /sbin/lsmod | grep fuse
fuse   49237  0

[eric@node1 ~]$ sudo mount -t glusterfs node1:/mirror-1 /mnt

[eric@node1 ~]$ sudo grep gluster /etc/mtab
glusterfs#node1:/mirror-1 /mnt fuse 
rw,allow_other,default_permissions,max_read=131072 0 0


...but the host system is only able to connect using NFS:

[eric@localhost ~]$ sudo glusterfs --debug -f /tmp/glusterfs.vol /mnt
[2012-05-04 19:09:09] D [glusterfsd.c:424:_get_specfp] glusterfs: 
loading volume file /tmp/glusterfs.vol


Version  : glusterfs 3.0.0 built on Apr 10 2011 19:12:54
git: 2.0.1-886-g8379edd
Starting Time: 2012-05-04 19:09:09
Command line : glusterfs --debug -f /tmp/glusterfs.vol /mnt
PID  : 30159
System name  : Linux
Nodename : localhost.localdomain
Kernel Release : 2.6.38.8-desktop586-10.mga
Hardware Identifier: i686

Given volfile:
+--+
  1: volume mirror-1
  2:  type protocol/client
  3:  option transport-type tcp
  4:  option remote-host node1.example.com
  5:  option remote-subvolume mirror-1
  6: end-volume
+--+
[2012-05-04 19:09:09] D [glusterfsd.c:1335:main] glusterfs: running in 
pid 30159
[2012-05-04 19:09:09] D [client-protocol.c:6581:init] mirror-1: 
defaulting frame-timeout to 30mins
[2012-05-04 19:09:09] D [client-protocol.c:6592:init] mirror-1: 
defaulting ping-timeout to 42
[2012-05-04 19:09:09] D [transport.c:145:transport_load] transport: 
attempt to load file /usr/lib/glusterfs/3.0.0/transport/socket.so
[2012-05-04 19:09:09] D [transport.c:145:transport_load] transport: 
attempt to load file /usr/lib/glusterfs/3.0.0/transport/socket.so
[2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1: got 
GF_EVENT_PARENT_UP, attempting connect on transport
[2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1: got 
GF_EVENT_PARENT_UP, attempting connect on transport
[2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1: got 
GF_EVENT_PARENT_UP, attempting connect on transport
[2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1: got 
GF_EVENT_PARENT_UP, attempting connect on transport
[2012-05-04 19:09:09] N [glusterfsd.c:1361:main] glusterfs: 
Successfully started
[2012-05-04 19:09:09] E [socket.c:760:socket_connect_finish] mirror-1: 
connection to  failed (Connection refused)
[2012-05-04 19:09:09] D [fuse-bridge.c:3079:fuse_thread_proc] fuse:  
pthread_cond_timedout returned non zero value ret: 0 errno: 0
[2012-05-04 19:09:09] N [fuse-bridge.c:2931:fuse_init] glusterfs-fuse: 
FUSE inited with protocol versions: glusterfs 7.13 kernel 7.16
[2012-05-04 19:09:09] E [socket.c:760:socket_connect_finish] mirror-1: 
connection to  failed (Connection refused)


I've read through the Troubleshooting section of the Gluster 
Administration Guide 
http://download.gluster.com/pub/gluster/glusterfs/3.2/Documentation/AG/html/chap-Administration_Guide-Troubleshooting.html 
and the Gluster User Guide 
http://www.gluster.org/community/documentation/index.php/User_Guide#Troubleshooting but 
can't seem to resolve the problem. (See my post on the Mageia Forum for 
all the troubleshooting details: 
https://forums.mageia.org/en/viewtopic.php?f=7&t=2358&p=17517)


What might be causing this?

TIA,
Eric Pretorious
Truckee, CA




Re: [Gluster-users] Gluster client can't connect to Gluster volume

2012-05-05 Thread Eric
Thanks, David:

Yes...
* iptables has been disabled on all three systems.
* SELinux is set to permissive on the two systems that employ it - the 
two CentOS nodes.
* Port #6996 is referenced in the Troubleshooting section of the 
Gluster User Guide.
FWIW: All of this except the SELinux question is already documented in  my post 
on the Mageia Forum.

Eric Pretorious
Truckee, CA




 From: David Coulson da...@davidcoulson.net
To: Eric epretori...@yahoo.com 
Cc: gluster-users@gluster.org gluster-users@gluster.org 
Sent: Saturday, May 5, 2012 5:44 AM
Subject: Re: [Gluster-users] Gluster client can't connect to Gluster volume
 

Do you have any firewall rules enabled? I'd start by disabling iptables (or at 
least setting everything to ACCEPT) and as someone else suggested setting 
selinux to permissive/disabled.

Why are your nodes and client using different versions of Gluster?
Why not just use the 3.2.6 version for everything? Also, I'm not
sure where port 6996 comes from - Gluster uses 24007 for its core
communications and ports above that for individual bricks.

David

On 5/5/12 12:27 AM, Eric wrote: 
Hi, All:

I've built a Gluster-based storage cluster on a pair of CentOS
  5.7 (i386) VM's. The nodes are using Gluster 3.2.6 (from
  source) and the host is using Gluster 3.0.0 (from the Mageia
  package repositories):


[eric@node1 ~]$ sudo /usr/local/sbin/gluster --version
glusterfs 3.2.6 built on May  3 2012 15:53:02

[eric@localhost ~]$ rpm -qa | grep glusterfs
glusterfs-common-3.0.0-2.mga1
glusterfs-client-3.0.0-2.mga1
glusterfs-server-3.0.0-2.mga1
libglusterfs0-3.0.0-2.mga1


None of the systems (i.e., neither the two storage nodes nor the client) can 
connect to Port 6996 of the cluster (node1.example.com & node2.example.com) 
but the two storage nodes can mount the shared volume using the Gluster 
helper and/or NFS:


[eric@node1 ~]$ sudo /sbin/lsmod | grep fuse

[eric@node1 ~]$ sudo /sbin/modprobe fuse

[eric@node1 ~]$ sudo /sbin/lsmod | grep fuse
fuse                   49237  0 

[eric@node1 ~]$ sudo mount -t glusterfs node1:/mirror-1 /mnt

[eric@node1 ~]$ sudo grep gluster /etc/mtab 
glusterfs#node1:/mirror-1 /mnt fuse
rw,allow_other,default_permissions,max_read=131072 0 0


...but the host system is only able to connect using NFS:


[eric@localhost ~]$ sudo glusterfs --debug -f /tmp/glusterfs.vol /mnt
[2012-05-04 19:09:09] D [glusterfsd.c:424:_get_specfp]
glusterfs: loading volume file /tmp/glusterfs.vol

Version      : glusterfs 3.0.0 built on Apr 10 2011 19:12:54
git: 2.0.1-886-g8379edd
Starting Time: 2012-05-04 19:09:09
Command line : glusterfs --debug -f /tmp/glusterfs.vol /mnt 
PID          : 30159
System name  : Linux
Nodename     : localhost.localdomain
Kernel Release : 2.6.38.8-desktop586-10.mga
Hardware Identifier: i686

Given volfile:
+--+
  1: volume mirror-1
  2:  type protocol/client
  3:  option transport-type tcp
  4:  option remote-host node1.example.com
  5:  option remote-subvolume mirror-1
  6: end-volume
+--+
[2012-05-04 19:09:09] D [glusterfsd.c:1335:main] glusterfs:
running in pid 30159
[2012-05-04 19:09:09] D [client-protocol.c:6581:init]
mirror-1: defaulting frame-timeout to 30mins
[2012-05-04 19:09:09] D [client-protocol.c:6592:init]
mirror-1: defaulting ping-timeout to 42
[2012-05-04 19:09:09] D [transport.c:145:transport_load]
transport: attempt to load file
/usr/lib/glusterfs/3.0.0/transport/socket.so
[2012-05-04 19:09:09] D [transport.c:145:transport_load]
transport: attempt to load file
/usr/lib/glusterfs/3.0.0/transport/socket.so
[2012-05-04 19:09:09] D [client-protocol.c:7005:notify]
mirror-1: got GF_EVENT_PARENT_UP, attempting connect on
transport
[2012-05-04 19:09:09] D [client-protocol.c:7005:notify]
mirror-1: got GF_EVENT_PARENT_UP, attempting connect on
transport
[2012-05-04 19:09:09] D [client-protocol.c:7005:notify]
mirror-1: got GF_EVENT_PARENT_UP, attempting connect on
transport
[2012-05-04 19:09:09] D [client-protocol.c:7005:notify]
mirror-1: got GF_EVENT_PARENT_UP, attempting connect on
transport
[2012-05-04 19:09:09] N [glusterfsd.c:1361:main] glusterfs:
Successfully started
[2012-05-04 19:09:09] E [socket.c:760:socket_connect_finish]
mirror-1: connection to  failed (Connection refused)
[2012-05-04 19:09:09] D
[fuse-bridge.c:3079:fuse_thread_proc] fuse: 
pthread_cond_timedout returned non zero value ret: 0 errno:
0
[2012-05-04 19:09:09] N [fuse-bridge.c:2931:fuse_init

Re: [Gluster-users] Gluster client can't connect to Gluster volume

2012-05-05 Thread Eric
Thanks, Xavier:

SELinux is set to permissive on the two systems that have it.
Eric Pretorious

Truckee, CA




 From: Xavier Normand xavier.norm...@gmail.com
To: Eric epretori...@yahoo.com 
Cc: gluster-users@gluster.org gluster-users@gluster.org 
Sent: Friday, May 4, 2012 9:57 PM
Subject: Re: [Gluster-users] Gluster client can't connect to Gluster volume
 

Hi


Do you have selinux enabled?

Sent from my iPhone

Le 2012-05-05 à 00:27, Eric epretori...@yahoo.com a écrit :


Hi, All:

I've built a Gluster-based storage cluster on a pair of CentOS 5.7 (i386) 
VM's. The nodes are using Gluster 3.2.6 (from source) and the host is using 
Gluster 3.0.0 (from the Mageia package repositories):


[eric@node1 ~]$ sudo /usr/local/sbin/gluster --version
glusterfs 3.2.6 built on May  3 2012 15:53:02

[eric@localhost ~]$ rpm -qa | grep glusterfs
glusterfs-common-3.0.0-2.mga1
glusterfs-client-3.0.0-2.mga1
glusterfs-server-3.0.0-2.mga1
libglusterfs0-3.0.0-2.mga1


None of the systems (i.e., neither the two storage nodes nor the client) can 
connect to Port 6996 of the cluster (node1.example.com & node2.example.com) 
but the two storage nodes can mount the shared volume using the Gluster 
helper and/or NFS:


[eric@node1 ~]$ sudo /sbin/lsmod | grep fuse

[eric@node1 ~]$ sudo /sbin/modprobe fuse

[eric@node1 ~]$ sudo /sbin/lsmod | grep fuse
fuse                   49237  0 

[eric@node1 ~]$ sudo mount -t glusterfs node1:/mirror-1 /mnt

[eric@node1 ~]$ sudo grep gluster /etc/mtab 
glusterfs#node1:/mirror-1 /mnt fuse 
rw,allow_other,default_permissions,max_read=131072 0 0


...but the host system is only able to connect using NFS:


[eric@localhost ~]$ sudo glusterfs --debug -f /tmp/glusterfs.vol /mnt
[2012-05-04 19:09:09] D [glusterfsd.c:424:_get_specfp] glusterfs: loading 
volume file /tmp/glusterfs.vol

Version      : glusterfs 3.0.0 built on Apr 10 2011 19:12:54
git: 2.0.1-886-g8379edd
Starting Time: 2012-05-04 19:09:09
Command line : glusterfs --debug -f /tmp/glusterfs.vol /mnt 
PID          : 30159
System name  : Linux
Nodename     : localhost.localdomain
Kernel Release : 2.6.38.8-desktop586-10.mga
Hardware Identifier: i686

Given volfile:
+--+
  1: volume mirror-1
  2:  type
 protocol/client
  3:  option transport-type tcp
  4:  option remote-host node1.example.com
  5:  option remote-subvolume mirror-1
  6: end-volume
+--+
[2012-05-04 19:09:09] D [glusterfsd.c:1335:main] glusterfs: running in pid 
30159
[2012-05-04 19:09:09] D [client-protocol.c:6581:init] mirror-1: defaulting 
frame-timeout to 30mins
[2012-05-04 19:09:09] D [client-protocol.c:6592:init] mirror-1: defaulting 
ping-timeout to 42
[2012-05-04
 19:09:09] D [transport.c:145:transport_load] transport: attempt to load
 file /usr/lib/glusterfs/3.0.0/transport/socket.so
[2012-05-04 
19:09:09] D [transport.c:145:transport_load] transport: attempt to load 
file /usr/lib/glusterfs/3.0.0/transport/socket.so
[2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1: got 
GF_EVENT_PARENT_UP, attempting connect on transport
[2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1: got 
GF_EVENT_PARENT_UP, attempting connect on transport
[2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1: got 
GF_EVENT_PARENT_UP, attempting connect on transport
[2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1: got 
GF_EVENT_PARENT_UP, attempting connect on transport
[2012-05-04 19:09:09] N [glusterfsd.c:1361:main] glusterfs: Successfully 
started
[2012-05-04 19:09:09] E [socket.c:760:socket_connect_finish] mirror-1: 
connection to  failed (Connection refused)
[2012-05-04
 19:09:09] D [fuse-bridge.c:3079:fuse_thread_proc] fuse:  
pthread_cond_timedout returned non zero value ret: 0 errno: 0
[2012-05-04
 19:09:09] N [fuse-bridge.c:2931:fuse_init] glusterfs-fuse: FUSE inited 
with protocol versions: glusterfs 7.13 kernel 7.16
[2012-05-04 19:09:09] E [socket.c:760:socket_connect_finish] mirror-1: 
connection to  failed (Connection refused)


I've read through the Troubleshooting section of the Gluster Administration 
Guide and the Gluster User Guide but can't seem to resolve the problem. (See 
my post on the Mageia Forum for all the troubleshooting details: 
https://forums.mageia.org/en/viewtopic.php?f=7&t=2358&p=17517) 


What might be causing this?

TIA,
Eric Pretorious
Truckee, CA


https://forums.mageia.org/en/viewtopic.php?f=7&t=2358&p=17517
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users




Re: [Gluster-users] Gluster client can't connect to Gluster volume

2012-05-05 Thread David Coulson

That's way old documentation.

Start by installing 3.2.6 on your client and see if it works then. I 
don't think anyone expects 3.2 and 3.0 to work correctly.


On 5/5/12 12:09 PM, Eric wrote:

Thanks, David:

Yes...

  * iptables has been disabled on all three systems.
  * SELinux is set to permissive on the two systems that employ it -
the two CentOS nodes.
  * Port #6996 is referenced in the Troubleshooting section of the
Gluster User Guide

http://www.gluster.org/community/documentation/index.php/User_Guide#Troubleshooting.

FWIW: All of this except the SELinux question is already documented in 
my post on the Mageia Forum 
https://forums.mageia.org/en/viewtopic.php?f=7&t=2358&p=17517.


Eric Pretorious
Truckee, CA


*From:* David Coulson da...@davidcoulson.net
*To:* Eric epretori...@yahoo.com
*Cc:* gluster-users@gluster.org gluster-users@gluster.org
*Sent:* Saturday, May 5, 2012 5:44 AM
*Subject:* Re: [Gluster-users] Gluster client can't connect to
Gluster volume

Do you have any firewall rules enabled? I'd start by disabling
iptables (or at least setting everything to ACCEPT) and as someone
else suggested setting selinux to permissive/disabled.

Why are your nodes and client using different versions of Gluster?
Why not just use the 3.2.6 version for everything? Also, I'm not
sure where port 6996 comes from - Gluster uses 24007 for its core
communications and ports above that for individual bricks.

David

On 5/5/12 12:27 AM, Eric wrote:

Hi, All:

I've built a Gluster-based storage cluster on a pair of CentOS
5.7 (i386) VM's. The nodes are using Gluster 3.2.6 (from source)
and the host is using Gluster 3.0.0 (from the Mageia package
repositories):

[eric@node1 ~]$ sudo /usr/local/sbin/gluster --version
glusterfs 3.2.6 built on May  3 2012 15:53:02

[eric@localhost ~]$ rpm -qa | grep glusterfs
glusterfs-common-3.0.0-2.mga1
glusterfs-client-3.0.0-2.mga1
glusterfs-server-3.0.0-2.mga1
libglusterfs0-3.0.0-2.mga1

None of the systems (i.e., neither the two storage nodes nor the
client) can connect to Port 6996 of the cluster
(node1.example.com & node2.example.com) but the two storage nodes can mount
the shared volume using the Gluster helper and/or NFS:

[eric@node1 ~]$ sudo /sbin/lsmod | grep fuse

[eric@node1 ~]$ sudo /sbin/modprobe fuse

[eric@node1 ~]$ sudo /sbin/lsmod | grep fuse
fuse   49237  0

[eric@node1 ~]$ sudo mount -t glusterfs node1:/mirror-1 /mnt

[eric@node1 ~]$ sudo grep gluster /etc/mtab
glusterfs#node1:/mirror-1 /mnt fuse
rw,allow_other,default_permissions,max_read=131072 0 0

...but the host system is only able to connect using NFS:

[eric@localhost ~]$ sudo glusterfs --debug -f /tmp/glusterfs.vol
/mnt
[2012-05-04 19:09:09] D [glusterfsd.c:424:_get_specfp] glusterfs:
loading volume file /tmp/glusterfs.vol


Version  : glusterfs 3.0.0 built on Apr 10 2011 19:12:54
git: 2.0.1-886-g8379edd
Starting Time: 2012-05-04 19:09:09
Command line : glusterfs --debug -f /tmp/glusterfs.vol /mnt
PID  : 30159
System name  : Linux
Nodename : localhost.localdomain
Kernel Release : 2.6.38.8-desktop586-10.mga
Hardware Identifier: i686

Given volfile:

+--+
  1: volume mirror-1
  2:  type protocol/client
  3:  option transport-type tcp
  4:  option remote-host node1.example.com
  5:  option remote-subvolume mirror-1
  6: end-volume

+--+
[2012-05-04 19:09:09] D [glusterfsd.c:1335:main] glusterfs:
running in pid 30159
[2012-05-04 19:09:09] D [client-protocol.c:6581:init] mirror-1:
defaulting frame-timeout to 30mins
[2012-05-04 19:09:09] D [client-protocol.c:6592:init] mirror-1:
defaulting ping-timeout to 42
[2012-05-04 19:09:09] D [transport.c:145:transport_load]
transport: attempt to load file
/usr/lib/glusterfs/3.0.0/transport/socket.so
[2012-05-04 19:09:09] D [transport.c:145:transport_load]
transport: attempt to load file
/usr/lib/glusterfs/3.0.0/transport/socket.so
[2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1:
got GF_EVENT_PARENT_UP, attempting connect on transport
[2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1:
got GF_EVENT_PARENT_UP, attempting connect on transport
[2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1:
got GF_EVENT_PARENT_UP, attempting connect on transport
[2012-05-04 19:09:09] D [client

Re: [Gluster-users] Gluster client can't connect to Gluster volume

2012-05-05 Thread Eric
Thanks, David:

But I'd rather see if I can fix the problem with the Mageia client 
(glusterfs-client-3.0.0-2.mga1):
* This will allow me to file a bug report with the package maintainer 
(if necessary).
* I already know that the systems that have Gluster 3.2.6 from source 
[i.e., the storage nodes] are able to mount the volume.
* I'd rather keep my daily-driver (i.e., the host system) 100% Mageia.
Eric Pretorious
Truckee, CA




 From: David Coulson da...@davidcoulson.net
To: Eric epretori...@yahoo.com 
Cc: gluster-users@gluster.org gluster-users@gluster.org 
Sent: Saturday, May 5, 2012 9:16 AM
Subject: Re: [Gluster-users] Gluster client can't connect to Gluster volume
 

That's way old documentation.

Start by installing 3.2.6 on your client and see if it works then. I
don't think anyone expects 3.2 and 3.0 to work correctly.

On 5/5/12 12:09 PM, Eric wrote: 
Thanks, David:


Yes...
  * iptables has been disabled on all three systems.
  * SELinux is set to permissive on the two systems that employ it - the 
 two CentOS nodes.
  * Port #6996 is referenced in the Troubleshooting section of the 
 Gluster User Guide.
FWIW: All of this except the SELinux question is already documented in my 
post on the Mageia Forum.


Eric Pretorious
Truckee, CA




 From: David Coulson da...@davidcoulson.net
To: Eric epretori...@yahoo.com 
Cc: gluster-users@gluster.org gluster-users@gluster.org 
Sent: Saturday, May 5, 2012 5:44 AM
Subject: Re: [Gluster-users] Gluster client can't connect to Gluster volume
 

Do you have any firewall rules enabled? I'd start by disabling iptables (or 
at least setting everything to ACCEPT) and as someone else suggested setting 
selinux to permissive/disabled.

Why are your nodes and client using different
versions of Gluster? Why not just use the 3.2.6
version for everything? Also, I'm not sure where
port 6996 comes from - Gluster uses 24007 for its
core communications and ports above that for
individual bricks.

David

On 5/5/12 12:27 AM, Eric wrote: 
Hi, All:

I've built a Gluster-based storage cluster on
  a pair of CentOS 5.7 (i386) VM's. The nodes
  are using Gluster 3.2.6 (from source) and the
  host is using Gluster 3.0.0 (from the Mageia
  package repositories):


[eric@node1 ~]$ sudo /usr/local/sbin/gluster --version
glusterfs 3.2.6 built on May  3 2012
15:53:02

[eric@localhost ~]$ rpm -qa | grep glusterfs
glusterfs-common-3.0.0-2.mga1
glusterfs-client-3.0.0-2.mga1
glusterfs-server-3.0.0-2.mga1
libglusterfs0-3.0.0-2.mga1


None of the systems (i.e., neither the two storage nodes nor the client) 
can connect to Port 6996 of the cluster (node1.example.com & 
node2.example.com) but the two storage nodes can mount the shared volume 
using the Gluster helper and/or NFS:


[eric@node1 ~]$ sudo /sbin/lsmod | grep fuse

[eric@node1 ~]$ sudo /sbin/modprobe fuse

[eric@node1 ~]$ sudo /sbin/lsmod | grep fuse
fuse                   49237  0 

[eric@node1 ~]$ sudo mount -t glusterfs
node1:/mirror-1 /mnt

[eric@node1 ~]$ sudo grep gluster /etc/mtab 
glusterfs#node1:/mirror-1 /mnt fuse
rw,allow_other,default_permissions,max_read=131072
0 0


...but the host system is only able to connect using NFS:


[eric@localhost ~]$ sudo glusterfs --debug -f /tmp/glusterfs.vol /mnt
[2012-05-04 19:09:09] D
[glusterfsd.c:424:_get_specfp] glusterfs:
loading volume file /tmp/glusterfs.vol

Version      : glusterfs 3.0.0 built on Apr
10 2011 19:12:54
git: 2.0.1-886-g8379edd
Starting Time: 2012-05-04 19:09:09
Command line : glusterfs --debug -f
/tmp/glusterfs.vol /mnt 
PID          : 30159
System name  : Linux
Nodename     : localhost.localdomain
Kernel Release : 2.6.38.8-desktop586-10.mga
Hardware Identifier: i686

Given volfile:
+--+
  1: volume mirror-1
  2:  type protocol/client
  3:  option transport-type tcp
  4:  option remote-host node1.example.com
  5:  option remote-subvolume mirror-1
  6: end-volume
+--+
[2012-05-04 19:09:09] D
[glusterfsd.c:1335:main] glusterfs: running
in pid 30159
[2012-05-04 19:09:09] D
[client-protocol.c:6581:init] mirror-1:
defaulting frame-timeout to 30mins
[2012-05-04 19:09:09] D
[client-protocol.c

Re: [Gluster-users] Gluster client can't connect to Gluster volume

2012-05-05 Thread David Coulson
There's probably not a whole lot of 'fixing' since the two versions are 
so different.


You can tell the package maintainer to make a 3.2.6 build. That'll help.

On 5/5/12 1:43 PM, Eric wrote:

Thanks, David:

But I'd rather see if I can fix the problem with the Mageia client 
(glusterfs-client-3.0.0-2.mga1):


  * This will allow me to file a bug report with the package
maintainer (if necessary).
  * I already know that the systems that have Gluster 3.2.6 from
source [i.e., the storage nodes] are able to mount the volume.
  * I'd rather keep my daily-driver (i.e., the host system) 100% Mageia.

Eric Pretorious
Truckee, CA


*From:* David Coulson da...@davidcoulson.net
*To:* Eric epretori...@yahoo.com
*Cc:* gluster-users@gluster.org gluster-users@gluster.org
*Sent:* Saturday, May 5, 2012 9:16 AM
*Subject:* Re: [Gluster-users] Gluster client can't connect to
Gluster volume

That's way old documentation.

Start by installing 3.2.6 on your client and see if it works then.
I don't think anyone expects 3.2 and 3.0 to work correctly.

On 5/5/12 12:09 PM, Eric wrote:

Thanks, David:

Yes...

  * iptables has been disabled on all three systems.
  * SELinux is set to permissive on the two systems that employ
it - the two CentOS nodes.
  * Port #6996 is referenced in the Troubleshooting section of
the Gluster User Guide

http://www.gluster.org/community/documentation/index.php/User_Guide#Troubleshooting.

FWIW: All of this except the SELinux question is already
documented in my post on the Mageia Forum
https://forums.mageia.org/en/viewtopic.php?f=7&t=2358&p=17517.

Eric Pretorious
Truckee, CA


*From:* David Coulson da...@davidcoulson.net
*To:* Eric epretori...@yahoo.com
*Cc:* gluster-users@gluster.org gluster-users@gluster.org
*Sent:* Saturday, May 5, 2012 5:44 AM
*Subject:* Re: [Gluster-users] Gluster client can't connect
to Gluster volume

Do you have any firewall rules enabled? I'd start by
disabling iptables (or at least setting everything to ACCEPT)
and as someone else suggested setting selinux to
permissive/disabled.

Why are your nodes and client using different versions of
Gluster? Why not just use the 3.2.6 version for everything?
Also, I'm not sure where port 6996 comes from - Gluster uses
24007 for its core communications and ports above that for
individual bricks.

David

On 5/5/12 12:27 AM, Eric wrote:

Hi, All:

I've built a Gluster-based storage cluster on a pair of
CentOS 5.7 (i386) VM's. The nodes are using Gluster 3.2.6
(from source) and the host is using Gluster 3.0.0 (from the
Mageia package repositories):

[eric@node1 ~]$ sudo /usr/local/sbin/gluster --version
glusterfs 3.2.6 built on May  3 2012 15:53:02

[eric@localhost ~]$ rpm -qa | grep glusterfs
glusterfs-common-3.0.0-2.mga1
glusterfs-client-3.0.0-2.mga1
glusterfs-server-3.0.0-2.mga1
libglusterfs0-3.0.0-2.mga1

None of the systems (i.e., neither the two storage nodes nor
the client) can connect to Port 6996 of the cluster
(node1.example.com & node2.example.com) but the two
storage nodes can mount the shared volume using the Gluster
helper and/or NFS:

[eric@node1 ~]$ sudo /sbin/lsmod | grep fuse

[eric@node1 ~]$ sudo /sbin/modprobe fuse

[eric@node1 ~]$ sudo /sbin/lsmod | grep fuse
fuse   49237  0

[eric@node1 ~]$ sudo mount -t glusterfs node1:/mirror-1 /mnt

[eric@node1 ~]$ sudo grep gluster /etc/mtab
glusterfs#node1:/mirror-1 /mnt fuse
rw,allow_other,default_permissions,max_read=131072 0 0

...but the host system is only able to connect using NFS:

[eric@localhost ~]$ sudo glusterfs --debug -f
/tmp/glusterfs.vol /mnt
[2012-05-04 19:09:09] D [glusterfsd.c:424:_get_specfp]
glusterfs: loading volume file /tmp/glusterfs.vol


Version  : glusterfs 3.0.0 built on Apr 10 2011 19:12:54
git: 2.0.1-886-g8379edd
Starting Time: 2012-05-04 19:09:09
Command line : glusterfs --debug -f /tmp/glusterfs.vol /mnt
PID  : 30159
System name  : Linux
Nodename : localhost.localdomain
Kernel Release : 2.6.38.8

Re: [Gluster-users] Gluster client can't connect to Gluster volume

2012-05-05 Thread Eric
Thanks, David:

That's fine. If it's just a version mismatch I can accept that. I'm sure that 
the developers at Mageia are working on catching up in Mageia, Version 2. I 
just figured that, if it were a version mismatch, I'd see something to that 
effect in the servers' log files. That's all.


Eric Pretorious
Truckee, CA



 From: David Coulson da...@davidcoulson.net
To: Eric epretori...@yahoo.com 
Cc: gluster-users@gluster.org gluster-users@gluster.org 
Sent: Saturday, May 5, 2012 10:46 AM
Subject: Re: [Gluster-users] Gluster client can't connect to Gluster volume
 

There's probably not a whole lot of 'fixing' since the two versions are so 
different.

You can tell the package maintainer to make a 3.2.6 build. That'll
help.

On 5/5/12 1:43 PM, Eric wrote: 
Thanks, David:

But I'd rather see if I can fix the problem with the Mageia
client (glusterfs-client-3.0.0-2.mga1):
  * This will allow me to file a bug report with the package maintainer 
 (if necessary).
  * I already know that the systems that have Gluster 3.2.6 from source 
 [i.e., the storage nodes] are able to mount the volume.
  * I'd rather keep my daily-driver (i.e., the host system) 100% Mageia.
Eric Pretorious
Truckee, CA




 From: David Coulson da...@davidcoulson.net
To: Eric epretori...@yahoo.com 
Cc: gluster-users@gluster.org gluster-users@gluster.org 
Sent: Saturday, May 5, 2012 9:16 AM
Subject: Re: [Gluster-users] Gluster client can't connect to Gluster volume
 

That's way old documentation.

Start by installing 3.2.6 on your client and see if
it works then. I don't think anyone expects 3.2 and
3.0 to work correctly.

On 5/5/12 12:09 PM, Eric wrote: 
Thanks, David:


Yes...
* iptables has been disabled on all three systems.
* SELinux is set to permissive on the two systems that employ it - the 
 two CentOS nodes.
* Port #6996 is referenced in the Troubleshooting section of the 
 Gluster User Guide.
FWIW: All of this except the SELinux question is already documented in my 
post on the Mageia Forum.


Eric Pretorious
Truckee, CA




 From: David Coulson da...@davidcoulson.net
To: Eric epretori...@yahoo.com 
Cc: gluster-users@gluster.org gluster-users@gluster.org 
Sent: Saturday, May 5, 2012 5:44 AM
Subject: Re: [Gluster-users] Gluster client can't connect to Gluster volume
 

Do you have any firewall rules enabled? I'd start by disabling iptables 
(or at least setting everything to ACCEPT) and as someone else suggested 
setting selinux to permissive/disabled.

Why are your nodes and client using
different versions of Gluster? Why
not just use the 3.2.6 version for
everything? Also, I'm not sure where
port 6996 comes from - Gluster uses
24007 for its core communications
and ports above that for individual
bricks.

David

On 5/5/12 12:27 AM, Eric wrote: 
Hi, All:

I've built a Gluster-based
  storage cluster on a pair of
  CentOS 5.7 (i386) VM's. The
  nodes are using Gluster 3.2.6
  (from source) and the host is
  using Gluster 3.0.0 (from the
  Mageia package repositories):


[eric@node1 ~]$ sudo /usr/local/sbin/gluster --version
glusterfs 3.2.6 built on
May  3 2012 15:53:02

[eric@localhost ~]$ rpm -qa | grep glusterfs
glusterfs-common-3.0.0-2.mga1
glusterfs-client-3.0.0-2.mga1
glusterfs-server-3.0.0-2.mga1
libglusterfs0-3.0.0-2.mga1


None of the systems (i.e., neither the two storage nodes nor the client) 
can connect to Port 6996 of the cluster (node1.example.com & 
node2.example.com) but the two storage nodes can mount the shared volume 
using the Gluster helper and/or NFS:


[eric@node1 ~]$ sudo /sbin/lsmod | grep fuse

[eric@node1 ~]$ sudo /sbin/modprobe fuse

[eric@node1 ~]$ sudo /sbin/lsmod | grep fuse
fuse                   49237  0 

[eric@node1 ~]$ sudo mount -t glusterfs node1:/mirror-1 /mnt

[eric@node1 ~]$ sudo grep gluster /etc/mtab 
glusterfs#node1:/mirror-1 /mnt fuse
rw,allow_other,default_permissions,max_read=131072 0 0

Re: [Gluster-users] Gluster client can't connect to Gluster volume

2012-05-04 Thread Xavier Normand
Hi

Do you have selinux enabled?

Sent from my iPhone

Le 2012-05-05 à 00:27, Eric epretori...@yahoo.com a écrit :

 Hi, All:
 
 I've built a Gluster-based storage cluster on a pair of CentOS 5.7 (i386) 
 VM's. The nodes are using Gluster 3.2.6 (from source) and the host is using 
 Gluster 3.0.0 (from the Mageia package repositories):
 
 [eric@node1 ~]$ sudo /usr/local/sbin/gluster --version
 glusterfs 3.2.6 built on May  3 2012 15:53:02
 
 [eric@localhost ~]$ rpm -qa | grep glusterfs
 glusterfs-common-3.0.0-2.mga1
 glusterfs-client-3.0.0-2.mga1
 glusterfs-server-3.0.0-2.mga1
 libglusterfs0-3.0.0-2.mga1
 
 None of the systems (i.e., neither the two storage nodes nor the client) can 
 connect to Port 6996 of the cluster (node1.example.com & node2.example.com) 
 but the two storage nodes can mount the shared volume using the Gluster 
 helper and/or NFS:
 
 [eric@node1 ~]$ sudo /sbin/lsmod | grep fuse
 
 [eric@node1 ~]$ sudo /sbin/modprobe fuse
 
 [eric@node1 ~]$ sudo /sbin/lsmod | grep fuse
 fuse   49237  0 
 
 [eric@node1 ~]$ sudo mount -t glusterfs node1:/mirror-1 /mnt
 
 [eric@node1 ~]$ sudo grep gluster /etc/mtab 
 glusterfs#node1:/mirror-1 /mnt fuse 
 rw,allow_other,default_permissions,max_read=131072 0 0
 
 ...but the host system is only able to connect using NFS:
 
 [eric@localhost ~]$ sudo glusterfs --debug -f /tmp/glusterfs.vol /mnt
 [2012-05-04 19:09:09] D [glusterfsd.c:424:_get_specfp] glusterfs: loading 
 volume file /tmp/glusterfs.vol
 
 Version  : glusterfs 3.0.0 built on Apr 10 2011 19:12:54
 git: 2.0.1-886-g8379edd
 Starting Time: 2012-05-04 19:09:09
 Command line : glusterfs --debug -f /tmp/glusterfs.vol /mnt 
 PID  : 30159
 System name  : Linux
 Nodename : localhost.localdomain
 Kernel Release : 2.6.38.8-desktop586-10.mga
 Hardware Identifier: i686
 
 Given volfile:
 +--+
   1: volume mirror-1
   2:  type protocol/client
   3:  option transport-type tcp
   4:  option remote-host node1.example.com
   5:  option remote-subvolume mirror-1
   6: end-volume
 +--+
 [2012-05-04 19:09:09] D [glusterfsd.c:1335:main] glusterfs: running in pid 
 30159
 [2012-05-04 19:09:09] D [client-protocol.c:6581:init] mirror-1: defaulting 
 frame-timeout to 30mins
 [2012-05-04 19:09:09] D [client-protocol.c:6592:init] mirror-1: defaulting 
 ping-timeout to 42
 [2012-05-04 19:09:09] D [transport.c:145:transport_load] transport: attempt 
 to load file /usr/lib/glusterfs/3.0.0/transport/socket.so
 [2012-05-04 19:09:09] D [transport.c:145:transport_load] transport: attempt 
 to load file /usr/lib/glusterfs/3.0.0/transport/socket.so
 [2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1: got 
 GF_EVENT_PARENT_UP, attempting connect on transport
 [2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1: got 
 GF_EVENT_PARENT_UP, attempting connect on transport
 [2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1: got 
 GF_EVENT_PARENT_UP, attempting connect on transport
 [2012-05-04 19:09:09] D [client-protocol.c:7005:notify] mirror-1: got 
 GF_EVENT_PARENT_UP, attempting connect on transport
 [2012-05-04 19:09:09] N [glusterfsd.c:1361:main] glusterfs: Successfully 
 started
 [2012-05-04 19:09:09] E [socket.c:760:socket_connect_finish] mirror-1: 
 connection to  failed (Connection refused)
 [2012-05-04 19:09:09] D [fuse-bridge.c:3079:fuse_thread_proc] fuse:  
 pthread_cond_timedout returned non zero value ret: 0 errno: 0
 [2012-05-04 19:09:09] N [fuse-bridge.c:2931:fuse_init] glusterfs-fuse: FUSE 
 inited with protocol versions: glusterfs 7.13 kernel 7.16
 [2012-05-04 19:09:09] E [socket.c:760:socket_connect_finish] mirror-1: 
 connection to  failed (Connection refused)
 
 I've read through the Troubleshooting section of the Gluster Administration 
 Guide and the Gluster User Guide but can't seem to resolve the problem. (See 
 my post on the Mageia Forum for all the troubleshooting details: 
 https://forums.mageia.org/en/viewtopic.php?f=7&t=2358&p=17517)
 
 What might be causing this?
 
 TIA,
 Eric Pretorious
 Truckee, CA
 
 https://forums.mageia.org/en/viewtopic.php?f=7&t=2358&p=17517
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] gluster client performance

2011-08-09 Thread Jesse Stroik

Pavan,

Thank you for your help.  We wanted to get back to you with our results 
and observations.  I'm cc'ing gluster-users for posterity.


We did experiment with enable-trickling-writes.  That was one of the 
translator tunables we wanted to know the precise syntax for so that we 
could be certain we were disabling it.  As hoped, disabling trickling 
writes improved performance somewhat.


We are definitely interested in any other undocumented write-buffer 
related tunables.  We've tested the documented tuning parameters.


Performance improved significantly when we switched clients to mainline 
kernel (2.6.35-13). We also updated to OFED 1.5.3 but it wasn't 
responsible for the performance improvement.


Our findings with 32KB block size (cp) write performance:

250-300MB/sec single stream performance
400MB/sec multiple-stream per client performance

This is much higher than we observed with kernel 2.6.18 series.  Using 
the 2.6.18 line, we also observed virtually no difference between 
running single stream tests and multi stream tests suggesting a 
bottleneck with the fabric.


Both 2.6.18 and 2.6.35-13 performed very well (about 600MB/sec) when 
writing 128KB blocks.
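
For anyone wanting to reproduce the block-size comparison, a minimal sketch with dd follows. TARGET is a placeholder: point it at a directory on the Gluster mount to measure the volume (as written it defaults to /tmp, so it only measures local disk); conv=fsync forces the data out so the reported rate is honest.

```shell
TARGET=${TARGET:-/tmp}

# Write 32 MB at each of the two block sizes discussed above and
# print dd's throughput summary line for each run.
for bs in 32K 128K; do
  case $bs in
    32K)  count=1024 ;;   # 1024 x 32 KB  = 32 MB
    128K) count=256  ;;   # 256  x 128 KB = 32 MB
  esac
  dd if=/dev/zero of="$TARGET/ddtest.$bs" bs=$bs count=$count conv=fsync 2>&1 | tail -n 1
done
```

Raise the total well beyond 32 MB for a serious benchmark, and run several copies in parallel to approximate the multiple-stream numbers above.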


When I disabled write-behind on the 2.6.18 series of kernels as a test, 
performance plummeted to a few MB/sec when writing blocks sizes smaller 
than 128KB.  We did not test this extensively.


Disabling enable-trickling-writes gave us approximately a 20% boost, 
reflected in the numbers above, for single-stream writes.  We observed 
no significant difference with several streams per client due to 
disabling that tunable.


For reference, we are running another cluster file system on the same 
underlying hardware/software.  With both the old kernel (2.6.18.x) and 
the new kernel (2.6.35-13) we get approximately:


450-550MB/sec single stream performance
1200MB+/sec multiple stream per client performance

We set the test directory to write entire files to a single LUN which is 
how we configured gluster in an effort to mitigate differences.


It is treacherous to speculate why we might be more limited with gluster 
over RDMA than the other cluster file system without spending a 
significant amount of analysis.  That said, I wonder if there may be an 
issue with the way in which fuse handles write buffers causing a 
bottleneck for RDMA.


The bottom line is that our observed performance was poor using the 
2.6.18 RHEL 5 kernel line relative to the mainline (2.6.35) kernels. 
Updating to the newer kernels was well worth the testing and downtime. 
Hopefully this information can help others.


Best,
Jesse Stroik


Re: [Gluster-users] gluster client performance

2011-08-09 Thread Pavan T C

On Wednesday 10 August 2011 12:11 AM, Jesse Stroik wrote:

Pavan,

Thank you for your help. We wanted to get back to you with our results
and observations. I'm cc'ing gluster-users for posterity.

We did experiment with enable-trickling-writes. That was one of the
translator tunables we wanted to know the precise syntax for so that we
could be certain we were disabling it. As hoped, disabling trickling
writes improved performance somewhat.

We are definitely interested in any other undocumented write-buffer
related tunables. We've tested the documented tuning parameters.

Performance improved significantly when we switched clients to mainline
kernel (2.6.35-13). We also updated to OFED 1.5.3 but it wasn't
responsible for the performance improvement.

Our findings with 32KB block size (cp) write performance:

250-300MB/sec single stream performance
400MB/sec multiple-stream per client performance


OK, let's see if we can improve this further. Please use the following 
tunables as suggested below:


For write-behind -
option cache-size 16MB

For read-ahead -
option page-count 16

For io-cache -
option cache-size 64MB

You will need to place these lines in the client volume file, restart 
the server and remount the volume on the clients.
Your client (fuse) volume file sections will look like below (of course, 
with change in the volume name) -


volume testvol-write-behind
type performance/write-behind
option cache-size 16MB
subvolumes testvol-client-0
end-volume

volume testvol-read-ahead
type performance/read-ahead
option page-count 16
subvolumes testvol-write-behind
end-volume

volume testvol-io-cache
type performance/io-cache
option cache-size 64MB
subvolumes testvol-read-ahead
end-volume

Run your copy command with these tunables. For now, let's keep the 
default setting for trickling writes, which is 'ENABLED'. You can simply 
remove this tunable from the volume file to get the default behaviour.


Pavan




Re: [Gluster-users] gluster client performance

2011-07-27 Thread Pavan T C

[..]



I don't know why my writes are so slow compared to reads. Let me know
if you're able to get better write speeds with the newer version of
gluster and any of the configurations (if they apply) that I've
posted. It might compel me to upgrade.



From your documentation of nfsSpeedTest, I see that the reads can 
happen either via dd or via perl's sysread. I'm not sure if one is 
better than the other.


Secondly - Are you doing direct IO on the backend XFS ? If not, try it 
with direct IO so that you are not misled by the memory situation in the 
system at the time of your test. It will give a clearer picture of what 
your backend is capable of.


Your test is such that you write a file and immediately read the same 
file back. It is possible that a good chunk of it is cached on the 
backend. After the write, do a flush of the filesystem caches by using:

echo 3 > /proc/sys/vm/drop_caches. Sleep for a while. Then do the read.
Or, as suggested earlier, resort to direct IO while testing the backend FS.
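A sketch of that flush-then-read sequence as a script (the file name and the tiny sizes are assumptions; the drop_caches write needs root, so it is skipped when not permitted, in which case iflag=direct is the alternative):

```shell
#!/bin/sh
# Write a file, flush it out of the page cache, then read it back so the
# read measures the disk rather than memory.  Sizes here are tiny (1MB)
# for illustration only.
FILE=$(mktemp)
dd if=/dev/zero of="$FILE" bs=64K count=16 2>/dev/null   # write phase
sync                                                     # push dirty pages to disk
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches                    # drop page/dentry/inode caches
    sleep 2                                              # let the system settle
fi
dd if="$FILE" of=/dev/null bs=64K 2>/dev/null            # read phase, now uncached
BYTES=$(wc -c < "$FILE")
rm -f "$FILE"
```

On a box where you cannot get root, replace the drop_caches step with iflag=direct on the read.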

Pavan


Re: [Gluster-users] gluster client performance

2011-07-27 Thread Pavan T C

But that still does not explain why you should get as low as 50 MB/s for
a single stream single client write when the backend can support direct
IO throughput of more than 700 MB/s.

On the server, can you collect:

# iostat -xcdh 2 > iostat.log.brickXX

for the duration of the dd command ?

and

# strace -f -o stracelog.server -tt -T -e trace=write,writev -p <glusterfsd pid>
(again for the duration of the dd command)


Hi John,

A small change in the request. I hope you have not already spent time on 
this. The strace command should be:


strace -f -o stracelog.server -tt -T -e trace=pwrite -p <glusterfsd pid>

Thanks,
Pavan



With the above, I want to measure the delay between the writes coming in
from the client. iostat will describe the IO scenario on the server.
Once the exercise is done, please attach the iostat.log.brickXX and
stracelog.server.
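A sketch of bracketing the dd run with a background collector, in the spirit of the request above (the log name is an assumption, a /proc/diskstats sampler stands in for iostat so the sketch runs without the sysstat package, and the strace line would be started the same way):

```shell
#!/bin/sh
# Start a collector in the background, run the workload, then stop the
# collector.  Replace the sampler with "iostat -xcdh 2" and the stand-in
# dd with the real client dd on an actual test.
LOG=$(mktemp)
( while :; do cat /proc/diskstats 2>/dev/null || echo sample; sleep 2; done ) > "$LOG" &
COLLECTOR=$!
dd if=/dev/zero of=/dev/null bs=128K count=8192 2>/dev/null   # stand-in workload
sleep 1                                                       # let a sample land
kill "$COLLECTOR" 2>/dev/null || true
wait "$COLLECTOR" 2>/dev/null || true
```

The same bracketing works for the strace capture: start it before the dd, kill it afterwards, and attach both logs.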




Re: [Gluster-users] gluster client performance

2011-07-27 Thread John Lalande

On 07/27/2011 12:53 AM, Pavan T C wrote:




2. What is the disk bandwidth you are getting on the local filesystem
on a given storage node ? I mean, pick any of the 10 storage servers
dedicated for Gluster Storage and perform a dd as below:

Seeing an average of 740 MB/s write, 971 MB/s read.


I presume you did this in one of the /data-brick*/export directories ?
Command output with the command line would have been clearer, but that's 
fine.

That is correct -- we used /data-brick1/export.




3. What is the IB bandwidth that you are getting between the compute
node and the glusterfs storage node? You can run the tool rdma_bw to
get the details:

30407: Bandwidth peak (#0 to #976): 2594.58 MB/sec
30407: Bandwidth average: 2593.62 MB/sec
30407: Service Demand peak (#0 to #976): 978 cycles/KB
30407: Service Demand Avg : 978 cycles/KB



This looks like a DDR connection. ibv_devinfo -v will tell a better 
story about the line width and speed of your infiniband connection.

QDR should have a much higher bandwidth.
But that still does not explain why you should get as low as 50 MB/s 
for a single stream single client write when the backend can support 
direct IO throughput of more than 700 MB/s.
ibv_devinfo shows 4x for active width and 10 Gbps for active speed. Not 
sure why we're not seeing better bandwidth with rdma_bw -- we'll have to 
troubleshoot that some more -- but I agree, it shouldn't be the limiting 
factor as far the Gluster client speed problems we're seeing.


I'll send you the log files you requested off-list.

John

--



John Lalande
University of Wisconsin-Madison
Space Science & Engineering Center
1225 W. Dayton Street, Room 439, Madison, WI 53706
608-263-2268 / john.lala...@ssec.wisc.edu







Re: [Gluster-users] gluster client performance

2011-07-26 Thread Pavan T C

On Tuesday 26 July 2011 03:42 AM, John Lalande wrote:

Hi-

I'm new to Gluster, but am trying to get it set up on a new compute
cluster we're building. We picked Gluster for one of our cluster file
systems (we're also using Lustre for fast scratch space), but the
Gluster performance has been so bad that I think maybe we have a
configuration problem -- perhaps we're missing a tuning parameter that
would help, but I can't find anything in the Gluster documentation --
all the tuning info I've found seems geared toward Gluster 2.x.

For some background, our compute cluster has 64 compute nodes. The
gluster storage pool has 10 Dell PowerEdge R515 servers, each with 12 x
2 TB disks. We have another 16 Dell PowerEdge R515s used as Lustre
storage servers. The compute and storage nodes are all connected via QDR
Infiniband. Both Gluster and Lustre are set to use RDMA over Infiniband.
We are using OFED version 1.5.2-20101219, Gluster 3.2.2 and CentOS 5.5
on both the compute and storage nodes.


Hi John,

I would need some more information about your setup to estimate the 
performance you should get with your gluster setup.


1. Can you provide the details of how disks are connected to the storage 
boxes ? Is it via FC ? What raid configuration is it using (if at all any) ?


2. What is the disk bandwidth you are getting on the local filesystem on 
a given storage node ? I mean, pick any of the 10 storage servers 
dedicated for Gluster Storage and perform a dd as below:


Write bandwidth measurement:
dd if=/dev/zero of=/export_directory/10g_file bs=128K count=81920 oflag=direct


Read bandwidth measurement:
dd if=/export_directory/10g_file of=/dev/null bs=128K count=81920 iflag=direct


[The above command is doing a direct IO of 10GB via your backend FS - 
ext4/xfs.]
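The same measurement as a small script, for convenience (the scratch directory and the tiny size are assumptions; on a real brick, point it at the export directory, use the full count=81920, and keep oflag=direct/iflag=direct, which are omitted here only because some scratch filesystems reject O_DIRECT):

```shell
#!/bin/sh
# Write-then-read bandwidth check with 128K blocks.  COUNT=64 keeps the
# file at 8MB for illustration; use COUNT=81920 for the real 10GB run.
DIR=$(mktemp -d)
COUNT=64
dd if=/dev/zero of="$DIR/10g_file" bs=128K count=$COUNT 2>&1 | tail -n 1   # write
dd if="$DIR/10g_file" of=/dev/null bs=128K 2>&1 | tail -n 1                # read
SIZE=$(wc -c < "$DIR/10g_file")
rm -rf "$DIR"
```

dd prints the throughput summary on its last stderr line, which is all you need to report back.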


3. What is the IB bandwidth that you are getting between the compute 
node and the glusterfs storage node? You can run the tool rdma_bw to 
get the details:


On the server, run:
# rdma_bw -b
[ -b measures bi-directional bandwidth]

On the compute node, run,
# rdma_bw -b server

[If you have not already installed it, rdma_bw is available via -
http://mirror.centos.org/centos/5/os/x86_64/CentOS/perftest-1.2.3-1.el5.x86_64.rpm]

Lets start with this, and I will ask for more if necessary.

Pavan



Oddly, it seems like there's some sort of bottleneck on the client side
-- for example, we're only seeing about 50 MB/s write throughput from a
single compute node when writing a 10GB file. But, if we run multiple
simultaneous writes from multiple compute nodes to the same Gluster
volume, we get 50 MB/s from each compute node. However, running multiple
writes from the same compute node does not increase throughput. The
compute nodes have 48 cores and 128 GB RAM, so I don't think the issue
is with the compute node hardware.

With Lustre, on the same hardware, with the same version of OFED, we're
seeing write throughput on that same 10 GB file as follows: 476 MB/s
single stream write from a single compute node and aggregate performance
of more like 2.4 GB/s if we run simultaneous writes. That leads me to
believe that we don't have a problem with RDMA, otherwise Lustre, which
is also using RDMA, should be similarly affected.

We have tried both xfs and ext4 for the backend file system on the
Gluster storage nodes (we're currently using ext4). We went with
distributed (not distributed striped) for the Gluster volume -- the
thought was that if there was a catastrophic failure of one of the
storage nodes, we'd only lose the data on that node; presumably with
distributed striped you'd lose any data striped across that volume,
unless I have misinterpreted the documentation.

So ... what's expected/normal throughput for Gluster over QDR IB to a
relatively large storage pool (10 servers / 120 disks)? Does anyone have
suggested tuning tips for improving performance?

Thanks!

John





Re: [Gluster-users] gluster client performance

2011-07-26 Thread Sabuj Pattanayek
 3. What is the IB bandwidth that you are getting between the compute node
 and the glusterfs storage node? You can run the tool rdma_bw to get the
 details:

This is what I got on bidirectional:

2638: Bandwidth peak (#0 to #785): 6052.22 MB/sec
2638: Bandwidth average: 6050.02 MB/sec
2638: Service Demand peak (#0 to #785): 364 cycles/KB
2638: Service Demand Avg  : 364 cycles/KB


Re: [Gluster-users] gluster client performance

2011-07-26 Thread John Lalande

Thanks for your help, Pavan!


Hi John,

I would need some more information about your setup to estimate the 
performance you should get with your gluster setup.


1. Can you provide the details of how disks are connected to the 
storage boxes ? Is it via FC ? What raid configuration is it using (if 
at all any) ?
The disks are 2TB near-line SAS direct attached via a PERC H700 
controller (the Dell PowerEdge R515 has 12 3.5" drive bays). They are in 
a RAID6 config, exported as a single volume, that's split into 3 
equal-size partitions (due to ext4's (well, e2fsprogs') 16 TB limit).


2. What is the disk bandwidth you are getting on the local filesystem 
on a given storage node ? I mean, pick any of the 10 storage servers 
dedicated for Gluster Storage and perform a dd as below:

Seeing an average of 740 MB/s write, 971 MB/s read.



3. What is the IB bandwidth that you are getting between the compute 
node and the glusterfs storage node? You can run the tool rdma_bw to 
get the details:

30407: Bandwidth peak (#0 to #976): 2594.58 MB/sec
30407: Bandwidth average: 2593.62 MB/sec
30407: Service Demand peak (#0 to #976): 978 cycles/KB
30407: Service Demand Avg  : 978 cycles/KB


Here's our gluster config:

# gluster volume info data

Volume Name: data
Type: Distribute
Status: Started
Number of Bricks: 30
Transport-type: rdma
Bricks:
Brick1: data-3-1-infiniband.infiniband:/data-brick1/export
Brick2: data-3-3-infiniband.infiniband:/data-brick1/export
Brick3: data-3-5-infiniband.infiniband:/data-brick1/export
Brick4: data-3-7-infiniband.infiniband:/data-brick1/export
Brick5: data-3-9-infiniband.infiniband:/data-brick1/export
Brick6: data-3-11-infiniband.infiniband:/data-brick1/export
Brick7: data-3-13-infiniband.infiniband:/data-brick1/export
Brick8: data-3-15-infiniband.infiniband:/data-brick1/export
Brick9: data-3-17-infiniband.infiniband:/data-brick1/export
Brick10: data-3-19-infiniband.infiniband:/data-brick1/export
Brick11: data-3-1-infiniband.infiniband:/data-brick2/export
Brick12: data-3-3-infiniband.infiniband:/data-brick2/export
Brick13: data-3-5-infiniband.infiniband:/data-brick2/export
Brick14: data-3-7-infiniband.infiniband:/data-brick2/export
Brick15: data-3-9-infiniband.infiniband:/data-brick2/export
Brick16: data-3-11-infiniband.infiniband:/data-brick2/export
Brick17: data-3-13-infiniband.infiniband:/data-brick2/export
Brick18: data-3-15-infiniband.infiniband:/data-brick2/export
Brick19: data-3-17-infiniband.infiniband:/data-brick2/export
Brick20: data-3-19-infiniband.infiniband:/data-brick2/export
Brick21: data-3-1-infiniband.infiniband:/data-brick3/export
Brick22: data-3-3-infiniband.infiniband:/data-brick3/export
Brick23: data-3-5-infiniband.infiniband:/data-brick3/export
Brick24: data-3-7-infiniband.infiniband:/data-brick3/export
Brick25: data-3-9-infiniband.infiniband:/data-brick3/export
Brick26: data-3-11-infiniband.infiniband:/data-brick3/export
Brick27: data-3-13-infiniband.infiniband:/data-brick3/export
Brick28: data-3-15-infiniband.infiniband:/data-brick3/export
Brick29: data-3-17-infiniband.infiniband:/data-brick3/export
Brick30: data-3-19-infiniband.infiniband:/data-brick3/export
Options Reconfigured:
nfs.disable: on

--



John Lalande
University of Wisconsin-Madison
Space Science & Engineering Center
1225 W. Dayton Street, Room 439, Madison, WI 53706
608-263-2268 / john.lala...@ssec.wisc.edu






Re: [Gluster-users] gluster client performance

2011-07-26 Thread Pavan T C

On Tuesday 26 July 2011 09:24 PM, John Lalande wrote:

Thanks for your help, Pavan!


Hi John,

I would need some more information about your setup to estimate the
performance you should get with your gluster setup.

1. Can you provide the details of how disks are connected to the
storage boxes ? Is it via FC ? What raid configuration is it using (if
at all any) ?

The disks are 2TB near-line SAS direct attached via a PERC H700
controller (the Dell PowerEdge R515 has 12 3.5" drive bays). They are in
a RAID6 config, exported as a single volume, that's split into 3
equal-size partitions (due to ext4's (well, e2fsprogs') 16 TB limit).


2. What is the disk bandwidth you are getting on the local filesystem
on a given storage node ? I mean, pick any of the 10 storage servers
dedicated for Gluster Storage and perform a dd as below:

Seeing an average of 740 MB/s write, 971 MB/s read.


I presume you did this in one of the /data-brick*/export directories ?
Command output with the command line would have been clearer, but that's 
fine.






3. What is the IB bandwidth that you are getting between the compute
node and the glusterfs storage node? You can run the tool rdma_bw to
get the details:

30407: Bandwidth peak (#0 to #976): 2594.58 MB/sec
30407: Bandwidth average: 2593.62 MB/sec
30407: Service Demand peak (#0 to #976): 978 cycles/KB
30407: Service Demand Avg : 978 cycles/KB


This looks like a DDR connection. ibv_devinfo -v will tell a better 
story about the line width and speed of your infiniband connection.

QDR should have a much higher bandwidth.

But that still does not explain why you should get as low as 50 MB/s for 
a single stream single client write when the backend can support direct 
IO throughput of more than 700 MB/s.


On the server, can you collect:

# iostat -xcdh 2 > iostat.log.brickXX

for the duration of the dd command ?

and

# strace -f -o stracelog.server -tt -T -e trace=write,writev -p <glusterfsd pid>

(again for the duration of the dd command)

With the above, I want to measure the delay between the writes coming in 
from the client. iostat will describe the IO scenario on the server.
Once the exercise is done, please attach the iostat.log.brickXX and 
stracelog.server.


Pavan




Here's our gluster config:

# gluster volume info data

Volume Name: data
Type: Distribute
Status: Started
Number of Bricks: 30
Transport-type: rdma
Bricks:
Brick1: data-3-1-infiniband.infiniband:/data-brick1/export
Brick2: data-3-3-infiniband.infiniband:/data-brick1/export
Brick3: data-3-5-infiniband.infiniband:/data-brick1/export
Brick4: data-3-7-infiniband.infiniband:/data-brick1/export
Brick5: data-3-9-infiniband.infiniband:/data-brick1/export
Brick6: data-3-11-infiniband.infiniband:/data-brick1/export
Brick7: data-3-13-infiniband.infiniband:/data-brick1/export
Brick8: data-3-15-infiniband.infiniband:/data-brick1/export
Brick9: data-3-17-infiniband.infiniband:/data-brick1/export
Brick10: data-3-19-infiniband.infiniband:/data-brick1/export
Brick11: data-3-1-infiniband.infiniband:/data-brick2/export
Brick12: data-3-3-infiniband.infiniband:/data-brick2/export
Brick13: data-3-5-infiniband.infiniband:/data-brick2/export
Brick14: data-3-7-infiniband.infiniband:/data-brick2/export
Brick15: data-3-9-infiniband.infiniband:/data-brick2/export
Brick16: data-3-11-infiniband.infiniband:/data-brick2/export
Brick17: data-3-13-infiniband.infiniband:/data-brick2/export
Brick18: data-3-15-infiniband.infiniband:/data-brick2/export
Brick19: data-3-17-infiniband.infiniband:/data-brick2/export
Brick20: data-3-19-infiniband.infiniband:/data-brick2/export
Brick21: data-3-1-infiniband.infiniband:/data-brick3/export
Brick22: data-3-3-infiniband.infiniband:/data-brick3/export
Brick23: data-3-5-infiniband.infiniband:/data-brick3/export
Brick24: data-3-7-infiniband.infiniband:/data-brick3/export
Brick25: data-3-9-infiniband.infiniband:/data-brick3/export
Brick26: data-3-11-infiniband.infiniband:/data-brick3/export
Brick27: data-3-13-infiniband.infiniband:/data-brick3/export
Brick28: data-3-15-infiniband.infiniband:/data-brick3/export
Brick29: data-3-17-infiniband.infiniband:/data-brick3/export
Brick30: data-3-19-infiniband.infiniband:/data-brick3/export
Options Reconfigured:
nfs.disable: on





Re: [Gluster-users] gluster client performance

2011-07-25 Thread Sabuj Pattanayek
Hi,

Here's our QDR IB gluster setup:

http://piranha.structbio.vanderbilt.edu

We're still using gluster 3.0 on all our servers and clients as well
as CENTOS5.6 kernels and ofed 1.4. To simulate a single stream I use
this nfsSpeedTest script I wrote :

http://code.google.com/p/nfsspeedtest/

From a single QDR IB connected client to our /pirstripe directory
which is a stripe of the gluster storage servers, this is the
performance I get (note: use a file size > amount of RAM on client and
server systems, 13GB in this case):

4k block size :

111 pir4:/pirstripe% /sb/admin/scripts/nfsSpeedTest -s 13g -y
pir4: Write test (dd): 142.281 MB/s 1138.247 mbps 93.561 seconds
pir4: Read test (dd): 274.321 MB/s 2194.570 mbps 48.527 seconds

testing from 8k - 128k block size on the dd, best performance was
achieved at 64k block sizes:

114 pir4:/pirstripe% /sb/admin/scripts/nfsSpeedTest -s 13g -b 64k -y
pir4: Write test (dd): 213.344 MB/s 1706.750 mbps 62.397 seconds
pir4: Read test (dd): 955.328 MB/s 7642.620 mbps 13.934 seconds

This is to the /pirdist directories which are mounted in distribute
mode (file is written to only one of the gluster servers) :

105 pir4:/pirdist% /sb/admin/scripts/nfsSpeedTest -s 13g -y
pir4: Write test (dd): 182.410 MB/s 1459.281 mbps 72.978 seconds
pir4: Read test (dd): 244.379 MB/s 1955.033 mbps 54.473 seconds

106 pir4:/pirdist% /sb/admin/scripts/nfsSpeedTest -s 13g -y -b 64k
pir4: Write test (dd): 204.297 MB/s 1634.375 mbps 65.160 seconds
pir4: Read test (dd): 340.427 MB/s 2723.419 mbps 39.104 seconds

For reference/control, here's the same test writing straight to the
XFS filesystem on one of the gluster storage nodes:

[sabujp@gluster1 tmp]$ /sb/admin/scripts/nfsSpeedTest -s 13g -y
gluster1: Write test (dd): 398.971 MB/s 3191.770 mbps 33.366 seconds
gluster1: Read test (dd): 234.563 MB/s 1876.501 mbps 56.752 seconds

[sabujp@gluster1 tmp]$ /sb/admin/scripts/nfsSpeedTest -s 13g -y -b 64k
gluster1: Write test (dd): 442.251 MB/s 3538.008 mbps 30.101 seconds
gluster1: Read test (dd): 219.708 MB/s 1757.660 mbps 60.590 seconds
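For anyone without the nfsSpeedTest script, the 8k-128k block-size sweep used above can be approximated with plain dd; a sketch (the scratch directory and the tiny total size are assumptions; on a real run, point it at the mount and use a file larger than RAM, as noted above):

```shell
#!/bin/sh
# Sweep dd write block sizes from 8K to 128K over the same total size and
# let dd report throughput for each.  MB=8 keeps this quick; use ~13000
# (13GB) for a real, cache-defeating test.
DIR=$(mktemp -d)
MB=8
for BS_KB in 8 16 32 64 128; do
    COUNT=$((MB * 1024 / BS_KB))            # blocks needed for MB megabytes
    printf 'bs=%sK: ' "$BS_KB"
    dd if=/dev/zero of="$DIR/sweep" bs=${BS_KB}K count=$COUNT 2>&1 | tail -n 1
done
rm -rf "$DIR"
```

The per-size summary lines make it easy to spot the knee (64K in the results above).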

The read test seems to scale linearly with the # of storage servers
(almost 1GB/s!). Interestingly, the /pirdist read test at 64k block
size was 120MB/s faster than the read test straight from XFS, however,
it could have been that gluster1 was busy and when I read from
/pirdist the file was actually being read from one of the other 4 less
busy storage nodes.

Here's our storage node setup (many of these settings may not apply to v3.2) :



volume posix-stripe
  type storage/posix
  option directory /export/gluster1/stripe
end-volume

volume posix-distribute
type storage/posix
option directory /export/gluster1/distribute
end-volume

volume locks
  type features/locks
  subvolumes posix-stripe
end-volume

volume locks-dist
  type features/locks
  subvolumes posix-distribute
end-volume

volume iothreads
  type performance/io-threads
  option thread-count 16
  subvolumes locks
end-volume

volume iothreads-dist
  type performance/io-threads
  option thread-count 16
  subvolumes locks-dist
end-volume

volume server
  type protocol/server
  option transport-type ib-verbs
  option auth.addr.iothreads.allow 10.2.178.*
  option auth.addr.iothreads-dist.allow 10.2.178.*
  option auth.addr.locks.allow 10.2.178.*
  option auth.addr.posix-stripe.allow 10.2.178.*
  subvolumes iothreads iothreads-dist locks posix-stripe
end-volume



Here's our stripe client setup :



volume client-stripe-1
  type protocol/client
  option transport-type ib-verbs
  option remote-host gluster1
  option remote-subvolume iothreads
end-volume

volume client-stripe-2
  type protocol/client
  option transport-type ib-verbs
  option remote-host gluster2
  option remote-subvolume iothreads
end-volume

volume client-stripe-3
  type protocol/client
  option transport-type ib-verbs
  option remote-host gluster3
  option remote-subvolume iothreads
end-volume

volume client-stripe-4
  type protocol/client
  option transport-type ib-verbs
  option remote-host gluster4
  option remote-subvolume iothreads
end-volume

volume client-stripe-5
  type protocol/client
  option transport-type ib-verbs
  option remote-host gluster5
  option remote-subvolume iothreads
end-volume

volume readahead-gluster1
  type performance/read-ahead
  option page-count 4   # 2 is default
  option force-atime-update off # default is off
  subvolumes client-stripe-1
end-volume

volume readahead-gluster2
  type performance/read-ahead
  option page-count 4   # 2 is default
  option force-atime-update off # default is off
  subvolumes client-stripe-2
end-volume

volume readahead-gluster3
  type performance/read-ahead
  option page-count 4   # 2 is default
  option force-atime-update off # default is off
  subvolumes client-stripe-3
end-volume

volume readahead-gluster4
  type performance/read-ahead
  option page-count 4   # 2 is default
  option force-atime-update off # default is off
  subvolumes client-stripe-4
end-volume

Re: [Gluster-users] Gluster client 32bit

2010-11-19 Thread Anand Avati
GlusterFS code is 32-bit clean. It should work just fine (on GNU/Linux).
However we do not QA our releases on 32bit machines. Hence we neither
release 32bit binaries nor support it officially. This may change in the
future, but no promises.

Avati

On Tue, Nov 16, 2010 at 1:04 PM, Christian Fischer 
christian.fisc...@easterngraphics.com wrote:

 Hmm, seems this thread is dead now. That's a pity.

 No statement from the developers about usability of glusterfs client on
 32bit
 systems. But this was probably discussed in earlier threads.

 I think I'll use NFS with UCARP for the production environment.
 What about the performance loss if using NFS instead of GlusterFS, any
 experiences?


 On Monday 15 November 2010 14:41:23 Christian Fischer wrote:
  On Monday 15 November 2010 14:27:34 Stefano Baronio wrote:
   Yes, please, share it with us.
   I've succesfully compiled the rpm packages, but the client is not
 giving
   any errors when it is not able to connect to a glusterfs share...
 
  That's normal, the native client always exits true (as far as I've seen).
  That is an issue of cleanup_and_exit() if debug is off.
 
  Christian
 
   Thanks
   Stefano
  
  
   2010/11/13 Dennis Schafroth den...@schafroth.dk
  
On 12/11/2010, at 18.51, Ken Bigelow wrote:
 We have all 32bit server / clients for Gluster. We did have to
 compile it from source but so far we have had no problems at all.

 A few things had to be tweaked inside the configuration files like
 io thread count and whatnot but in the end it seems to be working
 fine from what we can tell.
   
Can you share what you have done? I am running a test on small 32 bit
boxes
   
cheers,
   
:-Dennis Schafroth
   


Re: [Gluster-users] Gluster client 32bit

2010-11-17 Thread Stephan von Krawczynski
On Tue, 16 Nov 2010 16:54:07 -0800
Craig Carl cr...@gluster.com wrote:


 
 Stephan -
 Based on your feedback, and from other members of the community we have 
 opened discussions internally around adding support for a 32-bit client. 
 We have not made a decision at this point, and I can't make any 
 guarantees but I will do my best to get it added to the next version of 
 the product (3.1.2, (3.1.1 is feature locked)).
 On the sync question you brought up that is only an issue in the rare 
 case of split brain (if I understand the scenario you've brought up). 
 Split brain is a difficult problem with no answer right now. Gluster 3.1 
 added much more aggressive locking to reduce the possibility of split 
 brain. The process you described as "the daemons are talking with 
 each other about whatever" will also reduce the likelihood of split 
 brain by eliminating the possibility that client or server vol files are 
 not the same across the entire cluster, the cause of a vast majority of 
 split brain issues with Gluster.
 Auto heal is slow, we have some processes along the lines you are 
 thinking, please let me know if these address some of your ideas around 
 stat -
 
 #cd <gluster mount>
 #find ./ -type f -exec stat <backend device>'{}' \; this will heal only 
 the files on that device.
 
 If you know when you had a failure you want to recover from this is even 
 faster -
 
 #cd <gluster mount>
 #find ./ -type f -mmin -<minutes since failure + some extra> -exec stat 
 <backend device>'{}' \; this will heal only the files on that device 
 changed x or more minutes ago.
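A sketch of those two heal triggers as one script (the mount point, the time window, and the placeholder file are assumptions; point MOUNT at the real gluster mount in practice):

```shell
#!/bin/sh
# Walk a gluster mount and stat each file to trigger self-heal; pass a
# minute count as the second argument to restrict the walk to recently
# modified files.  Defaults to a scratch directory so the sketch runs
# anywhere.
MOUNT=${1:-$(mktemp -d)}
MINUTES=${2:-}                     # empty = walk everything
touch "$MOUNT/example"             # placeholder so the walk finds a file
cd "$MOUNT" || exit 1
if [ -n "$MINUTES" ]; then
    find . -type f -mmin -"$MINUTES" -exec stat {} \; > /dev/null
else
    find . -type f -exec stat {} \; > /dev/null
fi
STATUS=$?
```

Running it from cron against the mount gives the "cronjob instead of auto-heal" behaviour discussed below.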
 
 
 Thanks,
 
 Craig

Hello Craig,

let me repeat a very old suggestion (in fact I believe it was before your time
at Gluster). I suggested creating a module (for the server) that does only one
thing: maintain a special file to which a filename (with path) is added
when the server sets acls meaning the file is currently not in sync, and
from which the filename is removed again when the acls mean it is back in
sync. Let's say this special file is named
/.glusterfs-<server-ip> (in the root of the mounted glusterfs). That would
allow you to see _all_ files on _all_ servers not in sync from the
client's view. All you would have to do for healing is stat only these file
lists, and you are done. You could simply drop auto-healing, because you could
as well do a cronjob for that; as there is no find involved, the whole method
uses virtually no resources on the servers and clients.
You have full control: you know which files on which servers are out of sync.
This solves all possible questions around replication.

Regards,
Stephan




Re: [Gluster-users] Gluster client 32bit

2010-11-16 Thread Stefano Baronio
Hi Martin,
  the XenServer Dom0 is 32bit whilst the hypervisor is 64 bit.
You need to know it when you install third-party sw on the host.
http://forums.citrix.com/thread.jspa?threadID=269924&tstart=0

So I need the 32bit compiled version to be able to mount glusterfs directly
from the XenServer host.

Cheers
Stefano


2010/11/16 Deadpan110 deadpan...@gmail.com

 My home testing environment I also use XenServer (again, Citrix - with
 a Centos minimalistic core OS) - even though the Dom0 is 64bit, in any
 Xen setup (maybe even for other virtuali[s\z]ation solutions),
 performance is better using 32bit VM's (DomU).

 My production environment comprises of Xen virtual machines (not
 XenServer, but still Xen), scattered around a remote datacenter.

 I too will be sharing my experiences as GlusterFS offers exactly what
 I need and would like to deploy.

 Martin



 On 16 November 2010 20:39, Stefano Baronio stefano.baro...@gmail.com
 wrote:
  From my point of view, 64 bit on server side is easy to handle but the
  client side can have different needs and limitations.
  For example, we are using XenServer from Citrix; the Dom0 is taken from a
  CentOS 5 distro and it is 32bit. I cannot change that, because it is a
  Citrix design choice and there might be lots of these situations around.
  Sorry but I can't code any patches..
  Anyway, I will share what our experience will be with 32bit client.
 
  Cheers
  Stefano
 
 
  2010/11/16 Bernard Li bern...@vanhpc.org
 
  Hi Christian:
 
  On Tue, Nov 16, 2010 at 1:34 AM, Christian Fischer
  christian.fisc...@easterngraphics.com wrote:
 
   No statement from the developers about usability of glusterfs client
 on
  32bit
   systems. But this was probably discussed in earlier threads.
 
  I believe the official comment is that Gluster is not going to support
  32-bit systems.  However, it doesn't mean that the community cannot
  support it.  If we find bugs and can code up patches, we should still
  file a bug and submit the patches and hopefully they will be checked
  into the official repository.
 
  Cheers,
 
  Bernard
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 
 
 
 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster client 32bit

2010-11-16 Thread Deadpan110
Ty Stefano

I assumed that because the hypervisor was 64bit, dom0 would be too...
I had never checked!

Many thanks for pointing that out.

Martin

On 17 November 2010 00:06, Stefano Baronio stefano.baro...@gmail.com wrote:
 Hi Martin,
   the XenServer Dom0 is 32bit whilst the hypervisor is 64 bit.
 You need to know this when you install third-party software on the host.
 http://forums.citrix.com/thread.jspa?threadID=269924&tstart=0

 So I need the 32bit compiled version to be able to mount glusterfs directly
 from the XenServer host.

 Cheers
 Stefano


 2010/11/16 Deadpan110 deadpan...@gmail.com

 My home testing environment I also use XenServer (again, Citrix - with
 a Centos minimalistic core OS) - even though the Dom0 is 64bit, in any
 Xen setup (maybe even for other virtuali[s\z]ation solutions),
 performance is better using 32bit VM's (DomU).

 My production environment comprises of Xen virtual machines (not
 XenServer, but still Xen), scattered around a remote datacenter.

 I too will be sharing my experiences as GlusterFS offers exactly what
 I need and would like to deploy.

 Martin



 On 16 November 2010 20:39, Stefano Baronio stefano.baro...@gmail.com
 wrote:
  From my point of view, 64 bit on server side is easy to handle but the
  client side can have different needs and limitations.
  For example, we are using XenServer from Citrix, the Dom0 is taken from
  a
  CentOS 5 distro and it is 32bit. I cannot change that, because is a
  Citrix
  design choice and there might be lots of these situations around.
  Sorry but I can't code any patches..
  Anyway, I will share what our experience will be with 32bit client.
 
  Cheers
  Stefano
 
 
  2010/11/16 Bernard Li bern...@vanhpc.org
 
  Hi Christian:
 
  On Tue, Nov 16, 2010 at 1:34 AM, Christian Fischer
  christian.fisc...@easterngraphics.com wrote:
 
   No statement from the developers about usability of glusterfs client
   on
  32bit
   systems. But this was probably discussed in earlier threads.
 
  I believe the official comment is that Gluster is not going to support
  32-bit systems.  However, it doesn't mean that the community cannot
  support it.  If we find bugs and can code up patches, we should still
  file a bug and submit the patches and hopefully they will be checked
  into the official repository.
 
  Cheers,
 
  Bernard
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 
 
 
 


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster client 32bit

2010-11-16 Thread Jeff Anderson-Lee

On 11/16/2010 05:36 AM, Stefano Baronio wrote:

Hi Martin,
   the XenServer Dom0 is 32bit whilst the hypervisor is 64 bit.
You need to know this when you install third-party software on the host.
http://forums.citrix.com/thread.jspa?threadID=269924&tstart=0

So I need the 32bit compiled version to be able to mount glusterfs directly
from the XenServer host.
   
The built-in NFS module is typically as fast or faster than using the 
fuse wrapper on the client side.  So the best way to support 32-bit 
clients is likely via NFS.
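For illustration only (hypothetical server and volume names): Gluster 3.1's built-in NFS server speaks NFSv3 over TCP, so a 32-bit client could mount a volume with an fstab entry along these lines; `nolock` is often suggested because locking over the early Gluster NFS server was limited.

```
# /etc/fstab entry - hypothetical host "server1" and volume "test-volume";
# Gluster's built-in NFS server is NFSv3/TCP, and locking over it is limited
server1:/test-volume  /mnt/gluster  nfs  vers=3,proto=tcp,nolock  0 0
```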

Cheers
Stefano


2010/11/16 Deadpan110 deadpan...@gmail.com

   

My home testing environment I also use XenServer (again, Citrix - with
a Centos minimalistic core OS) - even though the Dom0 is 64bit, in any
Xen setup (maybe even for other virtuali[s\z]ation solutions),
performance is better using 32bit VM's (DomU).

My production environment comprises of Xen virtual machines (not
XenServer, but still Xen), scattered around a remote datacenter.

I too will be sharing my experiences as GlusterFS offers exactly what
I need and would like to deploy.

Martin



On 16 November 2010 20:39, Stefano Baronio stefano.baro...@gmail.com
wrote:
 

From my point of view, 64 bit on server side is easy to handle but the
client side can have different needs and limitations.
For example, we are using XenServer from Citrix; the Dom0 is taken from a
CentOS 5 distro and it is 32bit. I cannot change that, because it is a Citrix
design choice and there might be lots of these situations around.
Sorry but I can't code any patches..
Anyway, I will share what our experience will be with 32bit client.

Cheers
Stefano


2010/11/16 Bernard Li bern...@vanhpc.org

   

Hi Christian:

On Tue, Nov 16, 2010 at 1:34 AM, Christian Fischer
christian.fisc...@easterngraphics.com  wrote:

 

No statement from the developers about usability of glusterfs client on 32bit
systems. But this was probably discussed in earlier threads.
   

I believe the official comment is that Gluster is not going to support
32-bit systems.  However, it doesn't mean that the community cannot
support it.  If we find bugs and can code up patches, we should still
file a bug and submit the patches and hopefully they will be checked
into the official repository.

Cheers,

Bernard
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

 



   
 
   



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
   


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster client 32bit

2010-11-16 Thread Stephan von Krawczynski
On Tue, 16 Nov 2010 08:51:17 -0800
Jeff Anderson-Lee jo...@eecs.berkeley.edu wrote:

 On 11/16/2010 05:36 AM, Stefano Baronio wrote:
  Hi Martin,
 the XenServer Dom0 is 32bit whilst the hypervisor is 64 bit.
  You need to know this when you install third-party software on the host.
  http://forums.citrix.com/thread.jspa?threadID=269924&tstart=0
 
  So I need the 32bit compiled version to be able to mount glusterfs directly
  from the XenServer host.
 
 The built-in NFS module is typically as fast or faster than using the 
 fuse wrapper on the client side.  So the best way to support 32-bit 
 clients is likely via NFS.

NFS is really something completely different. And - what is also ignored - the
infrastructure usage is completely different when using nfs. nfs does not
replicate at the client side, which means that the data paths explicitly built
for client replication are useless for nfs. Using the nfs translator leads to
server-server replication. For that case a data path exclusively used for this
server traffic would be best (because it cannot interfere with 64 bit client
replication).
So if you happen to upgrade a 2.0.9 setup with 64 bit servers and 64 as well
as 32 bit clients you have to redesign the network for best performance _and_
glusterfsd on the servers have to use the shortest data path for the nfss'
data replication (which I don't know if they are able to do that at all).
In other words: whereas the setup in 2.0.9 was clear and simple, the very same
usage case in 3.X is a _mess_.
Obviously nobody really thought about that - unbelievable for me as it is
really obvious. But I got accustomed to that situation because up to the
current day there is no solution for another most obvious problem: which files
are not in sync in a replication setup? There is no trivial answer to this
question I already brought up in early 2.X development phase...
How can you sell someone a storage platform if you're unable to answer such an 
essential question? Really, nobody needed auto-healing. All you need is the
answer to this question and then stat exactly this file list at a time _of
your choice_.
The good thing about 2.0.X was that you as an admin had quite full control
over things. In 3.X you have exactly nothing; the daemons are talking with
each other about whatever, and hopefully things work out. That is no setup I
want to be an admin of.

Regards,
Stephan



  Cheers
  Stefano
 
 
  2010/11/16 Deadpan110 deadpan...@gmail.com
 
 
  My home testing environment I also use XenServer (again, Citrix - with
  a Centos minimalistic core OS) - even though the Dom0 is 64bit, in any
  Xen setup (maybe even for other virtuali[s\z]ation solutions),
  performance is better using 32bit VM's (DomU).
 
  My production environment comprises of Xen virtual machines (not
  XenServer, but still Xen), scattered around a remote datacenter.
 
  I too will be sharing my experiences as GlusterFS offers exactly what
  I need and would like to deploy.
 
  Martin
 
 
 
  On 16 November 2010 20:39, Stefano Baronio stefano.baro...@gmail.com
  wrote:
   
   From my point of view, 64 bit on server side is easy to handle but the
  client side can have different needs and limitations.
  For example, we are using XenServer from Citrix; the Dom0 is taken from a
  CentOS 5 distro and it is 32bit. I cannot change that, because it is a Citrix
  design choice and there might be lots of these situations around.
  Sorry but I can't code any patches..
  Anyway, I will share what our experience will be with 32bit client.
 
  Cheers
  Stefano
 
 
  2010/11/16 Bernard Li bern...@vanhpc.org
 
 
  Hi Christian:
 
  On Tue, Nov 16, 2010 at 1:34 AM, Christian Fischer
  christian.fisc...@easterngraphics.com  wrote:
 
   
  No statement from the developers about usability of glusterfs client
 
  on
   
  32bit
   
  systems. But this was probably discussed in earlier threads.
 
  I believe the official comment is that Gluster is not going to support
  32-bit systems.  However, it doesn't mean that the community cannot
  support it.  If we find bugs and can code up patches, we should still
  file a bug and submit the patches and hopefully they will be checked
  into the official repository.
 
  Cheers,
 
  Bernard
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 
   
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 
 
 
   
 
 
 
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 
 

___
Gluster-users mailing list
Gluster-users@gluster.org

Re: [Gluster-users] Gluster client 32bit

2010-11-16 Thread Craig Carl

On 11/16/2010 03:07 PM, Stephan von Krawczynski wrote:

On Tue, 16 Nov 2010 08:51:17 -0800
Jeff Anderson-Lee jo...@eecs.berkeley.edu wrote:


On 11/16/2010 05:36 AM, Stefano Baronio wrote:

Hi Martin,
the XenServer Dom0 is 32bit whilst the hypervisor is 64 bit.
You need to know this when you install third-party software on the host.
http://forums.citrix.com/thread.jspa?threadID=269924&tstart=0

So I need the 32bit compiled version to be able to mount glusterfs directly
from the XenServer host.


The built-in NFS module is typically as fast or faster than using the
fuse wrapper on the client side.  So the best way to support 32-bit
clients is likely via NFS.

NFS is really something completely different. And - what is also ignored - the
infrastructure usage is completely different when using nfs. nfs does not
replicate at the client side, which means that the data paths explicitly built
for client replication are useless for nfs. Using the nfs translator leads to
server-server replication. For that case a data path exclusively used for this
server traffic would be best (because it cannot interfere with 64 bit client
replication).
So if you happen to upgrade a 2.0.9 setup with 64 bit servers and 64 as well
as 32 bit clients you have to redesign the network for best performance _and_
glusterfsd on the servers have to use the shortest data path for the nfss'
data replication (which I don't know if they are able to do that at all).
In other words: whereas the setup in 2.0.9 was clear and simple, the very same
usage case in 3.X is a _mess_.
Obviously nobody really thought about that - unbelievable for me as it is
really obvious. But I got accustomed to that situation because up to the
current day there is no solution for another most obvious problem: which files
are not in sync in a replication setup? There is no trivial answer to this
question I already brought up in early 2.X development phase...
How can you sell someone a storage platform if you're unable to answer such an
essential question? Really, nobody needed auto-healing. All you need is the
answer to this question and then stat exactly this file list at a time _of
your choice_.
The good thing about 2.0.X was that you as an admin had quite full control
over things. In 3.X you have exactly nothing; the daemons are talking with
each other about whatever, and hopefully things work out. That is no setup I
want to be an admin of.

Regards,
Stephan



Stephan -
Based on your feedback, and from other members of the community, we have
opened discussions internally around adding support for a 32-bit client.
We have not made a decision at this point, and I can't make any
guarantees, but I will do my best to get it added to the next version of
the product (3.1.2; 3.1.1 is feature-locked).
On the sync question you brought up: that is only an issue in the rare
case of split brain (if I understand the scenario you've brought up).
Split brain is a difficult problem with no answer right now. Gluster 3.1
added much more aggressive locking to reduce the possibility of split
brain. The process you described as ...the daemons are talking with
each other about whatever... will also reduce the likelihood of split
brain by eliminating the possibility that client or server vol files are
not the same across the entire cluster, the cause of a vast majority of
split brain issues with Gluster.
Auto heal is slow; we have some processes along the lines you are
thinking of. Please let me know if these address some of your ideas around
stat -


# cd <gluster mount>
# find ./ -type f -exec stat '{}' \;
(run against the paths from one backend device, this will heal only the
files on that device)


If you know when you had a failure you want to recover from, this is even
faster -


# cd <gluster mount>
# find ./ -type f -mmin -<minutes since failure + some extra> -exec stat '{}' \;
(this will heal only the files changed within the last x minutes)



Thanks,

Craig

--
Craig Carl
Senior Systems Engineer
Gluster
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster client 32bit

2010-11-15 Thread Deadpan110
I asked about 32bit support when glusterfs 3.1.0 was released:

http://www.mail-archive.com/gluster-de...@nongnu.org/msg07150.html

They focus on 64bit due to their own clients requiring it - hence I
understand their commitment.

It is a pity that us 32bit users are without support, but the great
thing about opensource and mailing list communities, we can supply
support for each other and let the devs continue in their great work.

I had not fully tested the 3.1.0 release but did find the fuse client
better (file locking is non-existent when mounting over NFS) - but the
performance hit was quite large on my tiny Virtual Machine cluster
setup - so I am unsure if I had short writes just using NFS alone.

(I have a feeling it may be related to a bug that some other 64bit
users encountered).

I will be testing 3.1.1 as soon as it appears.

Martin


On 15 November 2010 17:59, Christian Fischer
christian.fisc...@easterngraphics.com wrote:
 On Friday 12 November 2010 11:29:52 Bernard Li wrote:
 Hi Stefano:

 On Fri, Nov 12, 2010 at 2:18 AM, Stefano Baronio

 stefano.baro...@gmail.com wrote:
    is there a way to have a 32bit Glusterfs client?

 You can definitely build it yourself, but it is not officially
 supported by Gluster.  They recommend you use GlusterFS on 64-bit
 architecture servers.

 The 3.1 documentation states x64 as requirement for server appliances, but no
 word about a x64 limitation for clients. Where did you read that?


 Cheers,

 Bernard
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster client 32bit

2010-11-15 Thread Stefano Baronio
Yes, please, share it with us.
I've successfully compiled the rpm packages, but the client is not giving any
errors when it is not able to connect to a glusterfs share...

Thanks
Stefano


2010/11/13 Dennis Schafroth den...@schafroth.dk

 On 12/11/2010, at 18.51, Ken Bigelow wrote:
  We have all 32bit server / clients for Gluster. We did have to compile
  it from source but so far we have had no problems at all.
 
  A few things had to be tweaked inside the configuration files like
  io thread count and whatnot but in the end it seems to be working fine
  from what we can tell.

 Can you share what you have done? I am running a test on small 32 bit boxes

 cheers,
 :-Dennis Schafroth

 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster client 32bit

2010-11-15 Thread Christian Fischer
On Monday 15 November 2010 14:27:34 Stefano Baronio wrote:
 Yes, please, share it with us.
  I've successfully compiled the rpm packages, but the client is not giving
 any errors when it is not able to connect to a glusterfs share...

That's normal; the native client (as far as I've seen) always exits with status 0.
That is an issue in cleanup_and_exit() when debug is off.

Christian
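Given that the client's exit status can't be trusted, one workaround (a sketch, not part of GlusterFS) is to verify the mount point actually shows up in /proc/mounts after mounting:

```shell
# Workaround sketch for the always-true exit status: check /proc/mounts
# instead of trusting the glusterfs client's return code (Linux-specific).
is_mounted() {
    # succeed only if $1 is listed as a mount point in /proc/mounts
    awk -v mp="$1" '$2 == mp { found = 1 } END { exit !found }' /proc/mounts
}
```

For example, after attempting the mount: `is_mounted /mnt/gluster || echo "mount failed"`.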

 
 Thanks
 Stefano
 
 
 2010/11/13 Dennis Schafroth den...@schafroth.dk
 
  On 12/11/2010, at 18.51, Ken Bigelow wrote:
   We have all 32bit server / clients for Gluster. We did have to compile
   it from source but so far we have had no problems at all.
   
   A few things had to be tweaked inside the configuration files like
   io thread count and whatnot but in the end it seems to be working fine
   from what we can tell.
  
  Can you share what you have done? I am running a test on small 32 bit
  boxes
  
  cheers,
  
  :-Dennis Schafroth
  
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster client 32bit

2010-11-15 Thread Christian Fischer
Hmm, seems this thread is dead now. That's a pity.

No statement from the developers about usability of glusterfs client on 32bit 
systems. But this was probably discussed in earlier threads.

I think I'll use NFS with UCARP for the production environment.
What about the performance loss if using NFS instead of GlusterFS, any 
experiences?
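One rough way to get your own numbers (hypothetical mount points; this measures streaming-write throughput only, not small-file or locking behaviour) is to time the same write on each mount:

```shell
# Rough throughput probe: write 100 MB of zeros and let dd report the rate.
bench_write() {
    dd if=/dev/zero of="$1/bench.tmp" bs=1M count=100 conv=fsync 2>&1 | tail -n 1
    rm -f "$1/bench.tmp"
}
```

Compare e.g. `bench_write /mnt/gluster-fuse` against `bench_write /mnt/gluster-nfs`, ideally repeated a few times.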


On Monday 15 November 2010 14:41:23 Christian Fischer wrote:
 On Monday 15 November 2010 14:27:34 Stefano Baronio wrote:
  Yes, please, share it with us.
  I've successfully compiled the rpm packages, but the client is not giving
  any errors when it is not able to connect to a glusterfs share...
 
 That's normal; the native client (as far as I've seen) always exits with status 0.
 That is an issue in cleanup_and_exit() when debug is off.
 
 Christian
 
  Thanks
  Stefano
  
  
  2010/11/13 Dennis Schafroth den...@schafroth.dk
  
   On 12/11/2010, at 18.51, Ken Bigelow wrote:
We have all 32bit server / clients for Gluster. We did have to
compile it from source but so far we have had no problems at all.

A few things had to be tweaked inside the configuration files like
io thread count and whatnot but in the end it seems to be working
fine from what we can tell.
   
   Can you share what you have done? I am running a test on small 32 bit
   boxes
   
   cheers,
   
   :-Dennis Schafroth
   
   ___
   Gluster-users mailing list
   Gluster-users@gluster.org
   http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster client 32bit

2010-11-15 Thread Bernard Li
Hi Christian:

On Tue, Nov 16, 2010 at 1:34 AM, Christian Fischer
christian.fisc...@easterngraphics.com wrote:

 No statement from the developers about usability of glusterfs client on 32bit
 systems. But this was probably discussed in earlier threads.

I believe the official comment is that Gluster is not going to support
32-bit systems.  However, it doesn't mean that the community cannot
support it.  If we find bugs and can code up patches, we should still
file a bug and submit the patches and hopefully they will be checked
into the official repository.

Cheers,

Bernard
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster client 32bit

2010-11-14 Thread Christian Fischer
On Friday 12 November 2010 16:17:34 you wrote:
 Just to add...
 
 My previous mailing list post of compiling glusterfs 3.1.0 on Ubuntu
 Lucid 32bit:
 http://www.mail-archive.com/gluster-users@gluster.org/msg03995.html
 
  As stated in that posting - it is not for the faint-hearted -
 definitely do not use in production until you have tested for a while
 (in fact - do not use at all - unless you are a lil nutty like myself)
 
 Results...
 
 (Do not quote these as I did not throughly test)
 
 I managed to get short writes while using a 2 node replicated mirror
 while using small files (may be related to a similar recent 64bit
 issue) - this seemed rare though and only seemed to happen while
 mounting via NFS.
 
 Native fuse mounting worked well and has a better degree of locking...
 but on smaller systems (like VM's), you need at least todays standard
 of minimum memory (approx 1GB+ ?) or you will start to thrash your
 swap and things become bad.
 
 Test test and test...
 
 I love what glusterfs has to offer and can understand why they focus
 their support on 64bit and I would upgrade my distros, but I use VM's
 in Xen and 32bit VM's have a better performance on a 64 bit host than
 a 64 bit VM would.

Martin,
thanks for the warning.

They perform better?
Why do they?

I can understand that their focus is on x64 for server appliances, but I can't
understand (and I hope someone will tell me why) why the native fuse client
shouldn't work on x32 without quirks.

XCP is x32, and that's the problem here.

 
 I hope to be testing again... but taking more notes and doing more
 consistent tests on the next release.
 
 So, once again... build and test... and test some more on a non
 production cluster
 
 Martin
 
 On 13 November 2010 01:17, Deadpan110 deadpan...@gmail.com wrote:
  It should work... but it is very unsupported by the devs...
  
  USE AT YOUR OWN RISK...
  
  I successfully used glusterfs 3.1.0 for a while on Ubuntu Lucid 32bit
  - the only problems i encountered are a few of the ones recently
  discussed in this mailing list for 64bit.
  
  I will be implementing it again soon - I hope!
  
  Martin
  
  On 13 November 2010 00:54, Christian Fischer
  
  christian.fisc...@easterngraphics.com wrote:
  On Friday 12 November 2010 11:29:52 Bernard Li wrote:
  Hi Stefano:
  
  On Fri, Nov 12, 2010 at 2:18 AM, Stefano Baronio
  
  stefano.baro...@gmail.com wrote:
 is there a way to have a 32bit Glusterfs client?
  
  You can definitely build it yourself, but it is not officially
  supported by Gluster.  They recommend you use GlusterFS on 64-bit
  architecture servers.
  
  Someone knows the reason for it?
  Are problems to expect on 32bit architecture?
  
  Cheers,
  
  Bernard
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
  
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster client 32bit

2010-11-14 Thread Christian Fischer
On Friday 12 November 2010 11:29:52 Bernard Li wrote:
 Hi Stefano:
 
 On Fri, Nov 12, 2010 at 2:18 AM, Stefano Baronio
 
 stefano.baro...@gmail.com wrote:
is there a way to have a 32bit Glusterfs client?
 
 You can definitely build it yourself, but it is not officially
 supported by Gluster.  They recommend you use GlusterFS on 64-bit
 architecture servers.

The 3.1 documentation states x64 as requirement for server appliances, but no 
word about a x64 limitation for clients. Where did you read that?

 
 Cheers,
 
 Bernard
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster client 32bit

2010-11-12 Thread Bernard Li
Hi Stefano:

On Fri, Nov 12, 2010 at 2:18 AM, Stefano Baronio
stefano.baro...@gmail.com wrote:

   is there a way to have a 32bit Glusterfs client?

You can definitely build it yourself, but it is not officially
supported by Gluster.  They recommend you use GlusterFS on 64-bit
architecture servers.

Cheers,

Bernard
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster client 32bit

2010-11-12 Thread Stefano Baronio
Thanks Bernard,
   I'm actually trying to build it via the src.rpm package.

Thank you.


2010/11/12 Bernard Li bern...@vanhpc.org

 Hi Stefano:

 On Fri, Nov 12, 2010 at 2:18 AM, Stefano Baronio
 stefano.baro...@gmail.com wrote:

is there a way to have a 32bit Glusterfs client?

 You can definitely build it yourself, but it is not officially
 supported by Gluster.  They recommend you use GlusterFS on 64-bit
 architecture servers.

 Cheers,

 Bernard
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster client 32bit

2010-11-12 Thread Christian Fischer
On Friday 12 November 2010 11:29:52 Bernard Li wrote:
 Hi Stefano:
 
 On Fri, Nov 12, 2010 at 2:18 AM, Stefano Baronio
 
 stefano.baro...@gmail.com wrote:
is there a way to have a 32bit Glusterfs client?
 
 You can definitely build it yourself, but it is not officially
 supported by Gluster.  They recommend you use GlusterFS on 64-bit
 architecture servers.

Someone knows the reason for it?
Are problems to expect on 32bit architecture?

 
 Cheers,
 
 Bernard
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster client 32bit

2010-11-12 Thread Deadpan110
It should work... but it is very unsupported by the devs...

USE AT YOUR OWN RISK...

I successfully used glusterfs 3.1.0 for a while on Ubuntu Lucid 32bit
- the only problems i encountered are a few of the ones recently
discussed in this mailing list for 64bit.

I will be implementing it again soon - I hope!

Martin

On 13 November 2010 00:54, Christian Fischer
christian.fisc...@easterngraphics.com wrote:
 On Friday 12 November 2010 11:29:52 Bernard Li wrote:
 Hi Stefano:

 On Fri, Nov 12, 2010 at 2:18 AM, Stefano Baronio

 stefano.baro...@gmail.com wrote:
    is there a way to have a 32bit Glusterfs client?

 You can definitely build it yourself, but it is not officially
 supported by Gluster.  They recommend you use GlusterFS on 64-bit
 architecture servers.

 Someone knows the reason for it?
 Are problems to expect on 32bit architecture?


 Cheers,

 Bernard
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster client 32bit

2010-11-12 Thread Christian Fischer
On Friday 12 November 2010 15:47:05 Deadpan110 wrote:
 It should work... but it is very unsupported by the devs...
 
 USE AT YOUR OWN RISK...
 
 I successfully used glusterfs 3.1.0 for a while on Ubuntu Lucid 32bit
 - the only problems i encountered are a few of the ones recently
 discussed in this mailing list for 64bit.

Well, let's see what happens on XCP.
Thanks
Christian

 
 I will be implementing it again soon - I hope!
 
 Martin
 
 On 13 November 2010 00:54, Christian Fischer
 
 christian.fisc...@easterngraphics.com wrote:
  On Friday 12 November 2010 11:29:52 Bernard Li wrote:
  Hi Stefano:
  
  On Fri, Nov 12, 2010 at 2:18 AM, Stefano Baronio
  
  stefano.baro...@gmail.com wrote:
 is there a way to have a 32bit Glusterfs client?
  
  You can definitely build it yourself, but it is not officially
  supported by Gluster.  They recommend you use GlusterFS on 64-bit
  architecture servers.
  
  Someone knows the reason for it?
  Are problems to expect on 32bit architecture?
  
  Cheers,
  
  Bernard
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
  
  ___
  Gluster-users mailing list
  Gluster-users@gluster.org
  http://gluster.org/cgi-bin/mailman/listinfo/gluster-users

-- 

EasternGraphics - visualize your business

Christian Fischer
Administration
http://www.EasternGraphics.com
phone: +49 3677 678265

EasternGraphics GmbH - Albert-Einstein-Strasse 1 - DE-98693 Ilmenau
Geschaeftsfuehrer - Ekkehard Beier, Volker Blankenberg, Frank Wicht
Amtsgericht Jena - HRB 304052
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Gluster client 32bit

2010-11-12 Thread Stephan von Krawczynski
I can tell you that 3.1 does not compile under 32-bit on my box - I tried it
recently. Honestly, I find it a bit strange not to support 32-bit clients, as
there are lots of them - and 2.9 did work on 32-bit, which means you cannot
upgrade such setups.
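For anyone who wants to try regardless, building from a source tarball follows the standard autotools flow; the version number and install prefix below are only illustrative, and (as noted above) 3.1 may simply fail to compile on a 32-bit host:

```shell
# Illustrative build-from-source sequence (tarball version and prefix
# are examples, not a tested 32-bit combination):
tar xzf glusterfs-3.1.0.tar.gz
cd glusterfs-3.1.0
./configure --prefix=/usr
make
sudo make install
```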

Regards,
Stephan


On Sat, 13 Nov 2010 01:17:05 +1030
Deadpan110 deadpan...@gmail.com wrote:

 It should work... but it is very unsupported by the devs...
 
 USE AT YOUR OWN RISK...
 
 I successfully used glusterfs 3.1.0 for a while on Ubuntu Lucid 32-bit
 - the only problems I encountered were a few of the ones recently
 discussed in this mailing list for 64-bit.
 
 I will be implementing it again soon - I hope!
 
 Martin
 
 On 13 November 2010 00:54, Christian Fischer
 christian.fisc...@easterngraphics.com wrote:
  On Friday 12 November 2010 11:29:52 Bernard Li wrote:
  Hi Stefano:
 
  On Fri, Nov 12, 2010 at 2:18 AM, Stefano Baronio
 
  stefano.baro...@gmail.com wrote:
     is there a way to have a 32bit Glusterfs client?
 
  You can definitely build it yourself, but it is not officially
  supported by Gluster.  They recommend you use GlusterFS on 64-bit
  architecture servers.
 
  Does anyone know the reason for it?
  Are problems to be expected on 32-bit architectures?
 
 
  Cheers,
 
  Bernard
 
 




Re: [Gluster-users] Gluster client 32bit

2010-11-12 Thread Ken Bigelow

We have all 32-bit servers / clients for Gluster. We did have to compile
it from source, but so far we have had no problems at all.

A few things had to be tweaked inside the configuration files, like the
io-thread count and whatnot, but in the end it seems to be working fine
from what we can tell.

We are working to move to 64-bit once we have our Mellanox InfiniBand
network in place.
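As an illustration of the kind of tweak being described (this is not Ken's actual config - the brick subvolume name and thread count are made up), an io-threads stanza in a 3.0-era server volfile looks like this:

```
volume iothreads
  type performance/io-threads
  option thread-count 8    # illustrative; lower values may suit 32-bit hosts
  subvolumes posix-brick   # hypothetical storage/posix subvolume name
end-volume
```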



Stefano Baronio wrote:

Hello,
   is there a way to have a 32bit Glusterfs client?

Thank you.

Stefano

  



  



Re: [Gluster-users] Gluster client 32bit

2010-11-12 Thread Bernard Li
Hi all:

I ran into issues with gNFS server on 32-bit OS.  Did any of you run
into it as well?

http://gluster.org/pipermail/gluster-users/2010-November/005703.html

Thanks,

Bernard

On Fri, Nov 12, 2010 at 9:51 AM, Ken Bigelow sa...@pytecdesign.com wrote:
 We have all 32bit server / clients for Gluster. We did have to compile
 it from source but so far we have had no problems at all.

 A few things had to be tweaked inside the configuration files like
 io thread count and whatnot but in the end it seems to be working fine
 from what we can tell.

 We are working to move to 64-bit once we have our Mellanox InfiniBand network
 in place.



 Stefano Baronio wrote:

 Hello,
   is there a way to have a 32bit Glusterfs client?

 Thank you.

 Stefano

  




Re: [Gluster-users] Gluster client 32bit

2010-11-12 Thread Deadpan110
I am unsure if that is related to the issue I had, and unfortunately my
32-bit test nodes (VMs) are not running 3.1.0 at this moment in time,
so I am unable to re-test and get accurate results.

I will be testing 3.1.1 when it appears next week, though, and this time
around I will be testing extensively and watching for any relation to
problems that might be encountered by the officially supported 64-bit
crowd.

Martin

On 13 November 2010 04:40, Bernard Li bern...@vanhpc.org wrote:
 Hi all:

 I ran into issues with gNFS server on 32-bit OS.  Did any of you run
 into it as well?

 http://gluster.org/pipermail/gluster-users/2010-November/005703.html

 Thanks,

 Bernard

 On Fri, Nov 12, 2010 at 9:51 AM, Ken Bigelow sa...@pytecdesign.com wrote:
 We have all 32bit server / clients for Gluster. We did have to compile
 it from source but so far we have had no problems at all.

 A few things had to be tweaked inside the configuration files like
 io thread count and whatnot but in the end it seems to be working fine
 from what we can tell.

 We are working to move to 64-bit once we have our Mellanox InfiniBand network
 in place.



 Stefano Baronio wrote:

 Hello,
   is there a way to have a 32bit Glusterfs client?

 Thank you.

 Stefano

  




Re: [Gluster-users] gluster client problems

2010-09-08 Thread Amar Tumballi
 [2010-09-08 10:42:45] W [write-behind.c:2479:init] writebehind: dangling
 volume. check volfile
 [2010-09-08 10:42:45] E [glusterfsd.c:673:glusterfs_graph_init] glusterfs:
 no valid translator loaded at the top or no mount point given. exiting
 [2010-09-08 10:42:45] E [glusterfsd.c:1395:main] glusterfs: translator
 initialization failed.  exiting


What is the command line used? For the client you need to give a mount point.

If you have problems, try 'mount -t glusterfs volfile-path mount-point'.
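In 3.0.x terms that means either invoking the client binary directly or going through mount(8); the volfile path and mount point below are examples, not anything from Nikola's setup:

```shell
# Direct invocation of the client (volfile path / mount point are examples):
glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs

# Equivalent mount(8) form:
mount -t glusterfs /etc/glusterfs/glusterfs.vol /mnt/glusterfs
```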

Regards,
Amar


Re: [Gluster-users] gluster client problems

2010-09-08 Thread Burnash, James
My only suggestion upon reading your client file is to remove the #commented
lines in the translator section for the writebehind volume, as well as the
entirely commented-out stat-prefetch section, and try that configuration. The
fact that the comments shouldn't matter does not necessarily mean that they
don't.

Also - do make sure to push the changed client volfile out to all the clients 
and restart them - unless you're using the method of accessing a centralized 
volfile.

James Burnash, Unix Engineering

-Original Message-
From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Nikola Garafolic
Sent: Wednesday, September 08, 2010 5:20 AM
To: gluster-users@gluster.org
Subject: [Gluster-users] gluster client problems

I am having problems while testing gluster performance with bonnie++. Os
used is centos 5.5 64bit. This is what the log says:


Version  : glusterfs 3.0.5 built on Oct 10 2010 00:00:29
git: v3.0.5
Starting Time: 2010-09-08 10:42:45
Command line : /usr/sbin/glusterfs
PID  : 11029
System name  : Linux
Nodename : hostname.domain.com
Kernel Release : 2.6.18-194.11.1.el5
Hardware Identifier: x86_64

Given volfile:
+--+
   1: ## file auto generated by /usr/bin/glusterfs-volgen (mount.vol)
   2: # Cmd line:
   3: # $ /usr/bin/glusterfs-volgen --name=sh -t tcp glusterfs01:/data/ glusterfs02:/data/
   4:
   5: # TRANSPORT-TYPE tcp
   6: volume glusterfs01-1
   7: type protocol/client
   8: option transport-type tcp
   9: option remote-host 192.168.32.101
  10: option transport.socket.nodelay on
  11: option remote-port 6996
  12: option remote-subvolume brick1
  13: end-volume
  14:
  15: volume glusterfs02-1
  16: type protocol/client
  17: option transport-type tcp
  18: option remote-host 192.168.32.102
  19: option transport.socket.nodelay on
  20: option remote-port 6997
  21: option remote-subvolume brick2
  22: end-volume
  23:
  24: volume distribute
  25: type cluster/distribute
  26: subvolumes glusterfs01-1 glusterfs02-1
  27: end-volume
  28:
  29: volume readahead
  30: type performance/read-ahead
  31: option page-count 4
  32: subvolumes distribute
  33: end-volume
  34:
  35: volume iocache
  36: type performance/io-cache
  37: option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120 ))`MB
  38: option cache-timeout 1
  39: subvolumes readahead
  40: end-volume
  41:
  42: volume quickread
  43: type performance/quick-read
  44: option cache-timeout 1
  45: option max-file-size 64kB
  46: subvolumes iocache
  47: end-volume
  48:
  49: volume writebehind
  50: type performance/write-behind
  51: option cache-size 4MB
  52: ###
  53: #option aggregate-size 65035
  54: subvolumes quickread
  55: end-volume
  56:
  57: #volume statprefetch
  58: #type performance/stat-prefetch
  59: #subvolumes writebehind
  60: #end-volume
  61:

+--+
[2010-09-08 10:42:45] W [write-behind.c:2479:init] writebehind: dangling
volume. check volfile
[2010-09-08 10:42:45] E [glusterfsd.c:673:glusterfs_graph_init]
glusterfs: no valid translator loaded at the top or no mount point
given. exiting
[2010-09-08 10:42:45] E [glusterfsd.c:1395:main] glusterfs: translator
initialization failed.  exiting

--
Nikola Garafolic
SRCE, Sveucilisni racunski centar
tel: +385 1 6165 804
email: nikola.garafo...@srce.hr




Re: [Gluster-users] gluster client hang when using iozone

2010-06-08 Thread Shehjar Tikoo

Tomasz Chmielewski wrote:

Am 07.06.2010 13:55, Daniel Maher wrote:


Any idea what could be wrong here? Neither the client nor the servers
produce anything in their logs when it happens (I didn't wait for more than 10
minutes, though).


What distro ? What kernel version ? Hardware specs ?


Debian Lenny, 64 bit, 2.6.26 kernel.
The specs are more or less high end.

I see there is a bug entry describing a similar issue:

http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=902

but I don't use any NFS/translators here, so not sure if it's the same 
or not.


The bug also points to a different bug in io-cache, which I also use - 
so I'll try to disable it and see if it changes anything.


That comment on the io-cache bug is NFS-specific and does not come into
play when used with FUSE.


-Shehjar









Re: [Gluster-users] gluster client hang when using iozone

2010-06-07 Thread Daniel Maher

On 06/07/2010 01:50 PM, Tomasz Chmielewski wrote:


Unfortunately, the issue is not solved for me - I can reliably reproduce
the hang with the following iozone command line (it usually hangs every 2-3 times):

# iozone -R -l 5 -u 5 -r 4k -s 100m


When I look at the traffic, I can see it still flows between the client
and gluster servers - but at a very low speed, around 10 kB/s (with 1
Gbit link).

Any access to the gluster filesystem hangs.

Killing glusterfs process and mounting the fs again makes the thing
recover (until at least I try to start iozone 2-3 more times).

Any idea what could be wrong here? Neither the client nor the servers
produce anything in their logs when it happens (I didn't wait for more than 10
minutes, though).


What distro ?  What kernel version ?  Hardware specs ?

As a counter-point, a few months ago I evaluated glfs 3.x for one of our
internal systems, and ran iozone against a simple three-node client-side
replication setup (such as the one described by your configs) with no
problems such as those you described.


--
Daniel Maher dma+gluster AT witbe DOT net


Re: [Gluster-users] gluster client hang when using iozone

2010-06-07 Thread Tomasz Chmielewski

Am 07.06.2010 13:55, Daniel Maher wrote:


Any idea what could be wrong here? Neither the client nor the servers
produce anything in their logs when it happens (I didn't wait for more than 10
minutes, though).


What distro ? What kernel version ? Hardware specs ?


Debian Lenny, 64 bit, 2.6.26 kernel.
The specs are more or less high end.

I see there is a bug entry describing a similar issue:

http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=902

but I don't use any NFS/translators here, so not sure if it's the same 
or not.


The bug also points to a different bug in io-cache, which I also use - 
so I'll try to disable it and see if it changes anything.



--
Tomasz Chmielewski
http://wpkg.org


Re: [Gluster-users] gluster client hang when using iozone

2010-06-07 Thread Tomasz Chmielewski

Am 07.06.2010 14:10, Tomasz Chmielewski wrote:

Am 07.06.2010 13:55, Daniel Maher wrote:


Any idea what could be wrong here? Neither the client nor the servers
produce anything in their logs when it happens (I didn't wait for more than 10
minutes, though).


What distro ? What kernel version ? Hardware specs ?


Debian Lenny, 64 bit, 2.6.26 kernel.
The specs are more or less high end.

I see there is a bug entry describing a similar issue:

http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=902

but I don't use any NFS/translators here, so not sure if it's the same
or not.

The bug also points to a different bug in io-cache, which I also use -
so I'll try to disable it and see if it changes anything.


Unfortunately, it still hangs, even if I comment out the io-cache 
section from my gluster client configuration.


Any hints on how to debug this would be appreciated.
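One generic starting point (standard troubleshooting practice rather than anything from this thread; the volfile path and mount point are illustrative) is to remount the client with debug logging and, while a hang is in progress, capture backtraces of all client threads:

```shell
# Remount with verbose client-side logging (paths are examples):
glusterfs --log-level=DEBUG -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs

# During the hang, dump backtraces of every glusterfs thread:
gdb -p "$(pidof glusterfs)" -batch -ex 'thread apply all bt'
```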

--
Tomasz Chmielewski
http://wpkg.org



Re: [Gluster-users] gluster client hang when using iozone

2010-06-07 Thread Tomasz Chmielewski

Am 07.06.2010 14:27, Daniel Maher wrote:

On 06/07/2010 02:24 PM, Tomasz Chmielewski wrote:


What distro ? What kernel version ? Hardware specs ?


Debian Lenny, 64 bit, 2.6.26 kernel.
The specs are more or less high end.

I see there is a bug entry describing a similar issue:

http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=902

but I don't use any NFS/translators here, so not sure if it's the same
or not.

The bug also points to a different bug in io-cache, which I also use -
so I'll try to disable it and see if it changes anything.


Unfortunately, it still hangs, even if I comment out the io-cache
section from my gluster client configuration.

Any hints on how to debug this would be appreciated.


Is it just iozone that creates undesirable operating conditions, or have
you tried other testing tools as well?


I've seen some curious hangs once a month or so with glusterfs 2.x under
normal system usage - but I didn't have a way to reproduce them (I didn't
try iozone, though).

I upgraded to glusterfs 3.0.4 and hoped it would cure my hangs, then
thought it was a good idea to test it with iozone... and hence the hangs.

So no, other than iozone, I don't have any other way to reproduce it.


--
Tomasz Chmielewski
http://wpkg.org



Re: [Gluster-users] Gluster client and HA

2010-02-10 Thread Vikas Gorur

Mike,

There's a typo in your client volume file. You've specified the same 
server twice in your replicate configuration.


volume pair02 
type cluster/replicate 
subvolumes cf03 cf03 
end-volume 
  

That line should be:

 subvolumes cf03 cf04

I'm guessing that you killed the server cf03 and couldn't access data.

You can also use glusterfs-volgen to generate the client volume files.
The tool can ensure that typos like this cannot happen.
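With that one-character fix applied, the stanza reads:

```
volume pair02
  type cluster/replicate
  subvolumes cf03 cf04
end-volume
```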

Vikas


Re: [Gluster-users] Gluster client and HA

2010-02-08 Thread Tejas N. Bhise
Hi Mike,

Are you looking for HA of the NFS server, or HA of GlusterFS itself? Can you
please explain your system a little more and also tell us what you want to
achieve.

Regards,
Tejas.

- Original Message -
From: mike foster mfost...@gmail.com
To: Gluster General Discussion List gluster-users@gluster.org
Sent: Monday, February 8, 2010 11:47:24 PM GMT +05:30 Chennai, Kolkata, Mumbai, 
New Delhi
Subject: [Gluster-users] Gluster client and HA

I was under the impression that, by configuring a system as a client
connected to 4 server nodes, if one of the nodes went down the client
would still be able to access the data through some kind of failover to the
other nodes. However, I set up a test and failed the server that was listed as
the last connected server in the log file, then attempted to access the
exported/mounted filesystem on the client and received a Stale NFS file
handle error. Here are some messages from the log file:

cf02: connection to 10.50.14.32:6996 failed (No route to host)
[2010-02-08 11:09:24] W [fuse-bridge.c:722:fuse_attr_cbk] glusterfs-fuse:
88: LOOKUP() / = -1 (Stale NFS file handle)

Is it not possible for a client to have HA to the exported filesystem?
