Re: [Gluster-users] glusterfs client crashes

2016-02-23 Thread Fredrik Widlund
2.17-106.el7 is the latest glibc on CentOS 7. I tried the one-liner on older
versions as well, and the test flags them as "likely buggy" too.

Found this CentOS issue - https://bugs.centos.org/view.php?id=10426

# rpm -qa | grep glibc
glibc-2.17-106.el7_2.4.x86_64
glibc-common-2.17-106.el7_2.4.x86_64

# objdump -r -d /lib64/libc.so.6 | grep -C 20 _int_free | grep -C 10
cmpxchg | head -21 | grep -A 3 cmpxchg | tail -1 | (grep '%r' && echo "Your
libc is likely buggy." || echo "Your libc looks OK.")
   7ca3e: 48 85 c9 test   %rcx,%rcx
Your libc is likely buggy.
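The same question can also be answered from the package version rather than the disassembly. A sketch (the fixed release 2.17-121.el7 is taken from the Red Hat bug cited below; `sort -V` is GNU version ordering, which only approximates RPM's comparison rules):

```shell
# Sketch: compare the installed glibc against the build carrying the fix
# (glibc-2.17-121.el7, per the Red Hat bug cited below).
glibc_vulnerable() {
    fixed="2.17-121.el7"
    oldest=$(printf '%s\n%s\n' "$1" "$fixed" | sort -V | head -n1)
    # affected if the installed version sorts strictly before the fix
    [ "$oldest" = "$1" ] && [ "$1" != "$fixed" ]
}

ver=$(rpm -q --qf '%{VERSION}-%{RELEASE}\n' glibc 2>/dev/null | head -n1)
if glibc_vulnerable "$ver"; then
    echo "glibc $ver predates 2.17-121.el7: likely affected"
else
    echo "glibc $ver is at or beyond 2.17-121.el7"
fi
```

On the system above, 2.17-106.el7_2.4 sorts before 2.17-121.el7, agreeing with the objdump test.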

Kind regards,
Fredrik Widlund

On Tue, Feb 23, 2016 at 4:27 PM, Raghavendra Gowdappa 
wrote:

> We came across a glibc bug that could have caused some corruption. While
> searching for possible causes, we found an issue (
> https://bugzilla.redhat.com/show_bug.cgi?id=1305406) fixed in
> glibc-2.17-121.el7. The bug report gives the following test to determine
> whether the installed glibc is buggy, and we ran it on our local setup:
>
> 
> # objdump -r -d /lib64/libc.so.6 | grep -C 20 _int_free | grep -C 10
> cmpxchg | head -21 | grep -A 3 cmpxchg | tail -1 | (grep '%r' && echo "Your
> libc is likely buggy." || echo "Your libc looks OK.")
>
>7cc36:48 85 c9 test   %rcx,%rcx
> Your libc is likely buggy.
> 
>
> Could you check whether the above command on your setup gives the same
> output saying "Your libc is likely buggy."?
>
> Thanks to Nithya, Krutika and Pranith for working on this.
>
> - Original Message -
> > From: "Fredrik Widlund" 
> > To: glus...@deej.net
> > Cc: gluster-users@gluster.org
> > Sent: Tuesday, February 23, 2016 5:51:37 PM
> > Subject: Re: [Gluster-users] glusterfs client crashes
> >
> > Hi,
> >
> > I have experienced what looks like a very similar crash. Gluster 3.7.6 on
> > CentOS 7. No errors on the bricks or on the other clients that were
> > mounted at the time. Relatively high load at the time.
> >
> > Remounting the filesystem brought it back online.
> >
> >
> > pending frames:
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(STAT)
> > frame : type(1) op(STAT)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(1) op(READ)
> > frame : type(0) op(0)
> > patchset: git://git.gluster.com/glusterfs.git
> > signal received: 6
> > time of crash:
> > 2016-02-22 10:28:45
> > configuration details:
> > argp 1
> > backtrace 1
> > dlfcn 1
> > libpthread 1
> > llistxattr 1
> > setfsid 1
> > spinlock 1
> > epoll.h 1
> > xattr.h 1
> > st_atim.tv_nsec 1
> > package-string: glusterfs 3.7.6
> > /lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xc2)[0x7f83387f7012]
> > /lib64/libglusterfs.so.0(gf_print_trace+0x31d)[0x7f83388134dd]
> > /lib64/libc.so.6(+0x35670)[0x7f8336ee5670]
> > /lib64/libc.so.6(gsignal+0x37)[0x7f8336ee55f7]
> > /lib64/libc.so.6(abort+0x148)[0x7f8336ee6ce8]
> > /lib64/libc.so.6(+0x75317)[0x7f8336f25317]
> > /lib64/libc.so.6(+0x7cfe1)[0x7f8336f2cfe1]
> > /lib64/libglusterfs.so.0(loc_wipe+0x27)[0x7f83387f4d47]
> >
> /usr/lib64/glusterfs/3.7.6/xlator/performance/md-cache.so(mdc_local_wipe+0x11)[0x7f8329c8e5f1]
> >
> /usr/lib64/glusterfs/3.7.6/xlator/performance/md-cache.so(mdc_stat_cbk+0x10c)[0x7f8329c8f4fc]
> > /lib64/libglusterfs.so.0(default_stat_cbk+0xac)[0x7f83387fcc5c]
> >
> /usr/lib64/glusterfs/3.7.6/xlator/cluster/distribute.so(dht_file_attr_cbk+0x149)[0x7f832ab2a409]
> >
> /usr/lib64/glusterfs/3.7.6/xlator/protocol/client.so(client3_3_stat_cbk+0x3c6)[0x7f832ad6d266]
> > /lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)[0

Re: [Gluster-users] glusterfs client crashes

2016-02-23 Thread Fredrik Widlund
Hi,

I have experienced what looks like a very similar crash. Gluster 3.7.6 on
CentOS 7. No errors on the bricks or on the other clients that were mounted
at the time. Relatively high load at the time.

Remounting the filesystem brought it back online.
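The recovery that worked here can be sketched roughly as follows; the mount point and volume spec are placeholders, not taken from the original report:

```shell
# Sketch: recover a crashed GlusterFS fuse mount. A lazy unmount clears
# the dead fuse session before remounting; names below are assumptions.
remount_gluster() {
    mnt=$1      # e.g. /mnt/gv0 (placeholder)
    volspec=$2  # e.g. server1:/gv0 (placeholder)
    umount -l "$mnt" 2>/dev/null || true
    mount -t glusterfs "$volspec" "$mnt"
}

# Example (run as root):
#   remount_gluster /mnt/gv0 server1:/gv0
```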

pending frames:
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(STAT)
frame : type(1) op(STAT)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(0) op(0)
patchset: git://git.gluster.com/glusterfs.git
signal received: 6
time of crash:
2016-02-22 10:28:45
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.7.6
/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xc2)[0x7f83387f7012]
/lib64/libglusterfs.so.0(gf_print_trace+0x31d)[0x7f83388134dd]
/lib64/libc.so.6(+0x35670)[0x7f8336ee5670]
/lib64/libc.so.6(gsignal+0x37)[0x7f8336ee55f7]
/lib64/libc.so.6(abort+0x148)[0x7f8336ee6ce8]
/lib64/libc.so.6(+0x75317)[0x7f8336f25317]
/lib64/libc.so.6(+0x7cfe1)[0x7f8336f2cfe1]
/lib64/libglusterfs.so.0(loc_wipe+0x27)[0x7f83387f4d47]
/usr/lib64/glusterfs/3.7.6/xlator/performance/md-cache.so(mdc_local_wipe+0x11)[0x7f8329c8e5f1]
/usr/lib64/glusterfs/3.7.6/xlator/performance/md-cache.so(mdc_stat_cbk+0x10c)[0x7f8329c8f4fc]
/lib64/libglusterfs.so.0(default_stat_cbk+0xac)[0x7f83387fcc5c]
/usr/lib64/glusterfs/3.7.6/xlator/cluster/distribute.so(dht_file_attr_cbk+0x149)[0x7f832ab2a409]
/usr/lib64/glusterfs/3.7.6/xlator/protocol/client.so(client3_3_stat_cbk+0x3c6)[0x7f832ad6d266]
/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)[0x7f83385c5b80]
/lib64/libgfrpc.so.0(rpc_clnt_notify+0x1bf)[0x7f83385c5e3f]
/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7f83385c1983]
/usr/lib64/glusterfs/3.7.6/rpc-transport/socket.so(+0x9506)[0x7f832d261506]
/usr/lib64/glusterfs/3.7.6/rpc-transport/socket.so(+0xc3f4)[0x7f832d2643f4]
/lib64/libglusterfs.so.0(+0x878ea)[0x7f83388588ea]
/lib64/libpthread.so.0(+0x7dc5)[0x7f833765fdc5]
/lib64/libc.so.6(clone+0x6d)[0x7f8336fa621d]

Kind regards,
Fredrik Widlund

On Tue, Feb 23, 2016 at 1:00 PM,  wrote:

> Date: Mon, 22 Feb 2016 15:08:47 -0500
> From: Dj Merrill 
> To: Gaurav Garg 
> Cc: gluster-users@gluster.org
> Subject: Re: [Gluster-users] glusterfs client crashes
> Message-ID: <56cb6acf.5080...@deej.net>
> Content-Type: text/plain; charset=utf-8; format=flowed
>
> On 2/21/2016 2:23 PM, Dj Merrill wrote:
>  > Very interesting.  They were reporting both bricks offline, but the
>  > processes on both servers were still running.  Restarting glusterfsd on
>  > one of the servers brought them both back online.
>
> I realize I wasn't clear in my comments yesterday and would like to
> elaborate on this a bit further. The "very interesting" comment was
> sparked because when we were running 3.7.6, the bricks were not
> reporting as offline when a client was having an issue, so this is new
> behaviour now that we are running 3.7.8 (or a different issue entirely).
>
> The other point that I was not clear on is that we may have one client
> reporting the "Transport endpoint is not connected" error, but the other
> 40+ clients all continue to work properly. This is the case with both
> 3.7.6 and 3.7.8.
>
> Curious, how can the other clients continue to work fine if both Gluster
> 3.7.8 servers are reporting the bricks as offline?
>
> What does "offline" mean in this context?
>
>
> Re: the server logs, here is what I've found so far listed on both
> gluster servers (glusterfs1 and glusterfs2):
>
> [2016-02-21 08:06:02.785788] I [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk]
> 0-glusterfs: No change in volfile, continuing
> [2016-02-21 18:48:20.677010] W [socket.c:588:__socket_rwv]
> 0-gv0-client-1: readv on (sanitized IP of glusterfs2):49152 failed (No
> data available)
> [2016-02-21 18:48:20.677096] I [MSGID: 114018]
> [client.c:2030:client_rpc_notify] 0-gv0-client-1: disconnected from
> gv0-client-1. Client process will keep trying to connect to glusterd
> until brick's port is available
> [2016-02-21 18:48:31.148564] E [MSGID: 114058]
> [client-handshake.c:1524:client_query_portmap_cbk] 0-gv0-client-1:
> failed to get the port number for remote subvolume. Please 

Re: [Gluster-users] How about quality of GluterFS?

2010-05-24 Thread Fredrik Widlund
Hi Lesonus,

If I understand you correctly I believe the problem is that concurrent IO 
performs very differently from sequential IO. If you're running Linux you need 
to change the scheduler from the default CFQ to NOOP or DEADLINE, and with the 
drives you mention you should reach 1Gbps, though it depends on raid-hardware, 
configurations, and other things as well. If you're running for example FreeBSD 
you're probably out of luck. 
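On Linux of that era the scheduler can be switched at runtime through sysfs; a sketch (the device name sdb is an assumption, and the sysfs root is parameterized only so the snippet is easy to dry-run):

```shell
# Sketch: read and change the I/O scheduler for one block device via
# sysfs. SYSFS can point at a scratch directory for a dry run.
SYSFS=${SYSFS:-/sys}

get_scheduler() {
    # on a real system this prints e.g. "noop anticipatory deadline [cfq]"
    cat "$SYSFS/block/$1/queue/scheduler"
}

set_scheduler() {
    echo "$2" > "$SYSFS/block/$1/queue/scheduler"
}

# Example (as root, against the real /sys):
#   set_scheduler sdb deadline
```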

GlusterFS should not be the problem, and I'm guessing you see the same numbers 
with or without it.

Kind regards,
Fredrik Widlund


-----Original Message-----
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On behalf of lesonus
Sent: 23 May 2010 15:29
To: Gluster-users@gluster.org
Subject: Re: [Gluster-users] How about quality of GluterFS?

I'm using GlusterFS 3.0.3.
My server is running under heavy load, so the results of your command
(iozone) may not be exact.

But I think there is a problem with the IO of my system, like this:
My server stores many big files, each around 100 MB, about 10 TB in all.
I use 12 SATA-II HDDs of 2 TB in RAID6, so there is enough space to store
the files.

And the problem:
- When 200 users download concurrently (normal operation), the web server
must open at least 200 files (100 MB each).
So there are 200 processes reading 200 files concurrently. (With a gigabit
NIC, each user can get 1 Gbit / 200 = 125 MB/s / 200 ~ 600 KB/s, which is
a normal rate!)
Total IO to the RAID will be ~1 Gbit <=> 125 MB/s.
With single-file access my RAID handles this rate well, but with 200
concurrent reads it cannot reach 125 MB/s (1 Gbit).
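The per-client arithmetic above works out as follows (1 Gbit/s and 200 readers are the figures from the message; the exact value is 625 KB/s, i.e. the ~600 KB/s cited):

```shell
# Quick check of the numbers: a 1 Gbit/s link shared by 200 readers.
link_mbit=1000
clients=200
aggregate_mb=$(( link_mbit / 8 ))                    # 125 MB/s total
per_client_kb=$(( link_mbit * 1000 / 8 / clients ))  # 625 KB/s each
echo "aggregate: ${aggregate_mb} MB/s, per client: ${per_client_kb} KB/s"
```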

--> Result: even without GlusterFS my server cannot serve 1 Gbit/s of
access, and with GlusterFS the rate is slower still?

So, what is the solution to my hardware problem, and will that solution
still work well with GlusterFS?

I saw this:
http://ftp.zresearch.com/pub/gluster/glusterfs/talks/Z/GlusterFS.pdf
It shows 13 Gbit/s being reached, but with 64 clients loading a single
file, so is that why it is simple to reach that rate?

Thank you very much for your help!

Craig Carl wrote:
> Can you tell me what version of Gluster you are using? Also, if you 
> could run "iozone -a -b lesonus_in_gluster.csv" in the Gluster mount 
> point and then again "iozone -a -b lesonus-no-gluster.csv" on the 
> storage server and send the results back I would appreciate it.
>
> We should be able to get things faster for you.
>
> Craig
>
> --
> Craig Carl
> Gluster, Inc.
> Cell - (408) 829-9953 (California, USA)
> Gtalk - craig.c...@gmail.com
>
>
> - Original Message -
> From: "lesonus" 
> To: "Craig Carl" 
> Sent: Friday, May 14, 2010 8:08:46 AM GMT -08:00 US/Canada Pacific
> Subject: Re: [Gluster-users] How about quality of GluterFS?
>
> I need a file-hosting system that can serve 10, 20, ... 50 Gbit ... to
> the users of an ISP,
> with 100 TB, 200 TB, ... 1000 TB or more.
>
> I tested GlusterFS with a basic config (one client, one server) on
> CentOS 5.4.
> I have a very basic problem: the ls command is very, very slow while my
> system is running, but without GlusterFS it runs normally.
> I changed to other vol-file configs, but the basic problem remains.
>
> So I wonder about the quality of GlusterFS when it has a basic problem
> like that!
> Can it handle complex and heavy loads in the future?
>
> I need to choose a suitable solution within the next few days!
> Can GlusterFS do this?
>
> Thank you!
>
> Craig Carl wrote:
> > Sir -
> >  Gluster is used by several of the largest CDNs in the world as 
> a step in the distribution process. Could you be more specific as to 
> what you are looking for?
> >
> > Craig Carl
> > Gluster
> >
> > Sent from a mobile device, please excuse my tpyos.
> >
> > On May 13, 2010, at 20:52, lesonus  wrote:
> >
> >  
> >> I have a question as I prepare to apply Gluster to my project:
> >> Is GlusterFS strong enough to run a file-hosting service like
> RapidShare, MegaUpload, ...?
> >> Thank you!
> >> ___
> >> Gluster-users mailing list
> >> Gluster-users@gluster.org
> >> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> >>
> >
> >  
>

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Replicate_

2010-02-12 Thread Fredrik Widlund

How do I add a brick to a replicate setup with preexisting data? There is an
old article from almost two years back, but I don't think I will try that one.

Can I use a client with the favorite-child option and just walk through the
content to sync the new brick?
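In 3.0-era releases, a full walk of the mount from a client makes the replicate translator inspect (and heal) every entry, which is one way to populate a fresh brick; a sketch, with the mount point as a placeholder:

```shell
# Sketch: stat every entry under the mount so replicate examines each
# file and copies it to the newly added brick. Mount point is assumed.
heal_walk() {
    find "$1" -noleaf -print0 | xargs -0 stat > /dev/null
}

# Example: heal_walk /mnt/glustervol
```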

Kind regards,
Fredrik Widlund


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Translators_

2010-02-12 Thread Fredrik Widlund

Hi,

Is there updated documentation for translators in 3.x? The features/filter 
translator seems to be gone, for example, and I'm trying to find out how I 
can mount glusterfs clients read-only.
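In the meantime, the fuse mount itself can be asked for read-only; a sketch (names are placeholders, and whether -o ro is actually honored has varied across glusterfs releases, so verify with a test write after mounting):

```shell
# Sketch: mount a gluster client read-only via the standard mount option.
# Whether -o ro is honored varies by release; test it after mounting.
# Volume and mount point are placeholders.
mount_ro() {
    mount -t glusterfs -o ro "$1" "$2"
}

# Example (as root): mount_ro server1:/gv0 /mnt/gv0-ro
```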

Kind regards,
Fredrik Widlund
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users