Hi
Instead of guessing and contemplating and using your brain cycles to
figure out the cause, have you instead taken the effort to post the
kernel backtraces you have to the linux-kernel mailing list yet? All
you need to do is compose an email with the attachment you have
already posted here
configure: ./configure --prefix= --disable-ibverbs --disable-bdb
nm /lib/libglusterfs.so.0 | grep gf_proc_dump_info
00030d50 T gf_proc_dump_info
Any idea?
Does 2.1.0 support CIFS and NFS? How do I use them?
2009-09-11
eagleeyes
From: Vijay Bellur
Sent: 2009-09-10 19:07:45
To:
Hi:
I met an error:
[saved-frames.c:165 saved_frames_unwind] client1: forced unwinding frame
type(1) op(FINODELK)
What happened?
2009-09-11
eagleeyes
> Today we have experienced again the same problem, but in this case
> the computer which was under heavy load and locked was the
> gluster client (not the server). Unfortunately there was
> absolutely nothing in the kernel log, which makes me think that
> this is produced by
> locks on some part
configure: ./configure --prefix= --disable-ibverbs --disable-bdb
nm /lib/libglusterfs.so.0 | grep gf_proc_dump_info
00030d50 T gf_proc_dump_info
2009-09-11
eagleeyes
From: Vijay Bellur
Sent: 2009-09-10 19:07:45
To: eagleeyes
Cc: gluster-users
Subject: Re: [Gluster-users] ERROR
Hello,
Can anyone forward me a link (if it exists) to upgrading a production
deployment of gluster from 1.3.1 to 2.0.6? I've dug around but couldn't find
a compiled set of instructions.
A previous admin did the installation on three of our web servers, creating
one volume, AFR/tcp. It looks like
On 09/10/2009 12:26 PM, David Saez Padros wrote:
Instead of guessing and contemplating and using your brain cycles to
figure out the cause, have you instead taken the effort to post the
kernel backtraces you have to the linux-kernel mailing list yet? All
you need to do is compose an email with the attachment you have
already posted here
On 09/10/2009 09:38 AM, David Saez Padros wrote:
In particular, if you read about the intent of FUSE - the technology
being used to create a file system, I think you will find that what
Anand is saying is the *exact* purpose for this project.
the lockups are on the server side, not the client side
David,
We really appreciate you taking time to search through the sources
and documentation and suggest improvements. Please start a new email
thread for this so that it gets suitable attention. I'm quite certain
a lot of us have stopped reading posts on this thread somewhere in the
middle as it
Hi
Really, this was only _one_ quick example of which there are numerous in your
code. Look at all CALLOC/MALLOC calls. Most of them are not safe.
the code is full of those; a quick wingrep shows at least these:
ib-verbs.c lines 422, 1648, 1801, 2322
socket.c lines 285 and 866
common-utils.c
On Thu, 10 Sep 2009 21:20:04 +0530
Krishna Srinivas wrote:
> Now, failing to check for NULL pointer here is a bug which we will fix
> in future releases (blame it on our laziness for not doing the check
> already!) Thanks for pointing it out.
Really, this was only _one_ quick example of which there are numerous in your code.
The glusterfs version I'm using is 2.0.6.
- Wei
On Thu, Sep 10, 2009 at 2:05 PM, Wei Dong wrote:
> Hi All,
>
> I complained about the low file creation rate with glusterfs on my
> cluster weeks ago and Avati suggested I start with a small number of
> nodes. I finally got some time to seriously
Hi All,
I complained about the low file creation rate with glusterfs on my
cluster weeks ago and Avati suggested I start with a small number of
nodes. I finally got some time to seriously benchmark glusterfs with
Bonnie++ today, and the results confirm that glusterfs is indeed slow in
terms
Hi
Instead of guessing and contemplating and using your brain cycles to figure out
the cause, have you instead taken the effort to post the kernel backtraces you
have to the linux-kernel mailing list yet? All you need to do is compose an
email with the attachment you have already posted here
> > Which actually reinforces the point that glusterfs has very little
> to
> > do with this kernel lockup. It is not even performing the special
> fuse
> > protocol communication with the kernel in question. Just plain
> vanilla
> > POSIX system calls on disk filesystem and send/recv on TCP/IP
>
On Thu, Sep 10, 2009 at 5:37 PM, Stephan von Krawczynski wrote:
>
>> > Only if backed up. Has the trace been shown to the linux developers?
>> > What do they think?
>
> Maybe we should just ask questions about the source before bothering others...
>
> From 2.0.6 /transport/socket/src/socket.c line 867 ff:
Hi
the lockups are on the server side, not the client side, and fuse is
not used on the server side
Which actually reinforces the point that glusterfs has very little to
do with this kernel lockup. It is not even performing the special fuse
protocol communication with the kernel in question. Just plain vanilla
POSIX system calls on the disk filesystem and send/recv on TCP/IP.
> the lockups are on the server side, not the client side, and fuse is
> not used on the server side
Which actually reinforces the point that glusterfs has very little to
do with this kernel lockup. It is not even performing the special fuse
protocol communication with the kernel in question. Just plain vanilla
POSIX system calls on the disk filesystem and send/recv on TCP/IP.
Hi
In particular, if you read about the intent of FUSE - the
technology being used to create a file system, I think you will find
that what Anand is saying is the *exact* purpose for this project.
the lockups are on the server side, not the client side, and fuse is
not used on the server side
On 09/10/2009 06:25 AM, Stephan von Krawczynski wrote:
On Wed, 09 Sep 2009 19:43:15 -0400
Mark Mielke wrote:
In this case, there are too many unknowns - but I agree with Anand's
logic 100%. Gluster should not be able to cause a CPU lockup. It should
be impossible. If it is not impossible -
It is important to understand that this application is a kind of core
technology for data storage. This means people want to be sure that their
setup does not explode just because they made a kernel update or some other
change where their experience tells them it should have no influence on the
> > Only if backed up. Has the trace been shown to the linux developers?
> > What do they think?
Maybe we should just ask questions about the source before bothering others...
From 2.0.6 /transport/socket/src/socket.c line 867 ff:
new_trans = CALLOC (1, sizeof (*new_trans));
I'm testing the nufa translator and noticed that it doesn't have any
effect in terms of performance.
I'm currently testing a 1-node cluster which is server + client.
The test:
on a normal hard disk
time dd if=/dev/zero of=file bs=10240 count=1000
results
10240 bytes (102 MB) copied, 1.02154 seconds, 100 MB/s
eagleeyes wrote:
Hi:
I updated gluster from 2.0.6 to 2.1.0pre1, but when I start glusterfs
an error happens:
glusterfs -V
glusterfs: symbol lookup error: glusterfs: undefined symbol:
gf_proc_dump_info
Can you please check if libglusterfs.so has been installed properly?
Hi,
I have a problem mounting a client on a server while using nufa.
When I mount without nufa (on the server) it's OK; when adding the nufa part
I see (df -a):
df: `/mnt/cluster': Transport endpoint is not connected
My conf is :
volume posix
type storage/posix
option directory /data
end-volume
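The fragment above shows only the posix brick. For orientation, a nufa setup in 2.x-style volfile syntax might look roughly like the sketch below; hostnames, volume names, and the nufa options are illustrative assumptions and should be checked against the documentation for your release:

```
# Server side (sketch; names are illustrative)
volume posix
  type storage/posix
  option directory /data
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.posix.allow *
  subvolumes posix
end-volume

# Client side: a local and a remote brick joined by cluster/nufa
volume local
  type protocol/client
  option transport-type tcp
  option remote-host 127.0.0.1
  option remote-subvolume posix
end-volume

volume remote
  type protocol/client
  option transport-type tcp
  option remote-host server2
  option remote-subvolume posix
end-volume

volume nufa
  type cluster/nufa
  option local-volume-name local
  subvolumes local remote
end-volume
```

A "Transport endpoint is not connected" from df usually means the client volfile failed to connect to one of its protocol/client subvolumes, so the client log is the first place to look.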
On Wed, 09 Sep 2009 19:43:15 -0400
Mark Mielke wrote:
> >
> > On Wed, 9 Sep 2009 23:17:07 +0530
> > Anand Avati wrote:
> >
> >
> >> Please reply back to this thread only after you have a response from
> >> the appropriate kernel developer indicating that the cause of this
> >> lockup is beca
Hi:
I updated gluster from 2.0.6 to 2.1.0pre1, but when I start glusterfs
an error happens:
glusterfs -V
glusterfs: symbol lookup error: glusterfs: undefined symbol:
gf_proc_dump_info
SUSE11 -- 2.6.27.19-5-xen
gdb glusterfs core.22801
GNU gdb (GDB)
Speaking about migration, it would be useful to know in which cases it is
possible to migrate from a standard filesystem to glusterfs
by simply turning the directories into glusterfs volumes and exporting them (I'm
specifically thinking about distribute). If this is possible without
physically copying the data it