Roger that!
2015-09-14 12:14 GMT+08:00 Shinobu Kinjo :
> Before dumping core:
>
> ulimit -c unlimited
>
> After dumping core:
>
> gdb
>
> # Just do backtrace
> (gdb) bt
>
> # There would be some signal.
> Then give full output to us.
>
> Shinobu
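A minimal end-to-end sketch of the above on Ubuntu, assuming apport caught the
crash under /var/crash (the report name and paths here are assumptions, not
taken from this thread):

# Enable core dumps in the shell that will start ceph-fuse
ulimit -c unlimited

# If apport already captured the crash, unpack it to get at the core file
# (report name is an assumption, based on the file mentioned later in the thread)
apport-unpack /var/crash/_usr_bin_ceph-fuse.0.crash /tmp/ceph-fuse-crash

# Load the binary plus the core into gdb, then print the backtrace
gdb /usr/bin/ceph-fuse /tmp/ceph-fuse-crash/CoreDump
(gdb) bt
(gdb) thread apply all bt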
11:24 GMT+08:00 Shinobu Kinjo :
> Yes, that is exactly what I'm going to do.
> Thanks for your follow-up.
>
> Shinobu
>
> - Original Message -----
> From: "Zheng Yan"
> To: "谷枫"
> Cc: "Shinobu Kinjo" , "ceph-users" <
I deployed the cluster with ceph-deploy, and the script was made by ceph-deploy.
2015-09-14 10:01 GMT+08:00 Shinobu Kinjo :
> Did you make that script, or is it there by default?
>
> Shinobu
>
> - Original Message -----
> From: "谷枫"
> To: "Shinobu Kinjo"
>
me the same as the crash time, mainly.
2015-09-14 9:48 GMT+08:00 谷枫 :
> Hi,Shinobu
>
> I found the logrotate script at /etc/logrotate.d/ceph. In this script the osd,
> mon, and mds daemons are reloaded when rotation is done.
> The logrotate run and the ceph-fuse crashes mostly happen at the same time.
> So I think the problem may be caused by this reload.
            id="${f#*-}"
            initctl reload ceph-$daemon cluster="$cluster" id="$id" 2>/dev/null || :
        fi
    done
done
***
Thank you!
2015-09-14 8:50 GMT
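A rough way to check that the logrotate run and the ceph-fuse crashes really
line up in time (the paths below are Ubuntu defaults and an assumption here):

# When did logrotate last rotate the ceph logs?
grep ceph /var/lib/logrotate/status

# When did ceph-fuse crash? The apport reports and kernel log carry timestamps.
ls -l /var/crash/_usr_bin_ceph-fuse.*.crash
grep -i ceph-fuse /var/log/syslog /var/log/kern.log | tail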
tar cvf .tar \
> > /sys/class/net//statistics/*
>
> When did you face this issue?
> From the beginning or...?
>
> Shinobu
>
> - Original Message -
> From: "谷枫"
> To: "Shinobu Kinjo"
> Cc: "ceph-users"
> Sent: Sunday, Sept
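The placeholders in that tar command were apparently stripped by the archive;
a hedged example of the same idea (the interface name eth0 and the output file
name are assumptions):

IFACE=eth0   # assumption: substitute the real interface name
tar cvf net-stats-$(hostname).tar /sys/class/net/$IFACE/statistics/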
ceph -s
>
> Shinobu
>
> - Original Message -
> From: "谷枫"
> To: "Shinobu Kinjo"
> Cc: "ceph-users"
> Sent: Sunday, September 13, 2015 10:51:35 AM
> Subject: Re: [ceph-users] ceph-fuse auto down
>
> Sorry Shinobu,
> I don't understand
Sorry Shinobu,
I don't understand the meaning of what you pasted.
Multiple ceph-fuse instances crashed just now today.
ceph-fuse is completely unusable for me now.
Maybe I must switch to the kernel mount instead.
2015-09-12 20:08 GMT+08:00 Shinobu Kinjo :
> In _usr_bin_ceph-fuse.0.crash.client2.tar
>
> What I'm
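If switching from ceph-fuse to the kernel client is the way to go, a minimal
sketch of such a mount (monitor address, mount point, and secret file are
assumptions; 10.3.1.4 is used only because that address appears elsewhere in
this thread):

sudo mkdir -p /mnt/cephfs
# name= is a cephx user, secretfile= its plain-text secret from the keyring
sudo mount -t ceph 10.3.1.4:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret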
> I went to the dentist -;
> Please do not forget to CC ceph-users from next time, because there are a
> bunch of really **awesome** guys there;
>
> Can you re-attach the log files again so that they can see them?
>
> Shinobu
>
> - Original Message -
> From: "谷枫"
> To: "
Ubuntu, sorry for that.
> How about:
>
> /var/log/dmesg
>
> I believe you can attach files instead of pasting.
> Pasting a bunch of logs would not be good for me -;
>
> And when did you notice that cephfs was hung?
>
> Shinobu
>
> - Original Message -
> From: "
ach?
>
> Shinobu
>
> - Original Message -
> From: "谷枫"
> To: "ceph-users"
> Sent: Saturday, September 12, 2015 1:30:49 PM
> Subject: [ceph-users] ceph-fuse auto down
>
> Hi,all
> My cephfs cluster is deployed on three nodes with Ceph Hammer 0.94.3
Hi all,
My cephfs cluster is deployed on three nodes with Ceph Hammer 0.94.3 on Ubuntu
14.04; the kernel version is 3.19.0.
I mount cephfs with ceph-fuse on 9 clients, but some of them (the ceph-fuse
processes) go down automatically sometimes, and I can't find the reason; it
seems like there are no other logs that can be found ex
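Since there are no useful logs, raising client-side logging before the next
crash might help; a sketch of a ceph.conf addition on an affected client (log
path and debug levels are assumptions, adjust as needed):

# Append a [client] section on the client node, then remount with ceph-fuse
cat >> /etc/ceph/ceph.conf <<'EOF'
[client]
    log file = /var/log/ceph/ceph-client.$name.$pid.log
    debug client = 20
    debug ms = 1
EOF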
"mds_cache_size" again?
Thanks.
2015-07-15 13:34 GMT+08:00 谷枫 :
> I changed the "mds_cache_size" to 50 from 10 to get rid of the
> WARN temporarily.
> Now dumping the mds daemon shows this:
> "inode_max": 50,
> "
means that the MDS might not be able to fully control its memory
> usage (i.e. it can exceed mds_cache_size).
>
> John
>
> On 10/07/2015 05:25, 谷枫 wrote:
>
>> hi,
>> I use CephFS in a production environment with 7 OSDs, 1 MDS, and 3 MONs now.
>> So far so good, but I have
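For reference, a hedged sketch of inspecting and raising mds_cache_size through
the MDS admin socket (run on the MDS host; the daemon name mds.a and the value
500000 are assumptions):

# Current limit and actual number of cached inodes
ceph daemon mds.a config show | grep mds_cache_size
ceph daemon mds.a perf dump | grep '"inodes"'

# Raise the limit at runtime; also set it in ceph.conf to survive restarts
ceph daemon mds.a config set mds_cache_size 500000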
hi,
I use CephFS in a production environment with 7 OSDs, 1 MDS, and 3 MONs now.
So far so good, but I have a problem with it today.
The ceph status reports this:
cluster ad3421a43-9fd4-4b7a-92ba-09asde3b1a228
health HEALTH_WARN
mds0: Client 34271 failing to respond to cache pressure
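To see which client session is behind that cache-pressure warning, the MDS
admin socket can list sessions; a sketch, assuming the daemon is reachable as
mds.a on its host:

# Match the client id (34271) from the health warning against the session list
ceph daemon mds.a session ls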
l=1
c=0x1112b860).reader bad tag 116
2015-06-05 09:58:44.812270 7fac0de07700 0 -- 10.3.1.5:68**/ >>
**.3.1.4:0/18748 pipe(0x146e3000 sd=47 :6801 s=2 pgs=1359 cs=1 l=1
c=0x15310ec0).reader bad tag 32
2015-06-05 22:41 GMT+08:00 谷枫 :
> Sorry, I sent this mail carelessly; to continue:
> The md
l=0
c=0x2c9b790).fault, initiating reconnect
2015-06-05 14:31:46.433065 7fed2c579700 0 -- 10.3.1.4:0/14285 >>
10.3.1.5:6800/1365 pipe(0x2cab000 sd=9 :57017 s=1 pgs=186 cs=2 l=0
c=0x2c9b790).fault
2015-06-05 22:51 GMT+08:00 John Spray :
>
>
> On 05/06/2015 15:41, 谷枫 wrote:
>
>
's get normal.
I want to know: is this a ceph bug, or the ceph-fuse tool's bug?
Should I change the mount type to mount -t ceph?
2015-06-05 22:34 GMT+08:00 谷枫 :
> Hi everyone,
> I have a five-node ceph cluster with cephfs. I mount the ceph partition with
> the ceph-fuse tool
Hi everyone,
I have a five-node ceph cluster with cephfs. I mount the ceph partition with
the ceph-fuse tool.
I met a serious problem with no warning at all.
On one of the nodes the ceph-fuse process went down, and the ceph partition
that was mounted with ceph-fuse became unavailable.
ls on the ceph partition, it's lik
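When a ceph-fuse process dies like this, the mount point is typically left in a
broken "Transport endpoint is not connected" state; a hedged sketch of cleaning
it up and remounting (mount point and monitor address are assumptions):

# Detach the dead FUSE mount (lazy unmount as a fallback)
sudo fusermount -uz /mnt/cephfs || sudo umount -l /mnt/cephfs

# Start ceph-fuse again
sudo ceph-fuse -m 10.3.1.4:6789 /mnt/cephfs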