On 15/07/15 04:06, Eric Eastman wrote:
Hi John,
I cut the test down to a single client running only Ganesha NFS
without any ceph drivers loaded on the Ceph FS client. After deleting
all the files in the Ceph file system and rebooting all the nodes, I
restarted the create 5 million file test using 2 NFS clients to the
one Ceph file system.
I changed the "mds_cache_size" to 50 from 10 to get rid of the
WARN temporarily.
Dumping the mds daemon now shows this:
    "inode_max": 50,
    "inodes": 124213,
But I have no idea: if "inodes" rises above 50, should I change the
"mds_cache_size" again?
Thanks.
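For reference, a minimal way to watch those counters and to raise the limit
might look like the commands below ("mds.a" and "500000" are illustrative
placeholders, not values taken from this thread):

    # Inspect the MDS cache counters over the admin socket; "inodes" and
    # "inode_max" appear in the mds_mem section of the output.
    ceph daemon mds.a perf dump

    # Raise the cache limit on the running MDS without a restart.
    ceph tell mds.a injectargs '--mds_cache_size 500000'

    # To make the change persistent, set it in ceph.conf on the MDS host:
    # [mds]
    #     mds cache size = 500000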
Thanks John. I will back the test down to the simple case of 1 client
without the kernel driver and only running NFS Ganesha, and work forward
till I trip the problem and report my findings.
Eric
On Mon, Jul 13, 2015 at 2:18 AM, John Spray wrote:
On 13/07/2015 04:02, Eric Eastman wrote:
Hi John,
I am seeing this problem with Ceph v9.0.1 with the v4.1 kernel on all
nodes. This system is using 4 Ceph FS client systems. They all have
the kernel driver version of CephFS loaded, but none are mounting the
file system. All 4 clients are using the libcephfs VFS interface to
Ganesha NFS.
In my last email I stated that the clients were not mounting the file
system with the CephFS kernel driver. Re-checking the client systems, the
file system is in fact mounted, but all the IO is going through Ganesha NFS
using the Ceph file system library interface.
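For reference, a quick way to check which path the IO is actually taking on
a client could be something like this, run on the client host (showmount
assumes the standard NFS utilities are installed, and only reports exports
served over NFSv3/MNT, so an NFSv4-only Ganesha setup may show nothing):

    # Is the CephFS kernel client mounted anywhere on this host?
    mount -t ceph

    # Is Ganesha exporting the file system over NFS on this host?
    showmount -e localhost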
Thank you, John.
All my servers are Ubuntu 14.04 with the 3.16 kernel.
Not all of the clients show this problem, and the cluster seems to be
functioning well now.
As you say, I will change the mds_cache_size to 50 from 10 and run a
test. Thanks again!
2015-07-10 17:00 GMT+08:00 John Spray:
This is usually caused by use of older kernel clients. I don't remember
exactly what version it was fixed in, but iirc we've seen the problem
with 3.14 and seen it go away with 3.18.
If your system is otherwise functioning well, this is not a critical
error -- it just means that the MDS might not be able to keep its cache
size within the configured limit while that client holds on to its cached
inodes.
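For anyone hitting the same warning, a quick sanity check along these lines
might help, run on each CephFS client:

    # Kernel version in use -- per the above, 3.18 or newer is the safer range.
    uname -r

    # Is the kernel CephFS client (ceph/libceph modules) even loaded here?
    lsmod | grep ceph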
Hi,
I use CephFS in a production environment with 7 OSDs, 1 MDS and 3 MONs now.
So far so good, but I ran into a problem with it today.
The ceph status reports this:
    cluster ad3421a43-9fd4-4b7a-92ba-09asde3b1a228
     health HEALTH_WARN
            mds0: Client 34271 failing to respond to cache pressure
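If it helps, one way to find out which machine a client id in that warning
belongs to is to ask the MDS directly ("mds.a" below is a placeholder for
your MDS name):

    # Show the full health message, including the client id.
    ceph health detail

    # List the sessions the MDS currently holds; the entry whose id matches
    # the one in the warning identifies the client that is not releasing
    # its cached inodes.
    ceph daemon mds.a session ls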