Hello guys,
I've got a bunch of hung tasks from the nfsd service running over the cephfs
(kernel) mounted file system. Here is an example of one of them.
[433079.991218] INFO: task nfsd:32625 blocked for more than 120 seconds.
[433080.029685] Not tainted 3.15.10-031510-generic #201408132333
One more thing I forgot to mention: all the failures I've seen happen while
a deep scrubbing process is running.
Andrei
- Original Message -
From: Andrei Mikhailovsky and...@arhont.com
To: sj...@redhat.com
Cc: ceph-users@lists.ceph.com
Sent: Thursday, 27 November, 2014
Hi!
Thanks for the quick fix!
However, the compilation is still failing because kinetic.h (included by
KineticStore.h) is not present in the source files:
root@host:~/sources/ceph# ./configure --with-kinetic
root@host:~/sources/ceph# make
[...]
CXX os/libos_la-KeyValueDB.lo
CXX
I am also noticing some delays working with nfs over cephfs, especially when
making an initial connection. For instance, I run the following:
# time for i in {0..10} ; do time touch /tmp/cephfs/test-$i ; done
where /tmp/cephfs is the nfs mount point running over cephfs
I am noticing that
I've just tried the latest ubuntu-vivid kernel and I'm also seeing hung tasks with
dd tests:
[ 3721.026421] INFO: task nfsd:16596 blocked for more than 120 seconds.
[ 3721.065141] Not tainted 3.17.4-031704-generic #201411211317
[ 3721.103721] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Hi everyone,
I'd like to come back to a discussion from 2012 (thread at
http://marc.info/?l=ceph-devel&m=134808745719233) to estimate the
expected MDS memory consumption from file metadata caching. I am certain
the following is full of untested assumptions, some of which are
probably inaccurate,
Hi,
After some more tests we've found that max_sectors_kb is the reason large IOs
are being split.
We increased it to 4MB:
echo 4096 > /sys/block/vdb/queue/max_sectors_kb
and now fio/iostat show that reads up to 4MB are getting through to the block
device unsplit.
We use 4MB to match the
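Note that a max_sectors_kb value written via sysfs does not persist across
reboots. A minimal sketch of reapplying it at boot (assuming the vdb device
from this thread; the rc.local hook is illustrative, a udev rule would also work):

#!/bin/sh
# e.g. appended to /etc/rc.local: reapply the larger request size at boot
echo 4096 > /sys/block/vdb/queue/max_sectors_kb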
Yeah, the Ceph source repo doesn't contain the Kinetic header file and library
source; you need to install the kinetic devel package separately.
On Fri, Nov 28, 2014 at 7:02 PM, Julien Lutran julien.lut...@ovh.net wrote:
Hi!
Thanks for the quick fix!
However, the compilation is still failing because
On 11/28/2014 01:04 PM, Florian Haas wrote:
Hi everyone,
I'd like to come back to a discussion from 2012 (thread at
http://marc.info/?l=ceph-devel&m=134808745719233) to estimate the
expected MDS memory consumption from file metadata caching. I am certain
the following is full of untested
On Fri, Nov 28, 2014 at 3:14 PM, Wido den Hollander w...@42on.com wrote:
On 11/28/2014 01:04 PM, Florian Haas wrote:
Hi everyone,
I'd like to come back to a discussion from 2012 (thread at
http://marc.info/?l=ceph-devel&m=134808745719233) to estimate the
expected MDS memory consumption from
On 11/28/2014 03:22 PM, Florian Haas wrote:
On Fri, Nov 28, 2014 at 3:14 PM, Wido den Hollander w...@42on.com wrote:
On 11/28/2014 01:04 PM, Florian Haas wrote:
Hi everyone,
I'd like to come back to a discussion from 2012 (thread at
http://marc.info/?l=ceph-devel&m=134808745719233) to
Dan, are you setting this on the guest VM side? Did you run some tests to see
if this impacts performance, like small block size performance, etc.?
Cheers
- Original Message -
From: Dan Van Der Ster daniel.vanders...@cern.ch
To: ceph-users ceph-users@lists.ceph.com
Sent: Friday, 28
On Fri, Nov 28, 2014 at 3:29 PM, Wido den Hollander w...@42on.com wrote:
On 11/28/2014 03:22 PM, Florian Haas wrote:
On Fri, Nov 28, 2014 at 3:14 PM, Wido den Hollander w...@42on.com wrote:
On 11/28/2014 01:04 PM, Florian Haas wrote:
Hi everyone,
I'd like to come back to a discussion from
Hi Andrei,
Yes, I’m testing from within the guest.
Here is an example. First, I do 2MB reads when the max_sectors_kb=512, and we
see the reads are split into 4. (fio sees 25 iops, though iostat reports 100
smaller iops):
# echo 512 > /sys/block/vdb/queue/max_sectors_kb   # this is the default
#
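For reference, a sketch of the kind of fio job that exercises this (the
parameters are illustrative, not necessarily the exact command used here):

# 2MB sequential direct reads against the virtio disk; with
# max_sectors_kb=512 each 2MB request is split into four 512KB requests
# at the block layer, so iostat reports 4x the iops that fio sees
fio --name=bigread --filename=/dev/vdb --rw=read --bs=2M \
    --direct=1 --ioengine=libaio --iodepth=1 --runtime=30 --time_based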
On Thu, Nov 27, 2014 at 9:22 PM, Ben b@benjackson.email wrote:
On 2014-11-28 15:42, Yehuda Sadeh wrote:
On Thu, Nov 27, 2014 at 2:15 PM, b b@benjackson.email wrote:
On 2014-11-27 11:36, Yehuda Sadeh wrote:
On Wed, Nov 26, 2014 at 3:49 PM, b b@benjackson.email wrote:
On 2014-11-27 10:21,
Hi,
I would like to shrink a thin-provisioned rbd image which has grown to its
maximum size. 90% of the data in the image is deleted data which is still hidden
in the image and marked as deleted.
So I think I can fill the whole image with zeroes and then qemu-img convert it.
So the newly created image
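A minimal sketch of that zero-fill-then-convert approach (the pool and image
names are illustrative; it assumes the guest can be shut down for the convert
step):

# inside the guest: overwrite free space with zeroes, then remove the filler
# (dd exits with ENOSPC once the filesystem is full, which is expected)
dd if=/dev/zero of=/zerofile bs=1M
rm /zerofile
sync

# on the host, with the guest shut down: rewrite the image; qemu-img
# detects zeroed blocks and does not write them to the new image
qemu-img convert -O raw rbd:rbd/old-image rbd:rbd/new-image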
Take a look at
http://ceph.com/docs/master/rbd/qemu-rbd/#enabling-discard-trim
I think if you enable TRIM support on your RBD, then run fstrim on your
filesystems inside the guest (assuming ext4 / XFS guest filesystem),
Ceph should reclaim the trimmed space.
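A sketch of what that can look like in practice (the qemu options are
illustrative; discard support generally requires virtio-scsi or IDE rather
than virtio-blk on qemu of this era):

# host: attach the RBD image with discard enabled, via virtio-scsi
qemu-system-x86_64 ... \
  -drive file=rbd:rbd/myimage,format=raw,if=none,id=drive0,discard=unmap \
  -device virtio-scsi-pci -device scsi-hd,drive=drive0

# guest: trim freed blocks so Ceph can reclaim the space
fstrim -v /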
On 28/11/14 17:05, Christoph Adomeit
On Fri, Nov 28, 2014 at 5:46 PM, Dan Van Der Ster
daniel.vanders...@cern.ch wrote:
Hi Andrei,
Yes, I’m testing from within the guest.
Here is an example. First, I do 2MB reads when the max_sectors_kb=512, and
we see the reads are split into 4. (fio sees 25 iops, though iostat reports
100
I've done some tests using ceph-fuse and it looks far more stable. I've not
experienced any issues so far with a ceph-fuse mount point exported over nfs.
Will do more stress testing and report back.
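For anyone reproducing this, a minimal sketch of the ceph-fuse-over-NFS setup
being tested (the mount point and export options are illustrative):

# mount CephFS with the FUSE client instead of the kernel client
ceph-fuse /mnt/cephfs

# /etc/exports -- fsid= is required because FUSE mounts have no stable device id
/mnt/cephfs *(rw,no_subtree_check,fsid=1)

# re-export and serve it over nfsd as before
exportfs -ra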
Is anyone else experiencing hung task issues using the ceph kernel module mount
method?
Thanks
-
On Fri, Nov 28, 2014 at 3:02 PM, Andrei Mikhailovsky and...@arhont.com wrote:
I've just tried the latest ubuntu-vivid kernel and I'm also seeing hung tasks
with dd tests:
[ 3721.026421] INFO: task nfsd:16596 blocked for more than 120 seconds.
[ 3721.065141] Not tainted
Ilya, yes I do! Like these from different osds:
[ 4422.212204] libceph: osd13 192.168.168.201:6819 socket closed (con state
OPEN)
Andrei
- Original Message -
From: Ilya Dryomov ilya.dryo...@inktank.com
To: Andrei Mikhailovsky and...@arhont.com
Cc: ceph-users
On Fri, Nov 28, 2014 at 8:13 PM, Andrei Mikhailovsky and...@arhont.com wrote:
Ilya, yes I do! Like these from different osds:
[ 4422.212204] libceph: osd13 192.168.168.201:6819 socket closed (con state
OPEN)
Can you by any chance try a kernel from [1] ? It's based on Ubuntu
config and unless
On Fri, Nov 28, 2014 at 8:19 PM, Ilya Dryomov ilya.dryo...@inktank.com wrote:
On Fri, Nov 28, 2014 at 8:13 PM, Andrei Mikhailovsky and...@arhont.com
wrote:
Ilya, yes I do! Like these from different osds:
[ 4422.212204] libceph: osd13 192.168.168.201:6819 socket closed (con state
OPEN)
Can
On Fri, Nov 28, 2014 at 8:20 PM, Ilya Dryomov ilya.dryo...@inktank.com wrote:
On Fri, Nov 28, 2014 at 8:19 PM, Ilya Dryomov ilya.dryo...@inktank.com
wrote:
On Fri, Nov 28, 2014 at 8:13 PM, Andrei Mikhailovsky and...@arhont.com
wrote:
Ilya, yes I do! Like these from different osds:
[
Where can I find this kinetic devel package ?
-- Julien
On 11/28/2014 03:04 PM, Haomai Wang wrote:
Yeah, the Ceph source repo doesn't contain the Kinetic header file and library
source; you need to install the kinetic devel package separately.
On Fri, Nov 28, 2014 at 7:02 PM, Julien Lutran
On Sat, Nov 29, 2014 at 5:19 AM, Julien Lutran julien.lut...@ovh.net wrote:
Where can I find this kinetic devel package ?
I guess you want this (C++ kinetic client)? It has kinetic.h at least.
https://github.com/Seagate/kinetic-cpp-client
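A hedged sketch of getting kinetic.h from that repo (the build steps are an
assumption based on a typical cmake project, not verified against this
repository):

git clone https://github.com/Seagate/kinetic-cpp-client.git
cd kinetic-cpp-client
cmake . && make && sudo make install   # should install kinetic.h and the library
cd ~/sources/ceph && ./configure --with-kinetic && make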
I will give it a go and let you know.
Cheers
- Original Message -
From: Ilya Dryomov ilya.dryo...@inktank.com
To: Andrei Mikhailovsky and...@arhont.com
Cc: ceph-users ceph-users@lists.ceph.com
Sent: Friday, 28 November, 2014 5:28:28 PM
Subject: Re: [ceph-users] Giant + nfs over
I'm confused about requirements for ceph services.
http://ceph.com/docs/master/start/hardware-recommendations/
Monitors simply maintain a master copy of the cluster map, so they are
not CPU intensive
and then in the RAM section:
Metadata servers and monitors must be capable of serving their data
On 29/11/14 01:50, Yehuda Sadeh wrote:
On Thu, Nov 27, 2014 at 9:22 PM, Ben b@benjackson.email wrote:
On 2014-11-28 15:42, Yehuda Sadeh wrote:
On Thu, Nov 27, 2014 at 2:15 PM, b b@benjackson.email wrote:
On 2014-11-27 11:36, Yehuda Sadeh wrote:
On Wed, Nov 26, 2014 at 3:49 PM, b
On Fri, 28 Nov 2014 08:56:24 PM Ilya Dryomov wrote:
which you are supposed to change on a per-device basis via sysfs.
Is there a way to do this for Windows VMs?
--
Lindsay
On Fri, 2014-11-28 at 16:37 -0500, Roman Naumenko wrote:
And if I understand correctly, monitors are the access points to the
cluster, so they should provide enough aggregated network output for
all connected clients based on number of OSDs in the cluster?
I'm not sure what you mean by
Ilya, here is what I got shortly after starting the dd test:
[ 288.307993]
[ 288.308004] =
[ 288.308008] [ INFO: possible irq lock inversion dependency detected ]
[ 288.308014] 3.18.0-rc6-ceph-00024-g72ca172 #1 Tainted: G E
[
Ilya,
not sure if the dmesg output in the previous email is related to cephfs, but from
what I can see it looks good with your kernel. I would have seen hung tasks by
now, but not anymore. I've run a bunch of concurrent dd tests and also the file
touch tests, and there are no more delays.
So, it
On Fri, Nov 28, 2014 at 1:38 PM, Ben b@benjackson.email wrote:
On 29/11/14 01:50, Yehuda Sadeh wrote:
On Thu, Nov 27, 2014 at 9:22 PM, Ben b@benjackson.email wrote:
On 2014-11-28 15:42, Yehuda Sadeh wrote:
On Thu, Nov 27, 2014 at 2:15 PM, b b@benjackson.email wrote:
On 2014-11-27 11:36,
On 29/11/14 11:40, Yehuda Sadeh wrote:
On Fri, Nov 28, 2014 at 1:38 PM, Ben b@benjackson.email wrote:
On 29/11/14 01:50, Yehuda Sadeh wrote:
On Thu, Nov 27, 2014 at 9:22 PM, Ben b@benjackson.email wrote:
On 2014-11-28 15:42, Yehuda Sadeh wrote:
On Thu, Nov 27, 2014 at 2:15 PM, b