Re: [ceph-users] Giant + nfs over cephfs hang tasks

2014-11-29 Thread Ilya Dryomov
On Sat, Nov 29, 2014 at 2:13 AM, Andrei Mikhailovsky and...@arhont.com wrote: Ilya, here is what I got shortly after starting the dd test: [ 288.307993] [ 288.308004] = [ 288.308008] [ INFO: possible irq lock inversion dependency

Re: [ceph-users] Giant + nfs over cephfs hang tasks

2014-11-29 Thread Andrei Mikhailovsky
Ilya, so, what is the best action plan now? Should I continue using the kernel that you've sent me? I am running production infrastructure and am not sure if this is the right way forward. Do you have a patch by any chance against the LTS kernel that I can use to recompile the ceph module?
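For reference, a minimal sketch of rebuilding a distribution kernel with such a patch applied (assuming an Ubuntu-style LTS source package and a hypothetical patch file named cephfs-fix.patch; the real patch would come from Ilya):

    apt-get source linux-image-$(uname -r)     # fetch the running kernel's source
    cd linux-*/
    patch -p1 < ../cephfs-fix.patch            # apply the CephFS/libceph fix
    cp /boot/config-$(uname -r) .config        # reuse the current configuration
    make olddefconfig
    make -j"$(nproc)" && make modules_install && make install
    reboot                                     # boot into the patched kernel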

[ceph-users] s3-tests with giant/radosgw, many failures with fastcgi

2014-11-29 Thread Anthony Alba
I am seeing a lot of failures with Giant/radosgw and s3-tests, particularly with fastcgi. I am using the community-patched apache and fastcgi. civetweb is doing much better. 1. Both tests hang at s3tests.functional.test_headers.test_object_create_bad_contentlength_mismatch_above I have to exclude this
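A hedged sketch of skipping that hanging case with the nosetests exclude flag, following the usual s3-tests virtualenv layout (the config file name is a placeholder):

    S3TEST_CONF=s3tests.conf ./virtualenv/bin/nosetests -v \
        -e test_object_create_bad_contentlength_mismatch_above \
        s3tests.functional.test_headers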

Re: [ceph-users] Giant + nfs over cephfs hang tasks

2014-11-29 Thread Ilya Dryomov
On Sat, Nov 29, 2014 at 2:33 AM, Andrei Mikhailovsky and...@arhont.com wrote: Ilya, not sure if the dmesg output in my previous message is related to cephfs, but from what I can see it looks good with your kernel. Previously I would have seen hang tasks by now, but not anymore. I've run a bunch of concurrent

Re: [ceph-users] Giant + nfs over cephfs hang tasks

2014-11-29 Thread Andrei Mikhailovsky
Ilya, I will give it a try and get back to you shortly, Andrei - Original Message - From: Ilya Dryomov ilya.dryo...@inktank.com To: Andrei Mikhailovsky and...@arhont.com Cc: ceph-users ceph-users@lists.ceph.com Sent: Saturday, 29 November, 2014 10:40:48 AM Subject: Re:

[ceph-users] Ceph Degraded

2014-11-29 Thread Georgios Dimitrakakis
Hi all! I am setting up a new cluster with 10 OSDs and the state is degraded! # ceph health HEALTH_WARN 940 pgs degraded; 1536 pgs stuck unclean # There are only the default pools # ceph osd lspools 0 data,1 metadata,2 rbd, with each one having 512 pg_num and 512 pgp_num # ceph osd dump
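A few checks that usually explain degraded/unclean PGs on a brand-new cluster (hedged; the pool names are the defaults listed above):

    ceph osd tree                            # are all 10 OSDs up and in, spread over enough hosts?
    ceph osd dump | grep 'replicated size'   # pool size/min_size vs. the number of hosts CRUSH can choose from
    ceph pg dump_stuck unclean | head        # which PGs are stuck, and on which OSDs they sit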

Re: [ceph-users] Ceph Degraded

2014-11-29 Thread Andrei Mikhailovsky
I think I had a similar issue recently when I added a new pool. All PGs that corresponded to the new pool were shown as degraded/unclean. After doing a bit of testing I realized that my issue was down to this: replicated size 2, min_size 2; the replicated size and min_size were the same. In
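A hedged example of the corresponding fix, dropping min_size below the replica count (the pool name data is only illustrative):

    ceph osd pool get data size
    ceph osd pool get data min_size
    ceph osd pool set data min_size 1        # min_size should normally be smaller than size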

Re: [ceph-users] Giant + nfs over cephfs hang tasks

2014-11-29 Thread Andrei Mikhailovsky
Ilya, I think I spoke too soon in my last message. I've now given it more load (running 8 concurrent dds with bs=4M) and about a minute or so after starting I've seen problems in the dmesg output. I am attaching the kern.log file for your reference. Please check starting with the following line: Nov
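Roughly the kind of load described, as a sketch (the NFS mount point and file names are placeholders): 8 parallel sequential writers with 4M blocks.

    for i in $(seq 1 8); do
        dd if=/dev/zero of=/mnt/nfs-cephfs/ddtest.$i bs=4M count=2500 oflag=direct &
    done
    wait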

Re: [ceph-users] Giant + nfs over cephfs hang tasks

2014-11-29 Thread Ilya Dryomov
On Sat, Nov 29, 2014 at 3:10 PM, Andrei Mikhailovsky and...@arhont.com wrote: Ilya, The 3.17.4 kernel that you've given me is also good so far; no hang tasks as seen before. However, I do have the same message in dmesg as with the 3.18 kernel that you've sent. This message I've not seen in the

Re: [ceph-users] Giant + nfs over cephfs hang tasks

2014-11-29 Thread Ilya Dryomov
On Sat, Nov 29, 2014 at 3:22 PM, Andrei Mikhailovsky and...@arhont.com wrote: Ilya, I think I spoke too soon in my last message. I've now given it more load (running 8 concurrent dds with bs=4M) and about a minute or so after starting I've seen problems in the dmesg output. I am attaching

Re: [ceph-users] Giant + nfs over cephfs hang tasks

2014-11-29 Thread Ilya Dryomov
On Sat, Nov 29, 2014 at 3:49 PM, Ilya Dryomov ilya.dryo...@inktank.com wrote: On Sat, Nov 29, 2014 at 3:22 PM, Andrei Mikhailovsky and...@arhont.com wrote: Ilya, I think I spoke too soon in my last message. I've now given it more load (running 8 concurrent dds with bs=4M) and about a minute

Re: [ceph-users] ceph RDB question

2014-11-29 Thread Smart Weblications GmbH - Florian Wiessner
Hi, On 26.11.2014 23:36, Geoff Galitz wrote: Hi. If I create an RBD instance, and then use fusemount to access it from various locations as a POSIX entity, I assume I'll need to create a filesystem on it. To access it from various remote servers I assume I'd also need a
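For context, a hedged sketch of the single-client case (the pool and image names are illustrative): map the image and put an ordinary filesystem on it. Mounting that same filesystem from several servers at once would instead need a cluster filesystem such as OCFS2/GFS2 on the image, or CephFS.

    rbd create mypool/myimage --size 102400   # 100 GB image
    rbd map mypool/myimage                    # appears as e.g. /dev/rbd0
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /mnt/myimage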

Re: [ceph-users] Giant + nfs over cephfs hang tasks

2014-11-29 Thread Gregory Farnum
Ilya, do you have a ticket reference for the bug? Andrei, we run NFS tests on CephFS in our nightlies and it does pretty well, so in the general case we expect it to work. Obviously not at the moment with whatever bug Ilya is looking at, though. ;) -Greg On Sat, Nov 29, 2014 at 4:51 AM Ilya Dryomov
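For readers following the thread, a hedged sketch of the setup under discussion: a kernel CephFS mount re-exported over knfsd (the monitor address, paths, and keyfile are placeholders; an fsid= option is needed in the export because CephFS has no block-device UUID).

    mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    # /etc/exports:
    # /mnt/cephfs  192.168.0.0/24(rw,no_subtree_check,fsid=100)
    exportfs -ra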

Re: [ceph-users] Deleting buckets and objects fails to reduce reported cluster usage

2014-11-29 Thread Ben
On 29/11/14 11:40, Yehuda Sadeh wrote: On Fri, Nov 28, 2014 at 1:38 PM, Ben b@benjackson.email wrote: On 29/11/14 01:50, Yehuda Sadeh wrote: On Thu, Nov 27, 2014 at 9:22 PM, Ben b@benjackson.email wrote: On 2014-11-28 15:42, Yehuda Sadeh wrote: On Thu, Nov 27, 2014 at 2:15 PM, b
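Deleted radosgw objects are reclaimed asynchronously by the garbage collector, so reported usage can lag behind bucket and object deletion; a hedged pair of commands to inspect and trigger it:

    radosgw-admin gc list --include-all | head   # objects still queued for reclamation
    radosgw-admin gc process                     # run a garbage-collection pass now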

[ceph-users] Rebuild OSD's

2014-11-29 Thread Lindsay Mathieson
I have 2 OSDs on two nodes, on top of ZFS, that I'd like to rebuild in a more standard (xfs) setup. Would the following be a non-destructive, if somewhat tedious, way of doing so? Following the instructions from here:
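A hedged outline of the usual remove-and-recreate cycle, one OSD at a time (osd.0 and the device paths are placeholders; wait for HEALTH_OK / active+clean between steps):

    ceph osd out 0                       # let data drain off the OSD
    # ...wait for recovery to finish...
    service ceph stop osd.0              # stop the daemon
    ceph osd crush remove osd.0
    ceph auth del osd.0
    ceph osd rm 0
    # recreate on XFS, e.g. with ceph-disk:
    ceph-disk prepare --fs-type xfs /dev/sdb
    ceph-disk activate /dev/sdb1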

Re: [ceph-users] Tip of the week: don't use Intel 530 SSD's for journals

2014-11-29 Thread Gregory Farnum
That's not actually so unusual: http://techreport.com/review/26058/the-ssd-endurance-experiment-data-retention-after-600tb The manufacturers are pretty conservative with their ratings and warranties. ;) -Greg On Thu, Nov 27, 2014 at 2:41 AM Andrei Mikhailovsky and...@arhont.com wrote: Mark, if

[ceph-users] Actual size of rbd vm images

2014-11-29 Thread Lindsay Mathieson
According to the docs, Ceph block devices are thin provisioned. But how do I list the actual size of VM images hosted on Ceph? I do something like: rbd ls -l rbd, but that only lists the provisioned sizes, not the real usage. thanks, -- Lindsay

Re: [ceph-users] Actual size of rbd vm images

2014-11-29 Thread Haomai Wang
Yeah, we still have no way to inspect the actual usage of an image, but we already have an existing blueprint to implement it. https://wiki.ceph.com/Planning/Blueprints/Hammer/librbd%3A_shared_flag%2C_object_map On Sun, Nov 30, 2014 at 9:13 AM, Lindsay Mathieson lindsay.mathie...@gmail.com wrote: According to

Re: [ceph-users] Actual size of rbd vm images

2014-11-29 Thread Lindsay Mathieson
On Sun, 30 Nov 2014 11:37:06 AM Haomai Wang wrote: Yeah, we still have no way to inspect the actual usage of an image, but we already have an existing blueprint to implement it. https://wiki.ceph.com/Planning/Blueprints/Hammer/librbd%3A_shared_flag%2C_object_map Thanks, good to know. I did find this:
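A workaround commonly used at the time (hedged; the image name is only an example) is to sum the allocated extents reported by rbd diff, whose second column is the extent length in bytes:

    rbd diff rbd/vm-100-disk-1 | awk '{ sum += $2 } END { printf "%.1f MB\n", sum/1024/1024 }'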

Re: [ceph-users] Giant + nfs over cephfs hang tasks

2014-11-29 Thread Ilya Dryomov
On Sun, Nov 30, 2014 at 1:19 AM, Gregory Farnum g...@gregs42.com wrote: Ilya, do you have a ticket reference for the bug? Opened a ticket, assigned to myself. http://tracker.ceph.com/issues/10208 Andrei, we run NFS tests on CephFS in our nightlies and it does pretty well so in the general