Re: [ceph-users] Write freeze when writing to rbd image and rebooting one of the nodes

2015-05-14 Thread Vasiliy Angapov
Thanks, Robert, for sharing so much experience! I feel like I don't deserve it :) I have another, very similar situation which I don't understand. Last time I tried to hard-kill OSD daemons. This time I added a new node with 2 OSDs to my cluster and also monitored the IO. I wrote a script which adds
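(For reference, a minimal way to watch the cluster while new OSDs are being brought in, assuming a standard Hammer-era ceph CLI; these commands are illustrative and are not the script mentioned above:)

    # follow cluster state changes and client/recovery I/O continuously
    ceph -w
    # confirm the new node and its 2 OSDs show up in the CRUSH tree
    ceph osd tree
    # one-shot summary including degraded/misplaced object counts
    ceph -s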

Re: [ceph-users] Write freeze when writing to rbd image and rebooting one of the nodes

2015-05-14 Thread Robert LeBlanc
Can you provide the output of the CRUSH map and a copy of the script that you are using to add the OSDs? Can you also provide the pool size and pool min_size?
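(The requested information can be gathered roughly like this; "rbd" below is just a placeholder pool name:)

    # dump and decompile the current CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # replication settings of the pool in question
    ceph osd pool get rbd size
    ceph osd pool get rbd min_size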

Re: [ceph-users] Cisco UCS Blades as MONs? Pros cons ...?

2015-05-14 Thread Jake Young
I have 42 OSDs on 6 servers. I'm planning to double that this quarter by adding 6 more servers to get to 84 OSDs. I have 3 monitor VMs. Two of them are running on two different blades in the same chassis, but their networking is on different fabrics. The third one is on a blade in a different
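(If a chassis or fabric is lost, a quick way to confirm the remaining monitors still form a quorum, assuming the standard ceph CLI:)

    # show which monitors are currently in quorum
    ceph quorum_status --format json-pretty
    # summary of monitor membership and quorum
    ceph mon stat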

Re: [ceph-users] Complete freeze of a cephfs client (unavoidable hard reboot)

2015-05-14 Thread John Spray
On 14/05/2015 18:15, Francois Lafont wrote: Hi, I had a problem with a cephfs freeze in a client. Impossible to re-enable the mountpoint. A simple ls /mnt command totally blocked (of course impossible to umount-remount etc.) and I had to reboot the host. But even a normal reboot didn't work,

[ceph-users] Complete freeze of a cephfs client (unavoidable hard reboot)

2015-05-14 Thread Francois Lafont
Hi, I had a problem with a cephfs freeze on a client. It was impossible to re-enable the mountpoint: a simple ls /mnt command blocked completely (and of course it was impossible to umount/remount etc.), so I had to reboot the host. But even a normal reboot didn't work; the host didn't stop. I had to do a hard reboot
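(Before resorting to a hard reboot, it is sometimes possible to get rid of a hung CephFS mountpoint with a lazy or forced unmount, and to confirm the hang in the kernel log; a rough sketch, with /mnt standing in for the actual mountpoint:)

    # try a lazy unmount first, then a forced one
    umount -l /mnt || umount -f /mnt
    # look for hung-task or ceph-related messages in the kernel log
    dmesg | grep -iE 'hung|ceph'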

Re: [ceph-users] Complete freeze of a cephfs client (unavoidable hard reboot)

2015-05-14 Thread Gregory Farnum
On Thu, May 14, 2015 at 10:15 AM, Francois Lafont flafdiv...@free.fr wrote: Hi, I had a problem with a cephfs freeze in a client. Impossible to re-enable the mountpoint. A simple ls /mnt command totally blocked (of course impossible to umount-remount etc.) and I had to reboot the host. But

Re: [ceph-users] export-diff exported only 4kb instead of 200-600gb

2015-05-14 Thread Jason Dillaman
Interesting. The 'rbd diff' operation uses the same librbd API method as 'rbd export-diff' to calculate all the updated image extents, so it's very strange that one works and the other doesn't given that you have a validly formatted export. I tried to recreate your issues on Giant and was
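(The two code paths can be compared directly from the command line; pool, image and snapshot names below are placeholders:)

    # list the changed extents between two snapshots
    rbd diff pool/image@snap2 --from-snap snap1
    # export the same delta to a file
    rbd export-diff pool/image@snap2 --from-snap snap1 image.diff
    # sanity-check the size of the produced diff
    ls -lh image.diff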

Re: [ceph-users] Complete freeze of a cephfs client (unavoidable hard reboot)

2015-05-14 Thread Lee Revell
On Thu, May 14, 2015 at 2:47 PM, John Spray john.sp...@redhat.com wrote: Greg's response is pretty comprehensive, but for completeness I'll add that the specific case of shutdown blocking is http://tracker.ceph.com/issues/9477. I've seen the same thing before with /dev/rbd mounts when the
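(One way to avoid the shutdown hang with mapped RBD devices is to unmount and unmap them explicitly before rebooting; a sketch assuming /dev/rbd0 mounted at /mnt/rbd:)

    # check which RBD images are currently mapped on this host
    rbd showmapped
    # unmount the filesystem, then unmap the device
    umount /mnt/rbd
    rbd unmap /dev/rbd0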

Re: [ceph-users] rados cppool

2015-05-14 Thread Daniel Schneller
On 2015-05-14 21:04:06, Daniel Schneller said: On 2015-04-23 19:39:33, Sage Weil said: On Thu, 23 Apr 2015, Pavel V. Kaygorodov wrote: Hi! I have copied two of my pools recently, because the old ones had too many PGs. Both of them contain RBD images, with 1GB and ~30GB of data.
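(The copy-and-rename dance discussed in this thread looks roughly like this; pool names and the PG count are placeholders, and clients using the pool should be stopped while the names are swapped:)

    # create the replacement pool with the desired number of PGs
    ceph osd pool create newpool 256
    # copy all objects from the old pool into the new one
    rados cppool oldpool newpool
    # swap the names so clients keep using the original pool name
    ceph osd pool rename oldpool oldpool.bak
    ceph osd pool rename newpool oldpool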

[ceph-users] ceph -w output

2015-05-14 Thread Daniel Schneller
Hi! I am trying to understand the values in ceph -w, especially those regarding throughput(?) at the end: 2015-05-15 00:54:33.333500 mon.0 [INF] pgmap v26048646: 17344 pgs: 17344 active+clean; 6296 GB data, 19597 GB used, 155 TB / 174 TB avail; 6023 kB/s rd, 549 kB/s wr, 7564 op/s
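(The trailing values of that pgmap line are the cluster-wide client I/O rates as reported by the monitors, roughly:)

    6023 kB/s rd   # aggregate client read throughput
    549 kB/s wr    # aggregate client write throughput
    7564 op/s      # aggregate client operations per second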

Re: [ceph-users] rados cppool

2015-05-14 Thread Daniel Schneller
On 2015-04-23 19:39:33, Sage Weil said: On Thu, 23 Apr 2015, Pavel V. Kaygorodov wrote: Hi! I have copied two of my pools recently, because the old ones had too many PGs. Both of them contain RBD images, with 1GB and ~30GB of data. Both pools were copied without errors, RBD images are

Re: [ceph-users] Firefly to Hammer

2015-05-14 Thread Daniel Schneller
You should be able to do just that. We recently upgraded from Firefly to Hammer like that. Follow the order described on the website: monitors, OSDs, MDSs. Note that the Debian packages do not restart running daemons, but they _do_ start up daemons that were not running. So say for some reason before
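(A rough outline of the per-node sequence; the package selection is illustrative and the exact service commands differ between sysvinit and upstart systems:)

    # upgrade the packages first, then restart daemons in order: mons, then OSDs, then MDSs
    apt-get update && apt-get install ceph ceph-common
    service ceph restart mon        # on monitor nodes
    service ceph restart osd        # on OSD nodes, one node at a time
    # verify that all OSDs report the new version afterwards
    ceph tell osd.* version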

[ceph-users] ceph-deploy osd activate ERROR

2015-05-14 Thread 张忠波
Hi, I encountered other problems when I installed Ceph. #1. I ran the command ceph-deploy new ceph-0 and got the ceph.conf file. However, there is no information about osd pool default size or public network. [root@ceph-2 my-cluster]# more ceph.conf [global]
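(ceph-deploy new only writes a minimal ceph.conf; the missing settings can simply be added by hand before deploying, for example:)

    [global]
    fsid = ...                        # as generated by ceph-deploy
    mon_initial_members = ceph-0
    mon_host = ...                    # as generated by ceph-deploy
    # settings ceph-deploy does not add on its own:
    osd pool default size = 2
    public network = 192.168.0.0/24   # example subnet, adjust to your environment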

Re: [ceph-users] How to debug a ceph read performance problem?

2015-05-14 Thread changqian zuo
Hi, 1. The network problem has been partly resolved: we removed the bonding on the Juno node (Ceph client side), and now IO comes back: [root@controller fio-rbd]# rados bench -p test 30 seq sec Cur ops started finished avg MB/s cur MB/s last lat avg lat 0 0 0 0
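(Note that rados bench seq can only read objects left behind by a previous write benchmark; a typical sequence is, for example:)

    # write test data and keep it for the read test
    rados bench -p test 30 write --no-cleanup
    # sequential read benchmark against the objects written above
    rados bench -p test 30 seq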

Re: [ceph-users] Find out the location of OSD Journal

2015-05-14 Thread Josef Johansson
I tend to use something along the lines for osd in $(grep osd /etc/mtab | cut -d ' ' -f 2); do echo $(echo $osd | cut -d '-' -f 2): $(readlink -f $(readlink $osd/journal));done | sort -k 2 Cheers, Josef On 08 May 2015, at 02:47, Robert LeBlanc rob...@leblancnet.us wrote: You may also be
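(The same idea, spread out for readability; it assumes OSD data directories are mounted under paths ending in -<id>, as in a default ceph-disk layout:)

    # for every mounted OSD data directory, print "<osd id>: <resolved journal target>"
    for osd in $(grep osd /etc/mtab | cut -d ' ' -f 2); do
        id=$(echo $osd | cut -d '-' -f 2)
        echo "$id: $(readlink -f $osd/journal)"
    done | sort -k 2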