Re: [ceph-users] Problems during first install

2014-08-06 Thread Tijn Buijs
Hello Pratik, Thanks for this tip. It was the golden one :). I just deleted all my VMs again and started over with (again) CentOS 6.5 and 1 OSD disk per data VM of 20 GB dynamically allocated. And this time everything worked correctly like they mentioned in the documentation :). I went on my

Re: [ceph-users] Problems during first install

2014-08-06 Thread Christian Balzer
On Wed, 06 Aug 2014 09:18:13 +0200 Tijn Buijs wrote: Hello Pratik, Thanks for this tip. It was the golden one :). I just deleted all my VMs again and started over with (again) CentOS 6.5 and 1 OSD disk per data VM of 20 GB dynamically allocated. And this time everything worked correctly

Re: [ceph-users] Openstack Havana root fs resize don't work

2014-08-06 Thread Hauke Bruno Wollentin
Hi, 1) I have flavors like 1 vCPU, 2GB memory, 20GB root disk. No swap + no ephemeral disk. Then I just create an instance via horizon choosing an image + a flavor. 2) OpenStack itselfs runs on Ubuntu 12.04.4 LTS, for the instances I have some Ubuntu 12.04/14.04s, Debians and CentOS'. 3) In

[ceph-users] slow OSD brings down the cluster

2014-08-06 Thread Luis Periquito
Hi, In the last few days I've had some issues with the radosgw in which all requests would just stop being served. After some investigation I would go for a single slow OSD. I just restarted that OSD and everything would just go back to work. Every single time there was a deep scrub running on

[ceph-users] rados bench no clean cleanup

2014-08-06 Thread Kenneth Waegeman
Hi, I did a test with 'rados -p ecdata bench 10 write' on an EC pool with a replicated cache pool over it (ceph 0.83). The benchmark wrote about 12 TB of data. After the run, rados started to delete its benchmark files, but only about 2.5 TB got deleted before rados returned.
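One hedged way to finish the cleanup by hand is to remove the leftover bench objects by their name prefix. The sketch below simulates the listing with a static variable; on a real cluster the list would come from `rados -p ecdata ls`, and with a cache tier in front of the EC pool the deletes may only show up in the base pool after the cache flushes. Object names here are illustrative.

```shell
# Simulated object listing; on a real cluster: rados -p ecdata ls
object_list="benchmark_data_host01_1234_object0
benchmark_data_host01_1234_object1
unrelated_object"

# Bench objects share the benchmark_data prefix; remove only those.
echo "$object_list" | grep '^benchmark_data' | while read -r obj; do
    # Real cluster: rados -p ecdata rm "$obj"
    echo "would remove: $obj"
done
```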

Re: [ceph-users] slow OSD brings down the cluster

2014-08-06 Thread Wido den Hollander
On 08/06/2014 10:43 AM, Luis Periquito wrote: Hi, In the last few days I've had some issues with the radosgw in which all requests would just stop being served. After some investigation I would go for a single slow OSD. I just restarted that OSD and everything would just go back to work. Every

Re: [ceph-users] slow OSD brings down the cluster

2014-08-06 Thread Luis Periquito
Hi Wido, as the backing disk is running a deep scrub it's constantly 100% busy, no errors though... I'm running everything on XFS. I had a similar feeling that was the OSD slowing down those requests. What would be the affected pool? .rgw? thanks, On 6 August 2014 10:08, Wido den Hollander

Re: [ceph-users] Problems during first install

2014-08-06 Thread Dennis Jacobfeuerborn
On 06.08.2014 09:25, Christian Balzer wrote: On Wed, 06 Aug 2014 09:18:13 +0200 Tijn Buijs wrote: Hello Pratik, Thanks for this tip. It was the golden one :). I just deleted all my VMs again and started over with (again) CentOS 6.5 and 1 OSD disk per data VM of 20 GB dynamically

[ceph-users] What is difference in storing data between rbd and rados ?

2014-08-06 Thread debian Only
I am confused about how a file is stored in Ceph. I did two tests. Where is the file, or the object for the file? ① rados put Python.msi Python.msi -p data ② rbd -p testpool create fio_test --size 2048. Does the rados command of ① mean using Ceph as object storage? Does the rbd command of ② mean using Ceph as block
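To make the distinction concrete: `rados put` stores the whole file as one named object in the pool, while an RBD image is itself backed by many RADOS objects that librbd stripes data across. A minimal sketch of that mapping, assuming the v2 image-format naming scheme (`rbd_data.<id>.<hex index>`) and the default 4 MiB object size; the image id `abc123` is a made-up placeholder:

```python
OBJECT_SIZE = 4 * 1024 * 1024  # default RBD object size (4 MiB)

def rbd_object_for_offset(image_id: str, offset: int) -> str:
    """Name of the RADOS object backing a given byte offset of an image."""
    index = offset // OBJECT_SIZE
    return "rbd_data.%s.%016x" % (image_id, index)

# The 2048 MiB image created by the rbd command above spans 512 such objects.
print(rbd_object_for_offset("abc123", 0))                  # first object
print(rbd_object_for_offset("abc123", 10 * 1024 * 1024))   # object index 2
```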

Re: [ceph-users] Is possible to use Ramdisk for Ceph journal ?

2014-08-06 Thread debian Only
Thanks for your reply. I have found and tested a way myself, and now share it with others. Begin On Debian root@ceph01-vm:~# modprobe brd rd_nr=1 rd_size=4194304 max_part=0 root@ceph01-vm:~# mkdir /mnt/ramdisk root@ceph01-vm:~# mkfs.btrfs /dev/ram0 WARNING! - Btrfs Btrfs v0.19 IS EXPERIMENTAL
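For reference, brd's `rd_size` module parameter is specified in KiB, so the value passed above allocates a 4 GiB ramdisk; a quick sanity check:

```python
rd_size_kib = 4194304            # value passed to `modprobe brd` above
ramdisk_bytes = rd_size_kib * 1024
print(ramdisk_bytes // 1024**3, "GiB")
```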

Re: [ceph-users] Install Ceph nodes without network proxy access

2014-08-06 Thread Alfredo Deza
On Tue, Aug 5, 2014 at 10:47 PM, O'Reilly, Dan daniel.orei...@dish.com wrote: Final update: after a good deal of messing about, I did finally get this to work. Many thanks for the help Would you mind sharing what changed so this would end up working? Just want to make sure that it is not

Re: [ceph-users] Ceph writes stall for long perioids with no disk/network activity

2014-08-06 Thread Chris Kitzmiller
On Aug 5, 2014, at 12:43 PM, Mark Nelson wrote: On 08/05/2014 08:42 AM, Mariusz Gronczewski wrote: On Mon, 04 Aug 2014 15:32:50 -0500, Mark Nelson mark.nel...@inktank.com wrote: On 08/04/2014 03:28 PM, Chris Kitzmiller wrote: On Aug 1, 2014, at 1:31 PM, Mariusz Gronczewski wrote: I got

Re: [ceph-users] Is possible to use Ramdisk for Ceph journal ?

2014-08-06 Thread Daniel Swarbrick
On 06/08/14 13:07, debian Only wrote: Thanks for your reply. I have found and test a way myself.. and now share to others Begin On Debian root@ceph01-vm:~# modprobe brd rd_nr=1 rd_size=4194304 max_part=0 root@ceph01-vm:~# mkdir /mnt/ramdisk root@ceph01-vm:~# mkfs.btrfs /dev/ram0 You

[ceph-users] ceph --status Missing keyring

2014-08-06 Thread O'Reilly, Dan
Any idea what may be the issue here? [ceph@tm1cldcphal01 ~]$ ceph --status 2014-08-06 07:53:21.767255 7fe31fd1e700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication 2014-08-06 07:53:21.767263 7fe31fd1e700 0 librados: client.admin initialization error (2) No
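That error usually means the admin keyring is not at `/etc/ceph/ceph.client.admin.keyring`, or is not readable by the user running `ceph`. A sketch of the expected layout, simulated in a temp directory; the key below is a placeholder, and on a real node the keyring would come from `ceph-deploy admin <host>` or be copied from a monitor:

```shell
confdir="$(mktemp -d)"               # stands in for /etc/ceph
cat > "$confdir/ceph.client.admin.keyring" <<'EOF'
[client.admin]
    key = AQxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==
EOF
chmod 644 "$confdir/ceph.client.admin.keyring"   # readable by the ceph user
ls -l "$confdir"
```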

Re: [ceph-users] [Ceph-community] Remote replication

2014-08-06 Thread Sage Weil
On Tue, 5 Aug 2014, Craig Lewis wrote: There currently isn't a backup tool for CephFS.  CephFS is a POSIX filesystem, so your normal tools should work.  It's a really large POSIX filesystem though, so normal tools may not scale well. Note that CephFS does have one feature that should make

Re: [ceph-users] slow OSD brings down the cluster

2014-08-06 Thread Sage Weil
You can use the ceph osd perf command to get recent queue latency stats for all OSDs. With a bit of sorting this should quickly tell you if any OSDs are going significantly slower than the others. We'd like to automate this in calamari or perhaps even in the monitor, but it is not
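The sorting Sage mentions can be sketched as below, using made-up `ceph osd perf` output; on a real cluster the heredoc variable would be replaced by the command itself, and the latency numbers here are purely illustrative:

```shell
# Fabricated sample of `ceph osd perf` output
# (columns: osd, fs_commit_latency(ms), fs_apply_latency(ms)).
perf_output="osd fs_commit_latency(ms) fs_apply_latency(ms)
0 2 3
1 1 2
2 845 912
3 4 5"

# Drop the header, sort descending on commit latency, show the worst offenders:
echo "$perf_output" | tail -n +2 | sort -k2 -nr | head -3
```

Here osd.2 stands out immediately as the slow one.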

Re: [ceph-users] Ceph writes stall for long perioids with no disk/network activity

2014-08-06 Thread Christian Balzer
On Wed, 6 Aug 2014 09:19:57 -0400 Chris Kitzmiller wrote: On Aug 5, 2014, at 12:43 PM, Mark Nelson wrote: On 08/05/2014 08:42 AM, Mariusz Gronczewski wrote: On Mon, 04 Aug 2014 15:32:50 -0500, Mark Nelson mark.nel...@inktank.com wrote: On 08/04/2014 03:28 PM, Chris Kitzmiller wrote: On

[ceph-users] librados: client.admin authentication error

2014-08-06 Thread O'Reilly, Dan
Anybody know why this error occurs, and a solution? [ceph@tm1cldcphal01 ~]$ ceph --version ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74) [ceph@tm1cldcphal01 ~]$ ceph --status 2014-08-06 08:55:13.168770 7f5527929700 0 librados: client.admin authentication error (95) Operation

Re: [ceph-users] librbd tuning?

2014-08-06 Thread Mark Nelson
On 08/05/2014 06:19 PM, Mark Kirkwood wrote: On 05/08/14 23:44, Mark Nelson wrote: On 08/05/2014 02:48 AM, Mark Kirkwood wrote: On 05/08/14 03:52, Tregaron Bayly wrote: Does anyone have any insight on how we can tune librbd to perform closer to the level of the rbd kernel module? In our lab

Re: [ceph-users] librbd tuning?

2014-08-06 Thread Sage Weil
On Wed, 6 Aug 2014, Mark Nelson wrote: On 08/05/2014 06:19 PM, Mark Kirkwood wrote: On 05/08/14 23:44, Mark Nelson wrote: On 08/05/2014 02:48 AM, Mark Kirkwood wrote: On 05/08/14 03:52, Tregaron Bayly wrote: Does anyone have any insight on how we can tune librbd to perform

Re: [ceph-users] librbd tuning?

2014-08-06 Thread Christian Balzer
On Wed, 6 Aug 2014 08:05:33 -0700 (PDT) Sage Weil wrote: On Wed, 6 Aug 2014, Mark Nelson wrote: On 08/05/2014 06:19 PM, Mark Kirkwood wrote: On 05/08/14 23:44, Mark Nelson wrote: On 08/05/2014 02:48 AM, Mark Kirkwood wrote: On 05/08/14 03:52, Tregaron Bayly wrote: Does anyone

Re: [ceph-users] librbd tuning?

2014-08-06 Thread Tregaron Bayly
On Wed, 2014-08-06 at 08:05 -0700, Sage Weil wrote: BTW, do we still need to use something != virtio in order for trim/discard? This was also my first concern when virtio was suggested. We were using ide primarily so we could take advantage of discard. The vms we will be supporting are more
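For context: at the time, virtio-blk did not pass discard/TRIM through to the guest, which is why IDE (or virtio-scsi) was chosen when discard mattered. A hedged libvirt sketch of an RBD disk attached via virtio-scsi with discard enabled; the pool/image name is a placeholder:

```xml
<controller type='scsi' model='virtio-scsi'/>
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' discard='unmap'/>
  <source protocol='rbd' name='rbd/myimage'/>
  <target dev='sda' bus='scsi'/>
</disk>
```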

[ceph-users] ceph-deploy disk activate error msg

2014-08-06 Thread German Anders
Hi to all, I'm having some issues while trying to deploy an OSD with btrfs: ceph@cephdeploy01:~/ceph-deploy$ ceph-deploy disk activate --fs-type btrfs cephosd02:sdd1:/dev/sde1 [ceph_deploy.cli][INFO ] Invoked (1.4.0): /usr/bin/ceph-deploy disk activate --fs-type btrfs

Re: [ceph-users] ceph-deploy disk activate error msg

2014-08-06 Thread Alfredo Deza
On Wed, Aug 6, 2014 at 11:23 AM, German Anders gand...@despegar.com wrote: Hi to all, I'm having some issues while trying to deploy a osd with btrfs: ceph@cephdeploy01:~/ceph-deploy$ ceph-deploy disk activate --fs-type btrfs cephosd02:sdd1:/dev/sde1 [ceph_deploy.cli][INFO ] Invoked

Re: [ceph-users] slow OSD brings down the cluster

2014-08-06 Thread Mark Nelson
On 08/06/2014 03:43 AM, Luis Periquito wrote: Hi, In the last few days I've had some issues with the radosgw in which all requests would just stop being served. After some investigation I would go for a single slow OSD. I just restarted that OSD and everything would just go back to work. Every

Re: [ceph-users] ceph-deploy disk activate error msg

2014-08-06 Thread Alfredo Deza
Adding ceph-users, back to the discussion. Can you tell me if `ceph-deploy admin cephosd02` was what worked or if it was the scp'ing of keys? On Wed, Aug 6, 2014 at 12:36 PM, German Anders gand...@despegar.com wrote: It work!!! :) thanks a lot Alfredo. I want to ask also if you know how can I

Re: [ceph-users] ceph-deploy disk activate error msg

2014-08-06 Thread German Anders

[ceph-users] 10th Anniversary T-Shirts for Contributors

2014-08-06 Thread Patrick McGarry
Hey cephers, Just wanted to let folks know that as a way of saying thank you for 10 years of contributions and growth on the Ceph project we'll be shipping a free limited edition 10th anniversary t-shirt to anyone who has contributed to the project (and wants one). All you have to do to get your

Re: [ceph-users] Openstack Havana root fs resize don't work

2014-08-06 Thread Jeremy Hanmer
And you're using cloud-init in these cases, or are you executing growrootfs via some other means? If you're using cloud-init, you should see some useful messages in /var/log/cloud-init.log (particularly on debian/ubuntu; I've found centos' logs to not be as helpful). Also, if you're using

[ceph-users] Ceph can't seem to forget.

2014-08-06 Thread Sean Sullivan
I forgot to register before posting so reposting. I think I have a split issue or I can't seem to get rid of these objects. How can I tell ceph to forget the objects and revert? How this happened is that due to the python 2.7.8/ceph bug ( a whole rack of ceph went down (it had ubuntu 14.10 and

[ceph-users] Fresh deploy of ceph 0.83 has OSD down

2014-08-06 Thread Mark Kirkwood
Hi, I'm doing a fresh install of ceph 0.83 (src build) to an Ubuntu 14.04 VM using ceph-deploy 1.59. Everything goes well until the osd creation, which fails to start with a journal open error. The steps are shown below (ceph is the deploy target host): (ceph1) $ uname -a Linux ceph1

Re: [ceph-users] Dependency issues in fresh ceph/CentOS 7 install

2014-08-06 Thread Kyle Bader
Can you paste me the whole output of the install? I am curious why/how you are getting el7 and el6 packages. priority=1 is required in the /etc/yum.repos.d/ceph.repo entries -- Kyle
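The fix Kyle describes is adding `priority=1` to each Ceph repo entry so the Ceph repository wins over base/EPEL packages; this requires the yum priorities plugin (`yum install yum-plugin-priorities`). A sketch of one such entry, with the Firefly el7 baseurl shown as an illustrative example:

```ini
[ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-firefly/el7/$basearch/
enabled=1
gpgcheck=1
priority=1
```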

[ceph-users] ceph rbd volume can't remove because image still has watchers

2014-08-06 Thread 杨万元
Hi all: we use Ceph RBD with OpenStack. Recently there has been some dirty data in my cinder-volume database, such as volumes with a status of error-deleting, so we need to manually delete these volumes. But when I delete the volume on the Ceph node, Ceph gives me this error: [root@ceph-node3 ~]# rbd
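The usual way out is to find the lingering watcher (often a qemu/librbd client that still has the volume attached) and get rid of it before retrying `rbd rm`. A sketch with placeholder pool and image names; for format-1 images, the header object that holds the watch is named `<image>.rbd`:

```shell
pool="volumes"
image="volume-1234"          # hypothetical cinder volume name
header="${image}.rbd"        # format-1 header object holding the watch

# On a real cluster, this lists the client still watching the image:
echo "rados -p $pool listwatchers $header"
# Once the watcher (e.g. a stale qemu process) is gone, the delete succeeds:
echo "rbd -p $pool rm $image"
```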