Re: [ceph-users] Ceph Not getting into a clean state

2014-05-08 Thread Georg Höllrigl
Hello, I've already thought about that - but even after changing the replication level (size) I'm not getting a clean cluster (there are only the default pools ATM): root@ceph-m-02:~#ceph -s cluster b04fc583-9e71-48b7-a741-92f4dff4cfef health HEALTH_WARN 232 pgs stuck unclean; recove

Re: [ceph-users] Replace journals disk

2014-05-08 Thread Indra Pramana
Hi Gandalf and Sage, Just would like to confirm if my steps below to replace a journal disk are correct? Presuming the journal disk to be replaced is /dev/sdg and the two affected OSDs using the disk as journals are osd.30 and osd.31: - ceph osd set noout - stop affected OSDs sudo stop ceph-osd
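A minimal sketch of that sequence, assuming upstart-managed OSDs and that the rebuilt journal partitions come up as /dev/sdg1 and /dev/sdg2 (device names are placeholders):

    ceph osd set noout
    sudo stop ceph-osd id=30 && sudo stop ceph-osd id=31
    ceph-osd -i 30 --flush-journal
    ceph-osd -i 31 --flush-journal
    # replace /dev/sdg, recreate the two journal partitions, then:
    ceph-osd -i 30 --mkjournal
    ceph-osd -i 31 --mkjournal
    sudo start ceph-osd id=30 && sudo start ceph-osd id=31
    ceph osd unset noout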

Re: [ceph-users] List users not listing users

2014-05-08 Thread Shanil S
Hi Yehuda, Also, if we use /admin/metadata/user then it will list only the usernames and we won't get details like user id, username, number of buckets, subusers etc. Is it possible to list out these details too? Still I am not sure why the '/admin/user' is not working. What do you think about this issu

Re: [ceph-users] Ceph Not getting into a clean state

2014-05-08 Thread Mark Kirkwood
So that's two hosts - if this is a new cluster chances are the pools have replication size=3, and won't place replica pgs on the same host... 'ceph osd dump' will let you know if this is the case. If it is, either reduce size to 2, add another host, or edit your crush rules to allow replica pgs on
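A hedged sketch of both options, using the default pool names (on firefly `ceph osd dump` prints "replicated size" per pool):

    ceph osd dump | grep 'replicated size'     # confirm size=3
    ceph osd pool set rbd size 2               # repeat for any other pools
    # or, in the decompiled CRUSH map, let replicas share a host:
    #   step chooseleaf firstn 0 type osd      # instead of 'type host'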

Re: [ceph-users] Ceph Not getting into a clean state

2014-05-08 Thread Georg Höllrigl
#ceph osd tree # id weight type name up/down reweight -1 76.47 root default -2 32.72 host ceph-s-01 0 7.27 osd.0 up 1 1 7.27 osd.1 up 1 2 9.09 osd.2 up 1 3 9.09

[ceph-users] Fwd: Bad performance of CephFS (first use)

2014-05-08 Thread Michal Pazdera
Hi everyone, I am new to Ceph. I have a 5-PC test cluster on which I'd like to test CephFS behavior and performance. I have used ceph-deploy on node pc1 and installed the ceph software (emperor 0.72.2-0.el6) on all 5 machines. Then set pc1 as mon and mds, PC2, PC3 as OSD and PC4, PC5 as ceph clien

Re: [ceph-users] Suggestions on new cluster

2014-05-08 Thread Christian Balzer
Hello, On Thu, 8 May 2014 15:14:59 + Carlos M. Perez wrote: > Hi, > > We're just getting started with ceph, and were going to pull the order > to get the needed hardware ordered. Wondering if anyone would offer any > insight/suggestions on the following setup of 3 nodes: > > Dual 6-core X

Re: [ceph-users] List users not listing users

2014-05-08 Thread Yehuda Sadeh
You have a typo there. Should be metadata, not metadeta. Yehuda On May 8, 2014 9:46 PM, "Shanil S" wrote: > Hi Yehuda, > > The admin user already has the meta permission; these are the admin > user details > > "swift_keys": [], > "caps": [ > { "type": "admin", > "perm": "

Re: [ceph-users] List users not listing users

2014-05-08 Thread Shanil S
Hi Yehuda, The admin user already has the meta permission; these are the admin user details "swift_keys": [], "caps": [ { "type": "admin", "perm": "*"}, { "type": "caps", "perm": "*"}, { "type": "metadeta", "perm": "*"}, { "type"

Re: [ceph-users] List users not listing users

2014-05-08 Thread Shanil S
Hi Yehuda, Okay.. Thanks.. I will add and check it. I will let you know the results On Fri, May 9, 2014 at 10:08 AM, Yehuda Sadeh wrote: > You're missing the correct caps for the user, iirc you need to add the > 'metadata' read cap to the user. > > Yehuda > > On Thu, May 8, 2014 at 9:37 PM, S

Re: [ceph-users] List users not listing users

2014-05-08 Thread Yehuda Sadeh
You're missing the correct caps for the user, iirc you need to add the 'metadata' read cap to the user. Yehuda On Thu, May 8, 2014 at 9:37 PM, Shanil S wrote: > Hi Yehuda, > > Thanks for your quick reply.. > > I tried with the above but getting the same error as like above > > This is the log fi
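A sketch of adding that cap, assuming the admin user's uid is 'admin':

    radosgw-admin caps add --uid=admin --caps="metadata=read"
    radosgw-admin user info --uid=admin        # the cap should now appear under "caps"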

Re: [ceph-users] List users not listing users

2014-05-08 Thread Yehuda Sadeh
You can use the metadata api to list the users, something along the lines of: GET /admin/metadata/user?format=json Yehuda On Thu, May 8, 2014 at 9:19 PM, Shanil S wrote: > Hi Punit, > > I hope someone in the community forum can solve this issue. I am waiting for > their responses.. > > > On T
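The request shape, for reference; the admin API expects an S3-style signed request, so the host and Authorization header below are placeholders that a client library or signing helper has to fill in:

    GET /admin/metadata/user?format=json HTTP/1.1
    Host: radosgw.example.com
    Authorization: AWS {access_key}:{signature}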

Re: [ceph-users] List users not listing users

2014-05-08 Thread Shanil S
Hi Punit, I hope someone in the community forum can solve this issue. I am waiting for their responses.. On Thu, May 8, 2014 at 7:38 PM, Punit Dambiwal wrote: > Hi Shanil, > > I am also facing the same issue...can anyone from the community will > answer this ?? > > > > > On Mon, May 5, 2014 at

Re: [ceph-users] Delete pool .rgw.bucket and objects within it

2014-05-08 Thread Thanh Tran
Hi Irek, I stopped radosgw, then I deleted all the pools that are default for radosgw as below, waited for ceph to delete the objects, and re-created the pools. I stopped and started the whole cluster, including starting radosgw. Now it is very unstable. OSDs are frequently marked down or crash. Please see a part of
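For anyone retracing this, a hedged example of dropping and recreating one of the default rgw pools (the PG count of 128 is only an example; pick one appropriate for the cluster):

    ceph osd pool delete .rgw.buckets .rgw.buckets --yes-i-really-really-mean-it
    ceph osd pool create .rgw.buckets 128 128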

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-08 Thread Christian Balzer
On Wed, 7 May 2014 22:13:53 -0700 Gregory Farnum wrote: > Oh, I didn't notice that. I bet you aren't getting the expected > throughput on the RAID array with OSD access patterns, and that's > applying back pressure on the journal. > In the "a picture being worth a thousand words" tradition, I gi

[ceph-users] too slowly upload on ceph object storage

2014-05-08 Thread wsnote
Hi, everyone! I am testing ceph rgw and found that it uploads slowly. I don't know where the bottleneck may be. OS: CentOS 6.5 Version: Ceph 0.79 Hardware: CPU: 2 * quad-core Mem: 32GB Disk: 2TB*1+3TB*11 Network: 1*1GB Ethernet NIC Ceph Cluster: My cluster was composed of 4 servers (called ceph1-4)

[ceph-users] subscribe ceph mail list

2014-05-08 Thread Sean Cao
Best Regards 曹世银 Sean Cao ZeusCloud Storage Engineer Mobile: +86-13162662069 Email: sean_...@zeuscloud.cn Phone: +86-21-5169 5876 Ext: 1043 Fax: +86-21-5169 5876 Address: 15F, Qidi Building, No. 55, Lane 777, Guangzhong West Road, Zhabei District, Shanghai 200072 Privileged/Confidential information may be cont

Re: [ceph-users] NFS over CEPH - best practice

2014-05-08 Thread Stuart Longland
On 07/05/14 19:46, Andrei Mikhailovsky wrote: > Hello guys, > > I would like to offer NFS service to the XenServer and VMWare > hypervisors for storing vm images. I am currently running ceph rbd with > kvm, which is working reasonably well. > > What would be the best way of running NFS services o
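One common pattern, sketched with assumed pool/image names and export path: map an RBD image on a gateway host, put a filesystem on it, and export that over NFS:

    rbd create nfs/share0 --size 102400
    rbd map nfs/share0                          # shows up as /dev/rbd0 (or /dev/rbd/nfs/share0)
    mkfs.xfs /dev/rbd0 && mount /dev/rbd0 /export/share0
    echo '/export/share0 10.0.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
    exportfs -ra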

Re: [ceph-users] 0.80 Firefly Debian/Ubuntu Trusty Packages

2014-05-08 Thread Michael
Ah, thanks for the info. Will keep an eye on it there instead and remove the ceph.com entry from the sources list. -Michael On 08/05/2014 21:48, Henrik Korkuc wrote: hi, trusty will include ceph in usual repos. I am tracking http://packages.ubuntu.com/trusty/ceph and https://bugs.launchpad.net/ubuntu

Re: [ceph-users] 0.80 Firefly Debian/Ubuntu Trusty Packages

2014-05-08 Thread Henrik Korkuc
hi, trusty will include ceph in usual repos. I am tracking http://packages.ubuntu.com/trusty/ceph and https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1278466 for release On 2014.05.08 23:45, Michael wrote: > Hi, > > Have these been missed or have they been held back for a specific reason? > ht

[ceph-users] 0.80 Firefly Debian/Ubuntu Trusty Packages

2014-05-08 Thread Michael
Hi, Have these been missed or have they been held back for a specific reason? http://ceph.com/debian-firefly/dists/ looks like Trusty is the only one that hasn't been updated. -Michael ___ ceph-users mailing list ceph-users@lists.ceph.com http://list

Re: [ceph-users] NFS over CEPH - best practice

2014-05-08 Thread Leen Besselink
On Thu, May 08, 2014 at 01:24:17AM +0200, Gilles Mocellin wrote: > On 07/05/2014 15:23, Vlad Gorbunov wrote: > >It's easy to install tgtd with ceph support. ubuntu 12.04 for example: > > > >Connect the ceph-extras repo: > >echo deb http://ceph.com/packages/ceph-extras/debian $(lsb_release -sc) ma
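For the tgt route, a hedged targets.conf stanza (IQN and pool/image are placeholders; needs a tgt build with the rbd backing store, e.g. from ceph-extras):

    <target iqn.2014-05.com.example:rbd-lun0>
        driver iscsi
        bs-type rbd
        backing-store rbd/vmstore01
    </target>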

Re: [ceph-users] 0.80 binaries?

2014-05-08 Thread Henrik Korkuc
hi, I am not sure about your link, but I use: http://ceph.com/rpm-firefly/ reference: http://ceph.com/docs/master/install/get-packages/ On 2014.05.08 19:32, Shawn Edwards wrote: > The links on the download page for 0.80 still shows 0.72 bins. Did > the 0.80 binaries get deployed yet? > > I'm lo
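A minimal yum repo stanza for that URL (the gpgkey location is taken from the get-packages doc of the time; verify it before use):

    [ceph]
    name=Ceph firefly
    baseurl=http://ceph.com/rpm-firefly/el6/x86_64/
    enabled=1
    gpgcheck=1
    gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc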

Re: [ceph-users] Info firefly qemu rbd

2014-05-08 Thread Josh Durgin
On 05/08/2014 09:42 AM, Federico Iezzi wrote: Hi guys, First of all congratulations on the Firefly release! IMHO I think that this release is a huge step for the Ceph project! Just for fun, this morning I upgraded one of my staging Ceph clusters used by an OpenStack Havana installation (Canonical cloud arch

Re: [ceph-users] Ceph Not getting into a clean state

2014-05-08 Thread Craig Lewis
What does `ceph osd tree` output? On 5/8/14 07:30 , Georg Höllrigl wrote: Hello, We've a fresh cluster setup - with Ubuntu 14.04 and ceph firefly. By now I've tried this multiple times - but the result keeps the same and shows me lots of troubles (the cluster is empty, no client has accessed

Re: [ceph-users] Replace journals disk

2014-05-08 Thread Indra Pramana
Hi Gandalf and Sage, Many thanks! Will try this and share the outcome. Cheers. On Fri, May 9, 2014 at 12:55 AM, Gandalf Corvotempesta < gandalf.corvotempe...@gmail.com> wrote: > 2014-05-08 18:43 GMT+02:00 Indra Pramana : > > Since we don't use ceph.conf to indicate the data and journal paths,

Re: [ceph-users] Replace journals disk

2014-05-08 Thread Gandalf Corvotempesta
2014-05-08 18:43 GMT+02:00 Indra Pramana : > Since we don't use ceph.conf to indicate the data and journal paths, how can > I recreate the journal partitions? 1. Dump the partition scheme: sgdisk --backup=/tmp/journal_table /dev/sdd 2. Replace the journal disk device 3. Restore the old partition
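Spelled out, assuming the replacement disk enumerates under the same device name:

    sgdisk --backup=/tmp/journal_table /dev/sdd
    # swap the physical disk, then
    sgdisk --load-backup=/tmp/journal_table /dev/sdd
    partprobe /dev/sdd                          # make the kernel re-read the table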

Re: [ceph-users] Replace journals disk

2014-05-08 Thread Indra Pramana
Hi Sage, On Fri, May 9, 2014 at 12:32 AM, Sage Weil wrote: > > > On Fri, 9 May 2014, Indra Pramana wrote: > > > Hi Sage, > > Thanks for your reply! > > > > Actually what I want is to replace the journal disk only, while I want > to keep the OSD FS > > intact. > > > > I have 4 OSDs on a node of 4

[ceph-users] Info firefly qemu rbd

2014-05-08 Thread Federico Iezzi
Hi guys, First of all congratulations on the Firefly release! IMHO I think that this release is a huge step for the Ceph project! Just for fun, this morning I upgraded one of my staging Ceph clusters used by an OpenStack Havana installation (Canonical cloud archive, Kernel 3.11, Ubuntu 12.04) I had one issu

[ceph-users] 0.80 binaries?

2014-05-08 Thread Shawn Edwards
The links on the download page for 0.80 still show 0.72 bins. Did the 0.80 binaries get deployed yet? I'm looking here: http://ceph.com/rpm/el6/x86_64/ Should I be looking elsewhere? -- Shawn Edwards Beware programmers with screwdrivers. They tend to spill them on their keyboards.

Re: [ceph-users] Replace journals disk

2014-05-08 Thread Sage Weil
On Fri, 9 May 2014, Indra Pramana wrote: > Hi Sage, > Thanks for your reply! > > Actually what I want is to replace the journal disk only, while I want to > keep the OSD FS > intact. > > I have 4 OSDs on a node of 4 spinning disks (sdb, sdc, sdd, sde) and 2 SSDs > (sdf and sdg) > > osd.28 o

Re: [ceph-users] Replace journals disk

2014-05-08 Thread Indra Pramana
Hi Sage, Thanks for your reply! Actually what I want is to replace the journal disk only, while I want to keep the OSD FS intact. I have 4 OSDs on a node of 4 spinning disks (sdb, sdc, sdd, sde) and 2 SSDs (sdf and sdg) osd.28 on /dev/sdb, journal on /dev/sdf1 osd.29 on /dev/sdc, journal on /de

Re: [ceph-users] 0.67.7 rpms changed today??

2014-05-08 Thread Gregory Farnum
On Thu, May 8, 2014 at 9:09 AM, Dan van der Ster wrote: > Dear ceph repo admins, > > Today our repo synchronization detected that the 0.67.7 rpms from > http://ceph.com/rpm-dumpling/el6/x86_64/ have changed, namely: > > Repo: ceph-dumpling-el6 > + ceph-0.67.7-0.el6.x86_64.

Re: [ceph-users] Replace journals disk

2014-05-08 Thread Sage Weil
Hi Indra, The simplest way to do the fs and journal creation is to use the ceph-disk tool: ceph-disk prepare FSDDISK JOURNALDISK For example, ceph-disk prepare /dev/sdb # put fs and journal on same disk, or ceph-disk prepare /dev/sdb /dev/sdc # fs on sdb, journal on (a new part o
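The matching activate step, for completeness (udev usually triggers this automatically on GPT-labelled disks):

    ceph-disk prepare /dev/sdb /dev/sdf         # data on sdb, journal carved out of sdf
    ceph-disk activate /dev/sdb1                # mounts the fs and starts the OSD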

[ceph-users] 0.67.7 rpms changed today??

2014-05-08 Thread Dan van der Ster
Dear ceph repo admins, Today our repo synchronization detected that the 0.67.7 rpms from http://ceph.com/rpm-dumpling/el6/x86_64/ have changed, namely: Repo: ceph-dumpling-el6 + ceph-0.67.7-0.el6.x86_64.rpm (22683 kiB) + ceph-devel-0.67.7-0.el6.x86_64.rpm

Re: [ceph-users] Replace journals disk

2014-05-08 Thread Indra Pramana
Hi Sage, Sorry to chip you in, do you have any comments on this? Since I noted you advised Tim Snider on similar situation before. :) http://www.spinics.net/lists/ceph-users/msg05142.html Looking forward to your reply, thank you. Cheers. On Wed, May 7, 2014 at 11:31 AM, Indra Pramana wrote:

Re: [ceph-users] Unable to remove RBD volume

2014-05-08 Thread Jonathan Gowar
ceph@ceph-admin:~$ rbd info cloudstack/20fcd781-2423-436e-afc6-21e75d85111d | grep prefix block_name_prefix: rbd_data.50f613006c83e ceph@ceph-admin:~$ rados -p cloudstack listwatchers rbd_header.50f613006c83e watcher=10.x.x.23:0/10014542 client.728679 cookie=1 watcher=10.x.x.23:0/11014542 c
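With the watcher address known, one hedged way to force the removal through is to blacklist the stale client first (address copied from the masked output above):

    ceph osd blacklist add 10.x.x.23:0/10014542
    rbd rm cloudstack/20fcd781-2423-436e-afc6-21e75d85111d
    ceph osd blacklist rm 10.x.x.23:0/10014542  # optional cleanup afterwards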

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-08 Thread Christian Balzer
Hello, On Thu, 08 May 2014 17:20:59 +0200 Udo Lembke wrote: > Hi, > I don't think that's related, but how full is your ceph-cluster? Perhaps > it has something to do with the fragmentation on the xfs-filesystem > (xfs_db -c frag -r device)? As I wrote, this cluster will go into production next

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-08 Thread Udo Lembke
Hi again, sorry, too fast - but this can't be a problem due to your 4GB cache... Udo On 08.05.2014 17:20, Udo Lembke wrote: > Hi, > I don't think that's related, but how full is your ceph-cluster? Perhaps > it has something to do with the fragmentation on the xfs-filesystem > (xfs_db -c frag -r

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-08 Thread Udo Lembke
Hi, I don't think that's related, but how full is your ceph-cluster? Perhaps it has something to do with the fragmentation on the xfs-filesystem (xfs_db -c frag -r device)? Udo On 08.05.2014 02:57, Christian Balzer wrote: > > Hello, > > ceph 0.72 on Debian Jessie, 2 storage nodes with 2 OSDs

[ceph-users] Suggestions on new cluster

2014-05-08 Thread Carlos M. Perez
Hi, We're just getting started with ceph, and were going to pull the order to get the needed hardware ordered. Wondering if anyone would offer any insight/suggestions on the following setup of 3 nodes: Dual 6-core Xeon (L5639/L5640/X5650 depending on what we find) 96GB RAM LSI 2008 based contr

[ceph-users] Unable to remove RBD volume

2014-05-08 Thread Jonathan Gowar
ceph@ceph-admin:~$ rbd snap purge cloudstack/20fcd781-2423-436e-afc6-21e75d85111d Removing all snapshots: 100% complete...done. ceph@ceph-admin:~$ rbd rm cloudstack/20fcd781-2423-436e-afc6-21e75d85111d Removing image: 99% complete...failed. rbd: error: image still has watchers This means the image

Re: [ceph-users] 16 osds: 11 up, 16 in

2014-05-08 Thread Dimitri Maziuk
On 5/7/2014 7:35 PM, Craig Lewis wrote: Because of the very low recovery parameters, there's only a single backfill running. `iostat -dmx 5 5` did report 100% util on the osd that is backfilling, but I expected that. Once backfilling moves on to a new osd, the 100% util follows the backfill oper
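For reference, the recovery throttles usually involved, applied at runtime (the values shown are just the conservative end of the range, not a recommendation):

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'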

[ceph-users] Ceph Not getting into a clean state

2014-05-08 Thread Georg Höllrigl
Hello, We've a fresh cluster setup - with Ubuntu 14.04 and ceph firefly. By now I've tried this multiple times - but the result keeps the same and shows me lots of troubles (the cluster is empty, no client has accessed it) #ceph -s cluster b04fc583-9e71-48b7-a741-92f4dff4cfef health

Re: [ceph-users] List users not listing users

2014-05-08 Thread Punit Dambiwal
Hi Shanil, I am also facing the same issue...can anyone from the community answer this ?? On Mon, May 5, 2014 at 5:42 PM, Shanil S wrote: > Hi, > > I am creating the code to list out all users, but am unable to list out > the users even though the authentication is correct. These are what I saw in t

Re: [ceph-users] v0.80 Firefly released

2014-05-08 Thread Andrey Korolyov
Mike, would you mind writing up your experience if you'll manage to get this flow through first? I hope I'll be able to conduct some tests related to 0.80 only next week, including maintenance combined with primary pointer relocation - one of the most crucial things remaining in Ceph for the production p

[ceph-users] Error while running rados gateway

2014-05-08 Thread Srinivasa Rao Ragolu
Hi, I have configured ceph.conf like below for integration of radosgw with keystone. [client.radosgw.gateway] host = mon keyring = /etc/ceph/ceph.client.radosgw.keyring rgw socket path = /tmp/radosgw.sock log file = /var/log/ceph/radosgw.log rgw keystone url =

Re: [ceph-users] Hey, about radosgw , always encounter internal server error .

2014-05-08 Thread Peter
I had similar issues. Do you get any message when running /etc/init.d/radosgw start? You should be getting: starting radosgw.gateway or similar. If not, the radosgw process is not starting up. It doesn't give much output to say what is wrong. I found it was due to the actual hostname o

[ceph-users] Does ceph has impact on imp IO performance

2014-05-08 Thread duan . xufeng
Hi, All. While I use ceph as the virtual machine backend and execute an imp operation, IO performance is about 1/10 of that on a physical machine, about 600 KB/s. But when I execute dd for an IO performance test, such as dd if=/dev/zero bs=64k count=100 of=/1.file, the average IO speed is about 50 MB/s. Here is the physical machine res
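As a side note, the dd above measures buffered writes; a direct-I/O variant (assuming the guest's dd supports oflag=direct) is usually closer to what a sync-heavy import workload sees:

    dd if=/dev/zero of=/1.file bs=64k count=10000 oflag=direct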

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-08 Thread Christian Balzer
Hello, On Thu, 08 May 2014 11:31:54 +0200 (CEST) Alexandre DERUMIER wrote: > > The OSD processes are quite busy, reading well over 200% on atop, but > > the system is not CPU or otherwise resource starved at that moment. > > osd use 2 threads by default (could explain the 200%) > > maybe can y

[ceph-users] Errors while integrating Rados Gateway with Keystone

2014-05-08 Thread Srinivasa Rao Ragolu
Hi Yehuda/All, I have configured Rados Gateway as suggested in ceph tutorial to integrate with Keystone and my ceph.conf looks like below. [client.radosgw.gateway] host = mon keyring = /etc/ceph/ceph.client.radosgw.keyring rgw socket path = /tmp/radosgw.sock
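The preview cuts off before the keystone lines; a hedged, readable version of the stanza with the options typically involved (URL and token are placeholders):

    [client.radosgw.gateway]
    host = mon
    keyring = /etc/ceph/ceph.client.radosgw.keyring
    rgw socket path = /tmp/radosgw.sock
    rgw keystone url = http://keystone.example.com:35357
    rgw keystone admin token = {admin-token}
    rgw keystone accepted roles = Member, admin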

[ceph-users] Hey, about radosgw , always encounter internal server error .

2014-05-08 Thread peng
Hey, I have proceeded to use radosgw. I followed the instructions in the documentation on ceph.com. In rgw.conf, I write: FastCgiExternalServer /var/www/html/s3gw.fcgi -socket /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock And I have set the same socket-file in /etc/cep

Re: [ceph-users] Slow IOPS on RBD compared to journal and backing devices

2014-05-08 Thread Alexandre DERUMIER
> The OSD processes are quite busy, reading well over 200% on atop, but > the system is not CPU or otherwise resource starved at that moment. OSDs use 2 threads by default (could explain the 200%) maybe can you try to put in ceph.conf osd op threads = 8 (I don't know how many cores you have)
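That suggestion as a config/runtime sketch (8 is Alexandre's example value, not a tuned number):

    # ceph.conf, [osd] section
    osd op threads = 8
    # or applied live:
    ceph tell osd.* injectargs '--osd-op-threads 8'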