Re: [ceph-users] Any concern about Ceph on CentOS

2013-07-17 Thread Haomai Wang
Hi Kasper, can you share more detail about how you use Ceph on CentOS? I guess you are using CephFS on the Ceph cluster? Best regards, Wheats. On 2013-07-17, at 2:16 PM, Kasper Dieter dieter.kas...@ts.fujitsu.com wrote: Hi Xiaoxi, we are really running Ceph on CentOS-6.4 (6 server

Re: [ceph-users] Any concern about Ceph on CentOS

2013-07-17 Thread Chen, Xiaoxi
Hi Dieter, thanks a lot for the information. Could I learn more about your use case? Do you care about performance? And may I ask why you want CentOS + a custom kernel? I thought people use CentOS for stability, but if you are using CentOS + a custom kernel, why not just use Ubuntu or

Re: [ceph-users] Any concern about Ceph on CentOS

2013-07-17 Thread Kasper Dieter
Hi Xiaoxi, Wheats, you hit the right point: we are looking for an enterprise Linux distribution as a base for Ceph. In our observation, RHEL and CentOS have a very broad distribution and high acceptance in the data center. The pain point of this distro (from Ceph's point of view) is the old kernel.

Re: [ceph-users] Any concern about Ceph on CentOS

2013-07-17 Thread Liu Yuan
On Wed, Jul 17, 2013 at 08:59:43AM +0200, Kasper Dieter wrote: Hi Xiaoxi, Wheats, you hit the right point: we are looking for an enterprise Linux distribution as a base for Ceph. In our observation, RHEL and CentOS have a very broad distribution and high acceptance in the data center.

[ceph-users] ceph -w warning I don't have pgid 0.2c8?

2013-07-17 Thread Ta Ba Tuan
Hi everyone, I converted every OSD from 2TB to 4TB disks, and when the move completed, the realtime Ceph log (ceph -w) displayed the error: *I don't have pgid 0.2c8*. After that I ran: ceph pg force_create_pg 0.2c8. Ceph now warns: pgmap v55175: 22944 pgs: 1 creating, 22940 active+clean, 3
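Before forcing a PG to be recreated (which gives up any data it held), it usually helps to confirm which PGs are stale and query their state first. A minimal sketch, reusing the pgid from the report above; output formats vary between releases:

  # overall health with per-PG detail
  ceph health detail
  # list PGs stuck in the stale state
  ceph pg dump_stuck stale
  # query a single PG (only answers if some OSD still carries it)
  ceph pg 0.2c8 query
  # last resort: recreate the PG as empty
  ceph pg force_create_pg 0.2c8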

Re: [ceph-users] all oas crush on start

2013-07-17 Thread Vladislav Gorbunov
Sorry, this was not sent to ceph-users earlier. I checked the mon.1 log and found that the cluster was not in HEALTH_OK when the ruleset was set to iscsi: 2013-07-14 15:52:15.715871 7fe8a852a700 0 log [INF] : pgmap v16861121: 19296 pgs: 19052 active+clean, 73 active+remapped+wait_backfill, 171 active+remapped+backfilling;

[ceph-users] ceph fio read test hangs

2013-07-17 Thread Da Chun
On Ubuntu 13.04, ceph 0.61.4. I was running an fio read test as below, then it hung: root@ceph-node2:/mnt# fio -filename=/dev/rbd1 -direct=1 -iodepth 1 -thread -rw=read -ioengine=psync -bs=4k -size=50G -numjobs=16 -group_reporting -name=mytest mytest: (g=0): rw=read, bs=4K-4K/4K-4K,
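When a read against /dev/rbd1 hangs like this, it is worth checking both the cluster and the kernel client for stuck requests. A minimal sketch, assuming debugfs is available so the kernel RBD client can expose its outstanding requests there:

  # cluster side: down OSDs, peering problems, slow requests
  ceph -s
  ceph health detail
  # client side: requests the kernel client is still waiting on
  mount -t debugfs none /sys/kernel/debug 2>/dev/null
  cat /sys/kernel/debug/ceph/*/osdc
  # kernel messages about hung tasks or lost OSD connections
  dmesg | tail -n 50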

Re: [ceph-users] Libvirt, quemu, ceph write cache settings

2013-07-17 Thread Maciej Gałkiewicz
Hello. Is there any way to verify that the cache is enabled? My machine is running with the following parameters: qemu-system-x86_64 -machine accel=kvm:tcg -name instance-0302 -S -machine pc-i440fx-1.5,accel=kvm,usb=off -cpu

Re: [ceph-users] Should the disk write cache be disabled?

2013-07-17 Thread Da Chun
Do you mean the write barrier? So all Ceph disk partitions are mounted with barrier=1? -- Original -- From: Gregory Farnum g...@inktank.com; Date: Wed, Jul 17, 2013 00:29 AM; To: Da Chun ng...@qq.com; Cc: ceph-users ceph-users@lists.ceph.com; Subject: Re:

Re: [ceph-users] Should the disk write cache be disabled?

2013-07-17 Thread Gregory Farnum
On Wed, Jul 17, 2013 at 9:50 AM, Da Chun ng...@qq.com wrote: Do you mean the write barrier? So all ceph disk partitions are mounted with barrier=1? Yes. Ceph requires very strong safety semantics in order to maintain consistency, and it's very careful to do so in a performant fashion (instead
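A quick way to check is to look at the mount options actually in effect for the OSD data partitions. A minimal sketch, assuming ext4 OSD partitions under the default /var/lib/ceph/osd path (on current kernels barriers are on by default, so they may not show up explicitly in /proc/mounts):

  # show the OSD data mounts and their options
  grep /var/lib/ceph/osd /proc/mounts
  # remount an ext4 OSD partition with barriers explicitly enabled
  mount -o remount,barrier=1 /var/lib/ceph/osd/ceph-0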

[ceph-users] subscribe

2013-07-17 Thread Kasper Dieter
subscribe Thanks, Dieter

Re: [ceph-users] ceph -w warning I don't have pgid 0.2c8?

2013-07-17 Thread Samuel Just
What version are you running? How did you move the OSDs from 2TB to 4TB? -Sam On Wed, Jul 17, 2013 at 12:59 AM, Ta Ba Tuan tua...@vccloud.vn wrote: Hi everyone, I converted every OSD from 2TB to 4TB disks, and when the move completed, the realtime Ceph log (ceph -w) displayed the error: I don't have pgid

Re: [ceph-users] ceph hbase:

2013-07-17 Thread Noah Watkins
On Wed, Jul 17, 2013 at 11:07 AM, ker can kerca...@gmail.com wrote: Hi, has anyone got HBase working on Ceph? I've got Ceph (cuttlefish) and hbase-0.94.9. My setup is erroring out looking for getDefaultReplication / getDefaultBlockSize ... but I can see those defined in

Re: [ceph-users] two osd stack on peereng after start osd to recovery

2013-07-17 Thread Dominik Mostowiec
Hi, something interesting: the OSD with problems eats much more memory. The standard is about 300 MB; this OSD eats up to 30 GB. Can I do any tests to help find where the problem is? -- Regards Dominik 2013/7/16 Dominik Mostowiec dominikmostow...@gmail.com: Hi, I noticed that the problem is more frequent at
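One place to start is the OSD's admin socket and the tcmalloc heap commands while the process is bloated. A minimal sketch, assuming the default admin socket path and an OSD linked against tcmalloc (osd.0 is a placeholder):

  # internal counters, including throttle and messenger memory
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump
  # tcmalloc heap statistics for the running OSD
  ceph tell osd.0 heap stats
  # optionally profile the heap over the problem period
  ceph tell osd.0 heap start_profiler
  ceph tell osd.0 heap dump
  ceph tell osd.0 heap stop_profiler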

Re: [ceph-users] all oas crush on start

2013-07-17 Thread Gregory Farnum
On Wed, Jul 17, 2013 at 4:40 AM, Vladislav Gorbunov vadi...@gmail.com wrote: Sorry, this was not sent to ceph-users earlier. I checked the mon.1 log and found that the cluster was not in HEALTH_OK when the ruleset was set to iscsi: 2013-07-14 15:52:15.715871 7fe8a852a700 0 log [INF] : pgmap v16861121: 19296 pgs: 19052

Re: [ceph-users] feature set mismatch

2013-07-17 Thread Josh Durgin
[please keep replies on the list] On 07/17/2013 04:04 AM, Gaylord Holder wrote: On 07/16/2013 09:22 PM, Josh Durgin wrote: On 07/16/2013 06:06 PM, Gaylord Holder wrote: Now whenever I try to map an RBD to a machine, mon0 complains: feature set mismatch, my 2 < server's 2040002, missing

Re: [ceph-users] Libvirt, quemu, ceph write cache settings

2013-07-17 Thread Josh Durgin
On 07/17/2013 05:59 AM, Maciej Gałkiewicz wrote: Hello. Is there any way to verify that the cache is enabled? My machine is running with the following parameters: qemu-system-x86_64 -machine accel=kvm:tcg -name instance-0302 -S -machine pc-i440fx-1.5,accel=kvm,usb=off -cpu
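One way to check from the running librbd client is to give it an admin socket and query its configuration. A minimal sketch, assuming a [client] admin socket is configured in ceph.conf on the hypervisor (the socket path and the config lines below are assumptions, not taken from this thread):

  # in ceph.conf on the hypervisor:
  #   [client]
  #   rbd cache = true
  #   admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
  # after restarting the guest, ask the running client what it is actually using
  ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.67890.asok config show | grep rbd_cache

If I recall correctly, recent QEMU versions also pass the drive's cache mode through to librbd, so cache=writeback on the drive enables the RBD cache while cache=none disables it regardless of ceph.conf.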

Re: [ceph-users] Problems mounting the ceph-FS

2013-07-17 Thread Gregory Farnum
Hmm, I was thinking I'd seen an ENOMEM output before and had it turn out to be something strange that didn't involve memory issues, but I can't find it now. Since you're running in a VM I'm thinking it might actually be running out of memory; do you have swap enabled on your VM, and can you try
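To test the out-of-memory theory, one can check current memory and add a temporary swap file inside the VM before retrying the mount. A minimal sketch (size and path are arbitrary):

  free -m
  swapon -s
  # add 1 GB of temporary swap
  dd if=/dev/zero of=/swapfile bs=1M count=1024
  mkswap /swapfile
  swapon /swapfile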

Re: [ceph-users] ceph hbase:

2013-07-17 Thread ker can
This is probably something I introduced in my private version ... when I merged the 1.0 branch with the hadoop-topo branch. Let me fix this and try again. On Wed, Jul 17, 2013 at 5:35 PM, ker can kerca...@gmail.com wrote: Some more from lastIOE.printStackTrace(): Caused by:

Re: [ceph-users] ceph hbase:

2013-07-17 Thread Mike Bryant
Yup, that was me. We have HBase working here. You'll want to disable localized reads, as per bug #5388. That bug will cause your regionservers to crash fairly often when doing compaction. You'll also want to restart each of the regionservers and masters often (we're doing it once a day) to

Re: [ceph-users] ceph hbase:

2013-07-17 Thread ker can
Yep, it's working now. I guess the deprecated annotation for createNonRecursive threw me off. :o) @Deprecated public FSDataOutputStream createNonRecursive(Path path, FsPermission permission, boolean overwrite,

[ceph-users] weird: -23/116426 degraded (-0.020%)

2013-07-17 Thread Mikaël Cluseau
Hi list, not a real problem, but a weird thing under cuttlefish: 2013-07-18 10:51:01.597390 mon.0 [INF] pgmap v266324: 216 pgs: 215 active+clean, 1 active+remapped+backfilling; 144 GB data, 305 GB used, 453 GB / 766 GB avail; 3921KB/s rd, 2048KB/s wr, 288op/s; 1/116426 degraded (0.001%);

Re: [ceph-users] all oas crush on start

2013-07-17 Thread Vladislav Gorbunov
That's what I did (cluster state HEALTH_OK): 1. Load the crush map from the cluster: https://dl.dropboxusercontent.com/u/2296931/ceph/crushmap1.txt 2. Modify the crush map to add a pool and ruleset iscsi with 2 datacenters, then upload the crush map to the cluster:
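The round trip behind steps 1 and 2 is the standard crushtool workflow; a minimal sketch (file names are arbitrary):

  # pull the compiled crush map from the cluster and decompile it
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # edit crushmap.txt (add the pool, ruleset iscsi and datacenter buckets), then recompile and upload
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new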

Re: [ceph-users] ceph -w warning I don't have pgid 0.2c8?

2013-07-17 Thread Ta Ba Tuan
I'm using Ceph 0.61.4. I removed each OSD (2TB) on the data hosts and re-created it with 4TB disks. When the conversion finished, Ceph warned that 4 pgs were in the stale state, with the warning: i don't have pgid <pgid>. After that, I created the 4 pgs with the command: ceph pg force_create_pg <pgid>. Now (after a long time), Ceph
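For reference, the usual way to swap the disk under an OSD is to drain and fully remove the old OSD before creating a new one on the 4TB disk; leaving out steps here is a common way to end up with stale PGs. A minimal sketch for a single OSD (id 12 is a placeholder; the service command differs between sysvinit and upstart):

  # drain the OSD and wait for the cluster to return to HEALTH_OK
  ceph osd out 12
  # stop the daemon, then remove the OSD from crush, auth and the OSD map
  service ceph stop osd.12
  ceph osd crush remove osd.12
  ceph auth del osd.12
  ceph osd rm 12
  # finally create and start a new OSD on the 4TB disk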