Hi Kasper,
Can you talk about how you make use of Ceph, and share some details about your CentOS setup?
I guess you are using CephFS on your Ceph cluster?
Best regards,
Wheats
On Jul 17, 2013, at 2:16 PM, Kasper Dieter dieter.kas...@ts.fujitsu.com wrote:
Hi Xiaoxi,
we are really running Ceph on CentOS-6.4
(6 server
Hi Dieter,
Thanks a lot for the information. Could I learn more about your use case?
Do you care about performance?
And may I ask why you want CentOS + a custom kernel? I thought people use
CentOS for stability reasons, but if you are using CentOS + a custom kernel, why don't you
just use Ubuntu or
Hi Xiaoxi, Wheats,
you hit the right point: we are looking for an Enterprise Linux distribution as
the base for Ceph.
In our observation, RHEL and CentOS have a very broad distribution and high
acceptance in the data center.
The pain point of this distro (from Ceph's point of view) is the old kernel.
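For reference, a minimal sketch of one common way to keep CentOS 6 but run a newer kernel, assuming the third-party ELRepo repository is acceptable (the packages below are ELRepo's, not part of stock CentOS):

# check the currently running kernel
uname -r
# import the ELRepo signing key, then install the elrepo-release package for EL6 from elrepo.org
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# with the repo in place, install a mainline kernel and reboot into it
yum --enablerepo=elrepo-kernel install kernel-ml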
On Wed, Jul 17, 2013 at 08:59:43AM +0200, Kasper Dieter wrote:
Hi Xiaoxi, Wheats,
you hit the right point: we are looking for an Enterprise Linux distribution
as the base for Ceph.
In our observation, RHEL and CentOS have a very broad distribution and high
acceptance in the data center.
Hi everyone,
I converted every OSD from 2TB to 4TB, and when the move completed, the
realtime Ceph log (ceph -w)
displays the error: *I don't have pgid 0.2c8*
After that, I ran: ceph pg force_create_pg 0.2c8
Ceph now warns: pgmap v55175: 22944 pgs: 1 creating, 22940 active+clean, 3
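A minimal sketch of how the missing PG could be inspected before (or after) resorting to force_create_pg; the pgid 0.2c8 is taken from the error above:

# which PGs does the cluster consider stuck stale?
ceph pg dump_stuck stale
# which OSDs should hold this PG under the current CRUSH map?
ceph pg map 0.2c8
# query the PG directly (fails if no OSD has a copy of it)
ceph pg 0.2c8 query
# full health detail, listing every problematic PG
ceph health detail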
Sorry, this was not sent to ceph-users earlier.
I checked the mon.1 log and found that the cluster was not in HEALTH_OK when the
ruleset was set to iscsi:
2013-07-14 15:52:15.715871 7fe8a852a700 0 log [INF] : pgmap
v16861121: 19296 pgs: 19052 active+clean, 73
active+remapped+wait_backfill, 171 active+remapped+backfilling;
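A minimal sketch of how to confirm recovery has finished before pointing a pool at a new ruleset; the pool name 'iscsi' and the ruleset id are assumptions taken from the description above:

# wait for backfill/remap to drain and the cluster to report HEALTH_OK
ceph -s
ceph health detail
# only then switch the pool to the new ruleset (cuttlefish-era option name)
ceph osd pool set iscsi crush_ruleset 3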
On Ubuntu 13.04, ceph 0.61.4.
I was running an fio read test as below, then it hung:
root@ceph-node2:/mnt# fio -filename=/dev/rbd1 -direct=1 -iodepth 1 -thread
-rw=read -ioengine=psync -bs=4k -size=50G -numjobs=16 -group_reporting
-name=mytest
mytest: (g=0): rw=read, bs=4K-4K/4K-4K,
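A minimal sketch of what could be checked on the client when a kernel-RBD fio run hangs like this (the debugfs paths assume debugfs is mounted; the wildcard matches the kernel client's session directory):

# kernel messages usually show libceph socket errors or slow OSDs
dmesg | tail -50
# in-flight requests held by the kernel client
mount -t debugfs none /sys/kernel/debug 2>/dev/null
cat /sys/kernel/debug/ceph/*/osdc
# cluster-side view: blocked requests, down OSDs, stuck PGs
ceph -s
ceph health detail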
Hello
Is there any way to verify that the cache is enabled? My machine is running
with the following parameters:
qemu-system-x86_64 -machine accel=kvm:tcg -name instance-0302 -S
-machine pc-i440fx-1.5,accel=kvm,usb=off -cpu
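A minimal sketch of two ways to check whether RBD caching is actually in effect for a QEMU/librbd guest; the admin-socket path and client name are assumptions and only work if the admin socket is configured for the client:

# ceph.conf on the hypervisor, [client] section (assumed settings):
#   rbd cache = true
#   admin socket = /var/run/ceph/$name.$pid.asok
# then query the running client's effective configuration:
ceph --admin-daemon /var/run/ceph/client.admin.*.asok config show | grep rbd_cache
# alternatively, make the intent explicit in the qemu drive spec:
#   -drive file=rbd:pool/image:rbd_cache=true,cache=writeback,if=virtio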
Do you mean the write barrier?
So all ceph disk partitions are mounted with barrier=1?
-- Original --
From: Gregory Farnum g...@inktank.com;
Date: Wed, Jul 17, 2013 00:29 AM
To: Da Chun ng...@qq.com;
Cc: ceph-users ceph-users@lists.ceph.com;
Subject: Re:
On Wed, Jul 17, 2013 at 9:50 AM, Da Chun ng...@qq.com wrote:
Do you mean the write barrier?
So all ceph disk partitions are mounted with barrier=1?
Yes. Ceph requires very strong safety semantics in order to maintain
consistency, and it's very careful to do so in a performant fashion
(instead
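A minimal sketch of how to check that barriers are in effect on the OSD filesystems; the mount-point path is an assumption:

# show the mount options actually in use for the OSD data partitions
grep /var/lib/ceph/osd /proc/mounts
# 'nobarrier' (or 'barrier=0') in the options means barriers are disabled;
# otherwise ext4 and XFS enable write barriers by default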
subscribe
Thanks,
Dieter
What version are you running? How did you move the OSDs from 2TB to 4TB?
-Sam
On Wed, Jul 17, 2013 at 12:59 AM, Ta Ba Tuan tua...@vccloud.vn wrote:
Hi everyone,
I converted every OSD from 2TB to 4TB, and when the move completed, the realtime
Ceph log (ceph -w)
displays the error: I don't have pgid
On Wed, Jul 17, 2013 at 11:07 AM, ker can kerca...@gmail.com wrote:
Hi,
Has anyone got HBase working on Ceph? I've got Ceph (cuttlefish) and
hbase-0.94.9.
My setup is erroring out looking for getDefaultReplication and
getDefaultBlockSize ... but I can see those defined in
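When the Hadoop bindings complain about methods like getDefaultReplication that clearly exist in the source, a stale jar on the classpath is a common cause. A minimal sketch of how to check, where the jar path and class name are assumptions:

# which jars does Hadoop actually put on the classpath?
hadoop classpath | tr ':' '\n' | grep -i ceph
# does the jar that is picked up really contain the expected methods?
javap -classpath /path/to/cephfs-hadoop.jar org.apache.hadoop.fs.ceph.CephFileSystem | grep -i default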
Hi,
Something interesting: the OSD with problems eats much more memory.
Normally it is about 300 MB;
this OSD eats as much as 30 GB.
Can I do any tests to help find where the problem is?
--
Regards
Dominik
2013/7/16 Dominik Mostowiec dominikmostow...@gmail.com:
Hi,
I noticed that the problem is more frequent at
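A minimal sketch of tests that could help narrow down a bloated OSD, assuming a tcmalloc build so the heap commands are available; osd.12 is a hypothetical ID:

# resident memory per ceph-osd process
ps -o pid,rss,cmd -C ceph-osd
# heap statistics reported by the daemon itself
ceph tell osd.12 heap stats
# ask tcmalloc to hand freed pages back to the OS (helps if it is only fragmentation)
ceph tell osd.12 heap release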
On Wed, Jul 17, 2013 at 4:40 AM, Vladislav Gorbunov vadi...@gmail.com wrote:
Sorry, this was not sent to ceph-users earlier.
I checked the mon.1 log and found that the cluster was not in HEALTH_OK when the
ruleset was set to iscsi:
2013-07-14 15:52:15.715871 7fe8a852a700 0 log [INF] : pgmap
v16861121: 19296 pgs: 19052
[please keep replies on the list]
On 07/17/2013 04:04 AM, Gaylord Holder wrote:
On 07/16/2013 09:22 PM, Josh Durgin wrote:
On 07/16/2013 06:06 PM, Gaylord Holder wrote:
Now whenever I try to map an RBD to a machine, mon0 complains:
feature set mismatch, my 2 < server's 2040002, missing
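That message usually means the kernel client is older than the feature bits the cluster advertises (newer CRUSH tunables are a frequent culprit). A minimal sketch of the usual ways out, to be verified against the actual missing bits:

# check the kernel on the client that fails to map the RBD
uname -r
# if upgrading the client kernel is not an option, fall the cluster back to legacy CRUSH tunables
ceph osd crush tunables legacy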
On 07/17/2013 05:59 AM, Maciej Gałkiewicz wrote:
Hello
Is there any way to verify that the cache is enabled? My machine is running
with the following parameters:
qemu-system-x86_64 -machine accel=kvm:tcg -name instance-0302 -S
-machine pc-i440fx-1.5,accel=kvm,usb=off -cpu
Hmm, I was thinking I'd seen an ENOMEM output before and had it turn
out to be something strange that didn't involve memory issues, but I
can't find it now. Since you're running in a VM I'm thinking it might
actually be running out of memory; do you have swap enabled on your
VM, and can you try
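A minimal sketch of how to confirm whether the VM is really running out of memory and to add temporary swap for the test; the swap file path and size are arbitrary:

# current memory/swap situation and any recent allocation failures
free -m
swapon -s
dmesg | grep -iE 'out of memory|page allocation failure' | tail
# add a temporary 2 GB swap file for the test
dd if=/dev/zero of=/swapfile bs=1M count=2048
mkswap /swapfile && swapon /swapfile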
This is probably something I introduced in my private version ... when I
merged the 1.0 branch with the hadoop-topo branch. Let me fix this and try
again.
On Wed, Jul 17, 2013 at 5:35 PM, ker can kerca...@gmail.com wrote:
Some more from lastIOE.printStackTrace():
Caused by:
Yup, that was me.
We have HBase working here.
You'll want to disable localized reads, per bug #5388. That bug
will cause your regionservers to crash fairly often during
compaction.
You'll also want to restart each of the regionservers and masters
often (we're doing it once a day) to
Yep, it's working now. I guess the deprecated annotation for
createNonRecursive threw me off. :o)
@Deprecated
public FSDataOutputStream createNonRecursive(Path path, FsPermission permission,
    boolean overwrite,
Hi list,
not a real problem, but a weird thing under cuttlefish:
2013-07-18 10:51:01.597390 mon.0 [INF] pgmap v266324: 216 pgs: 215
active+clean, 1 active+remapped+backfilling; 144 GB data, 305 GB used,
453 GB / 766 GB avail; 3921KB/s rd, 2048KB/s wr, 288op/s; 1/116426
degraded (0.001%);
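A minimal sketch of how to find and inspect the single PG still backfilling behind a status line like this; the pgid shown is a placeholder for whatever dump_stuck reports:

# list the PGs that are not active+clean
ceph pg dump_stuck unclean
# then query the reported PG for its backfill progress and peers
ceph pg 2.3f query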
That's what I did:
cluster state HEALTH_OK
1. download the crush map from the cluster:
https://dl.dropboxusercontent.com/u/2296931/ceph/crushmap1.txt
2. modify the crush map to add a pool and ruleset 'iscsi' with 2
datacenters, and upload the crush map to the cluster:
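For reference, a minimal sketch of the usual get/decompile/edit/compile/inject cycle those two steps describe; the file names are arbitrary:

# 1. fetch and decompile the current map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# 2. edit crushmap.txt (add the iscsi root and ruleset), recompile, and inject it
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new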
I'm using Ceph 0.61.4.
I removed each OSD (2TB) on the data hosts and re-created it with 4TB disks.
When the conversion finished, Ceph warned that 4 pgs were in the stale state, with the
warning: i don't have pgid pgid
Afterwards, I created the 4 pgs with the command: ceph pg force_create_pg pgid
Now (after a long time), Ceph