hi, all
1.
how do i set a region's endpoints? and how do i find out how many endpoints there are?
2.
i followed the steps of 'create a region', and after that i can list the new
region, but the default region is always there.
3.
there is one rgw for each zone. after the rgw starts up, i can find the pools
related to
i had overlooked a detail: setting FastCgiWrapper off.
thanks
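On question 1: in the firefly-era multi-region setup, a region's endpoints live in the region's JSON map, so the usual workflow is dump, edit, load back. A hedged CLI sketch against a live cluster (the exact JSON edit is yours to make):

```
# Dump the current region map, edit the "endpoints" arrays, and load it back.
radosgw-admin region get > region.json
#   ...edit region.json: the region and each zone entry have an "endpoints" list...
radosgw-admin region set < region.json
radosgw-admin region list        # verify the new region shows up
radosgw-admin regionmap update   # push the updated region map
```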
At 2014-10-30 10:01:19, "yuelongguang" wrote:
hi, ceph list:
how do you solve this issue? i ran into it when i tried to deploy 2 rgws on one
ceph cluster in the default region and default zone.
thanks
At 2014-07-01 09:06:24, "Brian Rak" wrote:
>That sounds like you have some kind of odd situation going on. We only
>use radosgw wit
hi, clewis:
my environment:
one ceph cluster, 3 nodes, each node has one monitor and one osd. one
rgw (rgw1) is on one of them (osd1). before i deployed the second rgw (rgw2),
the first rgw worked well.
after i deploy a second rgw, it cannot start.
the number of radosgw processes increases
with
cdn?
thanks, looking forward to your reply.
At 2014-10-28 02:29:03, "Craig Lewis" wrote:
On Sun, Oct 26, 2014 at 9:08 AM, yuelongguang wrote:
hi,
1. does one radosgw daemon correspond to one zone? is the ratio 1:1?
Not necessarily. You need at least one rado
h of
fixes for Civetweb, so I'm leaning towards "not on Firefly" unless somebody
more knowledgeable tells me otherwise.
On Thu, Oct 23, 2014 at 11:04 PM, yuelongguang wrote:
hi, yehuda
1.
can we deploy multiple rgws on one ceph cluster?
if so, does it bring us any problems?
2. w
hi, yehuda
1.
can we deploy multiple rgws on one ceph cluster?
if so, does it bring us any problems?
2. what is the major difference between apache and civetweb?
what is civetweb's advantage?
thanks
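On the apache vs civetweb question: civetweb is embedded in the radosgw process, so it needs no apache/mod_fastcgi layer at all, which removes most of the moving parts. On versions that ship civetweb, enabling it is a one-line config; a minimal ceph.conf sketch (instance name and port are assumptions):

```
[client.radosgw.gateway]
rgw frontends = civetweb port=7480
```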
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
hi, all
can we deploy multiple rgws on one ceph cluster?
if so, does it bring us any problems?
thanks
1. why does an erasure coded pool not work with rbd?
2. i used the rados command to put a file into an erasure coded pool, then rm it. why
does the file remain on the osd's backend fs all the time?
3. what is the best use case for an erasure coded pool?
4. the command 'rados ls' lists objects; where are the objec
hi, all
pool size/min_size does not have any effect on an erasure-coded pool, right?
and does an erasure-coded pool support rbd?
thanks
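For reference, an erasure-coded pool's durability comes from the profile's k and m (ceph sets the pool's size to k+m for you), which is why hand-setting size/min_size does not behave the way it does on replicated pools; and firefly-era rbd cannot run on an EC pool directly, since rbd needs partial overwrites that EC pools do not support. A hedged CLI sketch (profile name and pg counts are made up):

```
ceph osd erasure-code-profile set myprofile k=2 m=1
ceph osd pool create ecpool 64 64 erasure myprofile
ceph osd dump | grep ecpool     # note size = k+m = 3
```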
hi, all
pool size/min_size does not have any effect on an erasure-coded pool, right?
thanks
thanks, sage weil.
writing a fs is a serious matter, we should make it clear, including coding style.
there are other places we should fix.
thanks
At 2014-09-29 12:10:52, "Sage Weil" wrote:
>On Mon, 29 Sep 2014, yuelongguang wrote:
>> hi, sage weil. 1.
>> you me
thanks
At 2014-09-29 11:23:37, "Sage Weil" wrote:
>On Mon, 29 Sep 2014, yuelongguang wrote:
>> hi,all
>> 1.
>>
>> and who will connect it? as for the osd, is this ms_objecter a listening socket?
>> it is not included in the osdmap, so how do we know ms_objecter's
hi, all
1.
and who will connect it? as for the osd, is this ms_objecter a listening socket?
it is not included in the osdmap, so how do we know ms_objecter's listening address and
connect to it?
thanks
thanks. i have not configured the switch.
i have only just learned about it.
At 2014-09-25 12:38:48, "Irek Fasikhov" wrote:
Have you configured the switch?
2014-09-25 5:07 GMT+04:00 yuelongguang :
hi, all
after i set mtu=9000, ceph-deploy waits for a reply all the time, at 'detecting
pl
hi, all
after i set mtu=9000, ceph-deploy waits for a reply all the time, at 'detecting
platform for host.'
how do i find out what commands ceph-deploy needs that osd to run?
thanks
hi, all
my question comes from my test.
let's take an example. object1 (4MB) --> pg 0.1 --> osd 1,2,3, p1
when the client is writing object1 and, during the write, osd1 goes down, let's suppose
2MB has been written.
1.
when the connection to osd1 is down, what does the client do? ask the monitor for
a new osdmap? or only
hi, all
take a look at the link,
http://www.ceph.com/docs/master/architecture/#smart-daemons-enable-hyperscale
could you explain points 2 and 3 in that picture?
1.
at points 2 and 3, before the primary writes data to the next osd, where is the data? is it
in memory or on disk already?
2. where is the
hi, all
in order to test ceph stability, i try to kill osds.
in this case, i kill 3 osds (osd3,2,0) that store the same pg 2.30.
---crush---
osdmap e1342 pool 'rbd' (2) object 'rbd_data.19d92ae8944a.' ->
pg 2.c59a45b0 (2.30) -> up ([3,2,0], p3) acting ([3,2,0], p3)
[root@cephosd5
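For context on how an object lands on a pg and an acting set like the osdmap line above shows: a rough sketch of the mapping. This is a simplified illustration only; real Ceph hashes the object name with rjenkins and stable_mod, then runs CRUSH to pick the OSDs, so both the hash and the placement below are stand-ins:

```python
# Simplified illustration of Ceph's object -> PG -> OSD mapping.
# Real Ceph: pg = stable_mod(rjenkins(name), pg_num), then CRUSH picks OSDs.
# Here: crc32 stands in for the hash, round-robin stands in for CRUSH.
import zlib

def object_to_pg(pool_id, object_name, pg_num):
    # Hash the object name and fold it into the pool's pg range.
    h = zlib.crc32(object_name.encode()) & 0xffffffff
    return (pool_id, h % pg_num)

def pg_to_osds(pg, osds, size=3):
    # Stand-in for CRUSH: deterministically pick `size` distinct OSDs.
    pool_id, ps = pg
    start = ps % len(osds)
    return [osds[(start + i) % len(osds)] for i in range(size)]

pg = object_to_pg(2, "object1", 64)
acting = pg_to_osds(pg, osds=list(range(6)))
print(pg, acting)
```

Killing all `size` OSDs of one pg, as the test above does, removes every copy of that pg's objects, which is exactly why it is the interesting failure case.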
hi, all
i want to test some cases that most likely lose data.
for now i just test killing osds.
do you have any such test cases?
thanks
fio parameters
---fio---
[global]
ioengine=libaio
direct=1
rw=randwrite
filename=/dev/vdb
time_based
runtime=300
stonewall
[iodepth32]
iodepth=32
bs=4k
At 2014-09-11 05:04:09, "yuelongguang" wrote:
hi, josh durgin:
please look at my test. inside vm us
37:23, "Josh Durgin" wrote:
>On 09/09/2014 07:06 AM, yuelongguang wrote:
>> hi, josh.durgin:
>> i want to know how librbd launches io requests.
>> use case:
>> inside the vm, i use fio to test the rbd disk's io performance.
>> fio's parameters are bs=4k, direct i
as for the second question, could you tell me where the code is?
how does ceph make size/min_size copies?
thanks
At 2014-09-11 12:19:18, "Gregory Farnum" wrote:
>On Wed, Sep 10, 2014 at 8:29 PM, yuelongguang wrote:
>>
>>
>>
>>
>> as for ac
hi, all
i am testing rbd performance. now there is only one vm which is using rbd as
its disk, and inside it fio is doing r/w.
the big difference is that i set a big iodepth rather than iodepth=1.
according to my test, the bigger the iodepth, the bigger the cpu usage.
analysing the output of the top comm
"ondisk".
I assume you're using btrfs; the ack is returned after the write is applied
in-memory and readable by clients. The ondisk (commit) message is returned
after it's durable to the journal or the backing filesystem.
-Greg
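The two-reply behaviour Greg describes can be sketched as a toy model (hypothetical names, not Ceph's actual message path): each write produces an ack reply once it is applied in memory and readable, and an ondisk reply once it is durable, both carrying the same sequence number so the client can match them to the op:

```python
# Toy model of one OSD write producing two replies:
# "ack" when applied in memory (readable), "ondisk" when durable.
def osd_write(op_seq, data, memstore, journal):
    replies = []
    memstore[op_seq] = data          # applied in memory -> readable by clients
    replies.append(("ack", op_seq))
    journal.append((op_seq, data))   # durable in the journal
    replies.append(("ondisk", op_seq))
    return replies

mem, jnl = {}, []
print(osd_write(15, b"x" * 4096, mem, jnl))
```

So two osd_op_reply messages for one write, differing only in the completion flag, is expected behaviour rather than a bug.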
On Wednesday, September 10, 2014, yuelongguang wrot
hi, all
i am testing rbd performance. now there is only one vm which is using rbd as
its disk, and inside it fio is doing r/w.
the big difference is that i set a big iodepth rather than iodepth=1.
what do you think about it? which part is using up the cpu? i want to find the root
cause.
---de
hi, all
i have recently been debugging ceph rbd. the log shows that one write to an osd can get two
replies.
the difference between them is the seq.
why?
thanks
---log---
reader got message 6 0x7f58900010a0 osd_op_reply(15
rbd_data.19d92ae8944a.0001 [set-alloc-hint object_size 4194304
write_
hi, josh.durgin:
i want to know how librbd launches io requests.
use case:
inside the vm, i use fio to test the rbd disk's io performance.
fio's parameters are bs=4k, direct io, qemu cache=none.
in this case, does librbd just send what it gets from the vm, i mean no
gather/scatter? the rate, io inside vm : i
hi, all
that is crazy.
1.
all my osds are down, but ceph -s tells me they are up and in. why?
2.
now all osds are down, a vm is using rbd as its disk, and inside the vm fio is
r/wing the disk, but it hangs and cannot be killed. why?
thanks
[root@cephosd2-monb ~]# ceph -v
ceph version 0.81 (8de9501df
hi, joao, mark nelson, both of you:
where is the monmap stored?
how do i dump the monitor's data in /var/lib/ceph/mon/ceph-cephosd1-mona/store.db/?
thanks
At 2014-08-28 09:00:41, "Mark Nelson" wrote:
>On 08/28/2014 07:48 AM, yuelongguang wrote:
>> hi,all
>> what
hi, all
what is in the directory /var/lib/ceph/mon/ceph-cephosd1-mona/store.db/?
how do i dump it?
where is the monmap stored?
thanks
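On dumping the mon store and finding the monmap: the monmap is kept inside that store.db (a leveldb database), and there are standard tools to look at both. A hedged sketch; tool names and syntax vary by ceph version, so treat this as a starting point:

```
# Get the current monmap from a live cluster and print it:
ceph mon getmap -o /tmp/monmap
monmaptool --print /tmp/monmap

# Inspect the raw key-value store (stop the mon first):
ceph-kvstore-tool /var/lib/ceph/mon/ceph-cephosd1-mona/store.db list
```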
cephx auth keys.
-Michael
On 26/08/2014 12:26, yuelongguang wrote:
hi, all
i have 5 osds and 3 mons. its status was ok then.
to be mentioned, this cluster has no data at all. i just deployed it to become
familiar with some command lines.
what is the problem and how do i fix it?
thanks
---environment---
hi, all
i have 5 osds and 3 mons. its status was ok then.
to be mentioned, this cluster has no data at all. i just deployed it to become
familiar with some command lines.
what is the problem and how do i fix it?
thanks
---environment---
ceph-release-1-0.el6.noarch
ceph-deploy-1.5.11-0.noarch
ceph
fio.
For ceph rbd there is a special fio engine:
https://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html
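A minimal job file for the fio rbd engine mentioned above; pool, image, and client names are placeholders, and it requires an fio built with rbd support:

```
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=testimg
rw=randwrite
bs=4k
iodepth=32
runtime=300
time_based

[rbd-test]
```

Because the rbd engine drives librbd directly, it measures the cluster without any qemu/vm overhead, which makes it a useful baseline next to in-guest fio runs.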
2014-08-26 12:15 GMT+04:00 yuelongguang :
hi, all
i am planning to do a test on ceph, including performance, throughput,
scalability, and availability.
in order to get a full t
hi, all
i am planning to do a test on ceph, including performance, throughput,
scalability, and availability.
in order to get a full test result, i hope you all can give me some advice.
meanwhile i can send the result to you, if you like.
as for each test category (performance, throughput, scalability,
hi, all
is there a way to get rbd.ko and ceph.ko for centos 6.X?
or do i have to build them from source code? what is the minimum kernel version?
thanks
hi, all
by reading the code, i notice that everything in an OP is encoded into a Transaction,
which is written into the journal later.
does the journal record everything (meta, xattr, file data...) of an OP?
if so, everything is written to disk twice, and the journal always reaches a full
state, right?
thanks
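The double write the question describes can be sketched as a toy write-ahead journal. This is a simplified model, not Ceph's FileStore code: yes, the full op (metadata and data) is written twice, but the journal does not simply fill up, because entries are trimmed once the backing store has committed them:

```python
# Toy write-ahead journal: ops are appended to the journal first,
# then applied to the backing store, then trimmed once committed.
import json

class ToyJournal:
    def __init__(self):
        self.journal = []      # durable log (would be a raw device/file)
        self.store = {}        # backing filesystem contents
        self.committed = 0     # ops already applied to the store

    def submit(self, op):
        self.journal.append(json.dumps(op))          # 1st write: journal

    def apply(self):
        for entry in self.journal[self.committed:]:
            op = json.loads(entry)
            self.store[op["object"]] = op["data"]    # 2nd write: store
            self.committed += 1

    def trim(self):
        # Reclaim journal space for entries durable in the store.
        self.journal = self.journal[self.committed:]
        self.committed = 0

j = ToyJournal()
j.submit({"object": "obj1", "data": "hello"})
j.apply()
j.trim()
print(j.store, len(j.journal))
```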
For your second question, I'd start by looking at the source code in
src/osd/ReplicatedPG.cc (for standard replication), or src/osd/ECBackend.cc
(for Erasure Coding). I'm not a Ceph developer though, so that might not be
the right place to start.
On Tue, Aug 12, 2014 at 7:08 PM, yuel
2014-08-11 10:17:04.591497 7f0ec9b4f7a0 10 osd.0 pg_epoch: 153 pg[5.63( empty
local-les=153 n=0 ec=81 les/c 153/153 152/152/152) [0] r=0 lpr=153 crt=0'0
mlcod 0'0 inactive] null
2014-08-11 10:17:04.591501 7f0eb2b8f700 5 osd.0 pg_epoch: 155 pg[0.10( empty
local-les=153 n=0 ec=1 les/c 153/153 152
hi, all
1.
can an osd start up if its journal is lost and has not been replayed?
2.
how does it catch up to the latest epoch? take the osd as an example, where is the code?
it would be better to consider both cases: journal lost or not.
in my mind the journal only includes meta/R/W operations and does not include
data (file data).
t
hi, all
i know ceph differentiates networks; mostly it uses the public, cluster,
and heartbeat networks.
do the mon and mds have those networks? i only know the osd does.
is there a place that introduces ceph's networking?
thanks.
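For reference, the public/cluster split is configured globally: mons (and mds) bind on the public network only, while the cluster network carries osd replication and heartbeat traffic. A minimal ceph.conf sketch with made-up subnets:

```
[global]
public network  = 192.168.1.0/24   ; client, mon, and mds traffic
cluster network = 192.168.2.0/24   ; osd replication / heartbeat traffic
```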
hi, all
i am using ceph-rbd with openstack as its backend storage.
is there a best practice?
1.
how many osds and mons does it need at least, and in what proportion?
2. how do you deploy the network? public, cluster network...
3. as for performance, what do you do? journal...
4. anything it promotes
hi, all
look at the code:
case Transaction::OP_MKCOLL:
  {
    coll_t cid = i.get_cid();
    ...
  }
1.
what are coll and cid?
is a coll a pg, and is a cid a pgid?
2. what is the relation between cid and 'current/meta'? or what is in
current/meta?
thanks very much.
hi, all
recently i dove into the source code, and i am a little confused,
maybe because of the many threads, waits, and seqs.
1. what does apply_manager do? it is related to filestore and filejournal.
2. what does SubmitManager do?
3. how do they interact and work together?
what a big question :), th
hi, all
1.
it seems that there are 2 kinds of functions that get/set xattrs.
one kind starts with collection_*, the other starts with omap_*.
what is the difference between them, and which xattrs use which kind of
function?
2.
there is an xattr that tells whether xattrs are stored on leveldb, w