OK, thanks, but wouldn't it make sense to set the default to the same as rbd
has? How is the value for rbd calculated? I've also seen that rbd has a
different crushmap. What's the difference between crushmap 0 and 2?
Stefan
Am 19.06.2012 um 00:41 schrieb Dan Mick dan.m...@inktank.com:
Yes,
Sorry i meant crush_ruleset
Am 19.06.2012 08:06, schrieb Stefan Priebe:
OK, thanks, but wouldn't it make sense to set the default to the same as rbd
has? How is the value for rbd calculated? I've also seen that rbd has a
different crushmap. What's the difference between crushmap 0 and 2?
Am 19.06.2012 06:41, schrieb Alexandre DERUMIER:
Hi Stefan,
recommendations are 30-50 PGs per OSD if I remember.
rbd, data and metadata have 2176 PGs with 12 OSDs. This is 181.3
per OSD?!
Stefan
Hello list,
I've now patched my qemu to include the qemu cache settings for rbd.
Now I can change my drive caching mode. But there are also these
settings for the global section in ceph.conf.
rbd_cache = true
rbd_cache_size = 33554432
rbd_cache_max_age = 2.0
How do they
On 06/18/2012 11:37 PM, Stefan Priebe - Profihost AG wrote:
Hello list,
I've now patched my qemu to include the qemu cache settings for rbd.
Now I can change my drive caching mode. But there are also these
settings for the global section in ceph.conf.
rbd_cache = true
rbd_cache_size = 33554432
Am 19.06.2012 08:42, schrieb Josh Durgin:
In qemu, the cache settings map to:
writeback:
rbd_cache = true
writethrough:
rbd_cache = true
rbd_cache_max_dirty = 0
none:
rbd_cache = false
qemu's settings are overridden by any custom settings you have in a
config file or on the qemu command
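To make that mapping concrete, here is a rough sketch of attaching the same
image with each cache mode (the pool/image name rbd/vm1 and the exact -drive
syntax are illustrative and depend on your qemu version, not taken from this
thread):

  # writeback: qemu sets rbd_cache = true
  qemu-system-x86_64 ... -drive file=rbd:rbd/vm1,if=virtio,cache=writeback
  # writethrough: qemu sets rbd_cache = true, rbd_cache_max_dirty = 0
  qemu-system-x86_64 ... -drive file=rbd:rbd/vm1,if=virtio,cache=writethrough
  # none: qemu sets rbd_cache = false
  qemu-system-x86_64 ... -drive file=rbd:rbd/vm1,if=virtio,cache=none

Any rbd_cache* settings you put in ceph.conf, or append after the image spec
(e.g. rbd:rbd/vm1:rbd_cache_size=33554432), would then override what qemu picks.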
Hi,
Do you think this is needed?
The osdmap update needs to hold this semaphore, as in
void ceph_osdc_handle_map(struct ceph_osd_client *osdc, struct ceph_msg *msg)
and
static int __map_request(),
etc.
Thanks a lot for your reply!
thanks,
Guanjun
On 6/15/2012 at 04:32 PM, in
We dereference con->in_msg on the line after it was set to NULL.
Signed-off-by: Dan Carpenter dan.carpen...@oracle.com
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index 5e9f61d..6aa671c 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -437,10 +437,10 @@ static void
On 06/19/2012 01:32 AM, Stefan Priebe - Profihost AG wrote:
Am 19.06.2012 06:41, schrieb Alexandre DERUMIER:
Hi Stefan,
recommendations are 30-50 PGs per OSD if I remember.
rbd, data and metadata have 2176 PGs with 12 OSDs. This is 181.3
per OSD?!
Stefan
That's probably fine, it
Am 19.06.2012 15:01, schrieb Mark Nelson:
On 06/19/2012 01:32 AM, Stefan Priebe - Profihost AG wrote:
Am 19.06.2012 06:41, schrieb Alexandre DERUMIER:
Hi Stefan,
recommendations are 30-50 PGs per OSD if I remember.
rbd, data and metadata have 2176 PGs with 12 OSDs. This is 181.3
per
On Tue, Jun 19, 2012 at 08:27:19AM -0500, Alex Elder wrote:
On 06/19/2012 05:33 AM, Dan Carpenter wrote:
We dereference con->in_msg on the line after it was set to NULL.
Signed-off-by: Dan Carpenter dan.carpen...@oracle.com
Yikes.
Actually I think I prefer a different fix, which is
I have already incorporated the following in the Ceph master
branch (which is used for the -next build). We will also send
this to Linus soon.
-Alex
=
We dereference con->in_msg on the line after it was set to NULL.
Signed-off-by: Dan Carpenter
Hi,
Is it possible to do a random write bench with the rados bench command?
I get very bad random write performance with a 4K block size inside qemu-kvm:
1000 IOPS max with 3 nodes, each with 5 x 15k disks.
(Maybe it's related to my constant disk writes, as if data is not flushed
sequentially to disk.)
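For what it's worth, rados bench writes whole new objects rather than doing
random I/O inside an existing image, so it is not a direct stand-in for 4K
random writes in a guest, but a small-block write run gives a rough baseline.
A sketch, assuming a test pool named rbd and that the -b/-t options behave as
in this era of the tool:

  # 60-second write benchmark, 4 KB objects, 16 concurrent operations
  rados -p rbd bench 60 write -b 4096 -t 16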
[Sorry for not responding earlier!]
On Tue, 19 Jun 2012, Guan Jun He wrote:
Hi,
Do you think this is needed?
The osdmap update needs to hold this semaphore, as in
void ceph_osdc_handle_map(struct ceph_osd_client *osdc, struct ceph_msg *msg)
and
static int __map_request(),
On Tue, 19 Jun 2012, Stefan Priebe - Profihost AG wrote:
Am 19.06.2012 15:01, schrieb Mark Nelson:
On 06/19/2012 01:32 AM, Stefan Priebe - Profihost AG wrote:
Am 19.06.2012 06:41, schrieb Alexandre DERUMIER:
Hi Stefan,
recommendations are 30-50 PGs per OSD if I remember.
rbd,
Am 19.06.2012 um 17:42 schrieb Sage Weil s...@inktank.com:
But this number of 2176 PGs was set while doing mkcephfs - how is it
calculated?
num_pgs = num_osds << osd_pg_bits
which is configurable via --osd-pg-bits N or ceph.conf (at mkcephfs time).
The default is 6.
What happens if
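For reference, a minimal sketch of where this would be set before running
mkcephfs (the ceph.conf spelling 'osd pg bits' is assumed from the option
named above):

  [global]
          ; 128 PGs per OSD per pool instead of the default 64
          osd pg bits = 7

or pass --osd-pg-bits 7 on the command line, as mentioned above.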
On Tue, 19 Jun 2012, Stefan Priebe wrote:
Am 19.06.2012 um 17:42 schrieb Sage Weil s...@inktank.com:
But this number of 2176 PGs was set while doing mkcephfs - how is it
calculated?
num_pgs = num_osds << osd_pg_bits
which is configurable via --osd-pg-bits N or ceph.conf (at
The number doesn't change currently (and can't currently be set manually).
On Jun 19, 2012, at 9:24 AM, Stefan Priebe s.pri...@profihost.ag wrote:
Am 19.06.2012 um 17:42 schrieb Sage Weil s...@inktank.com:
But this number of 2176 PGs was set while doing mkcephfs - how is it
calculated?
I almost posted this to http://tracker.newdream.net/issues/2595, but
didn't want to piggy-back on an issue marked resolved.
When I run mkcephfs, I get:
2012-06-19 09:36:29.211737 7fc7021d7780 -1 journal FileJournal::_open:
unable to open journal: open() failed: (22) Invalid argument
2012-06-19
On Tue, 19 Jun 2012, Travis Rhoden wrote:
I almost posted this to http://tracker.newdream.net/issues/2595, but
didn't want to piggy-back on an issue marked resolved.
When I run mkcephfs, I get:
2012-06-19 09:36:29.211737 7fc7021d7780 -1 journal FileJournal::_open:
unable to open journal:
Great! Thanks, that was it. I did see mention of that param in the
mailing list, and thought that might be it. But I failed to find that
option in the docs here:
http://ceph.com/docs/master/config-ref/osd-config/
So I wasn't sure where to put it. =)
Thanks again.
On Tue, Jun 19, 2012 at
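The snippet above never names the parameter, but the usual cause of this
EINVAL from FileJournal::_open is a file-based journal with no size
configured; assuming that is the option in question, a sketch of where it
would live:

  [osd]
          ; journal size in MB, needed when the journal is a plain file
          osd journal size = 1000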
Hi all,
I have an HP ProLiant ML350 G5 server that is currently sitting idle.
Would it be possible to virtualize a Ceph cluster so I
can mess around and begin contributing back to the community?
Specs:
ML350 G5
2 x Quad Core Xeon E5430
16GB RAM
7x 146GB SAS disks
1x 120 GB Intel
You don't need to virtualize anything — I'd recommend running Ubuntu
12.04 on it (you don't need to, but a lot of things will be more
performant), building from source, and then setting up the daemons so
everybody gets a separate disk.
Check out http://ceph.com/docs/master/source/ and the other
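To sketch what "everybody gets a separate disk" could look like on a box like
this, here is a rough single-host ceph.conf (hostname, paths and addresses are
illustrative, not taken from the thread):

  [global]
          auth supported = cephx
  [mon.a]
          host = ml350
          mon addr = 192.168.0.10:6789
  [mds.a]
          host = ml350
  [osd.0]
          host = ml350
          ; one of the SAS disks mounted here
          osd data = /srv/osd.0
          osd journal = /srv/osd.0/journal
          osd journal size = 1000
  [osd.1]
          host = ml350
          ; another SAS disk mounted here
          osd data = /srv/osd.1
          osd journal = /srv/osd.1/journal
          osd journal size = 1000

A cluster like this would then be created with something like
mkcephfs -a -c /etc/ceph/ceph.conf and started with 'service ceph -a start'.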
Hey folks,
Ran into this today. Not sure what I did wrong. =)
I had an RBD successfully mounted and was done with it. Proceeded to
do the following:
root@spcnode2:~# ls /sys/bus/rbd/devices/
0
root@spcnode2:~# echo 0 > /sys/bus/rbd/remove
root@spcnode2:~# ls /sys/bus/rbd/devices/ --- At
On 06/19/2012 01:32 PM, Travis Rhoden wrote:
Hey folks,
Ran into this today. Not sure what I did wrong. =)
It appears you are running Linux 3.2.0. What you are seeing could be
explained by a bug that has been fixed in newer Ceph code.
Specifically, I think this is the fix that, without
Awesome. Thanks Alex. I'll eagerly await 0.48 once it has finished QA.
- Travis
On Tue, Jun 19, 2012 at 2:45 PM, Alex Elder el...@dreamhost.com wrote:
On 06/19/2012 01:32 PM, Travis Rhoden wrote:
Hey folks,
Ran into this today. Not sure what I did wrong. =)
It appears you are running
Official results!
1. Cephalopods / Marine Animals
2. Oceans / Seas
3. Oceanic Trenches
4. Pirates
5. Ocean Currents
It looks like the first Ceph stable release will be code-named 'Argonaut'.
Yehuda
On Tue, Jun 12, 2012 at 4:50 PM, Yehuda Sadeh yeh...@inktank.com wrote:
When will there be 1.0?
Yeah, it's certainly doable to run all the daemons on one server; they
don't even really need a separate disk, but that's generally a nice
partitioning.
You can usually get help on irc://irc.oftc.net/#ceph, too.
On 06/19/2012 10:54 AM, Gregory Farnum wrote:
You don't need to virtualize
Actually it appears this fix is in the kernel (repo 'ceph-client'), so I
don't think 0.48 will contain it (I could be wrong). You may need to
grab that repo and build the kernel (or wait until that sha1 gets into
your distro's kernel release)
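A sketch of what that might look like (the github URL for the ceph-client
repo and the build steps are my assumptions, not from this message):

  git clone git://github.com/ceph/ceph-client.git
  cd ceph-client
  cp /boot/config-$(uname -r) .config
  make oldconfig
  make -j$(nproc)
  sudo make modules_install install

or simply wait for a distro kernel that already contains the commit.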
On 06/19/2012 11:50 AM, Travis Rhoden wrote:
Everything works out, but because RHEL6 lacks syncfs support your performance
will be less predictable. If you have more than one OSD on a box without
syncfs() support you'll certainly want to run btrfs if you can.
(Ceph daemons are very concerned with their data integrity — for the obvious
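A quick way to check whether a given box can use syncfs() at all (my own
suggestion, not from the thread): the syscall needs roughly Linux 2.6.39 or
newer and a glibc new enough to provide the wrapper (2.14+):

  uname -r                   # kernel should be 2.6.39 or newer
  getconf GNU_LIBC_VERSION   # glibc should be 2.14 or newer

RHEL6 (2.6.32 kernel, glibc 2.12) fails both, matching the note above.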