On Thu 16-02-12 13:04:37, Alex Elder wrote:
On Thu, 2012-02-16 at 14:46 +0100, Jan Kara wrote:
CC: Sage Weil s...@newdream.net
CC: ceph-devel@vger.kernel.org
Signed-off-by: Jan Kara j...@suse.cz
This will update the timestamp even if a write
fault fails, which is different from
Hi,
On 02/20/2012 03:36 AM, Paul Pettigrew wrote:
G'day Wido
Great advice, thanks! We settled on 1x LVM partition on SSD for OSD-Journal.
A quick follow-up, if I may?
A last note: if you use an SSD for your journaling, make sure that you align your
partitions with the page size of
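To make the alignment advice above concrete, here is a minimal sketch of the arithmetic. The 512-byte sector size and 128 KiB erase-block size are assumptions for illustration, not figures from this thread; substitute the values from your SSD's data sheet.

```python
# Compute the first LBA aligned to an SSD erase-block boundary.
# Assumed values (not from this thread): 512-byte logical sectors
# and a 128 KiB erase block -- check your drive's documentation.
SECTOR_BYTES = 512
ERASE_BLOCK_BYTES = 128 * 1024

def align_up(start_sector, block_bytes=ERASE_BLOCK_BYTES):
    """Round start_sector up to the next erase-block boundary."""
    block_sectors = block_bytes // SECTOR_BYTES
    return -(-start_sector // block_sectors) * block_sectors

# The classic fdisk default start of sector 63 is misaligned;
# the next 128 KiB boundary is sector 256.
print(align_up(63))  # 256
```

Starting partitions at a 1 MiB boundary (sector 2048) is a common safe choice, since 1 MiB is a multiple of all the usual SSD page and erase-block sizes.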
On Mon, 20 Feb 2012, Oliver Francke wrote:
Hi Sage,
On 02/20/2012 06:41 PM, Sage Weil wrote:
On Mon, 20 Feb 2012, Oliver Francke wrote:
Hi,
we have just run into trouble after some mess while trying to add a new
OSD node
into our cluster.
We get some weird libceph: corrupt
After increasing pg_num from 8 to 100 on .rgw.buckets I have some
serious problems.
pool name    category  KB    objects  clones  degraded  unfound  rd  rd KB  wr  wr KB
.intent-log  -         4662  19
and this in ceph -w
2012-02-20 20:34:13.531857 log 2012-02-20 20:34:07.611270 osd.76
10.177.64.8:6872/5395 49 : [ERR] mkpg 7.e up [76,11] != acting [76]
2012-02-20 20:34:13.531857 log 2012-02-20 20:34:07.611308 osd.76
10.177.64.8:6872/5395 50 : [ERR] mkpg 7.16 up [76,11] != acting [76]
Ooh, the pg split functionality is currently broken, and we weren't
planning on fixing it for a while longer. I didn't realize it was still
possible to trigger from the monitor.
I'm looking at how difficult it is to make it work (even inefficiently).
How much data do you have in the
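For background on why raising pg_num is so disruptive: an object maps to a placement group by hashing its name modulo (roughly) the PG count, so growing the pool remaps a large fraction of objects at once. The sketch below uses md5 and a plain modulo purely as a stand-in for Ceph's actual rjenkins hash and ceph_stable_mod; it only illustrates the remapping effect, not the real placement code.

```python
import hashlib

def pg_for(obj_name, pg_num):
    # Stand-in for Ceph's placement hash (the real code uses rjenkins
    # and ceph_stable_mod, not md5 and %): object name -> PG id.
    h = int(hashlib.md5(obj_name.encode()).hexdigest(), 16)
    return h % pg_num

# Count how many of 1000 sample object names land in a different PG
# after growing the pool from 8 to 100 PGs, as in the report above.
moved = sum(
    pg_for("obj-%d" % i, 8) != pg_for("obj-%d" % i, 100)
    for i in range(1000)
)
print(moved)  # the large majority of objects move
```

Each moved object has to be migrated between OSDs, which is why a split on a pool with real data in it is expensive even when the code path works.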
On Thu 16-02-12 11:13:53, Sage Weil wrote:
On Thu, 16 Feb 2012, Alex Elder wrote:
On Thu, 2012-02-16 at 14:46 +0100, Jan Kara wrote:
CC: Sage Weil s...@newdream.net
CC: ceph-devel@vger.kernel.org
Signed-off-by: Jan Kara j...@suse.cz
This will update the timestamp even if a
Thanks Sage
So following through by two examples, to confirm my understanding
HDD SPECS:
8x 2TB SATA HDDs, each able to do a sustained read/write speed of 138 MB/s
1x SSD able to do a sustained read/write speed of 475 MB/s
CASE1
(not using SSD)
8x OSDs, one for each of the SATA HDDs
Therefore able
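To make the arithmetic behind the two cases explicit, here is a small sketch using the figures quoted above (138 MB/s per HDD, 475 MB/s for the SSD). It is a deliberately simplified model: with the journal on the same disk as the data, every write is written twice, and with all journals on one SSD, every write passes through the SSD first, so the SSD becomes the ceiling.

```python
HDD_MBPS = 138
SSD_MBPS = 475
NUM_OSDS = 8

# CASE1: journal on the same SATA disk as the data. Each write hits
# the disk twice (journal + data), so each disk delivers at most
# half its sequential rate. Simplified: ignores seeks and overlap.
case1 = NUM_OSDS * HDD_MBPS / 2

# CASE2: all eight journals on the single SSD. Every write goes
# through the journal first, so the SSD's rate caps the aggregate,
# even though the data disks could absorb 8 * 138 = 1104 MB/s.
case2 = min(NUM_OSDS * HDD_MBPS, SSD_MBPS)

print(case1, case2)  # 552.0 475
```

Under these assumptions the single SSD is the bottleneck for sustained writes; it mainly helps with latency and with absorbing bursts, not with raw aggregate throughput.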
On Mon, Feb 20, 2012 at 4:44 PM, Paul Pettigrew
paul.pettig...@mach.com.au wrote:
Thanks Sage
So following through by two examples, to confirm my understanding
HDD SPECS:
8x 2TB SATA HDDs, each able to do a sustained read/write speed of 138 MB/s
1x SSD able to do sustained read/write
On Tue, 21 Feb 2012, Paul Pettigrew wrote:
Thanks Sage
So following through by two examples, to confirm my understanding
HDD SPECS:
8x 2TB SATA HDDs, each able to do a sustained read/write speed of 138 MB/s
1x SSD able to do a sustained read/write speed of 475 MB/s
CASE1
(not using
G'day Greg, thanks for the fast response.
Yes, I forgot to explicitly state that the journals would go to the SATA disks in
CASE1, and it is easy to appreciate the performance impact of this case, as you
documented nicely in your response.
Re your second point:
The other big advantage an SSD
On Mon, Feb 20, 2012 at 3:01 PM, Sage Weil s...@newdream.net wrote:
v0.42 is ready! This has mostly been a stabilization release, with a few
critical bugs fixed. There is also an across-the-board change in data
structure encoding that is not backwards-compatible, but is designed to
allow
On Mon, 20 Feb 2012, Diego Woitasen wrote:
Production ready, right?
We are very close, at least with RADOS and RBD.
This is what we are focused on:
- improving qa coverage. it's grown by leaps and bounds over the last
several months, and is getting better.
- osd stability. we are
40 GB in 3 copies in an rgw bucket, and some data in RBD, but it can all be
destroyed.
ceph -s reports 224 GB in the normal state.
Regards,
iSS
On 20 Feb 2012, at 21:19, Sage Weil s...@newdream.net wrote:
Ooh, the pg split functionality is currently broken, and we weren't
planning on