follow-on "advanced" guide.
Should it go to the Ceph Blog? Or elsewhere?
If somebody can please advise how this contribution can be made, thanks!
Paul
==
Paul Pettigrew
w: http://mach.com.au/
Offices & Data Centres: Noosa | Brisbane | Cooroy | USA
-Original Message-
From:
To: Paul Pettigrew
Cc: ceph-devel@vger.kernel.org
Subject: Re: Crush not delivering data uniformly -> HEALTH_ERR full osd
Hi Paul,
What version of Ceph are you running? Perhaps your issue could be related to an
issue with the choose_local_tries parameter used in earlier versions of Ceph.
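For what it's worth, one way to inspect (and, if need be, adjust) the CRUSH tunables is to export the map, decompile it, edit the tunable lines and inject it back - a rough sketch only, and the tunable lines are only present on releases new enough to expose them:

   ceph osd getcrushmap -o crush.bin      # export the in-use CRUSH map
   crushtool -d crush.bin -o crush.txt    # decompile to editable text
   # edit crush.txt (e.g. the "tunable choose_local_tries" line), then recompile:
   crushtool -c crush.txt -o crush.new
   ceph osd setcrushmap -i crush.new      # inject the updated map

Note that older kernel clients may not understand newer tunables, so check client versions before changing anything.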
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Yehuda Sadeh
Sent: Monday, 6 August 2012 11:16 AM
To: Paul Pettigrew
Cc: ceph-devel@vger.kernel.org
Subject: Re: Crush not delivering data uniformly -> HEALTH_ERR full osd
On Sun, Aug
Hi Ceph community
We are at the stage of performance capacity testing, where significant amounts
of backup data are being written to Ceph.
The issue we have is that the underlying HDDs are not being populated
(roughly) uniformly, and our Ceph system hits a brick wall after a couple of
days ou
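For anyone hitting the same wall, a quick way to see how unevenly the OSDs are filling, and to buy a little time, is roughly the following (a sketch; command names and output vary a little between releases, and <osd-id> is a placeholder):

   ceph health detail               # lists any near-full / full osds
   ceph pg dump osds                # per-osd statistics, including space used/available
   ceph osd reweight <osd-id> 0.8   # temporarily shift data away from an over-full osd

Manual reweighting is only a stop-gap; the lasting fix is normally in the CRUSH map or its tunables.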
100% resolved the issue for us.
Cheers,
Paul
-Original Message-
From: Sage Weil [mailto:s...@inktank.com]
Sent: Wednesday, 11 July 2012 9:35 AM
To: Paul Pettigrew
Cc: ceph-devel@vger.kernel.org
Subject: RE: mkcephfs failing on v0.48 "argonaut"
Hi Paul,
Were you able to make a
failed: '/sbin/mkcephfs -d /tmp/mkcephfs.kJjIwsEnfZ --prepare-osdfs osd.0'
+ exit 1
+ rm -rf /tmp/mkcephfs.kJjIwsEnfZ
+ exit
-Original Message-
From: Sage Weil [mailto:s...@inktank.com]
Sent: Saturday, 7 July 2012 2:20 PM
To: Paul Pettigrew
ipt terminate.
Many thanks - it will be a relief to sort this out, as all our Ceph project work
is on hold until we do.
Cheers
Paul
-Original Message-
From: Sage Weil [mailto:s...@inktank.com]
Sent: Friday, 6 July 2012 2:09 PM
To: Paul Pettigrew
Cc: ceph-devel@vger.kernel.org
Paul
-Original Message-
From: Sage Weil [mailto:s...@inktank.com]
Sent: Friday, 6 July 2012 2:09 PM
To: Paul Pettigrew
Cc: ceph-devel@vger.kernel.org
Subject: RE: mkcephfs failing on v0.48 "argonaut"
On Fri, 6 Jul 2012, Paul Pettigrew wrote:
> Hi Sage - thanks so much for the quick
When re-creating a partition and mounting it manually, then writing data to it,
everything is fine. We get the same errors if we substitute any of the other
HDDs in the server as the first/osd.0 disk, i.e. we cannot see any issues with
the hardware.
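For reference, the standalone disk check described above can be reproduced roughly like this (a sketch only; /dev/sdc, the mount point and the btrfs choice are placeholders, not what was actually used):

   parted -s /dev/sdc mklabel gpt mkpart primary 0% 100%   # recreate a single partition
   mkfs.btrfs /dev/sdc1                                     # or mkfs.xfs, whichever the osd uses
   mount /dev/sdc1 /mnt/test
   dd if=/dev/zero of=/mnt/test/testfile bs=1M count=1024 oflag=direct
   umount /mnt/test

If that succeeds on every candidate disk, the hardware itself is unlikely to be the problem.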
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Paul Pettigrew
Firstly, well done guys on achieving this version milestone. I successfully
upgraded to the 0.48 format uneventfully on a live (test) system.
The same system was then put through "rebuild" testing to confirm that it also
worked fine.
Unfortunately, the mkcephfs command is failing:
root@dsanb1
Hi all
Running Ceph version 0.45-1precise on 12.04 x64.
I cannot find the right syntax to mount a specific pool that has a customised
replica level set, different from the default "data" pool (i.e. we have an
additional pool called "backup"). Can somebody please advise the right syntax?
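In case it helps anyone reading the archive later, a rough sketch of setting up such a pool and using it from a client follows ("backup" and "vol1" are just example names; the RBD path is shown, and exact syntax varies between releases):

   ceph osd pool create backup 128              # 128 placement groups; size to suit the cluster
   ceph osd pool set backup size 3              # customised replica count for this pool
   rbd create vol1 --pool backup --size 10240   # 10 GB image stored in the "backup" pool
   rbd map vol1 --pool backup                   # exposes a /dev/rbd* device on the client

For CephFS the pool instead has to be added as a data pool and selected via the directory layout, rather than at mount time.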
Many thanks Sage - we will apply this 0.42.1 version and continue with the project.
Paul
-Original Message-
From: Sage Weil [mailto:s...@newdream.net]
Sent: Friday, 24 February 2012 9:36 AM
To: Paul Pettigrew
Cc: ceph-devel@vger.kernel.org
Subject: Re: Failure to run "rbd" command
G'day all
Today's testing was on having a client (i.e. an Ubuntu 12.04 server running KVM
as the virtualisation host, named "server") connect to the 3x node v0.42 Ceph
cluster (nodes named "ceph1", "ceph2" & "ceph3").
A very simple command "rbd ls" is failing per below:
root@server:/# rbd ls
terminate called
G'day all
We wish to document a simple way to determine what version of ceph/rbd is
included with a given kernel on a client machine, i.e. not the Ceph cluster
nodes themselves (which we are running v0.42 on). "modinfo ceph" (per below)
gives a "srcversion" value, but I do not know how to correlate that with a Ceph release.
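One way to approach it (a sketch, not authoritative): the in-kernel ceph/rbd modules are versioned with the kernel itself rather than with a userspace Ceph release, so the useful data points are the kernel version plus the module info, with the userspace tools reporting their own version separately:

   uname -r                                     # kernel release on the client
   modinfo ceph | egrep 'filename|srcversion'   # in-kernel cephfs client module
   modinfo rbd  | egrep 'filename|srcversion'   # in-kernel rbd client module
   ceph --version                               # userspace tools, if installed

The srcversion is a checksum of the module source, so mapping it to a feature set generally comes down to knowing which kernel release the module shipped in.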
ges, add the -j option to use multiple
cores. -j(numcores*2) is advised"
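To make that concrete, assuming GNU make and a host where nproc is available (the exact invocation is only a sketch):

   make -j"$(( $(nproc) * 2 ))"             # numcores*2 parallel jobs
   # or, when building the Debian packages:
   dpkg-buildpackage -us -uc -j"$(nproc)"

Doubling the core count mainly helps hide I/O stalls during compilation.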
Cheers
Paul
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Paul Pettigrew
Sent: Tuesday, 21 February 2012 6:06 PM
To: ceph-devel@vger.kernel.org
G'day all
Looking forward to upgrading our Ceph cluster from v0.41 to v0.42, I started to
compile the .deb packages per the procedure below (see end of email).
However, the same server (Ubuntu 12.04 x64) that successfully produced v0.41 is
now giving the below with v0.42:
...
libtool: install: /u
l which config options tune this, and an example use case
to illustrate?
Many thanks
Paul
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Gregory Farnum
Sent: Tuesday, 21 February 2012 10:50 AM
To: Paul Pettigrew
Cc: Sage Weil
Sent: Monday, 20 February 2012 1:16 PM
To: Paul Pettigrew
Cc: Wido den Hollander; ceph-devel@vger.kernel.org
Subject: RE: Which SSD method is better for performance?
On Mon, 20 Feb 2012, Paul Pettigrew wrote:
> And secondly, should the SSD Journal sizes be large or small? Ie, is
> say 1G par
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Wido den Hollander
Sent: Tuesday, 14 February 2012 10:46 PM
To: Paul Pettigrew
Cc: ceph-devel@vger.kernel.org
Subject: Re: Which SSD method is better for performance?
Hi,
On 02/14/2012 0
d very flexible to create/resize, etc
Cheers
Paul
-Original Message-
From: Leander Yu [mailto:leander...@gmail.com]
Sent: Wednesday, 15 February 2012 2:26 AM
To: Wido den Hollander
Cc: Paul Pettigrew; ceph-devel@vger.kernel.org
Subject: Re: Which SSD method is better for performance?
Hi
G'day all
About to commence an R&D eval of the Ceph platform, having been impressed with
the momentum achieved over the past 12 months.
I have one question re design before rolling out to metal.
I will be using 1x SSD drive per storage server node (assume it is /dev/sdb for
this discussion).
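To give that a concrete shape, a minimal ceph.conf sketch with the journals carved out of the shared SSD might look as follows (the partition layout and the 10 GB figure are assumptions for illustration, not a recommendation):

   [osd]
       ; journal size in MB - only used when the journal is a plain file
       osd journal size = 10240
   [osd.0]
       osd journal = /dev/sdb1   ; first journal partition on the shared SSD
   [osd.1]
       osd journal = /dev/sdb2   ; second partition, and so on per osd

Whether a few large journals or many small ones works better depends mostly on how big the write bursts are relative to how fast the journal can be flushed to the data disks.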