Hi,
We're using a 24 server / 48 OSD (3 replicas) Ceph cluster (version 0.67.3) for
RBD storage only and it is working great, but when a failed disk is replaced by a
brand new one and the system starts to backfill, it produces a lot of slow-request
messages for 5 to 10 minutes. Then it does become
Hi,
On 11/04/14 07:29, Wido den Hollander wrote:
On 11 April 2014 at 7:13, Greg Poirier greg.poir...@opower.com wrote:
One thing to note
All of our KVM VMs have to be rebooted. This is something I wasn't
expecting. I tried waiting for them to recover on their own, but that's not
On 11 April 2014 at 8:50, Josef Johansson jo...@oderland.se wrote:
Hi,
On 11/04/14 07:29, Wido den Hollander wrote:
On 11 April 2014 at 7:13, Greg Poirier greg.poir...@opower.com wrote:
One thing to note
All of our kvm VMs have to be rebooted. This is something I
Hi!
I want to receive email notifications for any Ceph errors/warnings and for
osd/mon disk full/near_full states. For example, I want to know immediately
if the free space on any osd/mon becomes less than 10%.
How do I properly monitor Ceph cluster health?
Pavel.
It’s pretty basic, but we run this hourly:
https://github.com/cernceph/ceph-scripts/blob/master/ceph-health-cron/ceph-health-cron
-- Dan van der Ster || Data Storage Services || CERN IT Department --
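The idea is simply to check ceph health periodically and mail when the output is not HEALTH_OK; a minimal sketch of that idea (the address is a placeholder; near-full OSD warnings show up in the same output):
#!/bin/sh
# Mail the admin whenever the cluster is not HEALTH_OK (run from cron, e.g. hourly)
STATUS=$(ceph health 2>&1)
case "$STATUS" in
  HEALTH_OK*) ;;   # all good, stay quiet
  *) ceph health detail 2>&1 | mail -s "ceph health: $STATUS" admin@example.com ;;
esac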
On 11 Apr 2014 at 09:12:13, Pavel V. Kaygorodov (pa...@inasan.ru)
-- Forwarded message --
From: 常乐 changle...@gmail.com
Date: Mar 31, 2014 11:35 AM
Subject: openstack + ceph
To: ceph-us...@ceph.com
Cc:
hi all,
I am trying to get Ceph to work with OpenStack Havana. I am following the
instructions here.
On Mon, Mar 31, 2014 at 11:35 AM, 常乐 changle...@gmail.com wrote:
hi all,
I am trying to get Ceph to work with OpenStack Havana. I am following the
instructions here.
https://ceph.com/docs/master/rbd/rbd-openstack/
However, I found I probably need more details on the OpenStack side. The instruction
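A condensed sketch of the pool and cephx steps from that page, as I understand them (the pool names, PG counts and the cinder user are just the doc's examples, and {your-compute-node} is a placeholder):
# Create the pools the OpenStack services will use
ceph osd pool create volumes 128
ceph osd pool create images 128
# Create a cephx user for Cinder with access to those pools
ceph auth get-or-create client.cinder mon 'allow r' \
  osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
# Ship the resulting keyring to the nodes running cinder-volume / nova-compute
ceph auth get-or-create client.cinder | ssh {your-compute-node} sudo tee /etc/ceph/ceph.client.cinder.keyring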
Hello,
I am testing out federated gateways. I have created one gateway with one
region and one zone. The gateway appears to work. I am trying to test it
with s3cmd before I continue with more regions and zones.
I create a test gateway user:
radosgw-admin user create --uid=test
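A sketch of the kind of s3cmd test I mean (the gateway host in ~/.s3cfg is a placeholder; access_key/secret_key are the ones printed by the user create command above):
# ~/.s3cfg pointing at the gateway instead of Amazon:
#   host_base = gateway.example.com
#   host_bucket = %(bucket)s.gateway.example.com
s3cmd mb s3://test-bucket
s3cmd put /etc/hosts s3://test-bucket/
s3cmd ls s3://test-bucket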
Hi All,
On our private distribution, I have compiled Ceph and was able to install it.
Now I have added the following to /etc/ceph/ceph.conf:
[global]
fsid = e5a14ff4-148a-473a-8721-53bda59c74a2
mon initial members = mon
mon host = 192.168.0.102
auth cluster required = cephx
auth service required = cephx
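With that conf in place, the manual monitor bootstrap would look roughly like this (a sketch; the monitor id "mon", the address and the fsid are taken from the conf above, the keyring paths are just examples):
# Create a keyring holding the initial monitor key
ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
# Build the initial monmap from the values in ceph.conf
monmaptool --create --add mon 192.168.0.102 --fsid e5a14ff4-148a-473a-8721-53bda59c74a2 /tmp/monmap
# Create the monitor data directory and start the daemon
ceph-mon --mkfs -i mon --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
ceph-mon -i mon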
Hi all,
Context: Ceph Dumpling on Ubuntu 12.04
I would like to manage as accurately as possible the pools assigned to the Rados
gateway.
My goal is to apply specific SLAs to applications which use http-driven storage.
I'd like to store the contents by associating a pool to a bucket, or a pool to
a
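The closest mechanism I can see is the placement pools defined in the radosgw zone; a sketch of that direction (the pool and target names are made up, and I am not sure this is the intended way):
# Create a dedicated data pool for the 'gold' class of buckets
ceph osd pool create .rgw.buckets.gold 128
# Dump the zone, add a placement entry pointing at that pool, and load it back
radosgw-admin zone get > zone.json
#   ... edit placement_pools in zone.json, adding e.g. a "gold-placement" target ...
radosgw-admin zone set --infile zone.json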
Hi to all,
my name is Matteo Favaro.
I'm an employee of CNAF and I'm trying to get a test installation of Ceph working.
I have learnt a lot about Ceph and I know quite well how to build and
modify it, but there is a question and a problem that I don't know how to
resolve.
My cluster is made in this
On 04/11/2014 12:23 PM, Matteo Favaro wrote:
Hi to all,
my name is Matteo Favaro
I'm an employee of CNAF and I'm trying to get a test installation of Ceph working.
I have learnt a lot about Ceph and I know quite well how to build and
modify it, but there is a question and a problem that I don't know how
Hi Srinivasa,
I met this problem at the beginning and I wrote my own recipe to get the
initial monitor working.
I hope it can be useful to you (it is a recipe, not a manual, so please
pay attention to the paths and the steps that I followed):
1) ssh into the node
2) verify that the directory
On 04/11/2014 09:23 AM, Josef Johansson wrote:
On 11/04/14 09:07, Wido den Hollander wrote:
On 11 April 2014 at 8:50, Josef Johansson jo...@oderland.se wrote:
Hi,
On 11/04/14 07:29, Wido den Hollander wrote:
On 11 April 2014 at 7:13, Greg Poirier greg.poir...@opower.com wrote:
One
So... our storage problems persisted for about 45 minutes. I gave an entire
hypervisor's worth of VMs time to recover (approx. 30 VMs), and none of them
recovered on their own. In the end, we had to stop and start every VM
(easily done, it was just alarming). Once rebooted, the VMs of course were
Guys,
I'm new to Ceph in general, but I'm wondering if anyone out there is using
Ceph with any of the Data Center Bridging (DCB) technologies? I'm
specifically thinking of DCB-X support provided by the open-lldp package.
I'm wondering if there is any benefit to be gained by making use of the
Hi,
Just another e-mail to promote this Meetup in Amsterdam.
If you want to join the meetup, just add your name to the Wiki:
https://wiki.ceph.com/Community/Meetups/The_Netherlands/Ceph_meetup_Amsterdam
Wido
On 03/27/2014 10:50 PM, Wido den Hollander wrote:
Hi all,
I think it's time to
Hi Greg,
How many monitors do you have?
1 . :)
It's also possible that re-used numbers won't get caught in this,
depending on the process you went through to clean them up, but I
don't remember the details of the code here.
Yeah, too bad. I'm following the standard removal procedure in
If you never ran osd rm then the monitors still believe it's an existing
OSD. You can run that command after doing the crush rm stuff, but you
should definitely do so.
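For reference, the usual full sequence for removing an OSD is roughly this (N is the OSD id being removed):
ceph osd out N
ceph osd crush remove osd.N
ceph auth del osd.N
ceph osd rm N      # this last step is what makes the monitors forget the id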
On Friday, April 11, 2014, Chad Seys cws...@physics.wisc.edu wrote:
Hi Greg,
How many monitors do you have?
1 . :)
It's
On Fri, Apr 11, 2014 at 3:20 AM, ghislain.cheval...@orange.com wrote:
Hi all,
Context : CEPH dumpling on Ubuntu 12.04
I would like to manage as accurately as possible the pools assigned to the Rados
gateway.
My goal is to apply specific SLAs to applications which use http-driven
storage.
I'd
So, setting pgp_num to 2048 to match pg_num had a more serious impact than
I expected. The cluster is rebalancing quite substantially (8.5% of objects
being rebalanced)... which makes sense... Disk utilization is evening out
fairly well which is encouraging.
We are a little stumped as to why a
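For context, the change amounts to the following (the pool name is a placeholder); pg_num only splits the placement groups, while pgp_num is what CRUSH actually uses for placement, so raising pgp_num is exactly the point at which data starts to move:
# pg_num had already been raised to 2048 earlier
ceph osd pool set <pool> pgp_num 2048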
Can Ceph act in a RAID-5 (or RAID-6) mode, storing objects so that the storage
overhead is n/(n-1)? For some systems where the underlying OSDs are known to
be very reliable, but where the storage is very tight, this could be useful.
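For what it's worth, the erasure-coded pools planned for the upcoming Firefly release are aimed at exactly this kind of overhead; a sketch, assuming Firefly-style commands and made-up names:
# k data chunks + m coding chunks; k=4, m=1 gives RAID-5-like overhead of 5/4
ceph osd erasure-code-profile set raid5like k=4 m=1
ceph osd pool create ecpool 128 128 erasure raid5like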
-Original Message-
From:
On 04/11/2014 12:23 PM, Daniel Schwager wrote:
Hi guys,
I'm interested in the question of which setup will be faster:
a) a pool with journals on separate SSDs and OSDs on normal disks, OR
b) a caching pool with SSDs only (osd+journal) in front
of a normal pool (osd+journal on normal
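To make option (b) concrete, the cache tier would be wired up roughly like this (assuming Firefly-era tiering commands; the pool names are placeholders):
# Put an SSD-backed pool in front of the spinning-disk pool as a writeback cache
ceph osd tier add cold-pool ssd-cache
ceph osd tier cache-mode ssd-cache writeback
ceph osd tier set-overlay cold-pool ssd-cache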
Hey all,
Just a quick reminder that the OpenStack User Survey
https://www.openstack.org/user-survey
will only be open for another 4 hours or so. If you haven't already
provided your view of the world with OpenStack + Ceph, please do!
This user survey helps to give a snapshot of what the
On 04/11/2014 02:45 PM, Greg Poirier wrote:
So... our storage problems persisted for about 45 minutes. I gave an
entire hypervisor's worth of VMs time to recover (approx. 30 VMs), and
none of them recovered on their own. In the end, we had to stop and
start every VM (easily done, it was just
Hi Christian,
Thanks for your reply... now it's clear to me. Thanks for your help...
On Fri, Apr 11, 2014 at 10:25 AM, Christian Balzer ch...@gol.com wrote:
Hello,
On Fri, 11 Apr 2014 09:48:56 +0800 Punit Dambiwal wrote:
Hi,
What is the drawback of running the journals on the