On Sun, Dec 28, 2014 at 02:49:08PM +0900, Christian Balzer wrote:
You really, really want size 3 and a third node for both performance
(reads) and redundancy.
How does it benefit read performance? I thought all reads are made only
from the active primary OSD.
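You can see which OSD is the acting primary for any given object with
"ceph osd map" (pool and object names below are just an example):

  ceph osd map rbd some-object
  # prints the PG plus the up/acting sets; the "p" entry is the primary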
--
Tomasz Kuzemko
I want to move an MDS from one host to another. How do I do it?
This is what I did (below), but ceph health is still not OK and the MDS was not removed:
root@ceph06-vm:~# ceph mds rm 0 mds.ceph06-vm
mds gid 0 dne
root@ceph06-vm:~# ceph health detail
HEALTH_WARN mds ceph06-vm is laggy
mds.ceph06-vm at
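For what it's worth, the MDS keeps all of its state in RADOS, so "moving" it
mostly means starting a new daemon on the target host and retiring the old one.
A rough sketch (the new hostname is a placeholder, and the gid has to come from
"ceph mds dump"; that is also why "ceph mds rm 0 ..." answers "mds gid 0 dne"):

  ceph-deploy mds create ceph07-vm     # placeholder target host
  service ceph stop mds.ceph06-vm      # stop the old daemon on ceph06-vm
  ceph mds fail 0                      # fail rank 0 so a standby takes over
  ceph mds dump | grep ceph06-vm       # note the gid of the old daemon
  ceph mds rm <gid> mds.ceph06-vm      # remove it using the real gid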
On Mon, Dec 29, 2014 at 12:47 PM, Tomasz Kuzemko tomasz.kuze...@ovh.net wrote:
On Sun, Dec 28, 2014 at 02:49:08PM +0900, Christian Balzer wrote:
You really, really want size 3 and a third node for both performance
(reads) and redundancy.
How does it benefit read performance? I thought all
Hi all,
we have a ceph cluster, with currently 360 OSDs in 11 Systems. Last week
we were replacing one OSD System with a new one. During that, we had a
lot of problems with OSDs crashing on all of our systems. But that is
not our current problem.
After we got everything up and running again, we
On Mon, 29 Dec 2014 07:04:47 PM Mark Kirkwood wrote:
Thanks all, I'll definitely stick with nobarrier
Maybe you meant to say *barrier* ?
Oops :) Yah
--
Lindsay
Hi Max,
I do use CephFS (Giant) in a production environment. It works really
well, but I have backups ready to use, just in case.
As Wido said, kernel version is not really relevant if you use ceph-fuse
(which I recommend over cephfs kernel, for stability and ease of upgrade
reasons).
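For reference, mounting with ceph-fuse is a one-liner once ceph.conf and the
client keyring are in /etc/ceph (the monitor address below is a placeholder):

  ceph-fuse -m 192.168.0.10:6789 /mnt/cephfs   # mount CephFS via FUSE
  fusermount -u /mnt/cephfs                    # unmount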
However,
Hi Christian,
Sure, ZFS is way more mature than Btrfs, but what is the status of ZFS
on Linux?
I use ZFS on FreeBSD (72TB - 12 disks (2 vdevs RaidZ2) for backup
purposes) and it works great, but it's something I would be afraid to do
in Linux.
--
Thomas Lemarchand
Cloud Solutions SAS - Responsable
Hello,
On Mon, 29 Dec 2014 00:05:40 +1000 Lindsay Mathieson wrote:
Appreciate the detailed reply Christian.
On Sun, 28 Dec 2014 02:49:08 PM Christian Balzer wrote:
On Sun, 28 Dec 2014 08:59:33 +1000 Lindsay Mathieson wrote:
I'm looking to improve the raw performance on my small setup
Hello,
On Mon, 29 Dec 2014 11:22:34 +0100 Thomas Lemarchand wrote:
Hi Christian,
Sure, ZFS is way more mature than Btrfs, but what is the status of ZFS
on Linux?
I use ZFS on FreeBSD (72TB - 12 disks (2 vdevs RaidZ2) for backup
purposes) and it works great, but it's something I would be
Hello,
On Mon, 29 Dec 2014 13:49:49 +0400 Andrey Korolyov wrote:
On Mon, Dec 29, 2014 at 12:47 PM, Tomasz Kuzemko
tomasz.kuze...@ovh.net wrote:
On Sun, Dec 28, 2014 at 02:49:08PM +0900, Christian Balzer wrote:
You really, really want size 3 and a third node for both performance
(reads)
I too dislike the fact that it's not native (ie developed inside the
Linux Kernel), and this is why I'm not sure this project is a good
solution.
The user base is necessarily much smaller than it would be if this were native,
so fewer tests, less feedback, and potentially less safety.
When I use ZFS on
Hey Christian,
Christian Eichelmann [Mon, Dec 29, 2014 at 10:56:59AM +0100]:
[incomplete PG / RBD hanging, osd lost also not helping]
that is very interesting to hear, because we had a similar situation
with ceph 0.80.7 and had to re-create a pool, after I deleted 3 pg
directories to allow OSDs
Hi, I have a disk server with 14 disks, on which I created 1 OSD per disk,
plus 12 other servers with 1 disk each.
Now, in the ceph.conf file, do I have to list all the OSDs as below,
since all 14 disks are on the same host?
[osd.0]
host=server1
[osd.1]
host=server1
.
.
.
[osd.14]
host=server1
and
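(For what it's worth, per-OSD sections like the above are only needed for the
old mkcephfs/sysvinit style of deployment; OSDs created with ceph-deploy or
ceph-disk register themselves with the monitors and CRUSH keeps track of which
host they live on, so a minimal conf can stay as small as the sketch below;
values are placeholders:)

  [global]
  fsid = <cluster-fsid>
  mon_initial_members = mon1
  mon_host = 192.168.0.10
  auth_cluster_required = cephx
  auth_service_required = cephx
  auth_client_required = cephx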
Hi Christian,
I had a similar problem about a month ago.
After trying lots of helpful suggestions, I found that none of them worked and
I could only delete the affected pools and start over.
I opened a feature request in the tracker:
http://tracker.ceph.com/issues/10098
If you find a way, let
On Mon, 29 Dec 2014 11:12:06 PM Christian Balzer wrote:
Is that a private cluster network just between Ceph storage nodes or is
this for all ceph traffic (including clients)?
The latter would probably be better; a private cluster network twice as
fast as the client one isn't particularly helpful
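For reference, the split is just two lines in ceph.conf; clients talk over the
public network while replication and recovery traffic uses the cluster network
(subnets below are placeholders):

  [global]
  public network  = 192.168.1.0/24
  cluster network = 10.10.10.0/24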
On Sun, 28 Dec 2014 04:08:03 PM Nick Fisk wrote:
If you can't add another full host, your best bet would be to add another
2-3 disks to each server. This should give you a bit more performance. It's
much better to have lots of small disks rather than large multi-TB ones from
a performance
On Mon, 29 Dec 2014 11:29:11 PM Christian Balzer wrote:
Reads will scale up (on a cluster basis, individual clients might
not benefit as much) linearly with each additional device (host/OSD).
I'm taking that to mean individual clients as a whole will be limited by the
speed of individual
You would need to modify the CRUSH map so that it would store two of the
same replicas on the same host; however, I'm not sure how you would go about
this and still make sure that at least one other replica is on a different
host. But to be honest, with the number of OSDs you will have, the data
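For what it's worth, a rule roughly like the following is one way to do it:
pick two hosts, then up to two OSDs within each, so with size 3 you get two
copies on one host and the third on the other. This is only a sketch (rule
name and ruleset number are arbitrary) and worth checking with crushtool
before injecting:

  rule replicated_2per_host {
          ruleset 1
          type replicated
          min_size 2
          max_size 4
          step take default
          step choose firstn 2 type host
          step chooseleaf firstn 2 type osd
          step emit
  }

  ceph osd getcrushmap -o crush.bin
  crushtool -d crush.bin -o crush.txt        # decompile, add the rule above
  crushtool -c crush.txt -o crush.new        # recompile
  ceph osd setcrushmap -i crush.new
  ceph osd pool set <pool> crush_ruleset 1   # point the pool at the new rule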
On Sun, 28 Dec 2014 04:08:03 PM Nick Fisk wrote:
This should give you a bit more performance. It's
much better to have lots of small disks rather than large multi-TB ones from
a performance perspective. So maybe look to see if you can get 500GB/1TB
drives cheap.
Is this from the docs still
On 12/27/2014 02:32 AM, Lindsay Mathieson wrote:
I see a lot of people mount their xfs osd's with nobarrier for extra
performance, certainly it makes a huge difference to my small system.
However I don't do it as my understanding is this runs a risk of data
corruption in the event of power
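For reference, the OSD mount options live in ceph.conf, and the usual advice
is to leave barriers enabled unless every device sits behind a battery- or
flash-backed write cache; something like:

  [osd]
  # barriers stay on by default; only add nobarrier with a protected write cache
  osd mount options xfs = rw,noatime,inode64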
I am getting this on Ubuntu 14.04
when doing apt-get update:
Err http://ceph.com trusty/main amd64 Packages
403 Forbidden
W: Failed to fetch
http://ceph.com/debian-giant/dists/trusty/main/binary-amd64/Packages 403
Forbidden
W: Failed to fetch
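(For what it's worth, the first thing worth checking is the repo line itself;
for Giant on trusty it normally looks like the line below, and if it already
does, a 403 usually points at a problem on the ceph.com side rather than a
local misconfiguration:)

  $ cat /etc/apt/sources.list.d/ceph.list
  deb http://ceph.com/debian-giant/ trusty main
  $ sudo apt-get update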
On Dec 29, 2014, Christian Eichelmann christian.eichelm...@1und1.de wrote:
After we got everything up and running again, we still have 3 PGs in the
state incomplete. I was checking one of them directly on the systems
(replication factor is 3).
I have run into this myself at least twice
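In case it helps anyone digging into the same thing, the peering state of an
incomplete PG can be pulled straight from the cluster (the pgid below is just
an example):

  ceph health detail | grep incomplete   # list the affected PGs
  ceph pg 2.1f query                     # peering info, past intervals, OSDs it wants to probe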
Hello.
I’m having an issue with ceph-deploy on Fedora 21.
- Installed ceph-deploy via 'yum install ceph-deploy'
- created non-root user
- assigned sudo privs as per documentation -
http://ceph.com/docs/master/rados/deployment/preflight-checklist/
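(For the record, the sudo part of that checklist boils down to something like
the lines below; the username is a placeholder. On Fedora/RHEL-family hosts the
requiretty default also tends to trip up ceph-deploy, hence the Defaults line:)

  echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
  echo "Defaults:cephuser !requiretty" | sudo tee -a /etc/sudoers.d/cephuser
  sudo chmod 0440 /etc/sudoers.d/cephuser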
Hello,
On Tue, 30 Dec 2014 08:12:21 +1000 Lindsay Mathieson wrote:
On Mon, 29 Dec 2014 11:12:06 PM Christian Balzer wrote:
Is that a private cluster network just between Ceph storage nodes or is
this for all ceph traffic (including clients)?
The latter would probably be better, a private
On Tue, 30 Dec 2014 12:48:58 PM Christian Balzer wrote:
Looks like I misunderstood the purpose of the monitors, I presumed they
were just for monitoring node health. They do more than that?
They keep the maps and the pgmap in particular is of course very busy.
All that action is at:
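A quick way to actually watch that pgmap activity, for what it's worth:

  ceph -w   # streams cluster events; the pgmap version ticks over constantly as OSDs report in
  ceph -s   # one-shot summary, including the current pgmap version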
On Tue, 30 Dec 2014 08:22:01 +1000 Lindsay Mathieson wrote:
On Mon, 29 Dec 2014 11:29:11 PM Christian Balzer wrote:
Reads will scale up (on a cluster basis, individual clients might
not benefit as much) linearly with each additional device (host/OSD).
I'm taking that to mean individual
On Tue, 30 Dec 2014 14:08:32 +1000 Lindsay Mathieson wrote:
On Tue, 30 Dec 2014 12:48:58 PM Christian Balzer wrote:
Looks like I misunderstood the purpose of the monitors, I presumed
they were just for monitoring node health. They do more than that?
They keep the maps and the
On 30 December 2014 at 14:28, Christian Balzer ch...@gol.com wrote:
Use a good monitoring tool like atop to watch how busy things are.
And do that while running a normal rados bench like this from a client
node:
rados -p rbd bench 60 write -t 32
And again like this:
rados -p rbd bench 60
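(Presumably the second invocation is the sequential read pass; for the
archives, the full round trip looks roughly like this. The write pass needs
--no-cleanup or there is nothing left to read, and cleanup removes the
benchmark objects afterwards:)

  rados -p rbd bench 60 write -t 32 --no-cleanup
  rados -p rbd bench 60 seq -t 32
  rados -p rbd cleanup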