Thanks!
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood."
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address: 10, rue de la Victoire - 75009 Paris
Web: www.enovance.com - Twitter: @enovance
On 01 Jan 2014, at 10:41, Loic Dachary
Hi ceph-users,
After reading through the GC-related code, I am thinking of using a much larger
value for rgw gc max objs (such as 997), and I don't see any side effects if we
increase this value. Did I miss anything?
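For reference, a minimal ceph.conf sketch of the proposed change (the section
name client.radosgw.gateway is illustrative; use the name of your own RGW
instance):

[client.radosgw.gateway]
rgw gc max objs = 997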
Thanks,
Guang
Begin forwarded message:
From: redm...@tracker.ceph.com
Subject:
I created a cluster with four monitors and 12 OSDs using the ceph-deploy tool.
I initially created this cluster with one monitor, then added a public
network statement in ceph.conf so that I could use ceph-deploy to add the
other monitors. When I run ceph -w now everything checks out and all
Matt,
first of all: four monitors is a bad idea. Use an odd number of mons, e.g.
three. Your other problem is your configuration file: the mon_initial_members
and mon_host directives should include all monitor daemons. See my cluster:
mon_initial_members = node01,node02,node03
mon_host = 10.32.0.181,10.32.0.182,10.32.0.183
I only have four because I want to remove the original one I used to create
the cluster. I tried what you suggested and rebooted all my nodes but I'm
still having the same problem. I'm running Emperor on Ubuntu 12.04 on all
my nodes by the way. Here is what I'm seeing as I run ceph -w and
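For completeness, removing the original monitor later is a short procedure; a
rough sketch, assuming the mon to drop is named node00 (hypothetical), on
Ubuntu's upstart:
# stop ceph-mon id=node00
# ceph mon remove node00
Then delete it from mon_initial_members and mon_host in ceph.conf on every node.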
Thank you, Juan, for all the info!
So, if I understand correctly, I just create three nodes with one OSD per hard
drive (no RAID), and that's all?
Will Ceph be able to decide on its own where to store the data?
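Placement is indeed handled automatically by CRUSH. For illustration, a minimal
ceph-deploy sketch for one OSD per raw disk (the node and device names are
hypothetical):
# ceph-deploy osd create node01:/dev/sdb
# ceph-deploy osd create node01:/dev/sdc
The resulting layout can then be inspected with:
# ceph osd tree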
Let's try!
Thank you very much,
bye!
2013/12/23 JuanJose Galvez juanjose.gal...@inktank.com
2013/12/31 Kuo Hugo tonyt...@gmail.com
Hi all,
I have several questions about OSD scrub:
- Does the scrub job run in the background automatically? Does it run
periodically?
- Do I need to trigger a scrub or deep-scrub manually?
- How can I check the progress of a scrub that is currently running? (see the
commands sketched below)
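A few commands that may help (a sketch, not from the original thread; the PG id
2.1f is hypothetical). Scrubbing runs automatically in the background on a
schedule controlled by osd scrub min interval, osd scrub max interval, and
osd deep scrub interval, and it can also be triggered by hand:
# ceph osd scrub 0
# ceph pg scrub 2.1f
# ceph pg deep-scrub 2.1f
Progress shows up in the cluster status and in the PG states:
# ceph -s
# ceph pg dump | grep -i scrub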
Hi all,
I run 3 nodes connected with a 10Gbit network, each running 2 OSDs.
Disks are 4TB Seagate Constellation ST4000NM0033-9ZM (xfs, journal on same
disk).
# ceph tell osd.0 bench
{ "bytes_written": 1073741824,
  "blocksize": 4194304,
  "bytes_per_sec": 56494242.00}
So a single OSD can write roughly 56 MB/s (56494242 bytes/s, about 54 MiB/s).
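To get a quick picture across the whole cluster, the same bench can be run per
OSD (a sketch; adjust the OSD ids to match yours):
# for i in 0 1 2 3 4 5; do ceph tell osd.$i bench; done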
Matt,
what does 'ceph mon stat' say when your cluster is healthy, and what does it
say when it's unhealthy?
Again my example:
# ceph mon stat
e3: 3 mons at
{node01=10.32.0.181:6789/0,node02=10.32.0.182:6789/0,node03=10.32.0.183:6789/0},
election epoch 14, quorum 0,1,2 node01,node02,node03
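If mon stat alone doesn't make the problem obvious, the quorum status gives
more detail (a suggestion, not from the original mail):
# ceph quorum_status --format json-pretty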