Hi,
May I know if it's possible to replace an OSD drive without removing and
re-adding the OSD? I want to avoid the time and the excessive I/O load that
occurs during the recovery process when:
- the OSD is removed; and
- the OSD is put back into the cluster.
I read
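The rough procedure I had in mind is below (osd.12 is just a placeholder id,
and the drive re-creation steps are my own assumption, not something I've
verified):

# stop the cluster from marking down OSDs "out", so no remapping starts
ceph osd set noout

# stop the OSD whose drive is being replaced
service ceph stop osd.12

# ...physically swap the drive, re-create the filesystem/journal and bring
# the OSD back up with the same id...
service ceph start osd.12

# once it has recovered, allow normal out-marking again
ceph osd unset noout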
Hi.
I was wondering what would be the correct way to migrate system VMs
(storage, console, VR) from local storage to Ceph.
I'm on CS 4.2.1 and will soon be updating to 4.3...
Is it enough to just change the global setting system.vm.use.local.storage
from true to false, and then destroy the system VMs
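For reference, what I had in mind (via CloudMonkey; whether this is actually
sufficient is exactly my question) is roughly:

# point system VMs at shared primary storage instead of local storage
update configuration name=system.vm.use.local.storage value=false

# then restart the management server so the new value is picked up, and
# destroy the SSVM/CPVM/VRs so CloudStack recreates them on the allowed storage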
Hi to all,
I need your help to understand how to tune my Ceph configuration to achieve
some performance results.
My installation is built as follows:
5 servers with 16 GB RAM and 8 cores
5 clients (same machines)
Each computer is connected to the same switch over 1 Gb/s Ethernet.
The device storage
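As a baseline I was planning to measure raw throughput with rados bench
before touching any settings (the pool name testpool is just an example);
a single 1 Gb/s link tops out around 110-120 MB/s, so that is the ceiling
I'm comparing against:

# 60-second write benchmark, keeping the objects for the read test
rados bench -p testpool 60 write --no-cleanup

# sequential read benchmark over the objects written above
rados bench -p testpool 60 seq

# remove the benchmark objects afterwards
rados -p testpool cleanup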
Hello All,
I am using “ceph-deploy” to set up my Ceph cluster with 3 nodes. I am getting an
error when running “sudo ceph-deploy mon create gfsnode5”. Would someone
please give me a pointer as to what the problem is?
Thanks in advance!
-Jimmy
[cuser@gfsnode5 my-cluster]$ sudo ceph-deploy mon
This looks like a possible bug that was already resolved; it was due to the
leveldb version. My node is already running version 1.12.
[root@gfsnode5 my-cluster]# rpm -qa | grep -i leveldb
leveldb-1.12.0-3.el6.x86_64
[root@gfsnode5 my-cluster]#
Thanks,
Jimmy
From: J L
I was able to dig up an archive of an IRC chat from Sage. The suggestion from
the chat was to downgrade leveldb from 1.12 to 1.7.0. After the downgrade, I
was able to run sudo ceph-deploy mon create gfsnode5.
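For anyone hitting the same issue, a sketch of how the downgrade can be done
on an RPM-based node (the exact 1.7.0 package name below is an assumption;
use whatever your repo or mirror provides):

# check what is currently installed
rpm -qa | grep -i leveldb

# downgrade through yum if an older package is still in the repos
sudo yum downgrade leveldb

# ...or force-install a manually downloaded 1.7.0 RPM
sudo rpm -Uvh --oldpackage leveldb-1.7.0-*.el6.x86_64.rpm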
-Jimmy
From: J L j...@yahoo-inc.com
Date: Friday, May 2,
Hi,
It is time for the first elections of the Ceph User Committee! I've enjoyed
the position for the past six months. It is a little time-consuming (about
eight hours a week), but it's also a great opportunity to be at the center of
the storage (r)evolution. If you're tempted, feel free to add
On 5/2/14 05:15, Fabrizio G. Ventola wrote:
Hello everybody,
I'm running some tests with Ceph and its editable cluster map, and I'm
trying to define a rack layer in its hierarchy in this way:
ceph osd tree:
# id weight type name up/down reweight
-1 0.84 root default
-7 0.28 rack rack1
-2 0.14
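As far as I understand, the commands for building such a layer are roughly
the following (host1 is a placeholder for one of my OSD hosts):

# create the rack bucket and attach it under the default root
ceph osd crush add-bucket rack1 rack
ceph osd crush move rack1 root=default

# move an existing host (and its OSDs) under the new rack
ceph osd crush move host1 rack=rack1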
Sorry, forgot to CC the list.
On 3 May 2014 08:00, Indra Pramana in...@sg.or.id wrote:
Hi Andrey,
I actually wanted to try this (instead of removing and re-adding the OSD) to avoid
remapping of PGs to other OSDs and the unnecessary I/O load.
Are you saying that doing this will also trigger remapping?
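What I'm hoping to see, if no remapping is triggered, is the affected PGs
reported only as degraded; I was planning to verify with something like:

ceph -s

# per-PG view of anything being remapped or backfilled to other OSDs
ceph pg dump | grep -E 'remapped|backfill'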