Hi,
My monitor node and OSD nodes are running fine, but my cluster health is
stale+active+clean
root@node1:/etc/ceph# ceph status
cluster a7f64266-0894-4f1e-a635-d0aeaca0e993
health HEALTH_WARN 2856 pgs stale; 2856 pgs stuck stale
monmap e1: 1 mons at {mon=192.168.0.102:6789/0},
I've not defined cluster IPs for each OSD server, only the whole subnet.
Should I define an IP for each OSD? This isn't written in the docs, and it
could be tricky to do in big environments with hundreds of nodes.
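For what it's worth, defining only the subnet is the usual practice; a minimal ceph.conf sketch, assuming the 192.168.0.0/24 subnet from the status output above:

```ini
[global]
; A single subnet definition covers every OSD host;
; per-OSD addresses are not required.
public network = 192.168.0.0/24
cluster network = 192.168.0.0/24
```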
2014-04-24 20:04 GMT+02:00 McNamara, Bradley bradley.mcnam...@seattle.gov:
Do you
Some discussion about this can be found here:
http://ceph.com/dev-notes/incremental-snapshots-with-rbd/
Cheers
Mark
On 25/04/14 08:25, Brian Rak wrote:
Is there a recommended way to copy an RBD image between two different
clusters?
My initial thought was 'rbd export - | ssh rbd import -',
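That pipeline does work in practice; a hedged sketch, where the pool/image names and remote host are placeholders:

```shell
# Stream the image out of the local cluster and straight into the
# remote one; "-" makes rbd use stdout/stdin instead of a file.
rbd export rbd/myimage - | ssh remote-host rbd import - rbd/myimage
```

Note that a plain export/import copies the whole image each time; the dev note linked above covers the incremental-snapshot variant.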
This usually means that your OSDs all stopped running at the same time, and
will eventually be marked down by the monitors. You should verify that
they're running.
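A quick way to check, assuming the sysvinit scripts current at the time (the OSD id is a placeholder):

```shell
# From any node with an admin keyring: how many OSDs the monitors
# currently consider up and in.
ceph osd stat
ceph osd tree

# On each OSD host: confirm the daemon processes are actually alive,
# and restart any that are not.
sudo /etc/init.d/ceph status osd.0
sudo /etc/init.d/ceph start osd.0
```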
-Greg
On Saturday, April 26, 2014, Srinivasa Rao Ragolu srag...@mvista.com
wrote:
Hi Mark,
That seems pretty good. What is the block level sequential read
bandwidth of your disks? What configuration did you use? What was the
replica size, read_ahead for your rbds and what were the number of
workloads you used? I used btrfs in my experiments as well.
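For reference, per-disk sequential read bandwidth can be measured directly; a sketch using fio, where the device path is a placeholder (the job is read-only, but double-check the path before running):

```shell
# 4 MB sequential reads straight off the block device with O_DIRECT,
# so the page cache doesn't inflate the number; runs for 30 seconds.
fio --name=seqread --filename=/dev/sdb --rw=read --bs=4M \
    --direct=1 --ioengine=libaio --runtime=30 --time_based
```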
Thanks,
Xing
On
Hi Greg,
Actually our cluster is pretty empty, but we suspect we had a temporary
network disconnection to one of our OSDs; not sure if this caused the
problem.
Anyway, we don't mind trying the method you mentioned. How can we do that?
Regards,
Luke
On Saturday, April 26, 2014, Gregory Farnum
Hi, I'm on a site with no access to the internet and I'm trying to install
Ceph.
During the installation it tries to download files from the internet and
then I get an error.
I tried to download the files and make my own repository; I have also
changed the installation code to point to a different
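One approach that generally works for offline installs is to mirror the packages on an internal host and point ceph-deploy at that mirror; a rough sketch for an RPM-based distro, where the hostnames, paths, and URLs are all assumptions:

```shell
# On a connected machine: fetch the ceph RPMs into a directory,
# generate repo metadata, then carry the directory inside.
createrepo /srv/ceph-repo

# On the admin node: install from the internal mirror instead of
# ceph.com; ceph-deploy accepts an alternative repo and GPG key URL.
ceph-deploy install --repo-url http://repohost/ceph-repo \
    --gpg-url http://repohost/ceph-repo/release.asc node1
```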
Hi Gregory,
Thanks very much for your quick reply. When I started to look into Ceph,
Bobtail was the latest stable release, which is why I picked that version
and started to make a few modifications. I have not ported my changes to 0.79
yet. The plan is, if v0.79 can provide a higher disk
On Sat, Apr 26, 2014 at 9:56 AM, Jingyuan Luke jyl...@gmail.com wrote: