The first step is incorrect:

    echo deb http://ppa.launchpad.net/saltstack/salt/ubuntu lsb_release -sc main | sudo tee /etc/apt/sources.list.d/saltstack.list

It should be:

    echo deb http://ppa.launchpad.net/saltstack/salt/ubuntu $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/saltstack.list
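The difference is plain-shell command substitution, which can be sketched without touching apt at all (the "trusty" fallback below is only illustrative, for machines without lsb_release):

```shell
# Without $(...), the words "lsb_release -sc" are passed along as literal
# text; with command substitution, the release codename is expanded
# before the sources line is built.
codename=$(lsb_release -sc 2>/dev/null || echo trusty)  # fallback is illustrative
line="deb http://ppa.launchpad.net/saltstack/salt/ubuntu ${codename} main"
echo "$line"
# Then write it out as in the corrected command:
#   echo "$line" | sudo tee /etc/apt/sources.list.d/saltstack.list
```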
I added the osd pool default min size = 1 to test the behavior when 2 of 3
OSDs are down, but the behavior is exactly the same as without it: when the
2nd OSD is killed, all client writes start to block and these
pipe.(stuff).fault messages begin:
2015-03-26 16:08:50.775848 7fce177fe700 0
Ah, thanks, got it. I wasn't considering that mons and osds on the same node
isn't a likely real-world setup.
You have to admit that pipe/fault log message is a bit cryptic.
Thanks,
Lee
___
ceph-users mailing list
ceph-users@lists.ceph.com
On Thu, Mar 26, 2015 at 4:40 PM, Gregory Farnum g...@gregs42.com wrote:
Has the OSD actually been detected as down yet?
I believe it has, but I can't check directly because ceph health
starts to hang when I down the second node.
You'll also need to set that min size on your existing pools.
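For reference, a minimal sketch of both halves of that ("osd pool default min size" is the real Ceph option; the pool name "rbd" below is just an example):

```
# ceph.conf on all nodes -- the default applies only to pools
# created after the option is set
[global]
osd pool default min size = 1

# Existing pools must be changed at runtime, per pool, e.g.:
#   ceph osd pool set rbd min_size 1
```

With size = 3 and min_size = 2 (the usual default), I/O blocks once only one replica is left, and in any case writes can only resume after the dead OSDs are actually marked down, which matches the blocking seen above.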
I have a virtual test environment of an admin node and 3 mon + osd nodes,
built by just following the quick start guide. It seems to work OK but
ceph is constantly complaining about clock skew much greater than reality.
Clocksource on the virtuals is kvm-clock and they also run ntpd.
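If the clocks really are in sync and the warning is an artifact of the VM clocks, the monitors' skew threshold can be relaxed (a workaround sketch, not a fix; "mon clock drift allowed" is the relevant option, default 0.05 s):

```
# ceph.conf, monitor section -- raise the skew-warning threshold;
# prefer fixing ntpd/kvm-clock first, this only silences the warning
[mon]
mon clock drift allowed = 0.5
```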
at 11:21 AM, Sage Weil s...@newdream.net wrote:
On Thu, 26 Mar 2015, Gregory Farnum wrote:
On Thu, Mar 26, 2015 at 7:44 AM, Lee Revell rlrev...@gmail.com wrote:
So I had extra drives on my lab cluster's admin node and decided to use
them for more OSDs. The weird thing is, unlike all the other nodes, the
OSDs don't start on boot - I have to manually activate them whenever the
cluster is rebooted. Manually running 'service ceph-all start' doesn't
start them either.
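A sketch of what that manual check looks like (the loop below only echoes and starts nothing; the suggested start commands are assumptions based on 2015-era ceph-disk/sysvinit deployments, where 'ceph-disk activate-all' or 'service ceph start osd.N' would be the real activation step):

```shell
# Enumerate OSD data directories and report which daemons would need a
# manual start. Purely illustrative -- it prints, it does not start.
for osd_dir in /var/lib/ceph/osd/ceph-*; do
  [ -d "$osd_dir" ] || continue   # glob did not match: no OSDs on this host
  id=${osd_dir##*-}
  echo "osd.$id present; try: sudo service ceph start osd.$id"
done
echo "scan complete"
```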
On Thu, May 14, 2015 at 2:47 PM, John Spray john.sp...@redhat.com wrote:
Greg's response is pretty comprehensive, but for completeness I'll add
that the specific case of shutdown blocking is
http://tracker.ceph.com/issues/9477
I've seen the same thing before with /dev/rbd mounts when the