636: 4096 pgs, 2 pools, 12709 GB data, 3180 kobjects
35486 GB used, 513 TB / 547 TB avail
1804769/9770742 objects degraded (18.471%)
2247 active+degraded
1849 active+clean
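The degraded percentage in the status above is simply degraded object instances divided by total object instances. A quick sanity check of the reported figures (a minimal sketch using awk; the numbers are taken from the status output above):

```shell
# degraded / total object instances, as a percentage
echo "1804769 9770742" | awk '{ printf "%.3f%%\n", $1 / $2 * 100 }'
# → 18.471%
```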
Thanks,
Matt Conner
Keeper Technology
On Fri, Dec 11, 2015 at 3:26
1 TB / 547 TB avail
4052687/22088253 objects degraded (18.348%)
4594 active+degraded
1 active+clean+scrubbing+deep
3789 active+clean
Matt Conner
Keeper Technology
On Tue, Dec 8, 2015 at 5:59 AM, Ilya Dryomov <idryo...@gmail.com&
wrote:
- We've tried using different kernels all the way up to 4.3.0, but
the problem persists.
Thanks,
Matt Conner
Keeper Technology
rule replicated_ruleset {
	ruleset 0
	type replicated
	min_size 1
	max_size 10
	step take default
	step chooseleaf firstn 0 type rack
	step emit
}
# end crush map
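The rule above places one replica per rack (`step chooseleaf firstn 0 type rack`). For reference, the standard crushtool round-trip for inspecting or editing a rule like this is sketched below; it requires a running cluster, and the filenames are placeholders:

```shell
# Extract the cluster's current CRUSH map (binary form)
ceph osd getcrushmap -o crushmap.bin

# Decompile to the text form shown above, edit, and recompile
crushtool -d crushmap.bin -o crushmap.txt
vi crushmap.txt
crushtool -c crushmap.txt -o crushmap.new

# Check mappings offline before injecting the new map
crushtool -i crushmap.new --test --show-bad-mappings
ceph osd setcrushmap -i crushmap.new
```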
Matt Conner
keepertechnology
matt.con...@keepertech.com
(240) 461-2657
On Thu, Mar 19, 2015 at 11:01 AM, Sage Weil <s...@newdream.net> wrote:
I'm working with a 6-rack, 18-server (3 racks of 2 servers, 3 racks
of 4 servers), 640-OSD cluster and have run into an issue when failing
a storage server or rack: the OSDs are not marked down until the
monitor timeout is reached, which typically results in all writes
being blocked.
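When peer OSDs fail to report a dead OSD (or their reports are discounted), the monitor falls back to its own report timeout before marking the OSD down, which matches the behavior described above. The options that govern this, as a hedged ceph.conf sketch (the values shown are the usual defaults for Ceph of this era, not a recommendation):

```ini
[mon]
; How many distinct OSDs must report a peer as failed before
; the monitor marks it down
mon osd min down reporters = 2

; If no failure reports arrive, the monitor only marks an OSD
; down after this many seconds without a status report -- the
; "monitor timeout" behavior described above
mon osd report timeout = 900

[osd]
; Seconds without a heartbeat reply before peers report an OSD
; as failed
osd heartbeat grace = 20
```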