Hello everyone,
I'm an OpenNebula [1] developer. We've been working on integrating
OpenNebula with native Ceph drivers (using libvirt). The integration
is now complete and ready for testing. You can find more information
about its usage here [2].
We will maintain these drivers officially and extend
As I said, yes. Right now the only option is to migrate data from one
cluster to the other, and for now that has to be enough, with some automation
on top. But is there any timeline, or any brainstorming in internal Ceph
meetings, about possible block-level replication or something like that?
On 20 Feb
Hi Kiran -
The Ceph 0.56.3 (Bobtail) release includes Fedora 18 RPMs. You can find those
at: http://www.ceph.com/rpm-bobtail/fc18/x86_64/
Cheers,
Gary
On Feb 19, 2013, at 7:01 PM, Kiran Patil wrote:
Hello,
Ceph RPM packages are available up to Fedora 17.
May I know when Fedora 18 RPMs will
On Wed, 20 Feb 2013, Sławomir Skowron wrote:
As I said, yes. Right now the only option is to migrate data from one
cluster to the other, and for now that has to be enough, with some automation
on top. But is there any timeline, or any brainstorming in internal Ceph
meetings, about possible block-level replication
On Wed, 20 Feb 2013, Noah Watkins wrote:
On Feb 19, 2013, at 4:39 PM, Sage Weil s...@inktank.com wrote:
However, we do have host and rack information in the crush map, at least
for non-customized installations. How about something like
string ceph_get_osd_crush_location(int osd,
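For reference, the host/rack information mentioned here is simply the CRUSH
hierarchy, which can already be inspected from the command line (a minimal
sketch; output omitted):

ceph osd tree    # prints the CRUSH tree, including host (and, if defined, rack) buckets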
Hi,
I have a problem. After expanding the OSDs and the resulting CRUSH
reorganization, I have 1 PG in the incomplete state. How can I solve this problem?
ceph -s
health HEALTH_WARN 1 pgs incomplete; 1 pgs stuck inactive; 1 pgs
stuck unclean
monmap e21: 3 mons at
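For anyone hitting the same state, the usual first diagnostics look something
like this (the PG id below is a placeholder, not one from this cluster):

ceph health detail            # lists the affected PG ids
ceph pg dump_stuck inactive   # shows stuck/inactive PGs and their acting OSDs
ceph pg <pgid> query          # detailed peering state for the incomplete PG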
On Feb 20, 2013, at 9:31 AM, Sage Weil s...@inktank.com wrote:
or something like this that replaces the current extent-to-sockaddr
interface? The proposed interface above would do the host/IP mapping, as
well as the topology mapping?
Yeah. The ceph_offset_to_osds should probably also
On Tue, Feb 19, 2013 at 2:52 PM, Alexandre Oliva ol...@gnu.org wrote:
I recently messed up an OSD's storage, and
decided that the easiest way to bring it back was to roll it back to an
earlier snapshot I'd taken (along the lines of clustersnap) and let it
recover from
On Wed, 20 Feb 2013, Bo-Syung Yang wrote:
Hi,
I have a crush map (may not be practical, but just for demo) applied
to a two-host cluster (each host has two OSDs) to test ceph osd crush
reweight:
# begin crush map
# devices
device 0 sdc-host0
device 1 sdd-host0
device 2 sdc-host1
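A minimal sketch of the reweight test itself, using the device names from the
map above (the weight values here are arbitrary examples):

# change the CRUSH weight of one device and watch data rebalance
ceph osd crush reweight sdc-host0 2.0
ceph osd tree    # verify the new weight and the host subtree totals
ceph -s          # watch PGs remap/backfill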
On Wed, Feb 20, 2013 at 12:39 PM, Sage Weil s...@inktank.com wrote:
On Wed, 20 Feb 2013, Bo-Syung Yang wrote:
Hi,
I have a crush map (may not be practical, but just for demo) applied
to a two-host cluster (each host has two OSDs) to test ceph osd crush
reweight:
# begin crush map
#
On Wed, 20 Feb 2013, Bo-Syung Yang wrote:
On Wed, Feb 20, 2013 at 12:39 PM, Sage Weil s...@inktank.com wrote:
On Wed, 20 Feb 2013, Bo-Syung Yang wrote:
Hi,
I have a crush map (may not be practical, but just for demo) applied
to a two-host cluster (each host has two OSDs) to test ceph
On Wed, Feb 20, 2013 at 1:19 PM, Sage Weil s...@inktank.com wrote:
On Wed, 20 Feb 2013, Bo-Syung Yang wrote:
On Wed, Feb 20, 2013 at 12:39 PM, Sage Weil s...@inktank.com wrote:
On Wed, 20 Feb 2013, Bo-Syung Yang wrote:
Hi,
I have a crush map (may not be practical, but just for demo)
Hi Jim,
I'm resurrecting an ancient thread here, but: we've just observed this on
another big cluster and remembered that this hasn't actually been fixed.
I think the right solution is to make an option that will setsockopt on
SO_RCVBUF to some value (say, 256 KB). I pushed a branch that does
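A quick sketch of the host-side check that goes with such an option; the kernel
caps unprivileged SO_RCVBUF requests at net.core.rmem_max, so that ceiling may
need raising for the option to take full effect. The ceph.conf option name
below is an assumption, not something confirmed in this thread:

# kernel ceiling for receive buffers requested via setsockopt(SO_RCVBUF)
sysctl net.core.rmem_max
# raise it if it is below the value you plan to configure (example: 256 KB)
sysctl -w net.core.rmem_max=262144
# hypothetical ceph.conf setting, name assumed:
#   [global]
#   ms tcp rcvbuf = 262144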
Thanks Gary.
On Wed, Feb 20, 2013 at 10:34 PM, Gary Lowell gary.low...@inktank.com wrote:
Hi Kiran -
The Ceph 0.56.3 (Bobtail) release includes Fedora 18 RPMs. You can find those
at: http://www.ceph.com/rpm-bobtail/fc18/x86_64/
Cheers,
Gary
On Feb 19, 2013, at 7:01 PM, Kiran Patil
On Wed, Feb 13, 2013 at 12:22 AM, Stefan Priebe s.pri...@profihost.ag wrote:
Hi,
is there a speed-limit option for rbd export? Right now, just exporting a
snapshot that is not really important produces several SLOW requests for
important, valid client requests.
rbd export runs
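For what it's worth, one workaround (not a built-in rbd option) is to stream
the export to stdout and throttle it externally, e.g. with pv; the 10 MB/s
limit is just an example, and this assumes your rbd build accepts '-' as the
destination:

rbd export pool/image@snap - | pv -L 10m > image.img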