The rbd diff-related commands compare points in time of a single
image. Since children are identical to their parent when they're cloned,
if I created a snapshot right after it was cloned, I could export
the diff between the child as it's used and that parent snapshot. Something like:
rbd clone parent@snap child
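A sketch of the full round trip, assuming a pool named rbd and a child
snapshot named base (both placeholder names, not from the original post):
# on the source: snapshot the child immediately after cloning, so there
# is a known point in time at which child and parent@snap are identical
rbd clone rbd/parent@snap rbd/child
rbd snap create rbd/child@base
# later, export only what the child changed since that point
rbd export-diff --from-snap base rbd/child child.diff
# on the destination: recreate the clone relationship, then apply the diff
rbd clone rbd/parent@snap rbd/child
rbd snap create rbd/child@base
rbd import-diff child.diff rbd/child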
Hello,
On Sat, 23 Aug 2014 20:23:55 + Bruce McFarland wrote:
Firstly, while the runtime changes you injected into the cluster
should have done something (and I hope some Ceph developer comments
on that), you're asking for tuning advice, which really isn't the issue here.
Your cluster should
Hello guys,
Is it possible to export an rbd image while preserving the clone structure? So,
if I've got a single base rbd image and 10 VM images that were cloned from the
original one, would rbd export preserve this structure on the destination
pool, or would it waste space and create 10
On 08/24/2014 08:27 PM, Andrei Mikhailovsky wrote:
Hello guys,
I am planning to do off-site backups of rbd images with rbd export-diff, and I was
wondering if Ceph has checksumming functionality so that I can compare source
and destination files for consistency? If so, how do I retrieve the
On 25 August 2014 10:31, Wido den Hollander w...@42on.com wrote:
On 08/24/2014 08:27 PM, Andrei Mikhailovsky wrote:
Hello guys,
I am planning to do off-site backups of rbd images with rbd export-diff, and I
was wondering if Ceph has checksumming functionality so that I can compare
source and
Do rbd export and export-diff, and likewise import and import-diff, guarantee
the consistency of the data? So that if the image is damaged during the
transfer, would this be flagged by the other side? Or would it simply leave the
broken image on the destination cluster?
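As far as I know there is no built-in checksum exchange in rbd export/import; a
workaround (my suggestion, not an rbd feature; the image and snapshot names are
placeholders) is to stream the export through a checksum tool and compare the
result on both ends:
# on the source: write the diff to a file and print its checksum in one pass
rbd export-diff rbd/image@snap - | tee image.diff | md5sum
# on the destination, after transferring image.diff
md5sum image.diff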
Cheers
Hi Guys,
I have been looking to try out a test Ceph cluster in my lab to see if it can
replace our traditional storage. I have heard a lot of good things about Ceph
but need some guidance on how to begin.
I have read some material on ceph.com but wanted to get first-hand info and
Hi Jens,
There's a bug in Cinder that causes, at least, the size to be reported
wrong from Cinder. If you search a little bit you will find it. I think it's
still not solved.
On 21/08/14 at #4, Jens-Christian Fischer wrote:
I am working with Cinder Multi Backends on an Icehouse installation and
Hi Jiten,
The Ceph quick-start guide here was pretty helpful to me when I was
starting with my test cluster: http://ceph.com/docs/master/start/
ceph-deploy is a very easy way to get a test cluster up quickly, even with
minimal experience with Ceph.
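For reference, the basic sequence from that quick start looks roughly like this
(hostnames and the OSD directory are placeholders):
ceph-deploy new mon1                  # generate ceph.conf and initial keys
ceph-deploy install mon1 osd1 osd2    # install the ceph packages
ceph-deploy mon create-initial        # bootstrap the monitor(s)
ceph-deploy osd prepare osd1:/var/local/osd0
ceph-deploy osd activate osd1:/var/local/osd0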
If you use puppet, the puppet-ceph module has
See inline:
Ceph version:
[root@ceph2 ceph]# ceph -v
ceph version 0.80.5 (38b73c67d375a2552d8ed67843c8a65c2c0feba6)
Initial testing was with 30 OSDs, 10 per storage server, with the following HW:
4TB SATA disks, 1 HDD per OSD, 30 HDDs/server, 6 SSDs/server forming an md
RAID0 virtual drive with
On Sat, Aug 23, 2014 at 11:06 PM, Bruce McFarland
bruce.mcfarl...@taec.toshiba.com wrote:
I see OSDs being failed for heartbeat, reporting the default
osd_heartbeat_grace of 20, but the runtime config shows that the grace is
set to 30. Is there another variable for the osd or the mon I need to set
Thanks Steve. Appreciate your help.
On Aug 25, 2014, at 9:58 AM, Stephen Jahl stephenj...@gmail.com wrote:
Hi Jiten,
The Ceph quick-start guide here was pretty helpful to me when I was starting
with my test cluster: http://ceph.com/docs/master/start/
ceph-deploy is a very easy way to
I just added osd_heartbeat_grace to the [mon] section of ceph.conf, restarted
ceph-mon, and now the monitor is reporting a 35 second osd_heartbeat_grace:
[root@ceph-mon01 ceph]# ceph --admin-daemon
/var/run/ceph/ceph-mon.ceph-mon01.asok config show | grep osd_heartbeat_grace
Each daemon only reads conf values from its section (or its
daemon-type section, or the global section). You'll need to either
duplicate the osd heartbeat grace value in the [mon] section or put
it in the [global] section instead. This is one of the misleading
values; sorry about that...
Anyway,
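To illustrate the point about sections (my snippet, using the 30-second value
from this thread), either put it where every daemon reads it:
[global]
osd heartbeat grace = 30
or duplicate it so both daemon types see it:
[osd]
osd heartbeat grace = 30
[mon]
osd heartbeat grace = 30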
After looking a little closer, now that I have a better understanding of
osd_heartbeat_grace for the monitor, all the osd failures are coming from 1 node
in the cluster. Yes, your hunch was correct and that node had stale rules in
iptables. After disabling iptables the osd flapping has stopped.
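For anyone hitting the same thing, the alternative to disabling iptables
outright is to open the ports Ceph uses: 6789 for the monitor and 6800-7300 for
the OSD daemons. A sketch, with a placeholder cluster subnet:
# allow monitor traffic
iptables -A INPUT -p tcp -s 192.168.0.0/24 --dport 6789 -j ACCEPT
# allow OSD heartbeat and data traffic
iptables -A INPUT -p tcp -s 192.168.0.0/24 -m multiport --dports 6800:7300 -j ACCEPT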
I have built a couple of ceph test clusters, and am attempting to mount the
storage through ceph-fuse on a RHEL 6.4 VM (the clusters are also in VMs). The
first one I built under v0.80, using directories for the ceph OSDs (as per the
Storage Cluster Quick Start at
Hi James,
On 26 August 2014 07:17, LaBarre, James (CTR) A6IT james.laba...@cigna.com
wrote:
[ceph@first_cluster ~]$ ceph -s
cluster e0433b49-d64c-4c3e-8ad9-59a47d84142d
health HEALTH_OK
monmap e1: 1 mons at {first_cluster=10.25.164.192:6789/0}, election
epoch 2, quorum 0
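With the monitor up like that, the ceph-fuse mount itself would look something
like this (my sketch, not from the original reply; the mountpoint is a
placeholder):
sudo mkdir -p /mnt/ceph
sudo ceph-fuse -m 10.25.164.192:6789 /mnt/ceph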
Hi,
Ceph-deploy does partition and mount the OSD/journal drive for the user. I
can't find any option for supplying mount options like discard,noatime etc.
suitable for SSDs during ceph-deploy.
Is there a way to control it? If not, what could be the workaround?
Thanks & Regards
Somnath
The mounting is actually done by ceph-disk, which can also run from a
udev rule. It gets options from the ceph configuration option osd
mount options {fstype}, which you can set globally or per-daemon as
with any other ceph option.
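For example (my illustration of that option for xfs; discard and noatime are
from the question, rw and inode64 are the usual defaults, so treat the exact
flags as placeholders):
[osd]
osd mount options xfs = rw,noatime,discard,inode64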
On 08/25/2014 04:11 PM, Somnath Roy wrote:
Hi,
Ceph-deploy
Message: 25
Date: Fri, 15 Aug 2014 15:06:49 +0200
From: Loic Dachary l...@dachary.org
To: Erik Logtenberg e...@logtenberg.eu, ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Best practice K/M-parameters EC pool
Message-ID: 53ee05e9.1040...@dachary.org
Content-Type: text/plain;
Thanks, Dan!
Yes, I saw that in the ceph-disk scripts, and it is using the ceph-conf utility
to parse the config option.
But, while installing with ceph-deploy, the default config file is created by
ceph-deploy only. So, I need to do the following while installing, I guess.
Correct me if I am wrong.
Precisely.
On 08/25/2014 05:26 PM, Somnath Roy wrote:
Thanks, Dan!
Yes, I saw that in the ceph-disk scripts, and it is using the ceph-conf utility
to parse the config option.
But, while installing with ceph-deploy, the default config file is created by
ceph-deploy only. So, I need to do the
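My reading of the workflow being confirmed here, as a sketch (hostnames, the
disk, and the mount flags are placeholders):
ceph-deploy new mon1
# append the mount options to the generated ceph.conf before creating OSDs
cat >> ceph.conf <<'EOF'
[osd]
osd mount options xfs = rw,noatime,discard,inode64
EOF
ceph-deploy --overwrite-conf config push osd1
ceph-deploy osd create osd1:/dev/sdb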
Hi,
I am running a few tests exporting volumes with rbd export and noticing
very poor performance. It takes almost 3 hours to export a 100GB volume. Servers
are pretty idle during the export.
The performance of the cluster itself is way faster. How can I increase the
speed of rbd export?
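If I recall correctly, rbd export in this era reads a single image mostly
sequentially, so the per-image rate is latency-bound. One workaround (my
suggestion; the pool name and backup path are placeholders) is to export
several images in parallel when backing up many of them:
# export every image in the pool concurrently
for img in $(rbd ls rbd); do
    rbd export rbd/$img /backup/$img.img &
done
wait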
ceph-deploy --release dumpling (or previously ceph-deploy --stable
dumpling) now results in Firefly (0.80.1) being installed; is this
intentional?
I'm adding another host with more OSDs and guessing it is preferable
to deploy the same version.