of that issue?
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: Chu Duc Minh chu.ducm...@gmail.com
To: ceph-de...@vger.kernel.org, ceph-users@lists.ceph.com
Sent: Friday, November 7, 2014 7
In the longer term, there is an in-progress RBD feature request to add a new
RBD command to see image disk usage: http://tracker.ceph.com/issues/7746
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: Sébastien Han sebastien
. The whole object would not be written to the OSDs unless you
wrote data to the whole object.
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: Xu (Simon) Chen xche...@gmail.com
To: ceph-users@lists.ceph.com
Sent: Wednesday, February
/projects/rbd/issues?
Thanks,
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: koukou73gr koukou7...@yahoo.com
To: ceph-users@lists.ceph.com
Sent: Monday, March 2, 2015 7:16:08 AM
Subject: [ceph-users] qemu-kvm and cloned rbd image
An RBD image is split up into objects (4 MB by default) within the OSDs. When
you delete an RBD image, all the objects associated with the image are removed
from the OSDs. The objects are not securely erased from the OSDs if that is
what you are asking.
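As a rough illustration (pool and image names are hypothetical), you can see those backing objects by looking up the image's block name prefix and listing the matching RADOS objects:
    rbd -p mypool info myimage    # note the block_name_prefix, e.g. rbd_data.105f2ae8944a
    rados -p mypool ls | grep rbd_data.105f2ae8944a    # the objects backing the image; they disappear after 'rbd rm'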
--
Jason Dillaman
Red Hat
dilla
Yes, when you flatten an image, the snapshots will remain associated with the
original parent. This is a side effect of how librbd handles CoW with
clones. There is an open RBD feature request to add support for flattening
snapshots as well.
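A minimal sketch of that behaviour (image and snapshot names hypothetical):
    rbd snap create mypool/child@keepme    # snapshot taken on the clone
    rbd flatten mypool/child
    rbd children mypool/parent@base    # the flattened clone may still be listed because @keepme still references the parent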
--
Jason Dillaman
Red Hat
dilla
-features' when creating the image?
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: Nikola Ciprich nikola.cipr...@linuxbox.cz
To: Jason Dillaman dilla...@redhat.com
Cc: ceph-users@lists.ceph.com
Sent: Monday, April 20, 2015 12:41:26 PM
Can you add debug rbd = 20 your ceph.conf, re-run the command, and provide a
link to the generated librbd log messages?
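For example, in the [client] section of your ceph.conf (the log path is only an example):
    [client]
        debug rbd = 20
        log file = /var/log/ceph/client.$name.$pid.log
Then re-run the command and the librbd messages will end up in that log file.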
Thanks,
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: Nikola Ciprich nikola.cipr...@linuxbox.cz
To: ceph-users
The issue appears to be tracked with the following BZ for RHEL 7:
https://bugzilla.redhat.com/show_bug.cgi?id=1187533
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: Wido den Hollander w...@42on.com
To: Somnath Roy somnath
, I would recommend waiting for the full
toolset to become available.
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: Christoph Adomeit christoph.adom...@gatworks.de
To: ceph-users@lists.ceph.com
Sent: Tuesday, April 28, 2015 10:06:12
the two
snapshots and no trim operations released your changes back? If you diff from
move2db24-20150428 to HEAD, do you see all your changes?
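In other words, something along these lines (pool and image names hypothetical):
    rbd export-diff --from-snap move2db24-20150428 mypool/myimage changes-since-snap.diff    # snapshot -> HEAD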
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: Ultral ultral...@gmail.com
To: ceph-users
You are correct -- it is little endian like the other values. I'll open a
ticket to correct the document.
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: Ultral ultral...@gmail.com
To: ceph-us...@ceph.com
Sent: Thursday, May 7
/master/install/get-packages/#add-ceph-development
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: Pavel V. Kaygorodov pa...@inasan.ru
To: Tuomas Juntunen tuomas.juntu...@databasement.fi
Cc: ceph-users ceph-users@lists.ceph.com
Sent
and was unable to recreate
it. I would normally ask for a log dump with 'debug rbd = 20', but given the
size of your image, that log will be astronomically large.
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: Ultral ultral...@gmail.com
)? Also, would it be possible for you to create a new, test image in the
same pool, snapshot it, use 'rbd bench-write' to generate some data, and then
verify if export-diff is properly working against the new image?
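Something along these lines should do it (names and sizes are only an example):
    rbd create mypool/difftest --size 1024    # 1 GB test image in the same pool
    rbd snap create mypool/difftest@base
    rbd bench-write mypool/difftest --io-total 104857600    # write ~100 MB of test data
    rbd export-diff --from-snap base mypool/difftest difftest.diff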
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
that librbd no longer thinks
any image is a child of another.
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: Matthew Monaco m...@monaco.cx
To: Jason Dillaman dilla...@redhat.com
Cc: ceph-users@lists.ceph.com
Sent: Monday, April 13, 2015 8:02
/rbd_children to see the data within the files.
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http://www.redhat.com
- Original Message -
From: Matthew Monaco m...@monaco.cx
To: ceph-users@lists.ceph.com
Sent: Sunday, April 12, 2015 10:57:46 PM
Subject: [ceph-users] rbd: incorrect
/to/my/new/ceph.conf QEMU parameter where the RBD cache is
explicitly disabled [2].
[1]
http://git.qemu.org/?p=qemu.git;a=blob;f=block/rbd.c;h=fbe87e035b12aab2e96093922a83a3545738b68f;hb=HEAD#l478
[2] http://ceph.com/docs/master/rbd/qemu-rbd/#usage
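For reference, a QEMU drive specification along the lines of [2] (paths hypothetical) would look something like:
    -drive format=raw,cache=none,file=rbd:mypool/myimage:rbd_cache=false:conf=/path/to/my/new/ceph.conf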
--
Jason Dillaman
Red Hat
dilla...@redhat.com
In the past we've hit some performance issues with RBD cache that we've
fixed, but we've never really tried pushing a single VM beyond 40+K read
IOPS in testing (or at least I never have). I suspect there's a couple
of possibilities as to why it might be slower, but perhaps joshd can
chime
It sounds like you have the rados CLI tool from an earlier Ceph release (< Hammer)
installed and it is attempting to use the librados shared library from a newer
(>= Hammer) version of Ceph.
Jason
- Original Message -
From: Aakanksha Pudipeddi-SSI aakanksha...@ssi.samsung.com
To:
That rbd CLI command is a new feature that will be included with the upcoming
Infernalis release. In the meantime, you can use this approach [1] to estimate
your RBD image usage.
[1] http://ceph.com/planet/real-size-of-a-ceph-rbd-image/
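The approach in [1] boils down to summing the allocated extents reported by 'rbd diff', e.g. (image name hypothetical):
    rbd diff mypool/myimage | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'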
--
Jason Dillaman
Red Hat Ceph Storage Engineering
There currently is no mechanism to rename snapshots without hex editing the RBD
image header data structure. I created a new Ceph feature request [1] to add
this ability in the future.
[1] http://tracker.ceph.com/issues/12678
--
Jason Dillaman
Red Hat Ceph Storage Engineering
dilla
associated RADOS objects, download the
objects one at a time, and perform a scan for fully zeroed blocks. It's not
the most CPU efficient script, but it should get the job done.
[1] http://fpaste.org/248755/43803526/
--
Jason Dillaman
Red Hat Ceph Storage Engineering
dilla...@redhat.com
http
) that indicate the
percentage of block-size, zeroed extents within the clone images' RADOS
objects? If there is a large amount of waste, it might be possible /
worthwhile to optimize how RBD handles copy-on-write operations against the
clone.
--
Jason Dillaman
Red Hat
dilla...@redhat.com
http
> On Tue, 20 Oct 2015, Jason Dillaman wrote:
> > There is no such interface currently on the librados / OSD side to abort
> > IO operations. Can you provide some background on your use-case for
> > aborting in-flight IOs?
>
> The internal Objecter has a cancel interf
ill has parent snap info?
> overlap: 1024 MB
Because deep-flatten wasn't enabled on the clone.
> Another question is since deep-flatten operations are applied to cloned
> image, why we need to create parent image with deep-flatten image feat
] http://tracker.ceph.com/issues/13559
--
Jason Dillaman
- Original Message -
> From: "Andrei Mikhailovsky" <and...@arhont.com>
> To: ceph-us...@ceph.com
> Sent: Wednesday, October 21, 2015 8:17:39 AM
> Subject: [ceph-users] [urgent] KVM issues after upg
ine properties [1]. If you have "rbd cache =
true" in your ceph.conf, it would override "cache=none" in your qemu
command-line.
[1] https://lists.nongnu.org/archive/html/qemu-devel/2015-06/msg03078.html
--
Jason Dillaman
clone from a parent image even if snapshots exist due to the
changes to copyup.
--
Jason Dillaman
- Original Message -
> From: "Zhongyan Gu" <zhongyan...@gmail.com>
> To: dilla...@redhat.com
> Sent: Thursday, October 22, 2015 5:11:56 AM
> Subject: how to un
> Hi Jason dillaman
> Recently I worked on the feature http://tracker.ceph.com/issues/13500 , when
> I read the code about librbd, I was confused by RBD_FLAG_OBJECT_MAP_INVALID
> flag.
> When I create a rbd with "--image-features = 13", we enable object-map
> featu
It sounds like you ran into this issue [1]. It's been fixed in upstream master
and infernalis branches, but the backport is still awaiting release on hammer.
[1] http://tracker.ceph.com/issues/12885
--
Jason Dillaman
- Original Message -
> From: "Giuseppe C
I don't see the read request hitting the wire, so I am thinking your client
cannot talk to the primary PG for the 'rb.0.16cf.238e1f29.' object.
Try adding "debug objecter = 20" to your configuration to get more details.
--
Jason Dillaman
- Original Message --
r its been enabled.
--
Jason Dillaman
immediately race to
re-establish the lost watch/notify connection before you could disassociate the
cache tier.
--
Jason Dillaman
- Original Message -
> From: "Robert LeBlanc" <rob...@leblancnet.us>
> To: ceph-users@lists.ceph.com
> Sent: Monday, October 26, 2015 12:
. This is actually what librbd does internally for the C interface.
--
Jason Dillaman
- Original Message -
> From: "Nikola Ciprich" <nikola.cipr...@linuxbox.cz>
> To: "ceph-users" <ceph-users@lists.ceph.com>
> Sent: Sunday, November 8, 2015
volume
internal to the VM.
--
Jason Dillaman
- Original Message -
> From: "Lazuardi Nasution" <mrxlazuar...@gmail.com>
> To: ceph-users@lists.ceph.com
> Sent: Sunday, November 8, 2015 12:34:16 PM
> Subject: [ceph-users] Multiple Cache Pool with Single St
.
You are correct that by using a local (host) persistent cache, you have
effectively removed the ability to safely live-migrate.
--
Jason Dillaman
- Original Message -
> From: "Lazuardi Nasution" <mrxlazuar...@gmail.com>
> To: "Jason Dillaman" <di
I've seen this issue before when you (somehow) mix-and-match librbd, librados,
and rbd builds on the same machine. The packaging should prevent you from
mixing versions, but perhaps somehow you have different package versions
installed.
--
Jason Dillaman
- Original Message
I'd recommend running your program through valgrind first to see if something
pops out immediately.
--
Jason Dillaman
- Original Message -
> From: "min fang" <louisfang2...@gmail.com>
> To: ceph-users@lists.ceph.com
> Sent: Saturday, October 31, 2015
/wiki/Clustered_SCSI_target_using_RBD
--
Jason Dillaman
- Original Message -
> From: "Gaetan SLONGO" <gslo...@it-optics.com>
> To: ceph-users@lists.ceph.com
> Sent: Tuesday, November 3, 2015 10:00:59 AM
> Subject: [ceph-users] iSCSI over RDB is a good
, it appears that
oVirt might even have some development to containerize a small Cinder/Glance
OpenStack setup [2].
[1] https://www.youtube.com/watch?v=elEkGfjLITs
[2] http://www.ovirt.org/CinderGlance_Docker_Integration
--
Jason Dillaman
Red Hat Ceph Storage Engineering
dilla...@redhat.com
--
Jason Dillaman
- Original Message -
> From: "Jackie" <hzguanqi...@gmail.com>
> To: ceph-users@lists.ceph.com
> Sent: Tuesday, November 3, 2015 8:47:19 PM
> Subject: [ceph-users] Can snapshot of image still be used while flattening
> the image?
) to all users of
image2 (and its descendants) that its parent has been removed. If you had a
clone of image2 open at the time, the clone of image2 would then know it would
no longer need to access image1 since the link from image1 to image2 was
removed.
--
Jason Dillaman
> - Origi
Most likely not going to be related to 13045 since you aren't actively
exporting an image diff. The most likely problem is that the RADOS IO context
is being closed prior to closing the RBD image.
--
Jason Dillaman
- Original Message -
> From: "Voloshanenko Igor" &l
I can't say that I know too much about Cloudstack's integration with RBD to
offer much assistance. Perhaps if Cloudstack is receiving an exception for
something, it is not properly handling object lifetimes in this case.
--
Jason Dillaman
- Original Message -
> F
request than your cache can allocate.
[1] http://tracker.ceph.com/issues/13388
--
Jason Dillaman
- Original Message -
> From: "Joe Ryner" <jry...@cait.org>
> To: "Jason Dillaman" <dilla...@redhat.com>
> Cc: ceph-us...@ceph.com
> Sent: Thursda
Can you retry with 'rbd --rbd-cache=false -p images export joe /root/joe.raw'?
--
Jason Dillaman
- Original Message -
> From: "Joe Ryner" <jry...@cait.org>
> To: "Jason Dillaman" <dilla...@redhat.com>
> Cc: ceph-us...@ceph.com
> Sent: Thu
On the bright side, at least your week of export-related pain should result in
a decent speed boost when your clients get 64MB of cache instead of 64B.
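For reference, the cache size is specified in bytes, so a 64 MB cache would be configured along these lines:
    [client]
        rbd cache = true
        rbd cache size = 67108864    # 64 MB; a bare "64" is interpreted as 64 bytes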
--
Jason Dillaman
- Original Message -
> From: "Joe Ryner" <jry...@cait.org>
> To: "Jason Dillaman&
enabled.
[1] https://github.com/ceph/ceph/pull/6135
--
Jason Dillaman
- Original Message -
> From: "Ken Dreyer" <kdre...@redhat.com>
> To: "Goncalo Borges" <gonc...@physics.usyd.edu.au>
> Cc: ceph-users@lists.ceph.com
> Sent: Thursday, Octo
erwrite, etc).
--
Jason Dillaman
by decoupling objects from the underlying filesystem's actual
storage path.
[1]
https://github.com/ceph/ceph/blob/master/doc/rados/configuration/journal-ref.rst
--
Jason Dillaman
Can you provide more details on your setup and how you are running the rbd
export? If clearing the pagecache, dentries, and inodes solves the issue, it
sounds like it's outside of Ceph (unless you are exporting to a CephFS or krbd
mount point).
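For reference, clearing the pagecache, dentries, and inodes is typically done with:
    sync
    echo 3 > /proc/sys/vm/drop_caches    # drop page cache, dentries, and inodes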
--
Jason Dillaman
- Original Message
There is no such interface currently on the librados / OSD side to abort IO
operations. Can you provide some background on your use-case for aborting
in-flight IOs?
--
Jason Dillaman
- Original Message -
> From: "min fang" <louisfang2...@gmail.com&g
to the object, so they
will be read via LevelDB or RocksDB (depending on your configuration) within
the object's PG's OSD.
--
Jason Dillaman
- Original Message -
> From: "Allen Liao" <aliao.svsga...@gmail.com>
> To: ceph-users@lists.ceph.com
> Sent: Monday, Oc
Unfortunately, the tool to dynamically enable/disable image features ('rbd
feature disable ...') was added during the Infernalis
development cycle. Therefore, in the short-term you would need to recreate the
images via export/import or clone/flatten.
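A minimal sketch of the export/import route (image names hypothetical):
    rbd export mypool/oldimage - | rbd import --image-format 2 - mypool/newimage
    # verify the new image, then remove the original with 'rbd rm mypool/oldimage'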
There are several object map / exclusive
> I have a coredump with the size of 1200M compressed .
>
> Where shall i put the dump ?
>
I believe you can use the ceph-post-file utility [1] to upload the core and
your current package list to ceph.com.
Jason
[1] http://ceph.com/docs/master/man/8/ceph-post-file/
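For example (description and file name hypothetical):
    ceph-post-file -d "librbd coredump" core.12345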
Any particular reason why you have the image mounted via the kernel client
while performing a benchmark? Not to say this is the reason for the crash, but
it is strange, since 'rbd bench-write' uses the user-mode library and therefore
will not test the kernel IO speed. Are you able to test bench-write
> The client version is what was installed by the ceph-deploy install
> ceph-client command. Via the debian-hammer repo. Per the quickstart doc.
> Are you saying I need to install a different client version somehow?
You listed the version as 0.80.10 which is a Ceph Firefly release -- Hammer is
The following advice assumes these images don't have associated snapshots
(since keeping the non-sparse snapshots will keep utilizing the storage space):
Depending on how you have your images set up, you could snapshot and clone the
images, flatten the newly created clone, and delete the
It looks like the issue you are experiencing was fixed in the Infernalis/master
branches [1]. I've opened a new tracker ticket to backport the fix to Hammer
[2].
--
Jason Dillaman
[1]
https://github.com/sponce/ceph/commit/e4c27d804834b4a8bc495095ccf5103f8ffbcc1e
[2] http
approach via "rbd lock
add/remove" to verify that no other client has the image mounted before
attempting to mount it locally.
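A rough sketch of that fencing approach (names hypothetical):
    rbd lock list mypool/myimage    # should show no lockers before mounting
    rbd lock add mypool/myimage $(hostname)    # should fail if another client already holds a lock
    # ... map and mount the image ...
    rbd lock list mypool/myimage    # note the locker id, e.g. client.4157, then release:
    rbd lock remove mypool/myimage $(hostname) client.4157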
--
Jason Dillaman
- Original Message -
> From: "Allen Liao" <aliao.svsga...@gmail.com>
> To: ceph-users@lists.ceph.com
> Se
As a background, I believe LTTng-UST is disabled for RHEL7 in the Ceph project
only due to the fact that EPEL 7 doesn't provide the required packages [1].
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1235461
--
Jason Dillaman
- Original Message -
> From: "Paul M
new exclusive-lock feature is managed via 'rbd feature enable/disable'
commands and does ensure that only the current lock owner can manipulate the
RBD image. It was introduced to support the RBD object map feature (which can
track which backing RADOS objects are in-use in order to activate some
optimi
> On 22/09/15 17:46, Jason Dillaman wrote:
> > As a background, I believe LTTng-UST is disabled for RHEL7 in the Ceph
> > project only due to the fact that EPEL 7 doesn't provide the required
> > packages [1].
>
> interesting. so basically our program might onl
ying the image while at the same time not
crippling other use cases. librbd also supports cooperative exclusive lock
transfer, which is used in the case of qemu VM migrations where the image needs
to be opened R/W by two clients at the same time.
--
Jason Dillaman
- Original Message
You can run the program under 'gdb' with a breakpoint on the 'abort' function
to catch the program's abnormal exit. Assuming you have debug symbols
installed, you should hopefully be able to see which probe is being
re-registered.
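For example (binary name hypothetical):
    gdb --args ./my_lttng_app
    (gdb) break abort
    (gdb) run
    (gdb) backtrace    # when the breakpoint hits, the stack shows which probe is being re-registered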
--
Jason Dillaman
- Original Message -
> F
This is usually indicative of the same tracepoint event being included by both
a static and dynamic library. See the following thread regarding this issue
within Ceph when LTTng-ust was first integrated [1]. Since I don't have any
insight into your application, are you somehow linking against
lock_exclusive() / lock_shared() methods are not related to image watchers.
Instead, they are tied to the advisory locking mechanism -- and list_lockers() can
be used to query who has a lock.
--
Jason Dillaman
- Original Message -
> From: "NEVEU Stephane"
Sorry, none of the librbd configuration properties can be live-updated
currently.
--
Jason Dillaman
- Original Message -
> From: "Daniel Schwager" <daniel.schwa...@dtnet.de>
> To: "ceph-us...@ceph.com" <ceph-us...@ceph.com>
> Sent: Frida
I am surprised by the error you are seeing with exclusive lock enabled. The
rbd CLI should be able to send the 'snap create' request to QEMU without an
error. Are you able to provide "debug rbd = 20" logs from shortly before and
after your snapshot attempt?
--
Jaso
to Hammer but it looks
like it was missed. I will open a new tracker ticket to start that process.
[1] https://github.com/ceph/ceph/commit/333f3a01a9916c781f266078391c580efb81a0fc
--
Jason Dillaman
- Original Message -
> From: "Emile Snyder" <emsny...@ebay.com&g
tracing = false # enable librbd LTTng tracing
You can dynamically enable LTTng on a running process via the admin socket as
well. I created a tracker ticket for updating the documentation [1].
[1] http://tracker.ceph.com/issues/14219
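As a sketch, assuming the client has an admin socket configured and that the option is spelled rbd_tracing in your release (both are assumptions on my part):
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.<pid>.asok config set rbd_tracing true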
--
Jason Dillaman
- Original Message
tracing events do you see being generated from ceph-mon? I didn't realize
it had any registered tracepoint events.
--
Jason Dillaman
- Original Message -
> From: "Aakanksha Pudipeddi-SSI" <aakanksha...@ssi.samsung.com>
> To: ceph-users@lists.ceph.com
> S
/19/real-size-of-a-ceph-rbd-image/
--
Jason Dillaman
- Original Message -
> From: "Allen Liao" <al...@svsgames.com>
> To: ceph-users@lists.ceph.com
> Sent: Friday, August 21, 2015 3:24:54 PM
> Subject: [ceph-users] rbd du
> Hi all,
> The online ma
Couldn't hurt to open a feature request for this on the tracker.
--
Jason Dillaman
- Original Message -
> From: "Haomai Wang" <haomaiw...@gmail.com>
> To: "Allen Liao" <aliao.svsga...@gmail.com>
> Cc: ceph-users@lists.ceph.com
> Sent: Satur
RBD_FEATURE_STRIPINGV2 = 2
RBD_FEATURE_EXCLUSIVE_LOCK = 4
RBD_FEATURE_OBJECT_MAP = 8
RBD_FEATURE_FAST_DIFF = 16
RBD_FEATURE_DEEP_FLATTEN = 32
RBD_FEATURE_JOURNALING = 64
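For reference, these are bit flags (RBD_FEATURE_LAYERING = 1), so an image created with a features value of 13 has layering + exclusive-lock + object-map enabled: 1 + 4 + 8 = 13.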
--
Jason Dillaman
- Original Message -
> From: "Gregory Farnum" <gfar...@redhat.com>
> To: "N
Does child image "images/0a38b10d-2184-40fc-82b8-8bbd459d62d2" have snapshots?
--
Jason Dillaman
- Original Message -
> From: "Jackie" <hzguanqi...@gmail.com>
> To: ceph-users@lists.ceph.com
> Sent: Thursday, November 19, 2015 12:05:12 AM
>
It's unique per-pool.
--
Jason Dillaman
- Original Message -
> From: "louisfang2013" <louisfang2...@gmail.com>
> To: "Jason Dillaman" <dilla...@redhat.com>
> Cc: "ceph-users" <ceph-users@lists.ceph.com>
> Sent: Tuesday, Ja
Can you run "rbd info" against that image? I suspect it is a harmless
but alarming error message. I actually just opened a tracker ticket
this morning for a similar issue for rbd-mirror [1] when it bootstraps
an image to a peer cluster. In that case, it was a harmless error
message that we will
ct-map, fast-diff,
> deep-flatten
> flags:
> parent: rbd/xenial-base@gold-copy
> overlap: 8192 MB
>
>
> Brendan
>
>
> From: Jason Dillaman [jdill...@redhat.com]
> Sent: Tuesday, June 07, 2016 6:56
ecified explicitly for osd class dir. I suspect it
> might be related to the OSDs being restarted during the package upgrade
> process before all libraries are upgraded.
>
>
>> -Original Message-
>> From: Jason Dillaman [mailto:jdill...@redhat.com]
>> Sent: Monda
That command is used for debugging to show the notifications sent by librbd
whenever image properties change. These notifications are used by other
librbd clients with the same image open to synchronize state (e.g. a
snapshot was created so instruct the other librbd client to refresh the
image's
On Tue, Jun 14, 2016 at 8:15 AM, Fran Barrera wrote:
> 2016-06-14 14:02:54.634 2256 DEBUG glance_store.capabilities [-] Store
> glance_store._drivers.rbd.Store doesn't support updating dynamic storage
> capabilities. Please overwrite 'update_capabilities' method of the
Alternatively, if you are using RBD format 2 images, you can run
"rados -p listomapvals rbd_directory" to ensure it has
a bunch of key/value pairs for your images. There was an issue noted
[1] after upgrading to Jewel where the omap values were all missing on
several v2 RBD image headers --
On Fri, Jun 10, 2016 at 12:37 PM, Юрий Соколов wrote:
> Good day, all.
>
> I found this issue: https://github.com/ceph/ceph/pull/5991
>
> Did this issue affected librados ?
No -- this affected the start-up and shut-down of librbd as described
in the associated tracker
Are you able to successfully run the following command successfully?
rados -p glebe-sata get rbd_id.hypervtst-lun04
On Sun, Jun 5, 2016 at 8:49 PM, Adrian Saul
wrote:
>
> I upgraded my Infernalis semi-production cluster to Jewel on Friday. While
> the upgrade
d.test02
>> rbd_id.cloud2sql-lun02
>> rbd_id.fiotest2
>> rbd_id.radmast02-lun04
>> rbd_id.hypervtst-lun04
>> rbd_id.cloud2fs-lun00
>> rbd_id.radmast02-lun03
>> rbd_id.hypervtst-lun00
>> rbd_id.cloud2sql-lun00
>> rbd_id.radmast02-lun02
rbd_id.glbcluster3-vm17
>> rbd_id.holder <<< a create that said it failed while I was debugging this
>> rbd_id.pvtcloud-nfs01
>> rbd_id.hypervtst-lun05
>> rbd_id.test02
>> rbd_id.cloud2sql-lun02
>> rbd_id.fiotest2
>> rbd_id.radmast02-lun04
>> rbd_
On Wed, Jun 1, 2016 at 8:32 AM, Alexandre DERUMIER wrote:
> Hi,
>
> I'm begin to look at rbd mirror features.
>
> How much space does it take ? Is it only a journal with some kind of list of
> block changes ?
There is a per-image journal which is a log of all modifications
On Thu, Jun 16, 2016 at 8:14 PM, Mavis Xiang wrote:
> clientname=client.admin
Try "clientname=admin" -- I think it's treating the client "name" as
the "id", so specifying "client.admin" is really treated as
"client.client.admin".
--
Jason
The librbd API is stable between releases. While new API methods
might be added, the older API methods are kept for backwards
compatibility. For example, qemu-kvm under RHEL 7 is built against a
librbd from Firefly but can function using a librbd from Jewel.
On Tue, Jun 21, 2016 at 1:47 AM, min
I'm not sure why I never received the original list email, so I
apologize for the delay. Is /dev/sda1, from your example, fresh with
no data to actually discard or does it actually have lots of data to
discard?
Thanks,
On Wed, Jun 22, 2016 at 1:56 PM, Brian Andrus wrote:
>
On Thu, Jun 23, 2016 at 10:16 AM, Ishmael Tsoaela wrote:
> cluster_master@nodeC:~$ rbd --image data_01 -p data info
> rbd image 'data_01':
> size 102400 MB in 25600 objects
> order 22 (4096 kB objects)
> block_name_prefix: rbd_data.105f2ae8944a
> format: 2
> features:
It's constant for an RBD image and is tied to the image's internal unique ID.
--
Jason Dillaman
- Original Message -
> From: "min fang" <louisfang2...@gmail.com>
> To: "ceph-users" <ceph-users@lists.ceph.com>
> Sent: Friday, January 8, 20
I would need to see the log from the point where you've frozen the disks until
the point when you attempt to create a snapshot. The logs below just show
normal IO.
I've opened a new ticket [1] where you can attach the logs.
[1] http://tracker.ceph.com/issues/14373
--
Jason Dillaman
Definitely would like to see the "debug rbd = 20" logs from 192.168.254.17 when
this occurs. If you are co-locating your OSDs, MONs, and qemu-kvm processes,
make sure your ceph.conf has "log file = <path>" defined in the
[global] or [client] section.
--
Jason Dillaman
--
krbd.
I wouldn't see this as a replacement for krbd, but rather another tool to
support certain RBD use-cases [2].
[1] http://docs.ceph.com/docs/master/man/8/rbd/#commands
[2] https://github.com/ceph/ceph/pull/6595
--
Jason Dillaman
- Original Message -
> From: "Bill
Can you provide the 'rbd info' dump from one of these corrupt images?
--
Jason Dillaman
- Original Message -
> From: "Udo Waechter" <r...@zoide.net>
> To: "Jason Dillaman" <dilla...@redhat.com>
> Cc: "ceph-users" <ceph-users@list
The ability to list watchers wasn't added until the cuttlefish release, which
explains why you are seeing "operation not supported". Is your rbd CLI also
from bobtail?
--
Jason Dillaman
- Original Message -
> From: "Tahir Raza" <tahirr...@g
What release of Infernalis are you running? When you encounter this error, is
the partition table zeroed out or does it appear to be random corruption?
--
Jason Dillaman
- Original Message -
> From: "Udo Waechter" <r...@zoide.net>
> To: "ceph-users&