Re: [ceph-users] RBD command crash can't delete volume!

2014-11-07 Thread Jason Dillaman
of that issue? -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: Chu Duc Minh chu.ducm...@gmail.com To: ceph-de...@vger.kernel.org, ceph-users@lists.ceph.com ceph-users@lists.ceph.com ceph-users@lists.ceph.com Sent: Friday, November 7, 2014 7

Re: [ceph-users] RBD - possible to query used space of images/clones ?

2014-11-07 Thread Jason Dillaman
In the longer term, there is an in-progress RBD feature request to add a new RBD command to see image disk usage: http://tracker.ceph.com/issues/7746 -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: Sébastien Han sebastien

Re: [ceph-users] Question regarding rbd cache

2015-03-03 Thread Jason Dillaman
. The whole object would not be written to the OSDs unless you wrote data to the whole object. -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: Xu (Simon) Chen xche...@gmail.com To: ceph-users@lists.ceph.com Sent: Wednesday, February

Re: [ceph-users] qemu-kvm and cloned rbd image

2015-03-03 Thread Jason Dillaman
/projects/rbd/issues? Thanks, -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: koukou73gr koukou7...@yahoo.com To: ceph-users@lists.ceph.com Sent: Monday, March 2, 2015 7:16:08 AM Subject: [ceph-users] qemu-kvm and cloned rbd image

Re: [ceph-users] Rbd image's data deletion

2015-03-04 Thread Jason Dillaman
An RBD image is split up into (by default 4MB) objects within the OSDs. When you delete an RBD image, all the objects associated with the image are removed from the OSDs. The objects are not securely erased from the OSDs if that is what you are asking. -- Jason Dillaman Red Hat dilla
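A quick way to see those backing objects via the librados/librbd python bindings (a sketch only; pool 'rbd', image 'myimage', and the conffile path are placeholder assumptions, not from this thread):

    import rados, rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    image = rbd.Image(ioctx, 'myimage')
    prefix = image.stat()['block_name_prefix']   # e.g. 'rbd_data.105f2ae8944a'
    image.close()
    # backing objects are created lazily, so a sparse image lists fewer than
    # size / 4MB entries; after 'rbd rm' this list would be empty
    allocated = sum(1 for o in ioctx.list_objects() if o.key.startswith(prefix))
    print('%d allocated backing objects' % allocated)
    ioctx.close()
    cluster.shutdown()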

Re: [ceph-users] rbd: incorrect metadata

2015-04-13 Thread Jason Dillaman
Yes, when you flatten an image, the snapshots will remain associated to the original parent. This is a side-effect from how librbd handles CoW with clones. There is an open RBD feature request to add support for flattening snapshots as well. -- Jason Dillaman Red Hat dilla

Re: [ceph-users] hammer (0.94.1) - image must support layering(38) Function not implemented on v2 image

2015-04-20 Thread Jason Dillaman
-features' when creating the image? -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: Nikola Ciprich nikola.cipr...@linuxbox.cz To: Jason Dillaman dilla...@redhat.com Cc: ceph-users@lists.ceph.com Sent: Monday, April 20, 2015 12:41:26 PM

Re: [ceph-users] hammer (0.94.1) - image must support layering(38) Function not implemented on v2 image

2015-04-20 Thread Jason Dillaman
Can you add debug rbd = 20 your ceph.conf, re-run the command, and provide a link to the generated librbd log messages? Thanks, -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: Nikola Ciprich nikola.cipr...@linuxbox.cz To: ceph-users

Re: [ceph-users] RBD storage pool support in Libvirt not enabled on CentOS

2015-04-30 Thread Jason Dillaman
The issue appears to be tracked with the following BZ for RHEL 7: https://bugzilla.redhat.com/show_bug.cgi?id=1187533 -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: Wido den Hollander w...@42on.com To: Somnath Roy somnath

Re: [ceph-users] Use object-map Feature on existing rbd images ?

2015-04-29 Thread Jason Dillaman
, I would recommend waiting for the full toolset to become available. -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: Christoph Adomeit christoph.adom...@gatworks.de To: ceph-users@lists.ceph.com Sent: Tuesday, April 28, 2015 10:06:12

Re: [ceph-users] export-diff exported only 4kb instead of 200-600gb

2015-05-08 Thread Jason Dillaman
the two snapshots and no trim operations released your changes back? If you diff from move2db24-20150428 to HEAD, do you see all your changes? -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: Ultral ultral...@gmail.com To: ceph-users

Re: [ceph-users] wrong diff-export format description

2015-05-07 Thread Jason Dillaman
You are correct -- it is little endian like the other values. I'll open a ticket to correct the document. -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: Ultral ultral...@gmail.com To: ceph-us...@ceph.com Sent: Thursday, May 7
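A minimal reader for the start of an export-diff stream illustrates the little-endian encoding (a sketch assuming the v1 record layout described in the Ceph tree's doc/dev/rbd-diff.rst; 'image.diff' is a placeholder file name):

    import struct

    with open('image.diff', 'rb') as f:
        assert f.read(12) == b'rbd diff v1\n'        # stream header
        while True:
            tag = f.read(1)
            if not tag or tag == b'e':               # 'e' terminates the stream
                break
            if tag in (b'f', b't'):                  # from/to snap: le32 length + name
                (n,) = struct.unpack('<I', f.read(4))
                print(tag, f.read(n))
            elif tag == b's':                        # image size: le64
                print('size', struct.unpack('<Q', f.read(8))[0])
            elif tag in (b'w', b'z'):                # data/zero extent: le64 offset + le64 length
                off, length = struct.unpack('<QQ', f.read(16))
                print(tag, off, length)
                if tag == b'w':
                    f.seek(length, 1)                # skip the data payload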

Re: [ceph-users] RBD images -- parent snapshot missing (help!)

2015-05-13 Thread Jason Dillaman
/master/install/get-packages/#add-ceph-development -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: Pavel V. Kaygorodov pa...@inasan.ru To: Tuomas Juntunen tuomas.juntu...@databasement.fi Cc: ceph-users ceph-users@lists.ceph.com Sent

Re: [ceph-users] export-diff exported only 4kb instead of 200-600gb

2015-05-14 Thread Jason Dillaman
and was unable to recreate it. I would normally ask for a log dump with 'debug rbd = 20', but given the size of your image, that log will be astronomically large. -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: Ultral ultral...@gmail.com

Re: [ceph-users] export-diff exported only 4kb instead of 200-600gb

2015-05-12 Thread Jason Dillaman
)? Also, would it be possible for you to create a new, test image in the same pool, snapshot it, use 'rbd bench-write' to generate some data, and then verify if export-diff is properly working against the new image? -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com

Re: [ceph-users] rbd: incorrect metadata

2015-04-14 Thread Jason Dillaman
that librbd no longer thinks any image is a child of another. -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: Matthew Monaco m...@monaco.cx To: Jason Dillaman dilla...@redhat.com Cc: ceph-users@lists.ceph.com Sent: Monday, April 13, 2015 8:02

Re: [ceph-users] rbd: incorrect metadata

2015-04-13 Thread Jason Dillaman
/rbd_children to see the data within the files. -- Jason Dillaman Red Hat dilla...@redhat.com http://www.redhat.com - Original Message - From: Matthew Monaco m...@monaco.cx To: ceph-users@lists.ceph.com Sent: Sunday, April 12, 2015 10:57:46 PM Subject: [ceph-users] rbd: incorrect

Re: [ceph-users] rbd cache + libvirt

2015-06-08 Thread Jason Dillaman
/to/my/new/ceph.conf QEMU parameter where the RBD cache is explicitly disabled [2]. [1] http://git.qemu.org/?p=qemu.git;a=blob;f=block/rbd.c;h=fbe87e035b12aab2e96093922a83a3545738b68f;hb=HEAD#l478 [2] http://ceph.com/docs/master/rbd/qemu-rbd/#usage -- Jason Dillaman Red Hat dilla...@redhat.com
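For illustration, the pieces referenced above combine on a qemu command line roughly like this (pool/image name and the config path are placeholders):

    qemu-system-x86_64 ... \
        -drive format=rbd,file=rbd:rbd/myimage:conf=/path/to/my/new/ceph.conf,cache=none

Per [1], qemu translates cache=none into rbd_cache=false, but an explicit "rbd cache" setting inside the referenced ceph.conf still takes effect, so keep that file consistent with the cache mode you pass.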

Re: [ceph-users] rbd_cache, limiting read on high iops around 40k

2015-06-09 Thread Jason Dillaman
In the past we've hit some performance issues with RBD cache that we've fixed, but we've never really tried pushing a single VM beyond 40+K read IOPS in testing (or at least I never have). I suspect there's a couple of possibilities as to why it might be slower, but perhaps joshd can chime

Re: [ceph-users] Rados: Undefined symbol error

2015-08-21 Thread Jason Dillaman
It sounds like you have the rados CLI tool from an earlier Ceph release (< Hammer) installed and it is attempting to use the librados shared library from a newer (>= Hammer) version of Ceph. Jason - Original Message - From: Aakanksha Pudipeddi-SSI aakanksha...@ssi.samsung.com To:


Re: [ceph-users] rbd du

2015-08-24 Thread Jason Dillaman
That rbd CLI command is a new feature that will be included with the upcoming infernalis release. In the meantime, you can use this approach [1] to estimate your RBD image usage. [1] http://ceph.com/planet/real-size-of-a-ceph-rbd-image/ -- Jason Dillaman Red Hat Ceph Storage Engineering
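That estimation can also be scripted with the python bindings' diff_iterate (a sketch; pool 'rbd', image 'myimage', and the conffile path are placeholders):

    import rados, rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    image = rbd.Image(ioctx, 'myimage')
    used = [0]
    def count(offset, length, exists):
        if exists:                        # extent is allocated, not a hole
            used[0] += length
    # diffing with no starting snapshot walks every allocated extent
    image.diff_iterate(0, image.size(), None, count)
    print('~%d MB in use' % (used[0] // (1024 * 1024)))
    image.close()
    ioctx.close()
    cluster.shutdown()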

Re: [ceph-users] rbd rename snaps?

2015-08-12 Thread Jason Dillaman
There currently is no mechanism to rename snapshots without hex editing the RBD image header data structure. I created a new Ceph feature request [1] to add this ability in the future. [1] http://tracker.ceph.com/issues/12678 -- Jason Dillaman Red Hat Ceph Storage Engineering dilla

Re: [ceph-users] Best method to limit snapshot/clone space overhead

2015-07-27 Thread Jason Dillaman
associated RADOS objects, download the objects one at a time, and perform a scan for fully zeroed blocks. It's not the most CPU efficient script, but it should get the job done. [1] http://fpaste.org/248755/43803526/ -- Jason Dillaman Red Hat Ceph Storage Engineering dilla...@redhat.com http

Re: [ceph-users] Best method to limit snapshot/clone space overhead

2015-07-24 Thread Jason Dillaman
) that indicate the percentage of block-size, zeroed extents within the clone images' RADOS objects? If there is a large amount of waste, it might be possible / worthwhile to optimize how RBD handles copy-on-write operations against the clone. -- Jason Dillaman Red Hat dilla...@redhat.com http

Re: [ceph-users] How ceph client abort IO

2015-10-21 Thread Jason Dillaman
> On Tue, 20 Oct 2015, Jason Dillaman wrote: > > There is no such interface currently on the librados / OSD side to abort > > IO operations. Can you provide some background on your use-case for > > aborting in-flight IOs? > > The internal Objecter has a cancel interf

Re: [ceph-users] how to understand deep flatten implementation

2015-10-23 Thread Jason Dillaman
ill has parent snap info? > overlap: 1024 MB Because deep-flatten wasn't enabled on the clone. > Another question is since deep-flatten operations are applied to cloned > image, why we need to create parent image with deep-flatten image feat

Re: [ceph-users] [urgent] KVM issues after upgrade to 0.94.4

2015-10-21 Thread Jason Dillaman
] http://tracker.ceph.com/issues/13559 -- Jason Dillaman - Original Message - > From: "Andrei Mikhailovsky" <and...@arhont.com> > To: ceph-us...@ceph.com > Sent: Wednesday, October 21, 2015 8:17:39 AM > Subject: [ceph-users] [urgent] KVM issues after upg

Re: [ceph-users] [urgent] KVM issues after upgrade to 0.94.4

2015-10-21 Thread Jason Dillaman
ine properties [1]. If you have "rbd cache = true" in your ceph.conf, it would override "cache=none" in your qemu command-line. [1] https://lists.nongnu.org/archive/html/qemu-devel/2015-06/msg03078.html -- Jason Dillaman ___ ce
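Given that override behavior, the safest arrangement is to make the ceph.conf qemu reads agree with the drive cache mode (values illustrative):

    [client]
    # with qemu cache=none, leave this false (or unset) so the
    # command-line setting is not silently overridden
    rbd cache = false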

Re: [ceph-users] how to understand deep flatten implementation

2015-10-22 Thread Jason Dillaman
clone from a parent image even if snapshots exist due to the changes to copyup. -- Jason Dillaman - Original Message - > From: "Zhongyan Gu" <zhongyan...@gmail.com> > To: dilla...@redhat.com > Sent: Thursday, October 22, 2015 5:11:56 AM > Subject: how to un

Re: [ceph-users] Question about rbd flag(RBD_FLAG_OBJECT_MAP_INVALID)

2015-10-27 Thread Jason Dillaman
> Hi Jason dillaman > Recently I worked on the feature http://tracker.ceph.com/issues/13500 , when > I read the code about librbd, I was confused by RBD_FLAG_OBJECT_MAP_INVALID > flag. > When I create a rbd with "--image-features = 13", we enable object-map > featu

Re: [ceph-users] Core dump while getting a volume real size with a python script

2015-10-29 Thread Jason Dillaman
It sounds like you ran into this issue [1]. It's been fixed in upstream master and infernalis branches, but the backport is still awaiting release on hammer. [1] http://tracker.ceph.com/issues/12885 -- Jason Dillaman - Original Message - > From: "Giuseppe C

Re: [ceph-users] rbd hang

2015-10-29 Thread Jason Dillaman
I don't see the read request hitting the wire, so I am thinking your client cannot talk to the primary PG for the 'rb.0.16cf.238e1f29.' object. Try adding "debug objecter = 20" to your configuration to get more details. -- Jason Dillaman - Original Message --

Re: [ceph-users] Question about rbd flag(RBD_FLAG_OBJECT_MAP_INVALID)

2015-10-28 Thread Jason Dillaman
r its been enabled. -- Jason Dillaman ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Not possible to remove cache tier with RBDs open?

2015-10-26 Thread Jason Dillaman
immediately race to re-establish the lost watch/notify connection before you could disassociate the cache tier. -- Jason Dillaman - Original Message - > From: "Robert LeBlanc" <rob...@leblancnet.us> > To: ceph-users@lists.ceph.com > Sent: Monday, October 26, 2015 12:

Re: [ceph-users] python binding - snap rollback - progress reporting

2015-11-09 Thread Jason Dillaman
. This is actually what librbd does internally for the C interface. -- Jason Dillaman - Original Message - > From: "Nikola Ciprich" <nikola.cipr...@linuxbox.cz> > To: "ceph-users" <ceph-users@lists.ceph.com> > Sent: Sunday, November 8, 2015

Re: [ceph-users] Multiple Cache Pool with Single Storage Pool

2015-11-09 Thread Jason Dillaman
volume internal to the VM. -- Jason Dillaman - Original Message - > From: "Lazuardi Nasution" <mrxlazuar...@gmail.com> > To: ceph-users@lists.ceph.com > Sent: Sunday, November 8, 2015 12:34:16 PM > Subject: [ceph-users] Multiple Cache Pool with Single St

Re: [ceph-users] Multiple Cache Pool with Single Storage Pool

2015-11-09 Thread Jason Dillaman
. You are correct that by using a local (host) persistent cache, you have effectively removed the ability to safely live-migrate. -- Jason Dillaman - Original Message - > From: "Lazuardi Nasution" <mrxlazuar...@gmail.com> > To: "Jason Dillaman" <di

Re: [ceph-users] rbd create => seg fault

2015-11-12 Thread Jason Dillaman
I've seen this issue before when you (somehow) mix-and-match librbd, librados, and rbd builds on the same machine. The packaging should prevent you from mixing versions, but perhaps somehow you have different package versions installed. -- Jason Dillaman - Original Message

Re: [ceph-users] segmentation fault when using librbd interface

2015-11-02 Thread Jason Dillaman
I'd recommend running your program through valgrind first to see if something pops out immediately. -- Jason Dillaman - Original Message - > From: "min fang" <louisfang2...@gmail.com> > To: ceph-users@lists.ceph.com > Sent: Saturday, October 31, 2015

Re: [ceph-users] iSCSI over RBD is a good idea ?

2015-11-04 Thread Jason Dillaman
/wiki/Clustered_SCSI_target_using_RBD -- Jason Dillaman - Original Message - > From: "Gaetan SLONGO" <gslo...@it-optics.com> > To: ceph-users@lists.ceph.com > Sent: Tuesday, November 3, 2015 10:00:59 AM > Subject: [ceph-users] iSCSI over RBD is a good

Re: [ceph-users] iSCSI over RBD is a good idea ?

2015-11-05 Thread Jason Dillaman
, it appears that oVirt might even have some development to containerize a small Cinder/Glance OpenStack setup [2]. [1] https://www.youtube.com/watch?v=elEkGfjLITs [2] http://www.ovirt.org/CinderGlance_Docker_Integration -- Jason Dillaman Red Hat Ceph Storage Engineering dilla...@redhat.com

Re: [ceph-users] Can snapshot of image still be used while flattening the image?

2015-11-04 Thread Jason Dillaman
-- Jason Dillaman - Original Message - > From: "Jackie" <hzguanqi...@gmail.com> > To: ceph-users@lists.ceph.com > Sent: Tuesday, November 3, 2015 8:47:19 PM > Subject: [ceph-users] Can snapshot of image still be used while flattening > the image? &

Re: [ceph-users] Can snapshot of image still be used while flattening the image?

2015-11-04 Thread Jason Dillaman
) to all users of image2 (and its descendants) that its parent has been removed. If you had a clone of image2 open at the time, the clone of image2 would then know it would no longer need to access image1 since the link from image1 to image2 was removed. -- Jason Dillaman > - Origi

Re: [ceph-users] Cloudstack agent crashed JVM with exception in librbd

2015-11-02 Thread Jason Dillaman
Most likely not going to be related to 13045 since you aren't actively exporting an image diff. The most likely problem is that the RADOS IO context is being closed prior to closing the RBD image. -- Jason Dillaman - Original Message - > From: "Voloshanenko Igor" &l

Re: [ceph-users] Cloudstack agent crashed JVM with exception in librbd

2015-11-02 Thread Jason Dillaman
I can't say that I know too much about Cloudstack's integration with RBD to offer much assistance. Perhaps if Cloudstack is receiving an exception for something, it is not properly handling object lifetimes in this case. -- Jason Dillaman - Original Message - > F

Re: [ceph-users] rbd hang

2015-11-05 Thread Jason Dillaman
request than your cache can allocate. [1] http://tracker.ceph.com/issues/13388 -- Jason Dillaman - Original Message - > From: "Joe Ryner" <jry...@cait.org> > To: "Jason Dillaman" <dilla...@redhat.com> > Cc: ceph-us...@ceph.com > Sent: Thursda

Re: [ceph-users] rbd hang

2015-11-05 Thread Jason Dillaman
Can you retry with 'rbd --rbd-cache=false -p images export joe /root/joe.raw'? -- Jason Dillaman - Original Message - > From: "Joe Ryner" <jry...@cait.org> > To: "Jason Dillaman" <dilla...@redhat.com> > Cc: ceph-us...@ceph.com > Sent: Thu

Re: [ceph-users] rbd hang

2015-11-05 Thread Jason Dillaman
On the bright side, at least your week of export-related pain should result in a decent speed boost when your clients get 64MB of cache instead of 64B. -- Jason Dillaman - Original Message - > From: "Joe Ryner" <jry...@cait.org> > To: "Jason Dillaman&

Re: [ceph-users] Annoying libust warning on ceph reload

2015-10-08 Thread Jason Dillaman
enabled. [1] https://github.com/ceph/ceph/pull/6135 -- Jason Dillaman - Original Message - > From: "Ken Dreyer" <kdre...@redhat.com> > To: "Goncalo Borges" <gonc...@physics.usyd.edu.au> > Cc: ceph-users@lists.ceph.com > Sent: Thursday, Octo

Re: [ceph-users] Ceph journal - isn't it a bit redundant sometimes?

2015-10-19 Thread Jason Dillaman
erwrite, etc). -- Jason Dillaman ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Ceph journal - isn't it a bit redundant sometimes?

2015-10-14 Thread Jason Dillaman
by decoupling objects from the underlying filesystem's actual storage path. [1] https://github.com/ceph/ceph/blob/master/doc/rados/configuration/journal-ref.rst -- Jason Dillaman ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] rbd export hangs / does nothing without regular drop_cache

2015-10-20 Thread Jason Dillaman
Can you provide more details on your setup and how you are running the rbd export? If clearing the pagecache, dentries, and inodes solves the issue, it sounds like it's outside of Ceph (unless you are exporting to a CephFS or krbd mount point). -- Jason Dillaman - Original Message

Re: [ceph-users] How ceph client abort IO

2015-10-20 Thread Jason Dillaman
There is no such interface currently on the librados / OSD side to abort IO operations. Can you provide some background on your use-case for aborting in-flight IOs? -- Jason Dillaman - Original Message - > From: "min fang" <louisfang2...@gmail.com&g

Re: [ceph-users] How expensive are 'rbd ls' and 'rbd snap ls' calls?

2015-10-12 Thread Jason Dillaman
to the object, so they will be read via LevelDB or RocksDB (depending on your configuration) within the object's PG's OSD. -- Jason Dillaman - Original Message - > From: "Allen Liao" <aliao.svsga...@gmail.com> > To: ceph-users@lists.ceph.com > Sent: Monday, Oc

Re: [ceph-users] How to disable object-map and exclusive features ?

2015-08-31 Thread Jason Dillaman
Unfortunately, the tool to dynamically enable/disable image features (rbd feature disable) was added during the Infernalis development cycle. Therefore, in the short term you would need to recreate the images via export/import or clone/flatten. There are several object map / exclusive

Re: [ceph-users] How to disable object-map and exclusive features ?

2015-09-04 Thread Jason Dillaman
> I have a coredump with the size of 1200M compressed . > > Where shall i put the dump ? > I believe you can use the ceph-post-file utility [1] to upload the core and your current package list to ceph.com. Jason [1] http://ceph.com/docs/master/man/8/ceph-post-file/

Re: [ceph-users] crash on rbd bench-write

2015-09-04 Thread Jason Dillaman
Any particular reason why you have the image mounted via the kernel client while performing a benchmark? Not to say this is the reason for the crash, but strange since 'rbd bench-write' will not test the kernel IO speed since it uses the user-mode library. Are you able to test bench-write

Re: [ceph-users] crash on rbd bench-write

2015-09-08 Thread Jason Dillaman
> The client version is what was installed by the ceph-deploy install > ceph-client command. Via the debian-hammer repo. Per the quickstart doc. > Are you saying I need to install a different client version somehow? You listed the version as 0.80.10 which is a Ceph Firefly release -- Hammer is

Re: [ceph-users] possibility to delete all zeros

2015-10-02 Thread Jason Dillaman
The following advice assumes these images don't have associated snapshots (since keeping the non-sparse snapshots will keep utilizing the storage space): Depending on how you have your images set up, you could snapshot and clone the images, flatten the newly created clone, and delete the
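A sketch of that snapshot/clone/flatten sequence in the python bindings (pool 'rbd', the image names, and the conffile path are placeholders; assumes 'fat-image' is a format-2 image with layering):

    import rados, rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    src = rbd.Image(ioctx, 'fat-image')
    src.create_snap('base')
    src.protect_snap('base')              # clone parents must be protected
    src.close()
    rbd.RBD().clone(ioctx, 'fat-image', 'base', ioctx, 'fat-image-sparse',
                    features=rbd.RBD_FEATURE_LAYERING)
    clone = rbd.Image(ioctx, 'fat-image-sparse')
    clone.flatten()                       # copies parent data into the clone
    clone.close()
    # remaining cleanup: unprotect/remove the 'base' snapshot, then delete
    # 'fat-image' and rename the sparse clone into place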

Re: [ceph-users] lttng duplicate registration problem when using librados2 and libradosstriper

2015-09-23 Thread Jason Dillaman
It looks like the issue you are experiencing was fixed in the Infernalis/master branches [1]. I've opened a new tracker ticket to backport the fix to Hammer [2]. -- Jason Dillaman [1] https://github.com/sponce/ceph/commit/e4c27d804834b4a8bc495095ccf5103f8ffbcc1e [2] http

Re: [ceph-users] rbd map failing for image with exclusive-lock feature

2015-09-24 Thread Jason Dillaman
approach via "rbd lock add/remove" to verify that no other client has the image mounted before attempting to mount it locally. -- Jason Dillaman - Original Message - > From: "Allen Liao" <aliao.svsga...@gmail.com> > To: ceph-users@lists.ceph.com > Se

Re: [ceph-users] lttng duplicate registration problem when using librados2 and libradosstriper

2015-09-22 Thread Jason Dillaman
As a background, I believe LTTng-UST is disabled for RHEL7 in the Ceph project only due to the fact that EPEL 7 doesn't provide the required packages [1]. [1] https://bugzilla.redhat.com/show_bug.cgi?id=1235461 -- Jason Dillaman - Original Message - > From: "Paul M

Re: [ceph-users] rbd and exclusive lock feature

2015-09-22 Thread Jason Dillaman
new exclusive-lock feature is managed via 'rbd feature enable/disable' commands and does ensure that only the current lock owner can manipulate the RBD image. It was introduced to support the RBD object map feature (which can track which backing RADOS objects are in-use in order to activate some optimi

Re: [ceph-users] lttng duplicate registration problem when using librados2 and libradosstriper

2015-09-22 Thread Jason Dillaman
> On 22/09/15 17:46, Jason Dillaman wrote: > > As a background, I believe LTTng-UST is disabled for RHEL7 in the Ceph > > project only due to the fact that EPEL 7 doesn't provide the required > > packages [1]. > > interesting. so basically our program might onl

Re: [ceph-users] rbd and exclusive lock feature

2015-09-22 Thread Jason Dillaman
ying the image while at the same time not crippling other use cases. librbd also supports cooperative exclusive lock transfer, which is used in the case of qemu VM migrations where the image needs to be opened R/W by two clients at the same time. -- Jason Dillaman - Original Message

Re: [ceph-users] lttng duplicate registration problem when using librados2 and libradosstriper

2015-09-22 Thread Jason Dillaman
You can run the program under 'gdb' with a breakpoint on the 'abort' function to catch the program's abnormal exit. Assuming you have debug symbols installed, you should hopefully be able to see which probe is being re-registered. -- Jason Dillaman - Original Message - > F

Re: [ceph-users] lttng duplicate registration problem when using librados2 and libradosstriper

2015-09-21 Thread Jason Dillaman
This is usually indicative of the same tracepoint event being included by both a static and dynamic library. See the following thread regarding this issue within Ceph when LTTng-ust was first integrated [1]. Since I don't have any insight into your application, are you somehow linking against

Re: [ceph-users] rbd_inst.create

2015-12-07 Thread Jason Dillaman
lock_exclusive() / lock_shared() methods are not related to image watchers. Instead, it is tied to the advisory locking mechanism -- and list_lockers() can be used to query who has a lock. -- Jason Dillaman - Original Message - > From: "NEVEU Stephane"
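For example, via the python bindings (pool, image name, and cookie are placeholders):

    import rados, rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    image = rbd.Image(ioctx, 'myimage')
    image.lock_exclusive('my-cookie')     # advisory only: nothing blocks other writers
    print(image.list_lockers())           # {'tag': ..., 'exclusive': ..., 'lockers': [...]}
    image.unlock('my-cookie')
    image.close()
    ioctx.close()
    cluster.shutdown()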

Re: [ceph-users] Possible to change RBD-Caching settings while rbd device is in use ?

2015-12-14 Thread Jason Dillaman
Sorry, none of the librbd configuration properties can be live-updated currently. -- Jason Dillaman - Original Message - > From: "Daniel Schwager" <daniel.schwa...@dtnet.de> > To: "ceph-us...@ceph.com" <ceph-us...@ceph.com> > Sent: Frida

Re: [ceph-users] How to do quiesced rbd snapshot in libvirt?

2016-01-04 Thread Jason Dillaman
I am surprised by the error you are seeing with exclusive lock enabled. The rbd CLI should be able to send the 'snap create' request to QEMU without an error. Are you able to provide "debug rbd = 20" logs from shortly before and after your snapshot attempt? -- Jaso

Re: [ceph-users] rbd bench-write vs dd performance confusion

2016-01-04 Thread Jason Dillaman
to Hammer but it looks like it was missed. I will open a new tracker ticket to start that process. [1] https://github.com/ceph/ceph/commit/333f3a01a9916c781f266078391c580efb81a0fc -- Jason Dillaman - Original Message - > From: "Emile Snyder" <emsny...@ebay.com&g

Re: [ceph-users] LTTng and Infernalis

2016-01-04 Thread Jason Dillaman
tracing = false # enable librbd LTTng tracing You can dynamically enable LTTng on a running process via the admin socket as well. I created a tracker ticket for updating the documentation [1]. [1] http://tracker.ceph.com/issues/14219 -- Jason Dillaman - Original Message

Re: [ceph-users] Unable to see LTTng tracepoints in Ceph

2016-01-08 Thread Jason Dillaman
tracing events do you see being generated from ceph-mon? I didn't realize it had any registered tracepoint events. -- Jason Dillaman - Original Message - > From: "Aakanksha Pudipeddi-SSI" <aakanksha...@ssi.samsung.com> > To: ceph-users@lists.ceph.com > S

Re: [ceph-users] rbd du

2015-12-18 Thread Jason Dillaman
/19/real-size-of-a-ceph-rbd-image/ -- Jason Dillaman - Original Message - > From: "Allen Liao" <al...@svsgames.com> > To: ceph-users@lists.ceph.com > Sent: Friday, August 21, 2015 3:24:54 PM > Subject: [ceph-users] rbd du > Hi all, > The online ma

Re: [ceph-users] librbd - threads grow with each Image object

2015-11-23 Thread Jason Dillaman
Couldn't hurt to open a feature request for this on the tracker. -- Jason Dillaman - Original Message - > From: "Haomai Wang" <haomaiw...@gmail.com> > To: "Allen Liao" <aliao.svsga...@gmail.com> > Cc: ceph-users@lists.ceph.com > Sent: Satur

Re: [ceph-users] rbd_inst.create

2015-11-30 Thread Jason Dillaman
RBD_FEATURE_STRIPINGV2 = 2 RBD_FEATURE_EXCLUSIVE_LOCK = 4 RBD_FEATURE_OBJECT_MAP = 8 RBD_FEATURE_FAST_DIFF = 16 RBD_FEATURE_DEEP_FLATTEN = 32 RBD_FEATURE_JOURNALING = 64 -- Jason Dillaman - Original Message - > From: "Gregory Farnum" <gfar...@redhat.com> > To: "N
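The same constants are exposed by the python bindings, so a format-2 image can be created with an explicit feature set (a sketch; the names are placeholders, and RBD_FEATURE_LAYERING = 1 completes the list above):

    import rados, rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    features = rbd.RBD_FEATURE_LAYERING | rbd.RBD_FEATURE_EXCLUSIVE_LOCK   # 1 + 4 = 5
    rbd.RBD().create(ioctx, 'myimage', 10 * 1024 ** 3,                     # 10 GiB
                     old_format=False, features=features)
    ioctx.close()
    cluster.shutdown()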

Re: [ceph-users] After flattening the children image, snapshot still can not be unprotected

2015-11-19 Thread Jason Dillaman
Does child image "images/0a38b10d-2184-40fc-82b8-8bbd459d62d2" have snapshots? -- Jason Dillaman - Original Message - > From: "Jackie" <hzguanqi...@gmail.com> > To: ceph-users@lists.ceph.com > Sent: Thursday, November 19, 2015 12:05:12 AM >

Re: [ceph-users] Re: can rbd block_name_prefix be changed?

2016-01-12 Thread Jason Dillaman
It's unique per-pool. -- Jason Dillaman - Original Message - > From: "louisfang2013" <louisfang2...@gmail.com> > To: "Jason Dillaman" <dilla...@redhat.com> > Cc: "ceph-users" <ceph-users@lists.ceph.com> > Sent: Tuesday, Ja

Re: [ceph-users] RBD rollback error mesage

2016-06-07 Thread Jason Dillaman
Can you run "rbd info" against that image? I suspect it is a harmless but alarming error message. I actually just opened a tracker ticket this morning for a similar issue for rbd-mirror [1] when it bootstraps an image to a peer cluster. In that case, it was a harmless error message that we will

Re: [ceph-users] RBD rollback error mesage

2016-06-07 Thread Jason Dillaman
ct-map, fast-diff, > deep-flatten > flags: > parent: rbd/xenial-base@gold-copy > overlap: 8192 MB > > > Brendan > > > From: Jason Dillaman [jdill...@redhat.com] > Sent: Tuesday, June 07, 2016 6:56

Re: [ceph-users] Jewel upgrade - rbd errors after upgrade

2016-06-06 Thread Jason Dillaman
ecified explicitly for osd class dir. I suspect it > might be related to the OSDs being restarted during the package upgrade > process before all libraries are upgraded. > > >> -Original Message- >> From: Jason Dillaman [mailto:jdill...@redhat.com] >> Sent: Monda

Re: [ceph-users] what does the 'rbd watch ' mean?

2016-06-03 Thread Jason Dillaman
That command is used for debugging to show the notifications sent by librbd whenever image properties change. These notifications are used by other librbd clients with the same image open to synchronize state (e.g. a snapshot was created so instruct the other librbd client to refresh the image's

Re: [ceph-users] Ceph and Openstack

2016-06-14 Thread Jason Dillaman
On Tue, Jun 14, 2016 at 8:15 AM, Fran Barrera wrote: > 2016-06-14 14:02:54.634 2256 DEBUG glance_store.capabilities [-] Store > glance_store._drivers.rbd.Store doesn't support updating dynamic storage > capabilities. Please overwrite 'update_capabilities' method of the

Re: [ceph-users] which CentOS 7 kernel is compatible with jewel?

2016-06-13 Thread Jason Dillaman
Alternatively, if you are using RBD format 2 images, you can run "rados -p <pool> listomapvals rbd_directory" to ensure it has a bunch of key/value pairs for your images. There was an issue noted [1] after upgrading to Jewel where the omap values were all missing on several v2 RBD image headers --

Re: [ceph-users] librados and multithreading

2016-06-14 Thread Jason Dillaman
On Fri, Jun 10, 2016 at 12:37 PM, Юрий Соколов wrote: > Good day, all. > > I found this issue: https://github.com/ceph/ceph/pull/5991 > > Did this issue affected librados ? No -- this affected the start-up and shut-down of librbd as described in the associated tracker

Re: [ceph-users] Jewel upgrade - rbd errors after upgrade

2016-06-05 Thread Jason Dillaman
Are you able to successfully run the following command successfully? rados -p glebe-sata get rbd_id.hypervtst-lun04 On Sun, Jun 5, 2016 at 8:49 PM, Adrian Saul wrote: > > I upgraded my Infernalis semi-production cluster to Jewel on Friday. While > the upgrade

Re: [ceph-users] Jewel upgrade - rbd errors after upgrade

2016-06-05 Thread Jason Dillaman
d.test02 >> rbd_id.cloud2sql-lun02 >> rbd_id.fiotest2 >> rbd_id.radmast02-lun04 >> rbd_id.hypervtst-lun04 >> rbd_id.cloud2fs-lun00 >> rbd_id.radmast02-lun03 >> rbd_id.hypervtst-lun00 >> rbd_id.cloud2sql-lun00 >> rbd_id.radmast02-lun02 &g

Re: [ceph-users] Jewel upgrade - rbd errors after upgrade

2016-06-05 Thread Jason Dillaman
rbd_id.glbcluster3-vm17 >> rbd_id.holder <<< a create that said it failed while I was debugging this >> rbd_id.pvtcloud-nfs01 >> rbd_id.hypervtst-lun05 >> rbd_id.test02 >> rbd_id.cloud2sql-lun02 >> rbd_id.fiotest2 >> rbd_id.radmast02-lun04 >> rbd_

Re: [ceph-users] rbd mirror : space and io requirements ?

2016-06-02 Thread Jason Dillaman
On Wed, Jun 1, 2016 at 8:32 AM, Alexandre DERUMIER wrote: > Hi, > > I'm begin to look at rbd mirror features. > > How much space does it take ? Is it only a journal with some kind of list of > block changes ? There is a per-image journal which is a log of all modifications

Re: [ceph-users] rbd ioengine for fio

2016-06-16 Thread Jason Dillaman
On Thu, Jun 16, 2016 at 8:14 PM, Mavis Xiang wrote: > clientname=client.admin Try "clientname=admin" -- I think it's treating the client "name" as the "id", so specifying "client.admin" is really treated as "client.client.admin". -- Jason
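A job-file stanza with the corrected value (pool/image names are placeholders):

    [global]
    ioengine=rbd
    clientname=admin        ; the rbd engine adds the "client." prefix itself
    pool=rbd
    rbdname=myimage

    [randwrite]
    rw=randwrite
    bs=4k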

Re: [ceph-users] librbd compatibility

2016-06-21 Thread Jason Dillaman
The librbd API is stable between releases. While new API methods might be added, the older API methods are kept for backwards compatibility. For example, qemu-kvm under RHEL 7 is built against a librbd from Firefly but can function using a librbd from Jewel. On Tue, Jun 21, 2016 at 1:47 AM, min

Re: [ceph-users] Ceph RBD object-map and discard in VM

2016-06-22 Thread Jason Dillaman
I'm not sure why I never received the original list email, so I apologize for the delay. Is /dev/sda1, from your example, fresh with no data to actually discard or does it actually have lots of data to discard? Thanks, On Wed, Jun 22, 2016 at 1:56 PM, Brian Andrus wrote: >

Re: [ceph-users] image map failed

2016-06-23 Thread Jason Dillaman
On Thu, Jun 23, 2016 at 10:16 AM, Ishmael Tsoaela wrote: > cluster_master@nodeC:~$ rbd --image data_01 -p data info > rbd image 'data_01': > size 102400 MB in 25600 objects > order 22 (4096 kB objects) > block_name_prefix: rbd_data.105f2ae8944a > format: 2 > features:

Re: [ceph-users] can rbd block_name_prefix be changed?

2016-01-08 Thread Jason Dillaman
It's constant for an RBD image and is tied to the image's internal unique ID. -- Jason Dillaman - Original Message - > From: "min fang" <louisfang2...@gmail.com> > To: "ceph-users" <ceph-users@lists.ceph.com> > Sent: Friday, January 8, 20

Re: [ceph-users] How to do quiesced rbd snapshot in libvirt?

2016-01-14 Thread Jason Dillaman
I would need to see the log from the point where you've frozen the disks until the point when you attempt to create a snapshot. The logs below just show normal IO. I've opened a new ticket [1] where you can attach the logs. [1] http://tracker.ceph.com/issues/14373 -- Jason Dillaman

Re: [ceph-users] How to do quiesced rbd snapshot in libvirt?

2016-01-13 Thread Jason Dillaman
Definitely would like to see the "debug rbd = 20" logs from 192.168.254.17 when this occurs. If you are co-locating your OSDs, MONs, and qemu-kvm processes, make sure your ceph.conf has "log file = " defined in the [global] or [client] section. -- Jason Dillaman --

Re: [ceph-users] v10.0.2 released

2016-01-14 Thread Jason Dillaman
krbd. I wouldn't see this as a replacement for krbd, but rather another tool to support certain RBD use-cases [2]. [1] http://docs.ceph.com/docs/master/man/8/rbd/#commands [2] https://github.com/ceph/ceph/pull/6595 -- Jason Dillaman - Original Message - > From: "Bill

Re: [ceph-users] SSD-Cache Tier + RBD-Cache = Filesystem corruption?

2016-02-10 Thread Jason Dillaman
Can you provide the 'rbd info' dump from one of these corrupt images? -- Jason Dillaman - Original Message - > From: "Udo Waechter" <r...@zoide.net> > To: "Jason Dillaman" <dilla...@redhat.com> > Cc: "ceph-users" <ceph-users@list

Re: [ceph-users] Help with deleting rbd image - rados listwatchers

2016-02-10 Thread Jason Dillaman
The ability to list watchers wasn't added until the cuttlefish release, which explains why you are seeing "operation not supported". Is your rbd CLI also from bobtail? -- Jason Dillaman - Original Message - > From: "Tahir Raza" <tahirr...@g

Re: [ceph-users] SSD-Cache Tier + RBD-Cache = Filesystem corruption?

2016-02-09 Thread Jason Dillaman
What release of Infernalis are you running? When you encounter this error, is the partition table zeroed out or does it appear to be random corruption? -- Jason Dillaman - Original Message - > From: "Udo Waechter" <r...@zoide.net> > To: "ceph-users&
