If there is a partition table on the device, you need to get Linux to
scan the partition table and build the sub-devices. Try running
"kpartx -a /dev/rbd0" to create the devices. Since you have LVM on the
second partition, ensure that it is configured to not filter out the
new partition device and
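A minimal sketch of those steps, assuming the image is mapped at /dev/rbd0 and the LVM volume group sits on the second partition (device names are illustrative):

```shell
# Create device-mapper entries for each partition on the mapped image:
kpartx -a /dev/rbd0          # creates /dev/mapper/rbd0p1, /dev/mapper/rbd0p2, ...
ls /dev/mapper/rbd0p*        # verify the partition devices exist

# Rescan so LVM picks up the new partition device, then activate the VG:
pvscan --cache /dev/mapper/rbd0p2
vgchange -ay
```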
(moving this to the ceph-users mailing list)
Can you verify that the cephx user you are using for QEMU has access
to the image on the hypervisor host? You should be able to run "rbd
--id export - | md5sum" to compare
freshly cloned images between clusters and ensure that they match. As
for logg
watch is still active?
>
> Thanks,
> Nick
>
>> -Original Message-
>> From: Jason Dillaman [mailto:jdill...@redhat.com]
>> Sent: 24 August 2016 15:54
>> To: Nick Fisk
>> Cc: ceph-users
>> Subject: Re: [ceph-users] RBD Watch Notify for snapshots
>>
>
On Fri, Sep 9, 2016 at 10:33 AM, Alexandre DERUMIER wrote:
> The main bottleneck with rbd currently, is cpu usage (limited to 1 iothread
> by disk)
Yes, definitely a bottleneck -- but you can bypass the librbd IO
dispatch thread by setting "rbd_non_blocking_aio = false" in your Ceph
client confi
An RBD image is comprised of individual backing objects, each object
no more than 4MB in size (assuming the default RBD object size when
the image was created). You can refer to this document [1] to get a
better idea of how striping is used. Effectively, a standard RBD image
has a stripe count of 1
Unfortunately, it sounds like the image's header object was lost
during your corruption event. While you can manually retrieve the
image data blocks from the cluster, many of them have likely been lost
and/or corrupted as well.
You'll first need to determine the internal id of your image:
$ rados --po
-han.fr/blog/2015/01/29/ceph-recover-a-rbd-image-from-a-dead-cluster/
> but I don't know if this can help me.
>
> Any suggestions?
>
> Thanks.
>
> 2016-09-21 22:35 GMT+02:00 Jason Dillaman :
>>
>> Unfortunately, it sounds like the image's header object wa
It definitely isn't actively maintained. It was contributed nearly two
years ago for a worst-case recovery when the Ceph cluster itself
cannot be started but all the individual data objects are still
readily available.
For RBD disaster recovery, I would suggest taking a look at RBD
mirroring suppo
Yes, the "journal_data" objects can be stored in a separate pool from
the image. The rbd CLI allows you to use the "--journal-pool" argument
when creating, copying, cloning, or importing an image with
journaling enabled. You can also specify the journal data pool when
dynamically enabling the jour
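A hedged sketch of both paths described above (the pool and image names are made up):

```shell
# Create an image with journaling enabled and its journal in a faster pool:
rbd create --size 10G --image-feature exclusive-lock,journaling \
    --journal-pool ssdpool slowpool/image1

# Or dynamically enable journaling on an existing image:
rbd feature enable slowpool/image2 journaling --journal-pool ssdpool
```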
o say.
>
> Christian
>
>> Is there perhaps a hole in the documentation here? I've not been able to
>> find anything in the man page for RBD nor on the Ceph website?
>>
>> Regards,
>> Cory
>>
>>
>> -Original Message-
>> F
On Wed, Oct 12, 2016 at 2:45 AM, Frédéric Nass
wrote:
> Can we use rbd journaling without using rbd mirroring in Jewel ?
Yes, you can use journaling without using rbd-mirroring, but ...
>So that we
> can set rbd journals on SSD pools and improve write IOPS on standard (no
> mirrored) RBD images.
librbd (used by QEMU to provide RBD-backed disks) uses librados and
provides the necessary handling for striping across multiple backing
objects. When you don't specify "fancy" striping options via
"--stripe-count" and "--stripe-unit", it essentially defaults to
stripe count of 1 and stripe unit of
On Wed, Oct 19, 2016 at 6:52 PM, yan cui wrote:
> 2016-10-19 15:46:44.843053 7f35c9925d80 -1 librbd: cannot obtain exclusive
> lock - not removing
Are you attempting to delete the primary or non-primary image? I would
expect any attempts to delete the non-primary image to fail since the
non-prima
On Thu, Oct 20, 2016 at 1:51 AM, Ahmed Mostafa
wrote:
> different OSDs
PGs -- but more or less correct since the OSDs will process requests
for a particular PG sequentially and not in parallel.
--
Jason
It's in the build and has tests to verify that it is properly being
triggered [1].
$ git tag --contains 5498377205523052476ed81aebb2c2e6973f67ef
v10.2.3
What are your tests that say otherwise?
[1]
https://github.com/ceph/ceph/pull/10797/commits/5498377205523052476ed81aebb2c2e6973f67ef
On Fri,
Trusty and the cluster is Xenial.
>
> dd if=/dev/zero of=/dev/vdd bs=4K count=1000 oflag=direct
>
> fio -name iops -rw=write -bs=4k -direct=1 -runtime=60 -iodepth 1 -filename
> /dev/vde -ioengine=libaio
>
> Thanks,
> -Pavan.
>
> On 10/21/16, 6:15 PM, "Jason
On Fri, Oct 21, 2016 at 1:15 PM, Pavan Rallabhandi
wrote:
> The QEMU cache is none for all of the rbd drives
Hmm -- if you have QEMU cache disabled, I would expect it to disable
the librbd cache.
I have to ask: did you (re)start / live-migrate these VMs you are
testing against after you upgrad
on the QEMU
> command line, the QEMU command line settings override the Ceph configuration
> file settings.”
>
> Thanks,
> -Pavan.
>
> On 10/21/16, 11:31 PM, "Jason Dillaman" wrote:
>
> On Fri, Oct 21, 2016 at 1:15 PM, Pavan Rallabhandi
> wrote:
&
I think you are looking for the "id" option -- not "name".
[1] https://github.com/fujita/tgt/blob/master/doc/README.rbd#L36
On Mon, Oct 24, 2016 at 3:58 AM, Lu Dillon wrote:
> Sorry for spam again.
>
>
> By the tgtadm's man, I tried to add "bsopts" option in the tgt's
> configuration, but failed
On Tue, Oct 25, 2016 at 2:46 PM, J David wrote:
> Are long-running RBD clients (like Qemu virtual machines) placed at
> risk of instability or data corruption if they are not updated and
> restarted before, during, or after such an upgrade?
No, we try very hard to ensure forward and backwards com
I am not aware of any similar reports against librbd on Firefly. Do you use
any configuration overrides? Does the filesystem corruption appear while
the instances are running or only after a shutdown / restart of the
instance?
On Wed, Oct 26, 2016 at 12:46 AM, wrote:
> No , we are using Firefly
>> fixed.
>>
>>
>>
>>
>> *Keynes Lee**李* *俊* *賢*
>>
On Thu, Oct 27, 2016 at 6:29 PM, Ahmed Mostafa
wrote:
> I set rbd cache to false and tried to create the same number of instances,
> but the same issue happened again
>
Did you disable it both in your hypervisor node's ceph.conf and also
disable the cache via the QEMU "cache=none" option? Just t
> params:path=iscsi/tgt,bstype=rbd
> Oct 26 14:05:59 datanode1 tgtd: bs_rbd_init(542) bs_rbd_init bsopts=(null)
> Oct 26 14:05:59 datanode1 tgtd: bs_rbd_init(565) bs_rbd_init:
> rados_connect: -2
>
>
> thanks for any advise.
>
> Dillon
>
>
>
> -
On Sun, Oct 30, 2016 at 5:40 AM, Bill WONG wrote:
> any ideas or comments?
Can you set "rbd non blocking aio = false" in your ceph.conf and retry
librbd? This will eliminate at least one context switch on the read IO
path -- which results in increased latency under extremely low queue
depths.
--
osd_disk_threads = 4
> osd_map_cache_size = 1024
> osd_map_cache_bl_size = 128
> osd_mount_options_xfs = "rw,noexec,nodev,noatime,nodiratime,nobarrier"
> osd_recovery_op_priority = 4
> osd_recovery_max_active = 10
> osd_max_backfills = 4
> rbd non blocking aio = false
&g
> Not using qemu in this scenario. Just rbd map && rbd lock. It's more
> that I can't match the output from "rbd lock" against the output from
> "rbd status", because they are using different librados instances.
> I'm just trying to capture who has an image mapped and locked, and to
> those not i
You are correct -- rbd uses the pool id as a reference and now your
pool has a new id. There was a thread on this mailing list a year ago
for the same issue [1].
[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-May/001456.html
On Sun, Nov 20, 2016 at 2:19 AM, Craig Chi wrote:
> Hi Ce
On Tue, Nov 22, 2016 at 5:31 AM, Zhongyan Gu wrote:
> So if initial snapshot is NOT specified, then:
> rbd export-diff image@snap1 will diff all data to snap1. this cmd equals to
> :
> rbd export image@snap1. Is my understand right or not??
While they will both export all data associated w/ imag
ost zero.
>> I don't think it is a designed behavior. export-diff A@snap1 should always
>> get a stable output no matter image A is cloned or not.
>>
>> Please correct me if anything wrong.
>>
>> Thanks,
>> Zhongyan
>>
>>
>>
>>
m
>> Exporting image: 100% complete...done.
>> bbf7cf69a84f3978c66f5eb082fb91ec -
>> output shows A@snap1 DOES NOT equal B@snap1
>>
>> The second case can always be reproduced. What is wrong with the second
>> case?
>>
>> Thanks,
>> Zhongyan
>>
To optimize for non-direct, sequential IO, you'd actually most likely
be better off with smaller RBD object sizes. The rationale is that
each backing object is handled by a single PG and by using smaller
objects, you can distribute the IO load to more PGs (and associated
OSDs) in parallel. The 4MB
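For example, a smaller object size can be selected at image-creation time so that a sequential stream fans out over more backing objects (the names and sizes here are illustrative, not a recommendation):

```shell
# Default object size is 4M; 1M objects spread a sequential stream
# across 4x as many backing objects (and thus PGs/OSDs):
rbd create --size 100G --object-size 1M rbd/seqio-test
rbd info rbd/seqio-test     # "order 20" corresponds to 1 MiB objects
```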
s
> export /import diff feature.
> We found this issue and hopefully it can be confirmed and then fixed soon.
> If you need any more information for debugging, Please just let me know.
>
> Thanks,
> Zhongyan
>
> On Mon, Nov 28, 2016 at 10:05 PM, Jason Dillaman
> wrote:
The rbd CLI has a built-in disk usage command with the Jewel release
that no longer requires the awk example. If you wanted to implement
something similar using the Python API, you would need to use the
"diff_iterate" API method to locate all used extents within an object
and add them up to calcula
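The built-in command referred to above can be invoked as follows (pool/image names assumed):

```shell
# Per-image provisioned size vs actual usage (Jewel or later; the
# fast-diff feature makes this much cheaper to compute):
rbd du rbd/myimage

# Or summarize every image in a pool:
rbd du -p rbd
```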
eph/pull/12218
>
> >
> > Zhongyan
> >
> > On Tue, Nov 29, 2016 at 10:52 PM, Jason Dillaman
> > wrote:
> >
> > > You are correct that there is an issue with the Hammer version when
> > > calculating diffs. If the clone has an object that obsc
On Sat, Dec 3, 2016 at 2:34 AM, Rakesh Parkiti
wrote:
> Hi All,
>
>
> I. Firstly, As per my understanding, RBD image features (exclusive-lock,
> object-map, fast-diff, deep-flatten, journaling) are not yet ready for ceph
> Jewel version?
Incorrect -- these features are the default enabled feature
CCing in ceph-users:
That is a pretty old version of fio and I know a couple rbd-related
bugs / crashes have been fixed since fio 2.2.8. Can you retry using a
more up-to-date version of fio?
On Tue, Dec 6, 2016 at 2:40 AM, wrote:
> Hello Jason,
>
> I'm from ZTE corporation, and we are using cep
The kernel krbd driver doesn't support the new RBD data pool feature.
This will only function using librbd-backed clients.
The error that you are seeing with "rbd feature disable" is unexpected
-- any chance your OSDs are old and don't support the feature?
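For context, a data-pool image is created along these lines (the pool names are made up; this only works with sufficiently new OSDs and librbd-backed clients, not krbd):

```shell
# Image metadata stays in the base pool; data objects go to the data pool:
rbd create --size 10G --data-pool ecdata rbd/ecimage
```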
On Wed, Dec 7, 2016 at 6:07 AM, Aravind
I cannot recreate that "rbd feature disable" error using a master
branch build from yesterday. Can you still reproduce this where your
rbd CLI can create a data pool image but cannot access the image
afterwards?
As for how to run against a librbd-backed client, it depends on what
your end goal is.
On Thu, Dec 8, 2016 at 6:53 AM, Aravind Ramesh
wrote:
> I did a make install in my ceph build and also did make install on the fio
> and ensured the latest binaries were installed. Now, fio is failing with
> below errors for the rbd device with EC pool as data pool. I have shared the
> "rbd ls"
On Sat, Dec 10, 2016 at 6:11 AM, zhong-yan.gu wrote:
> Hi Jason,
> sorry to bother you. A question about io consistency in osd down case :
> 1. a write op arrives primary osd A
> 2. osd A does local write and sends out replica writes to osd B and C
> 3. B finishes write and return ACK to A. Howev
I should clarify that if the OSD has silently failed (e.g. the TCP
connection wasn't reset and packets are just silently being dropped /
not being acked), IO will pause for up to "osd_heartbeat_grace" before
IO can proceed again.
On Sat, Dec 10, 2016 at 8:46 AM, Jason Dillaman
ed its object until its
peers acked the op. You should be able to run a small three OSD test
cluster to see what the OSDs are doing in each case when you down one
or two OSDs.
> Zhongyan
>
>
>
>
>
>
> At 2016-12-10 22:00:20, "Jason Dillaman" wrote:
>>I should
Do you happen to know if there is an existing bugzilla ticket against
this issue?
On Mon, Dec 19, 2016 at 3:46 PM, Mike Lowe wrote:
> It looks like the libvirt (2.0.0-10.el7_3.2) that ships with centos 7.3 is
> broken out of the box when it comes to hot plugging new virtio-scsi devices
> backed
You are unfortunately the second person today to hit an issue where
"rbd remove" incorrectly proceeds when it hits a corner-case error.
First things first, when you configure your new user, you needed to
give it "rx" permissions to the parent image's pool. If you attempted
the clone operation usin
; s/\n/|/; ta' -e
's/\./\\./g'))" | grep -E '(rbd_data|journal|rbd_object_map)'
Once you tweak / verify the list, you can pipe it to the rados rm command.
On Wed, Dec 21, 2016 at 11:17 AM, Ruben Kerkhof wrote:
> Hi Jason,
>
> On Wed, Dec 21, 2016 at 4:53 PM,
The backing objects are most likely sparse and the diff extents just
represent the maximum offset written within each backing object.
Calculating the size of an image composed over potentially hundreds of
thousands of backing objects is not a cheap operation, so it's best to
consider this (and outp
On Thu, Jan 5, 2017 at 7:24 AM, Klemen Pogacnik wrote:
> I'm playing with rbd mirroring with openstack. The final idea is to use it
> for disaster recovery of DB server running on Openstack cluster, but would
> like to test this functionality first.
> I've prepared this configuration:
> - 2 openst
There are very few configuration settings passed between Cinder and
Nova when attaching a volume. I think the only real possibility
(untested) would be to configure two Cinder backends against the same
Ceph cluster using two different auth user ids -- one for cache
enabled and another for cache dis
There have been several similar reports on the mailing list about this
[1][2][3][4] that are always a result of skipping step 6 from the Luminous
upgrade guide [5]. The new (starting Luminous) 'profile rbd'-style caps are
designed to try to simplify caps going forward [6].
TL;DR: your Openstack Ce
Hmm ... it looks like there is a bug w/ RBD locks and IPv6 addresses since
it is failing to parse the address as valid. Perhaps it's barfing on the
"%eth0" scope id suffix within the address.
On Mon, Jul 9, 2018 at 2:47 PM Kevin Olbrich wrote:
> Hi!
>
> I tried to convert an qcow2 file to rbd an
, Jul 9, 2018 at 3:14 PM Jason Dillaman wrote:
> Hmm ... it looks like there is a bug w/ RBD locks and IPv6 addresses since
> it is failing to parse the address as valid. Perhaps it's barfing on the
> "%eth0" scope id suffix within the address.
>
> On Mon, Jul 9, 2018 a
ote:
> 2018-07-09 21:25 GMT+02:00 Jason Dillaman :
>
>> BTW -- are you running Ceph on a one-node computer? I thought IPv6
>> addresses starting w/ fe80 were link-local addresses which would probably
>> explain why an interface scope id was appended. The current IPv6 a
On Tue, Jul 10, 2018 at 2:37 AM Kevin Olbrich wrote:
> 2018-07-10 0:35 GMT+02:00 Jason Dillaman :
>
>> Is the link-local address of "fe80::219:99ff:fe9e:3a86%eth0" at least
>> present on the client computer you used? I would have expected the OSD to
>> determi
That's not possible right now, but work is in-progress to add that to a
future release of Ceph [1].
[1] https://github.com/ceph/ceph/pull/21114
On Mon, Jul 16, 2018 at 7:12 PM Andrei Mikhailovsky
wrote:
> Dear cephers,
>
> Could someone tell me how to check the rbd volumes modification times in
Yes, you just need to enable the "admin socket" in your ceph.conf and then
use "ceph --admin-daemon /path/to/image/admin/socket.asok perf dump".
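A sketch of that setup (the socket path and client id are assumptions):

```shell
# In ceph.conf on the client host, [client] section:
#   admin socket = /var/run/ceph/$cluster-$name.$pid.asok
# Then, while the VM is running, dump the per-image perf counters:
ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok perf dump
```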
On Tue, Jul 17, 2018 at 8:53 AM Mateusz Skala (UST, POL) <
mateusz.sk...@ust-global.com> wrote:
> Hi,
>
> It is possible to get statistics of issued rea
te operations on
> specific RBD image, only for osd and total client operations. I need to get
> statistics on one specific RBD image, to get top used images. It is
> possible?
>
> Regards
>
> Mateusz
>
>
>
> *From:* Jason Dillaman [mailto:jdill...@redhat.com]
>
On Wed, Jul 18, 2018 at 10:55 AM Nikola Ciprich
wrote:
> Hi,
>
> historically I've found many discussions about this topic in
> last few years, but it seems to me to be still a bit unresolved
> so I'd like to open the question again..
>
> In all flash deployments, under 12.2.5 luminous and qemu
On Wed, Jul 18, 2018 at 12:36 PM Nikola Ciprich
wrote:
> Hi Jason,
>
> > Just to clarify: modern / rebased krbd block drivers definitely support
> > layering. The only missing features right now are object-map/fast-diff,
> > deep-flatten, and journaling (for RBD mirroring).
>
> I thought it as
On Wed, Jul 18, 2018 at 12:58 PM Nikola Ciprich
wrote:
> > What's the output from "rbd info nvme/centos7"?
> that was it! the parent had some of unsupported features
> enabled, therefore the child could not be mapped..
>
> so the error message is a bit confusing, but now after disabling
> the fea
On Wed, Jul 18, 2018 at 1:08 PM Nikola Ciprich
wrote:
> > Care to share your "bench-rbd" script (on pastebin or similar)?
> sure, no problem.. it's so short I hope nobody will get offended if I
> paste it right
> here :)
>
> #!/bin/bash
>
> #export LD_PRELOAD="/usr/lib64/libtcmalloc.so.4"
> numjo
:* dilla...@redhat.com
> *Cc:* ceph-users@lists.ceph.com
> *Subject:* [Possibly Forged Email] Re: [ceph-users] Read/write statistics
> per RBD image
>
>
>
> Thank You for help, it is exactly that I need.
>
> Regards
>
> Mateusz
>
>
>
> *From:* Jason Dillaman [ma
need to scrape all
of them. The librbd json dictionary key for librbd contains the image name
so you can determine which is which after you dump the perf counters.
> Regards
>
> Mateusz
>
>
>
> *From:* Jason Dillaman [mailto:jdill...@redhat.com]
> *Sent:* Tuesday, July 24,
On Wed, Jul 25, 2018 at 5:41 PM Alex Gorbachev
wrote:
> I am not sure this related to RBD, but in case it is, this would be an
> important bug to fix.
>
> Running LVM on top of RBD, XFS filesystem on top of that, consumed in RHEL
> 7.4.
>
> When running a large read operation and doing LVM snapsh
On Fri, Jul 27, 2018 at 10:25 AM Paul Emmerich
wrote:
> Does your keyring have the "profile rbd" capabilities on the mon?
>
+1 -- your Nova user will require the privilege to blacklist the dead peer
from the cluster in order to break the exclusive lock.
>
>
> Paul
>
> 2018-07-27 13:49 GMT+02:0
On Fri, Aug 10, 2018 at 3:01 AM Glen Baars
wrote:
> Hello Ceph Users,
>
>
>
> I am trying to implement image journals for our RBD images ( required for
> mirroring )
>
>
>
> rbd feature enable SLOWPOOL/RBDImage journaling --journal-pool SSDPOOL
>
>
>
> When we run the above command we still find
Is this a clean (new) cluster and RBD image you are using for your test or
has it been burned in? When possible (i.e. it has enough free space),
bluestore will essentially turn your random RBD image writes into
sequential writes. This optimization doesn't work for random reads unless
your read patt
On Mon, Aug 13, 2018 at 9:32 AM Emmanuel Lacour
wrote:
> Le 13/08/2018 à 15:21, Jason Dillaman a écrit :
> > Is this a clean (new) cluster and RBD image you are using for your
> > test or has it been burned in? When possible (i.e. it has enough free
> > space), bluestore
On Mon, Aug 13, 2018 at 10:01 AM Emmanuel Lacour
wrote:
> Le 13/08/2018 à 15:55, Jason Dillaman a écrit :
>
>
>
>>
>> so problem seems located on "rbd" side ...
>>
>
> That's a pretty big apples-to-oranges comparison (4KiB random IO to 4MiB
On Mon, Aug 13, 2018 at 10:44 AM Emmanuel Lacour
wrote:
> Le 13/08/2018 à 16:29, Jason Dillaman a écrit :
>
>
>
> For such a small benchmark (2 GiB), I wouldn't be surprised if you are not
> just seeing the Filestore-backed OSDs hitting the page cache for the reads
> wh
On Fri, Aug 10, 2018 at 8:29 AM Sage Weil wrote:
> On Fri, 10 Aug 2018, Paweł Sadowski wrote:
> > On 08/09/2018 04:39 PM, Alex Elder wrote:
> > > On 08/09/2018 08:15 AM, Sage Weil wrote:
> > >> On Thu, 9 Aug 2018, Piotr Dałek wrote:
> > >>> Hello,
> > >>>
> > >>> At OVH we're heavily utilizing sn
[1] in order
to optimize the worst-case memory usage of the rbd-mirror daemon for
environments w/ hundreds or thousands of replicated images.
> Kind regards,
>
> *Glen Baars*
>
>
>
> *From:* Jason Dillaman
> *Sent:* Saturday, 11 August 2018 11:28 PM
> *To:* Glen Baars
On Tue, Jul 31, 2018 at 11:10 AM Ilja Slepnev wrote:
> Hi,
>
> is it possible to establish RBD mirroring between replicated and erasure
> coded pools?
> I'm trying to setup replication as described on
> http://docs.ceph.com/docs/master/rbd/rbd-mirroring/ without success.
> Ceph 12.2.5 Luminous
>
>
> features: layering, exclusive-lock, object-map, fast-diff,
> deep-flatten, journaling
>
> flags:
>
> create_timestamp: Sat May 5 11:39:07 2018
>
> journal: 37c8974b0dc51
>
> mirroring state: disabled
>
>
any old
> issues from previous versions.
>
> Kind regards,
>
> *Glen Baars*
>
>
>
> *From:* Jason Dillaman
> *Sent:* Tuesday, 14 August 2018 7:43 PM
> *To:* Glen Baars
> *Cc:* dillaman ; ceph-users <
> ceph-users@lists.ceph.com>
> *Subject:* Re: [
ere are 0 metadata on this image.
>
> Kind regards,
>
> *Glen Baars*
>
>
>
> *From:* Jason Dillaman
> *Sent:* Tuesday, 14 August 2018 9:00 PM
> *To:* Glen Baars
> *Cc:* dillaman ; ceph-users <
> ceph-users@lists.ceph.com>
> *Subject:* Re: [ceph-users] RBD j
holding the
lock, the request to enable journaling is sent over to that client but it's
missing all the journal options. I'll open a tracker ticket to fix the
issue.
Thanks.
> Kind regards,
>
> *Glen Baars*
>
>
>
> *From:* Jason Dillaman
> *Sent:* Tuesday, 14 Aug
t; *Sent:* Tuesday, 14 August 2018 9:36 PM
> *To:* dilla...@redhat.com
> *Cc:* ceph-users
> *Subject:* Re: [ceph-users] RBD journal feature
>
>
>
> Hello Jason,
>
>
>
> Thanks for your help. Here is the output you asked for also.
>
>
>
> https://pastebin.
The RBD snapshot create timestamp required a change on the OSD side to
record the timestamp. Therefore, for older clusters or pre-luminous
snapshots, there is no way to retrieve a creation timestamp since it just
doesn't exist anywhere.
On Fri, Aug 17, 2018 at 1:39 AM hnuzhoulin2 wrote:
> Sorry,
our help 😊
>>
>> Kind regards,
>>
>> *Glen Baars*
>>
>> *From:* Jason Dillaman
>> *Sent:* Thursday, 16 August 2018 10:21 PM
>>
>>
>> *To:* Glen Baars
>> *Cc:* ceph-users
>> *Subject:* Re: [ceph-users] RBD journal feature
>&
Can you collect any librados / librbd debug logs and provide them via
pastebin? Just add / tweak the following in your "/etc/ceph/ceph.conf"
file's "[client]" section and re-run to gather the logs.
[client]
log file = /path/to/a/log/file
debug ms = 1
debug monc = 20
debug objecter = 20
debug rados
On Thu, Aug 23, 2018 at 10:56 AM sat wrote:
>
> Hi,
>
>
> I'm trying to make a one-way RBD mirroed cluster between two Ceph clusters.
> But it
> hasn't worked yet. It seems to succeed, but after making an RBD image from
> local cluster,
> it's considered as "unknown".
>
> ```
> $ sudo rbd --clus
On Sat, Aug 25, 2018 at 10:29 AM Fyodor Ustinov wrote:
>
> Hi!
>
> Configuration:
> rbd - erasure pool
> rbdtier - tier pool for rbd
>
> ceph osd tier add-cache rbd rbdtier 549755813888
> ceph osd tier cache-mode rbdtier writeback
>
> Create new rbd block device:
> rbd create --size 16G rbdtest
>
On Mon, Aug 27, 2018 at 3:29 AM Bartosz Rabiega
wrote:
>
> Bumping the topic.
>
>
> So, what do you think guys?
Not sure if you saw my response from August 13th, but I stated that
this is something that you should be able to build right now using the
RADOS Python bindings and the rbd CLI. It woul
On Mon, Aug 27, 2018 at 11:19 PM đức phạm xuân wrote:
>
> Hello Jason Dillaman,
>
> I'm working with Ceph Object Storage Multi-Site v2, ceph's version is mimic.
> Now I want to delay replicate data from a master site to a slave site. I
> don't know whether do
On Mon, Sep 10, 2018 at 6:36 AM 展荣臻 wrote:
>
> Hi everyone:
>
> I want to export ceph rbd via iscsi.
> ceph version is 10.2.11, centos 7.5, kernel 3.10.0-862.el7.x86_64,
> and i also installed
> tcmu-runner, targetcli-fb, python-rtslib, ceph-iscsi-config, ceph-iscsi-cli.
> but when i l
On Mon, Sep 10, 2018 at 1:35 PM wrote:
>
> Hi,
> We utilize Ceph RBDs for our users' storage and need to keep data
> synchronized across data centres. For this we rely on 'rbd export-diff /
> import-diff'. Lately we have been noticing cases in which the file system on
> the 'destination RBD' is
Any chance you know the LBA or byte offset of the corruption so I can
compare it against the log?
On Wed, Sep 12, 2018 at 8:32 PM wrote:
>
> Hi Jason,
>
> On 2018-09-10 11:15:45-07:00 ceph-users wrote:
>
> On 2018-09-10 11:04:20-07:00 Jason Dillaman wrote:
>
>
> &
On Wed, Sep 12, 2018 at 10:15 PM wrote:
>
> On 2018-09-12 17:35:16-07:00 Jason Dillaman wrote:
>
>
> Any chance you know the LBA or byte offset of the corruption so I can
> compare it against the log?
>
> The LBAs of the corruption are 0xA74F000 through 175435776
Are y
On Thu, Sep 13, 2018 at 1:54 PM wrote:
>
> On 2018-09-12 19:49:16-07:00 Jason Dillaman wrote:
>
>
> On Wed, Sep 12, 2018 at 10:15 PM <patrick.mcl...@sony.com> wrote:
> >
> > On 2018-09-12 17:35:16-07:00 Jason Dillaman wrote:
> >
> >
>
Thanks for reporting this -- it looks like we broke the parsing of
command-line config overrides. I've opened a tracker ticket against
the issue [1].
On Wed, Sep 19, 2018 at 2:49 PM Vikas Rana wrote:
>
> Hi there,
>
> With default cluster name "ceph" I can map rbd
On Fri, Sep 21, 2018 at 6:48 AM Glen Baars wrote:
>
> Hello Ceph Users,
>
>
>
> We have been using ceph-iscsi-cli for some time now with vmware and it is
> performing ok.
>
>
>
> We would like to use the same iscsi service to store our Hyper-v VMs via
> windows clustered shared volumes. When we
It *should* work against any recent upstream kernel (>=4.16) and
up-to-date dependencies [1]. If you encounter any distro-specific
issues (like the PR that Mike highlighted), we would love to get them
fixed.
[1] http://docs.ceph.com/docs/master/rbd/iscsi-target-cli-manual-install/
On Mon, Sep 24,
d come down to your personal preferences re: baked-in time
for the release vs release EOL timing.
> Le lun. 24 sept. 2018 à 18:08, Jason Dillaman a écrit :
>>
>> It *should* work against any recent upstream kernel (>=4.16) and
>> up-to-date dependencies [1]. If you encounter a
lun. 24 sept. 2018 à 19:11, Jason Dillaman a écrit :
>>
>> On Mon, Sep 24, 2018 at 12:18 PM Florian Florensa
>> wrote:
>> >
>> > Currently building 4.18.9 on ubuntu to try it out, also wondering if i
>> > should plan for xenial+luminous or directly t
/09/28 2:26 pm, Andre Goree wrote:
> > On 2018/08/21 1:24 pm, Jason Dillaman wrote:
> >> Can you collect any librados / librbd debug logs and provide them via
> >> pastebin? Just add / tweak the following in your "/etc/ceph/ceph.conf"
> >> file'
On Tue, Oct 2, 2018 at 1:25 PM Andre Goree wrote:
>
> On 2018/10/02 10:26 am, Andre Goree wrote:
> > On 2018/10/02 9:54 am, Jason Dillaman wrote:
> >> Perhaps that pastebin link has the wrong log pasted? The provided log
> >> looks like it's associated wit
On Tue, Oct 2, 2018 at 1:48 PM Andre Goree wrote:
>
> On 2018/10/02 1:29 pm, Jason Dillaman wrote:
> > On Tue, Oct 2, 2018 at 1:25 PM Andre Goree wrote:
> >>
> >>
> >> Unfortunately, it would appear that I'm not getting anything in the
> >>
On Tue, Oct 2, 2018 at 4:47 PM Vikas Rana wrote:
>
> Hi,
>
> We have a CEPH 3 node cluster at primary site. We created a RBD image and the
> image has about 100TB of data.
>
> Now we installed another 3 node cluster on secondary site. We want to
> replicate the image at primary site to this new
Are your OSDs on that 192.168.3.x subnet? What daemons are running on
192.168.3.21?
> I could do ceph -s from both side and they can see each other. Only rbd
> command is having issue.
>
> Thanks,
> -Vikas
>
>
>
>
> On Tue, Oct 2, 2018 at 5:14 PM Jason Dillaman wrote: