the UNIX domain socket to '/var/run/ceph/cephdr-client.admin.asok': (17)
> File exists
>
>
>
> Did we miss anything, and why didn't the snapshot replicate to the DR side?
>
> Thanks,
> -Vikas
>
> -Original Message-
> From: Jason Dillaman
>
ev/nbd0: can't read superblock
Doesn't look like you are mapping at a snapshot.
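For reference, a sketch of mapping the DR copy at a snapshot with rbd-nbd; the pool, image and snapshot names here are placeholders:

```
# Map the DR copy at a snapshot, read-only (names are placeholders).
rbd-nbd map --read-only dr_pool/dr_image@snap1

# A filesystem on a read-only snapshot usually needs a mount that
# skips journal recovery (works for XFS and ext4):
mount -o ro,norecovery /dev/nbd0 /mnt
```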
>
> Any suggestions for testing the DR copy another way, or am I doing something
> wrong?
>
> Thanks,
> -Vikas
>
> -Original Message-
> From: Jason Dillaman
> Sent: T
unt: /mnt: WARNING: device write-protected, mounted read-only.
$ ll /mnt/
total 0
-rw-r--r--. 1 root root 0 Nov 21 10:20 hello.world
> Thanks,
> -Vikas
>
> -Original Message-
> From: Jason Dillaman
> Sent: Thursday, November 21, 2019 9:58 AM
> To: Vikas Rana
> Cc: ce
On Thu, Nov 21, 2019 at 9:56 AM Jason Dillaman wrote:
>
> On Thu, Nov 21, 2019 at 8:49 AM Vikas Rana wrote:
> >
> > Thanks Jason for such a quick response. We are on 12.2.10.
> >
> > Checksumming a 200TB image will take a long time.
>
> How would mounting an
way.
>
> Thanks,
> -Vikas
>
> -Original Message-
> From: Jason Dillaman
> Sent: Thursday, November 21, 2019 8:33 AM
> To: Vikas Rana
> Cc: ceph-users
> Subject: Re: [ceph-users] RBD Mirror DR Testing
>
> On Thu, Nov 21, 2019 at 8:29 AM Vikas Rana wrote:
On Thu, Nov 21, 2019 at 8:29 AM Vikas Rana wrote:
>
> Hi all,
>
>
>
> We have a 200TB RBD image which we are replicating using RBD mirroring.
>
> We want to test the DR copy and make sure that we have a consistent copy in
> case the primary site is lost.
>
>
>
> We did it previously and promoted the
On Tue, Nov 19, 2019 at 4:42 PM Florian Haas wrote:
>
> On 19/11/2019 22:34, Jason Dillaman wrote:
> >> Oh totally, I wasn't arguing it was a bad idea for it to do what it
> >> does! I just got confused by the fact that our mon logs showed what
> >> looked l
On Tue, Nov 19, 2019 at 4:31 PM Florian Haas wrote:
>
> On 19/11/2019 22:19, Jason Dillaman wrote:
> > On Tue, Nov 19, 2019 at 4:09 PM Florian Haas wrote:
> >>
> >> On 19/11/2019 21:32, Jason Dillaman wrote:
> >>>> What, exactly, is the &quo
On Tue, Nov 19, 2019 at 4:09 PM Florian Haas wrote:
>
> On 19/11/2019 21:32, Jason Dillaman wrote:
> >> What, exactly, is the "reasonably configured hypervisor" here, in other
> >> words, what is it that grabs and releases this lock? It's evidently not
>
On Tue, Nov 19, 2019 at 2:49 PM Florian Haas wrote:
>
> On 19/11/2019 20:03, Jason Dillaman wrote:
> > On Tue, Nov 19, 2019 at 1:51 PM shubjero wrote:
> >>
> >> Florian,
> >>
> >> Thanks for posting about this issue. This is something that we ha
On Tue, Nov 19, 2019 at 1:51 PM shubjero wrote:
>
> Florian,
>
> Thanks for posting about this issue. This is something that we have
> been experiencing (stale exclusive locks) with our OpenStack and Ceph
> cloud more frequently as our datacentre has had some reliability
> issues recently with pow
> [rep01 (7T)]
Did you rescan the LUNs in VMware after this latest resize attempt?
What kernel and tcmu-runner version are you using?
> On Fri, 25 Oct 2019 at 09:24, Jason Dillaman wrote:
>>
>> On Fri, Oct 25, 2019 at 9:13 AM Steven Vacaroaia wrote:
>> >
On Fri, Oct 25, 2019 at 9:13 AM Steven Vacaroaia wrote:
>
> Hi,
> I am trying to increase the size of a datastore made available through ceph
> iscsi rbd.
> The steps I followed are depicted below.
> Basically gwcli reports correct data and even the VMware device capacity is
> correct, but when I tried to inc
Have you updated your "/etc/multipath.conf" as documented here [1]?
You should have ALUA configured, but that doesn't appear to be the case
with your provided output.
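For comparison, the relevant device entry from the linked documentation looks roughly like the following; treat it as a sketch, since exact values can differ between releases:

```
devices {
        device {
                vendor                 "LIO-ORG"
                product                "TCMU device"
                hardware_handler       "1 alua"
                path_grouping_policy   "failover"
                path_selector          "queue-length 0"
                failback               60
                path_checker           tur
                prio                   alua
                prio_args              exclusive_pref_bit
                fast_io_fail_tmo       25
                no_path_retry          queue
        }
}
```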
On Wed, Oct 16, 2019 at 11:36 PM 展荣臻(信泰) wrote:
>
>
>
>
> > -----Original Message-----
> > From: &qu
On Wed, Oct 16, 2019 at 9:52 PM 展荣臻(信泰) wrote:
>
>
>
>
> > -----Original Message-----
> > From: "Jason Dillaman"
> > Sent: 2019-10-16 20:33:47 (Wednesday)
> > To: "展荣臻(信泰)"
> > Cc: ceph-users
> > Subject: Re: [ceph-users] ceph iscsi question
On Wed, Oct 16, 2019 at 2:35 AM 展荣臻(信泰) wrote:
>
> hi,all
> we deploy ceph with ceph-ansible. OSDs, MONs and the iSCSI daemons run in
> docker.
> I create iscsi target according to
> https://docs.ceph.com/docs/luminous/rbd/iscsi-target-cli/.
> I discovered and logged in to the iscsi target on another
On Wed, Oct 2, 2019 at 9:50 AM Kilian Ries wrote:
>
> Hi,
>
>
> I'm running a ceph mimic cluster with 4x iSCSI gateway nodes. The cluster was
> set up via ceph-ansible v3.2-stable. I just checked my nodes and saw that only
> two of the four configured iscsi gw nodes are working correctly. I first
> no
On Fri, Sep 27, 2019 at 5:18 AM Matthias Leopold
wrote:
>
>
> Hi,
>
> I was positively surprised to see ceph-iscsi-3.3 available today.
> Unfortunately there's an error when trying to install it from yum repo:
>
> ceph-iscsi-3.3-1.el7.noarch.rp FAILED
> 100%
> [=
er session and not the status updates. I've opened a
tracker ticket for this issue [1].
Thanks.
On Fri, Sep 13, 2019 at 12:44 PM Oliver Freyermuth
wrote:
>
> On 13.09.19 at 18:38, Jason Dillaman wrote:
> > On Fri, Sep 13, 2019 at 11:30 AM Oliver Freyermuth
> > wrote:
>
On Fri, Sep 13, 2019 at 11:30 AM Oliver Freyermuth
wrote:
>
> On 13.09.19 at 17:18, Jason Dillaman wrote:
> > On Fri, Sep 13, 2019 at 10:41 AM Oliver Freyermuth
> > wrote:
> >>
> >> On 13.09.19 at 16:30, Jason Dillaman wrote:
> >>> On Fri, Sep 1
On Fri, Sep 13, 2019 at 10:41 AM Oliver Freyermuth
wrote:
>
> On 13.09.19 at 16:30, Jason Dillaman wrote:
> > On Fri, Sep 13, 2019 at 10:17 AM Jason Dillaman wrote:
> >>
> >> On Fri, Sep 13, 2019 at 10:02 AM Oliver Freyermuth
> >> wrote:
> >>>
On Fri, Sep 13, 2019 at 10:17 AM Jason Dillaman wrote:
>
> On Fri, Sep 13, 2019 at 10:02 AM Oliver Freyermuth
> wrote:
> >
> > Dear Jason,
> >
> > thanks for the very detailed explanation! This was very instructive.
> > Sadly, the watchers look correct -
On Fri, Sep 13, 2019 at 10:02 AM Oliver Freyermuth
wrote:
>
> Dear Jason,
>
> thanks for the very detailed explanation! This was very instructive.
> Sadly, the watchers look correct - see details inline.
>
> On 13.09.19 at 15:02, Jason Dillaman wrote:
> > On Thu, Sep
chers" command:
$ rados -p <pool> listwatchers rbd_mirroring
watcher=1.2.3.4:0/199388543 client.4154 cookie=94769010788992
watcher=1.2.3.4:0/199388543 client.4154 cookie=94769061031424
In my case, the "4154" from "client.4154" is the unique global id for
my connection to the clust
he socket ... and as we have already discussed, it's closing the
socket due to the IO timeout being hit ... and it's hitting the IO
timeout due to a deadlock caused by memory pressure from rbd-nbd forcing
IO to be pushed from the XFS cache back down into rbd-nbd.
> Am 10.09.19 um 16:10 schri
like we accidentally broke this in Nautilus when
image live-migration support was added. I've opened a new tracker
ticket to fix this [1].
> Cheers and thanks again,
> Oliver
>
> On 2019-09-10 23:17, Oliver Freyermuth wrote:
> > Dear Jason,
> >
> > On 20
On Wed, Sep 11, 2019 at 7:48 AM Mike O'Connor wrote:
>
> Hi All
>
> I'm having a problem running rbd export from cron; rbd expects a tty, which
> cron does not provide.
> I tried --no-progress, but this did not help.
>
> Any ideas ?
I don't think that error is coming from the 'rbd' CLI:
$ (se
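For comparison, "rbd export" itself runs fine without a tty; a minimal cron-style sketch, where the paths and image name are placeholders:

```
# --no-progress suppresses the progress bar; stdin/stdout/stderr are
# redirected so nothing in the pipeline assumes a terminal.
rbd export --no-progress rbd/myimage /backup/myimage.img \
    </dev/null >>/var/log/rbd-export.log 2>&1
```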
On Tue, Sep 10, 2019 at 2:08 PM Oliver Freyermuth
wrote:
>
> Dear Jason,
>
> On 2019-09-10 18:50, Jason Dillaman wrote:
> > On Tue, Sep 10, 2019 at 12:25 PM Oliver Freyermuth
> > wrote:
> >>
> >> Dear Cephalopodians,
> >>
> >> I have t
On Tue, Sep 10, 2019 at 12:25 PM Oliver Freyermuth
wrote:
>
> Dear Cephalopodians,
>
> I have two questions about RBD mirroring.
>
> 1) I can not get it to work - my setup is:
>
> - One cluster holding the live RBD volumes and snapshots, in pool "rbd",
> cluster name "ceph",
> running l
On Tue, Sep 10, 2019 at 9:46 AM Marc Schöchlin wrote:
>
> Hello Mike,
>
> as described i set all the settings.
>
> Unfortunately it crashed also with these settings :-(
>
> Regards
> Marc
>
> [Tue Sep 10 12:25:56 2019] Btrfs loaded, crc32c=crc32c-intel
> [Tue Sep 10 12:25:57 2019] EXT4-fs (dm-0):
On Wed, Jul 17, 2019 at 3:07 PM wrote:
>
> All;
>
> I'm trying to firm up my understanding of how Ceph works, and of its
> management tools and capabilities.
>
> I stumbled upon this:
> http://docs.ceph.com/docs/nautilus/rados/configuration/mon-lookup-dns/
>
> It got me wondering; how do you co
bd default features = 125
> [global]
> fsid = 494971c1-75e7-4866-b9fb-e98cb8171473
> mon_initial_members = meghdootctr
> mon_host = 10.236.228.XX
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> public network = 10.236.228.64/27
>
On Mon, Aug 26, 2019 at 5:01 AM Wido den Hollander wrote:
>
>
>
> On 8/22/19 5:49 PM, Jason Dillaman wrote:
> > On Thu, Aug 22, 2019 at 11:29 AM Wido den Hollander wrote:
> >>
> >>
> >>
> >> On 8/22/19 3:59 PM, Jason Dillaman wrote:
> >
On Thu, Aug 22, 2019 at 11:29 AM Wido den Hollander wrote:
>
>
>
> On 8/22/19 3:59 PM, Jason Dillaman wrote:
> > On Thu, Aug 22, 2019 at 9:23 AM Wido den Hollander wrote:
> >>
> >> Hi,
> >>
> >> In a couple of situations I have encountered th
On Thu, Aug 22, 2019 at 9:23 AM Wido den Hollander wrote:
>
> Hi,
>
> In a couple of situations I have encountered that Virtual Machines
> running on RBD had a high I/O-wait, nearly 100%, on their vdX (VirtIO)
> or sdX (Virtio-SCSI) devices while they were performing CPU intensive tasks.
>
> These
On Tue, Aug 20, 2019 at 9:23 AM V A Prabha wrote:
> I too face the same problem as mentioned by Sat
> All the images created at the primary site are in the state: down+unknown
> Hence on the secondary site the images are 0% up + syncing all the time
> No progress
> The only error log th
ility to define timeouts.
>
> What's next? Is it a good idea to do a binary search between 12.2.12 and 12.2.5?
>
> From my point of view (without in-depth knowledge of rbd-nbd/librbd), my
> assumption is that this problem might be caused by rbd-nbd code and not by
> librbd.
> The
On Fri, Jul 26, 2019 at 9:26 AM Mykola Golub wrote:
>
> On Fri, Jul 26, 2019 at 04:40:35PM +0530, Ajitha Robert wrote:
> > Thank you for the clarification.
> >
> > But I was trying with openstack-cinder: when I load some data into the
> > volume, around 50GB, the image sync will stop at 5% or som
ds
> Marc
>
> On 24.07.19 at 07:55, Marc Schöchlin wrote:
> > Hi Jason,
> >
> > On 24.07.19 at 00:40, Jason Dillaman wrote:
> >>> Sure, which kernel do you prefer?
> >> You said you have never had an issue w/ rbd-nbd 12.2.5 in your Xen
>
On Tue, Jul 23, 2019 at 6:58 AM Marc Schöchlin wrote:
>
>
> On 23.07.19 at 07:28, Marc Schöchlin wrote:
> >
> > Okay, I already experimented with high timeouts (i.e. 600 seconds). As I
> > recall, this led to a pretty unusable system if I put high amounts of IO
> > on the EC volume.
> > Th
ble.. by enabling mirroring on the entire cinder pool in pool mode
> instead of image mode of rbd mirroring. And we can skip the
> replication_enabled is true spec in the cinder type..
Journaling is required for RBD mirroring.
>
>
>
> On Mon, Jul 22, 2019 at 11:13 PM Jason Dillaman wrot
he last object it attempted to
copy. From that, you could use "ceph osd map" to figure out the
primary OSD for that object.
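The lookup mentioned above can be sketched as follows; the pool and object names are made-up placeholders:

```
# Print the PG and the up/acting OSD sets for the named object;
# the first OSD in the acting set is the primary.
ceph osd map mypool rbd_data.1234567890ab.0000000000000000
```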
> the image which was in syncing mode, showed read only status in secondary.
>
>
>
> On Mon, 22 Jul 2019, 17:36 Jason Dillaman, wrote:
>>
On Sun, Jul 21, 2019 at 8:25 PM Ajitha Robert wrote:
>
> I have a rbd mirroring setup with primary and secondary clusters as peers
> and I have a pool enabled image mode.., In this i created a rbd image ,
> enabled with journaling.
>
> But whenever i enable mirroring on the image, I m getting
On Thu, Jul 18, 2019 at 1:47 PM Marc Schöchlin wrote:
>
> Hello cephers,
>
> rbd-nbd crashes in a reproducible way here.
I don't see a crash report in the log below. Is it really crashing or
is it shutting down? If it is crashing and it's reproducible, can you
install the debuginfo packages, atta
On Mon, Jul 15, 2019 at 4:50 PM Michel Raabe wrote:
>
> Hi,
>
>
> On 15.07.19 22:42, dhils...@performair.com wrote:
> > Paul;
> >
> > If I understand you correctly:
> > I will have 2 clusters, each named "ceph" (internally).
> > As such, each will have a configuration file at: /etc/ceph/ceph
On Mon, Jul 8, 2019 at 10:07 AM M Ranga Swami Reddy
wrote:
>
> Thanks Jason.
> Btw, we use Ceph with OpenStack Cinder, and Cinder releases (Q and above)
> support multi-attach. Can we use OpenStack Cinder (Q release) with
> Ceph rbd for multi-attach functionality?
I can't speak to the Ope
On Mon, Jul 8, 2019 at 8:33 AM M Ranga Swami Reddy wrote:
>
> Hello - Does ceph rbd support multi-attach volumes (with ceph luminous
> version)?
Yes, you just need to ensure the exclusive-lock and dependent features
are disabled on the image. When creating a new image, you can use the
"--image-sh
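For an existing image, a sketch of disabling exclusive-lock and its dependent features; the image name is a placeholder, and the dependent features have to be disabled before exclusive-lock itself:

```
# object-map, fast-diff and journaling all depend on exclusive-lock,
# so they must be disabled first.
rbd feature disable rbd/shared-image object-map fast-diff journaling
rbd feature disable rbd/shared-image exclusive-lock
```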
dvanced debug_rbd_replay 0/0
> global advanced debug_refs 0/0
> global basic log_file /dev/null
> *
> global advanced mon_cluster_log_file /dev/null
>
pened a tracker ticket [1] to support re-opening the
admin socket after the MON configs are received (if not overridden in
the local conf).
> On 6/24/2019 1:12 PM, Jason Dillaman wrote:
> > On Mon, Jun 24, 2019 at 2:05 PM Alex Litvak
> > wrote:
> >>
> >> Jason,
On Mon, Jun 24, 2019 at 4:05 PM Paul Emmerich wrote:
>
> No.
>
> tcmu-runner disables the cache automatically overriding your ceph.conf
> setting.
Correct. For safety purposes, we don't want to support a writeback
cache when failover between different gateways is possible.
>
> Paul
>
> --
> Paul
4
> global advanced osd_scrub_load_threshold 0.01
> global advanced osd_scrub_sleep 0.10
> global advanced perf true
> global advanced public_network 10.0.40.0/23
> *
> global
On Sun, Jun 23, 2019 at 4:27 PM Alex Litvak
wrote:
>
> Hello everyone,
>
> I encounter this in nautilus client and not with mimic. Removing admin
> socket entry from config on client makes no difference
>
> Error:
>
> rbd ls -p one
> 2019-06-23 12:58:29.344 7ff2710b0700 -1 set_mon_vals failed to
On Wed, Jun 19, 2019 at 6:25 PM Brett Chancellor
wrote:
>
> Background: We have a few ceph clusters; each serves multiple OpenStack
> clusters. Each cluster has its own set of pools.
>
> I'd like to move ~50TB of volumes from an old cluster (we'll call the pool
> cluster01-volumes) to an existin
On Wed, Jun 12, 2019 at 9:50 AM Rafael Diaz Maurin
wrote:
>
> Hello Jason,
>
> On 11/06/2019 at 15:31, Jason Dillaman wrote:
> >> 4- I export the snapshot from the source pool and I import the snapshot
> >> towards the destination pool (in the pipe)
> >>
nd 9.
>
You will need this PR [1] to bump the version support in the
dashboard. It should have been backported to Nautilus as part of
v14.2.2.
> Thanks again.
>
>
>
>
>
>
>
>
> From: Jason Dillaman
> Sent: Tuesday, June 11, 2019 9:37 AM
> To: Wesley Dillingha
On Tue, Jun 11, 2019 at 9:29 AM Wesley Dillingham
wrote:
>
> Hello,
>
> I am hoping to expose a REST API to a remote client group who would like to
> do things like:
>
>
> Create, List, and Delete RBDs and map them to gateway (make a LUN)
> Create snapshots, list, delete, and rollback
> Determine
On Tue, Jun 11, 2019 at 9:25 AM Rafael Diaz Maurin
wrote:
>
> Hello,
>
> I have a problem when I want to validate (using md5 hashes) rbd
> export/import diff from a rbd source-pool (the production pool) towards
> another rbd destination-pool (the backup pool).
>
> Here is the algorithm:
> 1- Firs
On Mon, Jun 10, 2019 at 1:50 PM Jonas Jelten wrote:
>
> When I run:
>
> rbd map --name client.lol poolname/somenamespace/imagename
>
> The image is mapped to /dev/rbd0 and
>
> /dev/rbd/poolname/imagename
>
> I would expect the rbd to be mapped to (the rbdmap tool tries this name):
>
> /dev/r
On Fri, Jun 7, 2019 at 7:22 AM Sakirnth Nagarasa
wrote:
>
> On 6/6/19 5:09 PM, Jason Dillaman wrote:
> > On Thu, Jun 6, 2019 at 10:13 AM Sakirnth Nagarasa
> > wrote:
> >>
> >> On 6/6/19 3:46 PM, Jason Dillaman wrote:
> >>> Can you run "rbd
On Thu, Jun 6, 2019 at 10:13 AM Sakirnth Nagarasa
wrote:
>
> On 6/6/19 3:46 PM, Jason Dillaman wrote:
> > Can you run "rbd trash ls --all --long" and see if your image
> > is listed?
>
> No, it is not listed.
>
> I did run:
> rbd trash ls --all --long ${
On Thu, Jun 6, 2019 at 5:07 AM Sakirnth Nagarasa
wrote:
>
> Hello,
>
> Our ceph version is ceph nautilus (14.2.1).
> We periodically create snapshots of an rbd image (50 TB). In order to
> restore some data, we have cloned a snapshot.
> To delete the snapshot we ran: rbd rm ${POOLNAME}/${IMAGE}
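As an aside, removing a snapshot, a clone, and the image itself are different commands; a sketch using the same placeholder style:

```
rbd snap rm ${POOLNAME}/${IMAGE}@${SNAPNAME}   # remove one snapshot
rbd rm ${POOLNAME}/${CLONE}                    # remove a cloned image
rbd rm ${POOLNAME}/${IMAGE}                    # remove the image itself
```

A protected snapshot needs "rbd snap unprotect" first, and any clones of it have to be flattened or removed before it can be unprotected.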
ately the command seems to be stuck and nothing happens, both ports
> > 7800 / 6789 / 22.
> >
> > We can't find any logs on any monitors.
> >
> > Thanks !
> >
> > -----Original Message-----
> > From: ceph-users On Behalf Of Jason
22.
>
> We can't find any logs on any monitors.
Try running "rbd -c /path/to/conf --keyring /path/to/keyring ls" or
"ceph -c /path/to/conf --keyring /path/to/keyring health" just to test
connectivity first.
> Thanks !
>
> -----Original Message-----
>
> 7800 / 6789 / 22.
>
> We can't find any logs on any monitors.
>
> Thanks !
>
> -----Original Message-----
> From: ceph-users On Behalf Of Jason
> Dillaman
> Sent: 04 June 2019 14:14
> To: 解决
> Cc: ceph-users
> Subject: Re: [ceph-users] rbd.
On Tue, Jun 4, 2019 at 4:55 AM 解决 wrote:
>
> Hi all,
> We use ceph (luminous) + openstack (queens) in my test environment. The
> virtual machine does not start properly after the disaster test, and the image
> of the virtual machine cannot create a snap. The procedure is as follows:
> #!/usr/bin/env p
On Tue, Jun 4, 2019 at 8:07 AM Jason Dillaman wrote:
>
> On Tue, Jun 4, 2019 at 4:45 AM Burkhard Linke
> wrote:
> >
> > Hi,
> >
> > On 6/4/19 10:12 AM, CUZA Frédéric wrote:
> >
> > Hi everyone,
> >
> >
> >
> > We want to mig
On Tue, Jun 4, 2019 at 4:45 AM Burkhard Linke
wrote:
>
> Hi,
>
> On 6/4/19 10:12 AM, CUZA Frédéric wrote:
>
> Hi everyone,
>
>
>
> We want to migrate data from one cluster (Hammer) to a new one (Mimic). We
> do not wish to upgrade the actual cluster as all the hardware is EOS and we
> upgrade t
On Fri, May 24, 2019 at 6:09 PM Marc Roos wrote:
>
>
> I still have some accounts listing either "allow" or not. What should
> this be? Should this not be kept uniform?
What if the profile in the future adds denials? What does "allow
profile XYX" (or "deny profile rbd") mean when it has other embe
s Alva
> Sent from Gmail Mobile
>
> On Tue, May 21, 2019, 4:49 AM Jason Dillaman wrote:
>>
>> On Mon, May 20, 2019 at 2:17 PM Marc Schöchlin wrote:
>> >
>> > Hello cephers,
>> >
>> > we have a few systems which utilize an rbd-nbd map/mount to
On Tue, May 21, 2019 at 11:28 AM Marc Schöchlin wrote:
>
> Hello Jason,
>
> On 20.05.19 at 23:49, Jason Dillaman wrote:
>
> On Mon, May 20, 2019 at 2:17 PM Marc Schöchlin wrote:
>
> Hello cephers,
>
> we have a few systems which utilize an rbd-nbd map/mount to
On Mon, May 20, 2019 at 2:17 PM Marc Schöchlin wrote:
>
> Hello cephers,
>
> we have a few systems which utilize an rbd-nbd map/mount to get access to an
> rbd volume.
> (This problem seems to be related to "[ceph-users] Slow requests from
> bluestore osds" (the original thread))
>
> Unfortunately
On Mon, May 20, 2019 at 11:14 AM Rainer Krienke wrote:
>
> On 20.05.19 at 09:06, Jason Dillaman wrote:
>
> >> $ rbd --namespace=testnamespace map rbd/rbdtestns --name client.rainer
> >> --keyring=/etc/ceph/ceph.keyring
> >> rbd: sysfs write failed
> &g
On Mon, May 20, 2019 at 9:08 AM Rainer Krienke wrote:
>
> Hello,
>
> just saw this message on the client when trying and failing to map the
> rbd image:
>
> May 20 08:59:42 client kernel: libceph: bad option at
> '_pool_ns=testnamespace'
You will need kernel v4.19 (or later), I believe, to utilize
On Mon, May 20, 2019 at 8:56 AM Rainer Krienke wrote:
>
> Hello,
>
> on a ceph Nautilus cluster (14.2.1) running on Ubuntu 18.04 I try to set
> up rbd images with namespaces in order to allow different clients to
> access only their "own" rbd images in different namespaces in just one
> pool. The
On Wed, May 8, 2019 at 7:26 AM wrote:
>
> Hi.
>
> I'm fishing a bit here.
>
> What we see is that with new VM/RBD/SSD-backed images, the time before
> they are "fully written" the first time can mean lousy performance. Sort
> of like they are thin-provisioned and the subsequent
> growing of the
On Wed, May 1, 2019 at 5:00 PM Marc Roos wrote:
>
>
> Do you need to tell the VMs that they are on an SSD rbd pool? Or do
> ceph and the libvirt drivers do this automatically for you?
Like discard, any advanced QEMU options would need to be manually specified.
> When testing a nutanix acropoli
AFAIK, the kernel clients for CephFS and RBD do not support msgr2 yet.
On Wed, Apr 24, 2019 at 4:19 PM Aaron Bassett
wrote:
>
> Hi,
> I'm standing up a new cluster on nautilus to play with some of the new
> features, and I've somehow got my monitors only listening on msgrv2 port
> (3300) and no
On Thu, Apr 18, 2019 at 3:47 PM Wesley Dillingham
wrote:
>
> I am trying to determine some sizing limitations for a potential iSCSI
> deployment and wondering whats still the current lay of the land:
>
> Are the following still accurate as of the ceph-iscsi-3.0 implementation
> assuming CentOS 7
On Wed, Apr 17, 2019 at 10:48 AM Wesley Dillingham
wrote:
>
> The man page for gwcli indicates:
>
> "Disks exported through the gateways use ALUA attributes to provide
> ActiveOptimised and ActiveNonOptimised access to the rbd images. Each disk
> is assigned a primary owner at creation/import
On Fri, Apr 12, 2019 at 10:48 AM Magnus Grönlund wrote:
>
>
>
> Den fre 12 apr. 2019 kl 16:37 skrev Jason Dillaman :
>>
>> On Fri, Apr 12, 2019 at 9:52 AM Magnus Grönlund wrote:
>> >
>> > Hi Jason,
>> >
>> > Tried to follow the instruc
parent image that hasn't been
mirrored. The ENOMSG error seems to indicate that there might be some
corruption in a journal and it's missing expected records (like a
production client crashed), but it should be able to recover from
that.
> https://pastebin.com/1bTETNGs
>
> Best re
ot;
Wouldn't it be converted to bytes since all rbd API methods are in bytes? [1]
>>
>>>
>>> We tried to copy same image in different pool and resulted same
>>> incorrect checksum.
>>>
>>>
>>> Thanks & Regards,
>>> Br
's say 4MiB per chunk to match
the object size) and compare that to a 4MiB chunked "md5sum" CLI
results from the associated "rbd export" data file (split -b 4194304
--filter=md5sum). That will allow you to isolate the issue down to a
specific section of the image.
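The chunk-by-chunk comparison described above can be tried locally on any file; a self-contained sketch, where the zero-filled file is just a stand-in for an "rbd export" data file:

```shell
# Create an 8 MiB stand-in for an exported image, then hash it in
# 4 MiB chunks: each output line is the md5 of one chunk, so a
# mismatching line pinpoints the corrupt region of the image.
dd if=/dev/zero of=/tmp/image.img bs=1M count=8 status=none
split -b 4194304 --filter=md5sum /tmp/image.img
```

Running the same chunked hashing over both the source and destination exports and diffing the two outputs isolates which object-sized region differs.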
>
> T
On Wed, Apr 10, 2019 at 1:46 AM Brayan Perera wrote:
>
> Dear All,
>
> Ceph Version : 12.2.5-2.ge988fb6.el7
>
> We are facing an issue on glance which have backend set to ceph, when
> we try to create an instance or volume out of an image, it throws
> checksum error.
> When we use rbd export and u
/path/to/asok config set debug_rbd_mirror 0/5
... and collect the rbd-mirror log from /var/log/ceph/ (should have
lots of "rbd::mirror"-like log entries).
On Tue, Apr 9, 2019 at 12:23 PM Magnus Grönlund wrote:
>
>
>
> Den tis 9 apr. 2019 kl 17:48 skrev Jason Dillaman :
>
Any chance your rbd-mirror daemon has the admin sockets available
(defaults to /var/run/ceph/cephdr-clientasok)? If
so, you can run "ceph --admin-daemon /path/to/asok rbd mirror status".
On Tue, Apr 9, 2019 at 11:26 AM Magnus Grönlund wrote:
>
>
>
> Den tis 9 apr. 201
On Tue, Apr 9, 2019 at 11:08 AM Magnus Grönlund wrote:
>
> >On Tue, Apr 9, 2019 at 10:40 AM Magnus Grönlund wrote:
> >>
> >> Hi,
> >> We have configured one-way replication of pools between a production
> >> cluster and a backup cluster. But unfortunately the rbd-mirror or the
> >> backup clust
> cheers,
>
> Samuel
>
>
> huxia...@horebdata.cn
>
>
> From: Jason Dillaman
> Date: 2019-04-03 23:03
> To: huxia...@horebdata.cn
> CC: ceph-users
> Subject: Re: [ceph-users] How to tune Ceph RBD mirroring parameters to speed
> up replicati
On Tue, Apr 9, 2019 at 10:40 AM Magnus Grönlund wrote:
>
> Hi,
> We have configured one-way replication of pools between a production cluster
> and a backup cluster. But unfortunately the rbd-mirror or the backup cluster
> is unable to keep up with the production cluster so the replication fails
ow). Also,
please use pastebin or similar service to avoid mailing the logs to
the list.
> Rbd-mirror is running as "rbd-mirror --cluster=cephdr"
>
>
> Thanks,
> -Vikas
>
> -Original Message-
> From: Jason Dillaman
> Sent: Monday, April 8, 2019 9:30 AM
og file.
>
> We removed the pool to make sure there's no image left on DR site and
> recreated an empty pool.
>
> Thanks,
> -Vikas
>
> -Original Message-
> From: Jason Dillaman
> Sent: Friday, April 5, 2019 2:24 PM
> To: Vikas Rana
> Cc: ceph-use
What is the version of your rbd-mirror daemon and your OSDs? It looks like it
found two replicated images and got stuck on the "wait_for_deletion"
step. Since I suspect those images haven't been deleted, it should
have immediately proceeded to the next step of the image replay state
machine. Are there any ad
For better or worse, out of the box, librbd and rbd-mirror are
configured to conserve memory at the expense of performance to support
the potential case of thousands of images being mirrored and only a
single "rbd-mirror" daemon attempting to handle the load.
You can optimize writes by adding "rbd
# ceph version
> ceph version 14.1.0-559-gf1a72cff25
> (f1a72cff2522833d16ff057ed43eeaddfc17ea8a) nautilus (dev)
>
> Regards,
> Eugen
>
>
> Quoting Jason Dillaman:
>
> > On Tue, Apr 2, 2019 at 4:19 AM Nikola Ciprich
> > wrote:
> >>
> >
On Tue, Apr 2, 2019 at 4:19 AM Nikola Ciprich
wrote:
>
> Hi,
>
> on one of my clusters, I'm getting an error message which is making me
> a bit nervous: while listing the contents of a pool, I'm getting an
> error for one of the images:
>
> [root@node1 ~]# rbd ls -l nvme > /dev/null
> rbd: error processing ima
What happens when you run "rados -p rbd lock list gateway.conf"?
On Fri, Mar 29, 2019 at 12:19 PM Matthias Leopold
wrote:
>
> Hi,
>
> I upgraded my test Ceph iSCSI gateways to
> ceph-iscsi-3.0-6.g433bbaa.el7.noarch.
> I'm trying to use the new parameter "cluster_client_name", which - to me
> - so
For upstream, "deprecated" might be too strong a word; however, its use
is strongly cautioned against [1]. There is ongoing work to
replace cache tiering with a new implementation that hopefully works
better and avoids lots of the internal edge cases that the cache
tiering v1 design required.
When using cache pools (which are essentially deprecated functionality
BTW), you should always reference the base tier pool. The fact that a
cache tier sits in front of a slower, base tier is transparently
handled.
On Tue, Mar 26, 2019 at 5:41 PM Götz Reinicke
wrote:
>
> Hi,
>
> I have a rbd in a
support is in-place, we can tweak the resync logic to only copy the
deltas by comparing hashes of the objects.
> I'm trying to estimate how long it will take to get a 200TB image in sync.
>
> Thanks,
> -Vikas
>
>
> -Original Message-
> From: Jason Dillaman
It's just the design of the iSCSI protocol. Sure, you can lower the
timeouts (see "fast_io_fail_tmo" [1]), but you will just end up with more
false-positive failovers.
[1] http://docs.ceph.com/docs/master/rbd/iscsi-initiator-linux/
On Thu, Mar 21, 2019 at 10:46 AM li jerry wrote:
>
> Hi Maged
>
> t