On Tue, Jan 21, 2020 at 7:51 PM Hayashida, Mami wrote:
>
> Ilya,
>
> Thank you for your suggestions!
>
> `dmesg` (on the client node) only had `libceph: mon0 10.33.70.222:6789 socket
> error on write`. No further detail. But using the admin key (client.admin)
> for mounting CephFS solved my problem
On Tue, Jan 21, 2020 at 6:02 PM Hayashida, Mami wrote:
>
> I am trying to set up a CephFS with a Cache Tier (for data) on a mini test
> cluster, but a kernel-mount CephFS client is unable to write. Cache tier
> setup alone seems to be working fine (I tested it with `rados put` and `osd
> map`
On Fri, Jan 17, 2020 at 2:21 AM Aaron wrote:
>
> No worries, can definitely do that.
>
> Cheers
> Aaron
>
> On Thu, Jan 16, 2020 at 8:08 PM Jeff Layton wrote:
>>
>> On Thu, 2020-01-16 at 18:42 -0500, Jeff Layton wrote:
>> > On Wed, 2020-01-15 at 08:05 -0500, Aaron wrote:
>> > > Seeing a weird mou
On Thu, Jan 9, 2020 at 2:52 PM Kyriazis, George
wrote:
>
> Hello ceph-users!
>
> My setup is that I’d like to use RBD images as a replication target of a
> FreeNAS zfs pool. I have a 2nd FreeNAS (in a VM) to act as a backup target
> in which I mount the RBD image. All this (except the source F
On Mon, Jan 6, 2020 at 2:51 PM M Ranga Swami Reddy wrote:
>
> Thank you.
> Can you please share a simple example here?
>
> Thanks
> Swami
>
> On Mon, Jan 6, 2020 at 4:02 PM wrote:
>>
>> Hi,
>>
> RBDs are thin provisioned; you need to trim at the upper level, either
>> via the fstrim command, or
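For reference, a minimal sketch of trimming from the client side (the mount point and device names here are examples):

# one-off trim of a filesystem sitting on a mapped RBD
fstrim -v /mnt/rbdfs
# or mount with continuous discard (can add overhead on busy filesystems)
mount -o discard /dev/rbd0 /mnt/rbdfs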
On Thu, Dec 12, 2019 at 9:12 AM Ashley Merrick wrote:
>
> Due to the recent 5.3.x kernel having support for Object-Map and other
> features required in KRBD I have now enabled object-map,fast-diff on some RBD
> images with CEPH (14.2.5), I have rebuilt the object map using "rbd
> object-map reb
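The typical sequence for enabling the features and rebuilding the map on an existing image looks roughly like this (pool and image names are examples; object-map requires exclusive-lock to already be enabled):

# enable the features on an already-created image
rbd feature enable rbd/vm-disk object-map fast-diff
# populate the object map for data written before the feature was enabled
rbd object-map rebuild rbd/vm-disk
# verify the rebuilt map is consistent
rbd object-map check rbd/vm-disk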
On Thu, Oct 24, 2019 at 5:45 PM Paul Emmerich wrote:
>
> Could it be related to the broken backport as described in
> https://tracker.ceph.com/issues/40102 ?
>
> (It did affect 4.19, not sure about 5.0)
It does, I have just updated the linked ticket to reflect that.
Thanks,
Ilya
On Sat, Oct 19, 2019 at 2:00 PM Lei Liu wrote:
>
> Hello Ilya,
>
> After updating the client kernel version to 3.10.0-862, `ceph features` shows:
>
> "client": {
> "group": {
> "features": "0x7010fb86aa42ada",
> "release": "jewel",
> "num": 5
> },
>
On Thu, Oct 17, 2019 at 3:38 PM Lei Liu wrote:
>
> Hi Cephers,
>
> We have some ceph clusters on version 12.2.x; now we want to use the upmap
> balancer, but when I set set-require-min-compat-client to luminous, it fails
>
> # ceph osd set-require-min-compat-client luminous
> Error EPERM: cannot se
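A hedged sketch of the usual diagnosis path; the override flag should only be used once you have confirmed no pre-luminous clients remain connected:

# list connected clients grouped by release and feature bits
ceph features
# if only luminous-capable clients show up, override the EPERM
ceph osd set-require-min-compat-client luminous --yes-i-really-mean-it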
On Tue, Oct 1, 2019 at 9:12 PM Jeff Layton wrote:
>
> On Tue, 2019-10-01 at 15:04 -0400, Sasha Levin wrote:
> > On Tue, Oct 01, 2019 at 01:54:45PM -0400, Jeff Layton wrote:
> > > On Tue, 2019-10-01 at 19:03 +0200, Ilya Dryomov wrote:
> > > > On Tue, Oct 1, 20
_inode().
>
> Cc: sta...@vger.kernel.org
> Link: https://tracker.ceph.com/issues/40102
> Signed-off-by: "Yan, Zheng"
> Reviewed-by: Jeff Layton
> Signed-off-by: Ilya Dryomov
> Signed-off-by: Sasha Levin
>
>
> Backing this patch out and recomp
On Tue, Aug 13, 2019 at 1:06 PM Hector Martin wrote:
>
> I just had a minor CephFS meltdown caused by underprovisioned RAM on the
> MDS servers. This is a CephFS with two ranks; I manually failed over the
> first rank and the new MDS server ran out of RAM in the rejoin phase
> (ceph-mds didn't get
On Wed, Aug 14, 2019 at 1:54 PM Tim Bishop wrote:
>
> On Wed, Aug 14, 2019 at 12:44:15PM +0200, Ilya Dryomov wrote:
> > On Tue, Aug 13, 2019 at 10:56 PM Tim Bishop wrote:
> > > This email is mostly a heads up for others who might be using
> > > Canonical's liv
On Tue, Aug 13, 2019 at 10:56 PM Tim Bishop wrote:
>
> Hi,
>
> This email is mostly a heads up for others who might be using
> Canonical's livepatch on Ubuntu on a CephFS client.
>
> I have an Ubuntu 18.04 client with the standard kernel currently at
> version linux-image-4.15.0-54-generic 4.15.0-
On Tue, Aug 13, 2019 at 6:37 PM Gesiel Galvão Bernardes
wrote:
>
> HI,
>
> I recently noticed that in two of my pools the command "rbd ls" has taken
> several minutes to return the values. These pools have between 100 and 120
> images each.
>
> Where should I look to find the cause of this slowness? The
On Tue, Aug 13, 2019 at 4:30 PM Serkan Çoban wrote:
>
> I am out of office right now, but I am pretty sure it was the same
> stack trace as in tracker.
> I will confirm tomorrow.
> Any workarounds?
Compaction
# echo 1 >/proc/sys/vm/compact_memory
might help if the memory in question is moveable
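One way to check whether fragmentation is really the culprit is to compare the buddy lists before and after compaction (a sketch; run as root):

# columns are counts of free blocks per order; empty high orders mean fragmentation
cat /proc/buddyinfo
echo 1 > /proc/sys/vm/compact_memory
# high orders should be repopulated if the memory was moveable
cat /proc/buddyinfo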
On Tue, Aug 13, 2019 at 3:57 PM Serkan Çoban wrote:
>
> I checked /var/log/messages and see there are page allocation
> failures. But I don't understand why.
> The client has 768GB memory and most of it is not used; the cluster has
> 1500 OSDs. Do I need to increase vm.min_free_kbytes? It is set to 1GB
On Tue, Aug 13, 2019 at 12:36 PM Serkan Çoban wrote:
>
> Hi,
>
> Just installed nautilus 14.2.2 and set up cephfs on it. The OS is CentOS 7.6 throughout.
> From a client I can mount the cephfs with ceph-fuse, but I cannot
> mount with ceph kernel client.
> It gives "mount error 110 connection timeout" and I c
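For reference, a minimal kernel-client mount attempt plus the first things to check on a timeout (the address, name, and paths are examples):

# try the mount by hand with an explicit secret file
mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
# look for libceph feature-mismatch or connection errors
dmesg | tail
# on a cluster node: shows which release/features each connected client speaks
ceph features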
On Tue, Jul 30, 2019 at 10:33 AM Massimo Sgaravatto
wrote:
>
> The documentation that I have seen says that the minimum requirements for
> clients to use upmap are:
>
> - CentOS 7.5 or kernel 4.5
> - Luminous version
Do you have a link for that?
This is wrong: CentOS 7.5 (i.e. RHEL 7.5 kernel)
On Fri, Jul 12, 2019 at 5:38 PM Marc Roos wrote:
>
>
> Thanks Ilya for explaining. Am I correct to understand from the link [0]
> mentioned in the issue that because, e.g., I have had an unhealthy state for
> some time (1 PG on an insignificant pool) I have larger osdmaps,
> triggering this issue? Or is j
On Fri, Jul 12, 2019 at 12:33 PM Paul Emmerich wrote:
>
>
>
> On Thu, Jul 11, 2019 at 11:36 PM Marc Roos wrote:
>> Anyone know why I would get these? Is it not strange to get them in a
>> 'standard' setup?
>
> You are probably running on an ancient kernel. This bug has been fixed a long
> time a
On Mon, Jun 10, 2019 at 8:03 PM Jason Dillaman wrote:
>
> On Mon, Jun 10, 2019 at 1:50 PM Jonas Jelten wrote:
> >
> > When I run:
> >
> > rbd map --name client.lol poolname/somenamespace/imagename
> >
> > The image is mapped to /dev/rbd0 and
> >
> > /dev/rbd/poolname/imagename
> >
> > I would
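If it helps to reproduce: the /dev/rbd/... symlinks come from the ceph-rbdnamer udev helper, and comparing a namespaced and a plain mapping shows the collision (names are examples, and the exact symlink layout depends on the installed udev rules):

rbd map --name client.lol poolname/somenamespace/imagename
# on affected versions the namespace is dropped from the symlink path, so
# images with the same name in different namespaces collide here
ls -l /dev/rbd/poolname/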
On Tue, May 21, 2019 at 11:41 AM Marc Roos wrote:
>
>
>
> I have this on a cephfs client. I had ceph-common on 12.2.11 and
> upgraded to 12.2.12 while having this error. They write here [0] that
> you need to upgrade the kernel and that it is fixed in 12.2.2
>
> [@~]# uname -a
> Linux mail03 3.10.0-9
On Wed, Feb 27, 2019 at 12:00 PM Thomas <74cmo...@gmail.com> wrote:
>
> Hi,
> I have noticed an error when writing to a mapped RBD.
> Therefore I unmounted the block device.
> Then I tried to unmap it w/o success:
> ld2110:~ # rbd unmap /dev/rbd0
> rbd: sysfs write failed
> rbd: unmap failed: (16)
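A sketch of tracking down the holder and, as a last resort, forcing the unmap (the device name is an example):

# is a filesystem still mounted from the device?
findmnt /dev/rbd0
# is device-mapper or another stacked device holding it open?
ls /sys/block/rbd0/holders
# last resort: force the unmap; any outstanding I/O will get errors
rbd unmap -o force /dev/rbd0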
On Fri, Feb 15, 2019 at 12:05 AM Mike Perez wrote:
>
> Hi Marc,
>
> You can see previous designs on the Ceph store:
>
> https://www.proforma.com/sdscommunitystore
Hi Mike,
This site stopped working during DevConf and hasn't been working since.
I think Greg has contacted some folks about this, bu
On Wed, Feb 6, 2019 at 11:09 AM James Dingwall
wrote:
>
> Hi,
>
> I have been doing some testing with striped rbd images and have a
> question about the calculation of the optimal_io_size and
> minimum_io_size parameters. My test image was created using a 4M object
> size, stripe unit 64k and str
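To see what the kernel actually reports for a given striping layout, a sketch (sizes and names are examples):

# 4M objects, 64k stripe unit, 8-object stripe width
rbd create rbd/striped --size 100G --object-size 4M --stripe-unit 65536 --stripe-count 8
rbd map rbd/striped
blockdev --getiomin /dev/rbd0   # minimum_io_size as seen by the block layer
blockdev --getioopt /dev/rbd0   # optimal_io_size as seen by the block layer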
On Mon, Feb 4, 2019 at 9:25 AM Massimo Sgaravatto
wrote:
>
> The official documentation [*] says that the only requirement to use the
> balancer in upmap mode is that all clients must run at least luminous.
> But I read somewhere (also in this mailing list) that there are also
> requirements wrt
On Mon, Jan 28, 2019 at 7:31 AM ST Wong (ITSC) wrote:
>
> > That doesn't appear to be an error -- that's just stating that it found a
> > dead client that was holding the exclusive-lock, so it broke the dead
> > client's lock on the image (by blacklisting the client).
>
> As there is only 1 RBD
On Fri, Jan 25, 2019 at 9:40 AM Martin Palma wrote:
>
> > Do you see them repeating every 30 seconds?
>
> yes:
>
> Jan 25 09:34:37 sdccgw01 kernel: [6306813.737615] libceph: mon4
> 10.8.55.203:6789 session lost, hunting for new mon
> Jan 25 09:34:37 sdccgw01 kernel: [6306813.737620] libceph: mon3
On Fri, Jan 25, 2019 at 8:37 AM Martin Palma wrote:
>
> Hi Ilya,
>
> thank you for the clarification. After setting the
> "osd_map_messages_max" to 10 the io errors and the MDS error
> "MDS_CLIENT_LATE_RELEASE" are gone.
>
> The messages of "mon session lost, hunting for new mon" didn't go
>
On Thu, Jan 24, 2019 at 6:21 PM Andras Pataki
wrote:
>
> Hi Ilya,
>
> Thanks for the clarification - very helpful.
> I've lowered osd_map_messages_max to 10, and this resolves the issue
> about the kernel being unhappy about large messages when the OSDMap
> changes. One comment here though: you m
On Thu, Jan 24, 2019 at 8:16 PM Martin Palma wrote:
>
> We are experiencing the same issues on clients with CephFS mounted
> using the kernel client and 4.x kernels.
>
> The problem shows up when we add new OSDs, on reboots after
> installing patches and when changing the weight.
>
> Here the log
On Mon, Jan 21, 2019 at 11:43 AM ST Wong (ITSC) wrote:
>
> Hi, we’re trying mimic on a VM farm. It consists of 4 OSD hosts (8 OSDs) and 3
> MON. We tried mounting as RBD and CephFS (fuse and kernel mount) on
> different clients without problem.
Is this an upgraded or a fresh cluster?
>
> Th
On Fri, Jan 18, 2019 at 11:25 AM Mykola Golub wrote:
>
> On Thu, Jan 17, 2019 at 10:27:20AM -0800, Void Star Nill wrote:
> > Hi,
> >
> > We are trying to use Ceph in our products to address some of our use cases.
> > We think the Ceph block device fits us. One of the use cases is that we have a
> > numb
On Fri, Jan 18, 2019 at 9:25 AM Burkhard Linke
wrote:
>
> Hi,
>
> On 1/17/19 7:27 PM, Void Star Nill wrote:
>
> Hi,
>
> We are trying to use Ceph in our products to address some of our use cases. We
> think the Ceph block device fits us. One of the use cases is that we have a number
> of jobs running
On Wed, Jan 16, 2019 at 7:12 PM Andras Pataki
wrote:
>
> Hi Ilya/Kjetil,
>
> I've done some debugging and tcpdump-ing to see what the interaction
> between the kernel client and the mon looks like. Indeed -
> CEPH_MSG_MAX_FRONT defined as 16Mb seems low for the default mon
> messages for our clus
On Wed, Jan 16, 2019 at 1:27 AM Kjetil Joergensen wrote:
>
> Hi,
>
> you could try reducing "osd map message max", some code paths that end up as
> -EIO (kernel: libceph: mon1 *** io error) is exceeding
> include/linux/ceph/libceph.h:CEPH_MSG_MAX_{FRONT,MIDDLE,DATA}_LEN.
>
> This "worked for us"
On Fri, Jan 11, 2019 at 11:58 AM Rom Freiman wrote:
>
> Same kernel :)
Rom, can you update your CentOS ticket with the link to the Ceph BZ?
Thanks,
Ilya
On Fri, Jan 11, 2019 at 1:38 AM Brad Hubbard wrote:
>
> On Fri, Jan 11, 2019 at 9:57 AM Jason Dillaman wrote:
> >
> > I think Ilya recently looked into a bug that can occur when
> > CONFIG_HARDENED_USERCOPY is enabled and the IO's TCP message goes
> > through the loopback interface (i.e. co-locat
On Wed, Jan 9, 2019 at 5:17 PM Kenneth Van Alstyne
wrote:
>
> Hey folks, I’m looking into what I would think would be a simple problem, but
> is turning out to be more complicated than I would have anticipated. A
> virtual machine managed by OpenNebula was blown away, but the backing RBD
> im
On Sat, Dec 22, 2018 at 7:18 PM Brian : wrote:
>
> Sorry to drag this one up again.
>
> Just got the unsubscribed due to excessive bounces thing.
>
> 'Your membership in the mailing list ceph-users has been disabled due
> to excessive bounces The last bounce received from you was dated
> 21-Dec-20
On Thu, Dec 6, 2018 at 11:15 AM Ashley Merrick wrote:
>
> That is correct, but that command was run weeks ago.
>
> And the RBD connected fine on 2.9 via the kernel 4.12, so I’m really lost as to
> why suddenly it’s now blocking a connection it originally allowed through
> (even if by mistake)
When
On Thu, Dec 6, 2018 at 10:58 AM Ashley Merrick wrote:
>
> That command returns luminous.
This is the issue.
My guess is someone ran "ceph osd set-require-min-compat-client
luminous", making it so that only luminous aware clients are allowed to
connect to the cluster. Kernel 4.12 doesn't support
On Thu, Dec 6, 2018 at 4:22 AM Ashley Merrick wrote:
>
> Hello,
>
> As mentioned earlier the cluster is separately running on the latest mimic.
>
> Due to 14.04 only supporting up to Luminous I was running the 12.2.9 version
> of ceph-common for the rbd binary.
>
> This is what was upgraded when I
On Wed, Dec 5, 2018 at 3:48 PM Ashley Merrick wrote:
>
> I have had some EC-backed Mimic RBDs mounted via the kernel module on a
> Ubuntu 14.04 VM, these have been running no issues after updating the kernel
> to 4.12 to support EC features.
>
> Today I run an apt dist-upgrade which upgraded fr
On Thu, Nov 8, 2018 at 5:10 PM Stefan Kooman wrote:
>
> Quoting Stefan Kooman (ste...@bit.nl):
> > I'm pretty sure it isn't. I'm trying to do the same (force luminous
> > clients only) but ran into the same issue. Even when running 4.19 kernel
> > it's interpreted as a jewel client. Here is the li
On Thu, Nov 8, 2018 at 2:15 PM Stefan Kooman wrote:
>
> Quoting Ilya Dryomov (idryo...@gmail.com):
> > On Sat, Nov 3, 2018 at 10:41 AM wrote:
> > >
> > > Hi.
> > >
> > > I tried to enable the "new smart balancing" - backend are o
On Wed, Nov 7, 2018 at 2:25 PM wrote:
>
> Hi!
>
> I use ceph version 13.2.1 (5533ecdc0fda920179d7ad84e0aa65a127b20d77) mimic
> (stable) and I want to call `ls -ld` to read the whole dir size in cephfs:
>
> When i man mount.ceph:
>
> rbytes Report the recursive size of the directory contents for st_si
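For reference, the recursive sizes can be had either via the mount option or, on demand, via CephFS's virtual xattrs (paths are examples):

# mount with rbytes: st_size of a directory becomes its recursive byte count
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,rbytes
# or query a single directory without remounting
getfattr -n ceph.dir.rbytes /mnt/cephfs/somedir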
On Sat, Nov 3, 2018 at 10:41 AM wrote:
>
> Hi.
>
> I tried to enable the "new smart balancing" - backends are on RH luminous,
> clients are Ubuntu with a 4.15 kernel.
>
> As per: http://docs.ceph.com/docs/mimic/rados/operations/upmap/
> $ sudo ceph osd set-require-min-compat-client luminous
> Error EPERM:
On Wed, Oct 10, 2018 at 8:48 PM Kjetil Joergensen wrote:
>
> Hi,
>
> We tested bcache, dm-cache/lvmcache, and one more whose name eludes me with
> PCIe NVME on top of large spinning rust drives behind a SAS3 expander - and
> decided this were not for us.
>
> This was probably jewel with filestor
On Tue, Sep 25, 2018 at 2:05 PM 刘 轩 wrote:
>
> Hi Ilya:
>
> I have some questions about the commit
> d84b37f9fa9b23a46af28d2e9430c87718b6b044 about the function
> handle_cap_export. In which cases may issued != cap->implemented occur?
>
> I encountered this kind of mistake in my cluster. Do you
On Tue, Sep 11, 2018 at 1:00 PM Tobias Florek wrote:
>
> Hi!
>
> I have a cluster serving RBDs and CephFS that has a big number of
> clients I don't control. I want to know what feature flags I can safely
> set without locking out clients. Is there a command analogous to `ceph
> versions` that s
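There is a close analogue; a sketch of how it is typically read (run anywhere with a mon connection):

# groups connected clients by release and feature bits
ceph features
# a "jewel" group with a nonzero "num" means pre-luminous clients are still
# connected, so raising require-min-compat-client would lock them out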
On Mon, Sep 10, 2018 at 7:46 PM David Turner wrote:
>
> Now that you mention it, I remember those threads on the ML. What happens if
> you use --yes-i-really-mean-it to do those things and then later you try to
> map an RBD with an older kernel for CentOS 7.3 or 7.4? Will that mapping
> fail
On Mon, Sep 10, 2018 at 7:19 PM David Turner wrote:
>
> I haven't found any mention of this on the ML and Google's results are all
> about compiling your own kernel to use NBD on CentOS. Is everyone that's
> using rbd-nbd on CentOS honestly compiling their own kernels for the clients?
> This fe
On Mon, Sep 10, 2018 at 10:46 AM Martin Palma wrote:
>
> We are trying to unmap an rbd image from a host for deletion and
> hitting the following error:
>
> rbd: sysfs write failed
> rbd: unmap failed: (16) Device or resource busy
>
> We used commands like "lsof" and "fuser" but nothing is reporte
On Sun, Sep 9, 2018 at 6:31 AM David Turner wrote:
>
> The problem is with the kernel pagecache. If that is still shared in a
> containerized environment with the OSDs in containers and RBDs which are
> mapped on the node outside of containers, then it is indeed still a problem.
> I would gues
On Sat, Sep 8, 2018 at 1:52 AM Tyler Bishop
wrote:
>
> I have a fairly large cluster running ceph bluestore with extremely fast SAS
> SSDs for the metadata. Doing FIO benchmarks I am getting 200k-300k random
> write iops but during sustained workloads of ElasticSearch my clients seem to
> hit a
On Thu, Aug 30, 2018 at 1:04 PM Eugen Block wrote:
>
> Hi again,
>
> we still didn't figure out the reason for the flapping, but I wanted
> to get back on the dmesg entries.
> They just reflect what happened in the past, they're no indicator to
> predict anything.
The kernel client is just that,
On Tue, Aug 21, 2018 at 9:19 PM Jacob DeGlopper wrote:
>
> I'm seeing an error from the rbd map command running in ceph-container;
> I had initially deployed this cluster as Luminous, but a pull of the
> ceph/daemon container unexpectedly upgraded me to Mimic 13.2.1.
>
> [root@nodeA2 ~]# ceph vers
On Mon, Aug 20, 2018 at 9:49 PM Dan van der Ster wrote:
>
> On Mon, Aug 20, 2018 at 5:37 PM Ilya Dryomov wrote:
> >
> > On Mon, Aug 20, 2018 at 4:52 PM Dietmar Rieder
> > wrote:
> > >
> > > Hi Cephers,
> > >
> > >
> > >
On Tue, Aug 21, 2018 at 9:12 AM Dietmar Rieder
wrote:
>
> On 08/20/2018 05:36 PM, Ilya Dryomov wrote:
> > On Mon, Aug 20, 2018 at 4:52 PM Dietmar Rieder
> > wrote:
> >>
> >> Hi Cephers,
> >>
> >>
> >> I wonder if the cephfs client
On Mon, Aug 20, 2018 at 4:52 PM Dietmar Rieder
wrote:
>
> Hi Cephers,
>
>
> I wonder if the cephfs client in RedHat/CentOS 7.5 will be updated to
> luminous?
> As far as I see there is some luminous related stuff that was
> backported, however,
> the "ceph features" command just reports "jewel" as
On Mon, Aug 13, 2018 at 5:57 PM Nikola Ciprich
wrote:
>
> Hi Ilya,
>
> hmm, OK, I'm not sure now whether this is the bug which I'm
> experiencing. I've had the read_partial_message / bad crc/signature
> problem occur on the second cluster within a short period even though
> we're on the same ceph ver
On Mon, Aug 6, 2018 at 8:17 PM Ilya Dryomov wrote:
>
> On Mon, Aug 6, 2018 at 8:13 PM Ilya Dryomov wrote:
> >
> > On Thu, Jul 26, 2018 at 1:55 AM Alex Gorbachev
> > wrote:
> > >
> > > On Wed, Jul 25, 2018 at 7:07 PM, Alex Gorbachev
> > > w
On Mon, Aug 13, 2018 at 2:49 PM Nikola Ciprich
wrote:
>
> Hi Paul,
>
> thanks, I'll give it a try. Do you think this might head
> upstream soon? For some reason I can't see the review comments for
> this patch on GitHub. Is some new version of this patch
> on the way, or can I try to apply this one
On Mon, Aug 6, 2018 at 8:13 PM Ilya Dryomov wrote:
>
> On Thu, Jul 26, 2018 at 1:55 AM Alex Gorbachev
> wrote:
> >
> > On Wed, Jul 25, 2018 at 7:07 PM, Alex Gorbachev
> > wrote:
> > > On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev
> > > wrote:
On Thu, Jul 26, 2018 at 1:55 AM Alex Gorbachev wrote:
>
> On Wed, Jul 25, 2018 at 7:07 PM, Alex Gorbachev
> wrote:
> > On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev
> > wrote:
> >> On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman
> >> wrote:
> >>>
> >>>
> >>> On Wed, Jul 25, 2018 at 5:41 PM
On Mon, Aug 6, 2018 at 3:24 AM Dai Xiang wrote:
>
> On Thu, Aug 02, 2018 at 01:04:46PM +0200, Ilya Dryomov wrote:
> > On Thu, Aug 2, 2018 at 12:49 PM wrote:
> > >
> > > I create a rbd named dx-app with 500G, and map as rbd0.
> > >
> > > But
On Mon, Aug 6, 2018 at 9:10 AM Will Zhao wrote:
>
> Hi all:
>
> extern "C" int rbd_discard(rbd_image_t image, uint64_t ofs, uint64_t len)
> {
>   librbd::ImageCtx *ictx = (librbd::ImageCtx *)image;
>   tracepoint(librbd, discard_enter, ictx, ictx->name.c_str(),
>              ictx->snap_name.c_str(), ictx->read_only
On Thu, Aug 2, 2018 at 12:49 PM wrote:
>
> I created an rbd named dx-app with 500G and mapped it as rbd0.
>
> But I find the size reported differs between commands:
>
> [root@dx-app docker]# rbd info dx-app
> rbd image 'dx-app':
> size 32000 GB in 8192000 objects <
> order 22 (4096 kB objects)
On Wed, Aug 1, 2018 at 11:13 AM wrote:
>
> Hi!
>
> I found an rbd map service issue:
> [root@dx-test ~]# systemctl status rbdmap
> ● rbdmap.service - Map RBD devices
>Loaded: loaded (/usr/lib/systemd/system/rbdmap.service; enabled; vendor
> preset: disabled)
>Active: active (exited) (Resul
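For context, the unit just walks /etc/ceph/rbdmap at start and stop; a minimal sketch of the expected setup (pool, image, and keyring paths are examples):

# /etc/ceph/rbdmap -- one "pool/image options" entry per line
rbd/vm-disk id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# map the entries now and at every boot, unmap them on shutdown
systemctl enable --now rbdmap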
On Thu, Jul 26, 2018 at 5:15 PM Alex Gorbachev wrote:
>
> On Thu, Jul 26, 2018 at 9:49 AM, Ilya Dryomov wrote:
> > On Thu, Jul 26, 2018 at 1:07 AM Alex Gorbachev
> > wrote:
> >>
> >> On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev
> >> wrote:
On Thu, Jul 26, 2018 at 1:07 AM Alex Gorbachev wrote:
>
> On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev
> wrote:
> > On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman wrote:
> >>
> >>
> >> On Wed, Jul 25, 2018 at 5:41 PM Alex Gorbachev
> >> wrote:
> >>>
> >>> I am not sure this related to RBD,
On Thu, Jul 26, 2018 at 1:55 AM Alex Gorbachev wrote:
>
> On Wed, Jul 25, 2018 at 7:07 PM, Alex Gorbachev
> wrote:
> > On Wed, Jul 25, 2018 at 6:07 PM, Alex Gorbachev
> > wrote:
> >> On Wed, Jul 25, 2018 at 5:51 PM, Jason Dillaman
> >> wrote:
> >>>
> >>>
> >>> On Wed, Jul 25, 2018 at 5:41 PM
On Fri, Jun 29, 2018 at 8:08 PM Nick Fisk wrote:
>
> This is for us peeps using Ceph with VMWare.
>
>
>
> My current favoured solution for consuming Ceph in VMWare is via RBDs
> formatted with XFS and exported via NFS to ESXi. This seems to perform better
> than iSCSI+VMFS which seems to not pl
On Fri, Jun 8, 2018 at 6:37 AM, Tracy Reed wrote:
> On Thu, Jun 07, 2018 at 09:30:23AM PDT, Jason Dillaman spake thusly:
>> I think what Ilya is saying is that it's a very old RHEL 7-based
>> kernel (RHEL 7.1?). For example, the current RHEL 7.5 kernel includes
>> numerous improvements that have b
On Thu, Jun 7, 2018 at 6:30 PM, Jason Dillaman wrote:
> On Thu, Jun 7, 2018 at 12:13 PM, Tracy Reed wrote:
>> On Thu, Jun 07, 2018 at 08:40:50AM PDT, Ilya Dryomov spake thusly:
>>> > Kernel is Linux cpu04.mydomain.com 3.10.0-229.20.1.el7.x86_64 #1 SMP Tue
>>> &g
On Thu, Jun 7, 2018 at 4:33 PM, Tracy Reed wrote:
> On Thu, Jun 07, 2018 at 02:05:31AM PDT, Ilya Dryomov spake thusly:
>> > find /sys/kernel/debug/ceph -type f -print -exec cat {} \;
>>
>> Can you paste the entire output of that command?
>>
>> Which kern
On Thu, Jun 7, 2018 at 5:12 AM, Tracy Reed wrote:
>
> Hello all! I'm running luminous with old-style non-bluestore OSDs, but
> ceph 10.2.9 clients; I haven't been able to upgrade those yet.
>
> Occasionally access to rbds hangs on the client, such as right now.
> I tried to dd a VM image int
On Tue, Jun 5, 2018 at 4:07 AM, 李昊华 wrote:
> Thanks for reading my questions!
>
> I want to run MySQL on Ceph using KRBD because KRBD is faster than librbd.
> And I know KRBD is a kernel module and we can use KRBD to map the RBD
> device on the operating system.
>
> It is easy to use command li
On Thu, May 31, 2018 at 2:39 PM, Heðin Ejdesgaard Møller wrote:
> I have encountered the same issue and wrote to the mailing list about it,
> with the subject: [ceph-users] krbd upmap support on kernel-4.16 ?
>
> The odd thing is that I can krbd map an image after setting min compat to
> luminou
On Thu, May 31, 2018 at 4:16 AM, Linh Vu wrote:
> Hi all,
>
>
> On my test Luminous 12.2.4 cluster, with this set (initially so I could use
> upmap in the mgr balancer module):
>
>
> # ceph osd set-require-min-compat-client luminous
>
> # ceph osd dump | grep client
> require_min_compat_client lum
On Fri, May 18, 2018 at 3:25 PM, Donald "Mac" McCarthy
wrote:
> Ilya,
> Your recommendation worked beautifully. Thank you!
>
> Is this expected behavior, or something that should
> be filed as a bug?
>
> I ask because I have just enough experience with ceph at this poi
On Wed, May 16, 2018 at 8:27 PM, Donald "Mac" McCarthy
wrote:
> CephFS. 8 core atom C2758, 16 GB ram, 256GB ssd, 2.5 GB NIC (supermicro
> microblade node).
>
> Read test:
> dd if=/ceph/1GB.test of=/dev/null bs=1M
Yup, looks like a kcephfs regression. The performance of the above
command is hig
On Thu, May 17, 2018 at 11:03 AM, Jorge Pinilla López wrote:
> Thanks for the info!, I absolutely agree that it should be documented
>
> Any further info about why journaling feature is so slow?
Because everything is written twice: first to the journal and then to
the actual data objects. journa
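A quick way to observe the cost is to benchmark the same image with the feature toggled (the image name is an example; expect noticeably lower write throughput with journaling on):

rbd feature disable rbd/test journaling
rbd bench --io-type write rbd/test   # baseline
rbd feature enable rbd/test journaling
rbd bench --io-type write rbd/test   # every write now goes to the journal first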
On Tue, May 15, 2018 at 10:07 AM, wrote:
> Hi, all!
>
> I use rbd and found the issue below:
>
> When I create an rbd image with features:
> layering,exclusive-lock,object-map,fast-diff
>
> failed to map:
> rbd: sysfs write failed
> RBD image feature set mismatch. Try disabling features
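The usual workaround, sketched (the image spec is an example; dmesg names the exact unsupported feature bits for your kernel):

# shows something like "image uses unsupported features: 0x38"
dmesg | tail
# drop the features the kernel doesn't support, then retry
rbd feature disable rbd/myimage object-map fast-diff
rbd map rbd/myimage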
On Sat, Mar 17, 2018 at 5:11 PM, shadow_lin wrote:
> Hi list,
> My ceph version is jewel 10.2.10.
> I tried to use rbd rm to remove a 50TB image (without object map because krbd
> doesn't support it). It takes about 30 mins to complete just about 3%. Is this
> expected? Is there a way to make it faste
On Fri, Mar 23, 2018 at 5:53 PM, Nicolas Huillard wrote:
> Le vendredi 23 mars 2018 à 12:14 +0100, Ilya Dryomov a écrit :
>> On Fri, Mar 23, 2018 at 11:48 AM, wrote:
>> > The stock kernel from Debian is perfect
>> > Spectre / meltdown mitigations are worthless fo
On Fri, Mar 23, 2018 at 3:01 PM, wrote:
> Ok ^^
>
> For Cephfs, as far as I know, quotas are not supported in kernel space
> This is not specific to luminous, tho
quota support is coming, hopefully in 4.17.
Thanks,
Ilya
On Fri, Mar 23, 2018 at 2:18 PM, wrote:
> On 03/23/2018 12:14 PM, Ilya Dryomov wrote:
>> luminous cluster-wide feature bits are supported since kernel 4.13.
>
> ?
>
> # uname -a
> Linux abweb1 4.14.0-0.bpo.3-amd64 #1 SMP Debian 4.14.13-1~bpo9+1
> (2018-01-14) x86_64
On Fri, Mar 23, 2018 at 11:48 AM, wrote:
> The stock kernel from Debian is perfect
> Spectre / meltdown mitigations are worthless from a Ceph point of view,
> and should be disabled (again, strictly from a Ceph point of view)
>
> If you need the luminous features, using the userspace implementatio
On Wed, Mar 21, 2018 at 6:50 PM, Frederic BRET wrote:
> Hi all,
>
> The context :
> - Test cluster aside production one
> - Fresh install on Luminous
> - choice of Bluestore (coming from Filestore)
> - Default config (including wpq queuing)
> - 6 nodes SAS12, 14 OSD, 2 SSD, 2 x 10Gb nodes, far mor
On Mon, Mar 12, 2018 at 8:20 PM, Maged Mokhtar wrote:
> On 2018-03-12 21:00, Ilya Dryomov wrote:
>
> On Mon, Mar 12, 2018 at 7:41 PM, Maged Mokhtar wrote:
>
> On 2018-03-12 14:23, David Disseldorp wrote:
>
> On Fri, 09 Mar 2018 11:23:02 +0200, Maged Mokhtar wrote:
>
>
On Mon, Mar 12, 2018 at 7:41 PM, Maged Mokhtar wrote:
> On 2018-03-12 14:23, David Disseldorp wrote:
>
> On Fri, 09 Mar 2018 11:23:02 +0200, Maged Mokhtar wrote:
>
> 2) I understand that before switching the path, the initiator will send a
> TMF ABORT. Can we pass this down to the same abort_reque
On Tue, Feb 13, 2018 at 1:24 AM, Blair Bethwaite
wrote:
> Thanks Ilya,
>
> We can probably handle ~6.2MB for a 100TB volume. Is it reasonable to expect
> a librbd client such as QEMU to only hold one object-map per guest?
Yes, I think so.
Thanks,
Ilya
___
On Mon, Feb 12, 2018 at 6:25 AM, Blair Bethwaite
wrote:
> Hi all,
>
> Wondering if anyone can clarify whether there are any significant overheads
> from rbd features like object-map, fast-diff, etc. I'm interested in both
> performance overheads from a latency and space perspective, e.g., can
> ob
On Fri, Feb 9, 2018 at 12:05 PM, Mauricio Garavaglia
wrote:
> Hello,
> Is it possible to get the cephfs client id/address on the host that mounted
> it, in the same way we can get the address on rbd mapped volumes looking at
> /sys/bus/rbd/devices/*/client_addr?
No, not without querying the serve
On Thu, Feb 8, 2018 at 12:54 PM, Kevin Olbrich wrote:
> 2018-02-08 11:20 GMT+01:00 Martin Emrich :
>>
>> I have a machine here mounting a Ceph RBD from luminous 12.2.2 locally,
>> running linux-generic-hwe-16.04 (4.13.0-32-generic).
>>
>> Works fine, except that it does not support the latest feat
On Thu, Feb 8, 2018 at 11:20 AM, Martin Emrich
wrote:
> I have a machine here mounting a Ceph RBD from luminous 12.2.2 locally,
> running linux-generic-hwe-16.04 (4.13.0-32-generic).
>
> Works fine, except that it does not support the latest features: I had to
> disable exclusive-lock,fast-diff,ob
On Mon, Jan 29, 2018 at 8:37 AM, Konstantin Shalygin wrote:
> Does anybody know about changes in the rbd feature 'striping'? Maybe it is a deprecated
> feature? What I mean:
>
> I have a volume created by a Jewel client on a Luminous cluster.
>
> # rbd --user=cinder info
> solid_rbd/volume-12b5df1e-df4c-4574-859d-22