On Fri, Sep 30, 2022 at 7:36 PM Filipe Mendes wrote:
>
> Hello!
>
>
> I'm considering switching my current storage solution to Ceph. Today we use
> iSCSI as the communication protocol and we use several different hypervisors:
> VMware, Hyper-V, XCP-ng, etc.
Hi Filipe,
Ceph's main hypervisor target
On Thu, Sep 15, 2022 at 3:33 PM Arthur Outhenin-Chalandre
wrote:
>
> Hi Ronny,
>
> > On 15/09/2022 14:32 ronny.lippold wrote:
> > hi arthur, some time has passed ...
> >
> > i would like to know if there is any news about your setup.
> > do you have replication actively running?
>
> No, there was no chan
On Wed, Sep 14, 2022 at 11:11 AM Ilya Dryomov wrote:
>
> On Tue, Sep 13, 2022 at 10:03 PM Yuri Weinstein wrote:
> >
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/57472#note-1
> > Release Notes - https://github.com/
On Tue, Sep 13, 2022 at 10:03 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/57472#note-1
> Release Notes - https://github.com/ceph/ceph/pull/48072
>
> Seeking approvals for:
>
> rados - Neha, Travis, Ernesto, Adam
> rgw - Casey
> fs -
On Sun, Sep 11, 2022 at 2:52 AM Angelo Hongens wrote:
>
> Does that windows driver even support ipv6?
Hi Angelo,
Adding Lucian who would know more, but there is a recent fix for IPv6
on Windows:
https://tracker.ceph.com/issues/53281
Thanks,
Ilya
>
> I remember I could not get
On Thu, Sep 1, 2022 at 8:19 PM Yuri Weinstein wrote:
>
> I have several PRs that are ready for merge but failing "make check"
>
> https://github.com/ceph/ceph/pull/47650 (main related to quincy)
> https://github.com/ceph/ceph/pull/47057
> https://github.com/ceph/ceph/pull/47621
> https://github.co
On Fri, Aug 19, 2022 at 1:21 PM Martin Traxl wrote:
>
> Hi Ilya,
>
> On Thu, 2022-08-18 at 13:27 +0200, Ilya Dryomov wrote:
> > On Tue, Aug 16, 2022 at 12:44 PM Martin Traxl
> > wrote:
>
> [...]
>
> > >
> > >
> >
> > Hi Martin,
On Tue, Aug 16, 2022 at 12:44 PM Martin Traxl wrote:
>
> Hi,
>
> I am running a Ceph 16.2.9 cluster with wire encryption. From my ceph.conf:
> _
> ms client mode = secure
> ms cluster mode = secure
> ms mon client mode = secure
> ms mon cluster mode = secure
> ms mon service mode = s
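For reference, the full set of messenger encryption options this snippet appears to be cut from would look roughly like the following in ceph.conf (a sketch; these are the standard msgr2 mode settings, and the truncated value is assumed to be "secure"):

[global]
ms client mode = secure
ms cluster mode = secure
ms service mode = secure
ms mon client mode = secure
ms mon cluster mode = secure
ms mon service mode = secure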
On Wed, Aug 10, 2022 at 3:03 AM Laura Flores wrote:
>
> Hey Satoru and others,
>
> Try this link:
> https://ceph.io/en/news/blog/2022/v15-2-17-octopus-released/
Note that this release also includes the fix for CVE-2022-0670 [1]
(same as in v16.2.10 and v17.2.2 hotfix releases). I have updated th
On Tue, Jul 26, 2022 at 1:41 PM Peter Lieven wrote:
>
> Am 21.07.22 um 17:50 schrieb Ilya Dryomov:
> > On Thu, Jul 21, 2022 at 11:42 AM Peter Lieven wrote:
> >> Am 19.07.22 um 17:57 schrieb Ilya Dryomov:
> >>> On Tue, Jul 19, 2022 at 5:10 PM Peter Lieven w
On Sat, Jul 23, 2022 at 12:16 PM Konstantin Shalygin wrote:
>
> Hi,
>
> Is this a hotfix-only release? No other patches targeted for 16.2.10
> landed here?
Hi Konstantin,
Correct, just fixes for CVE-2022-0670 and a potential s3website
denial-of-service bug.
Thanks,
Ilya
On Thu, Jul 21, 2022 at 11:42 AM Peter Lieven wrote:
>
> Am 19.07.22 um 17:57 schrieb Ilya Dryomov:
> > On Tue, Jul 19, 2022 at 5:10 PM Peter Lieven wrote:
> >> Am 24.06.22 um 16:13 schrieb Peter Lieven:
> >>> Am 23.06.22 um 12:59 schrieb Ilya Dryomov:
> &
On Thu, Jul 21, 2022 at 4:24 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/56484
> Release Notes - https://github.com/ceph/ceph/pull/47198
>
> Seeking approvals for:
>
> rados - Neha, Travis, Ernesto, Adam
> rgw - Casey
> fs, kcephfs,
On Tue, Jul 19, 2022 at 9:55 PM Wesley Dillingham
wrote:
>
>
> Thanks.
>
> Interestingly the older kernel did not have a problem with it but the newer
> kernel does.
The older kernel can't communicate via the v2 protocol, so it doesn't (need
to) distinguish between v1 and v2 addresses.
Thanks,
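To illustrate, mon addresses can also be spelled out with explicit protocol markers so that both old (v1-only) and new kernel clients resolve them unambiguously; a sketch, assuming the default ports:

mon_host = [v2:10.26.42.172:3300,v1:10.26.42.172:6789],[v2:10.26.42.173:3300,v1:10.26.42.173:6789],[v2:10.26.42.174:3300,v1:10.26.42.174:6789]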
On Tue, Jul 19, 2022 at 9:12 PM Wesley Dillingham
wrote:
>
>
> from ceph.conf:
>
> mon_host = 10.26.42.172,10.26.42.173,10.26.42.174
>
> map command:
> rbd --id profilerbd device map win-rbd-test/originalrbdfromsnap
>
> [root@a2tlomon002 ~]# ceph mon dump
> dumped monmap epoch 44
> epoch 44
> fsi
On Tue, Jul 19, 2022 at 5:01 PM Wesley Dillingham
wrote:
>
> I have a strange error when trying to map via krbd on a RH (alma8) release
> / kernel 4.18.0-372.13.1.el8_6.x86_64 using ceph client version 14.2.22
> (cluster is 14.2.16)
>
> the rbd map causes the following error in dmesg:
>
> [Tue Ju
On Tue, Jul 19, 2022 at 5:10 PM Peter Lieven wrote:
>
> Am 24.06.22 um 16:13 schrieb Peter Lieven:
> > Am 23.06.22 um 12:59 schrieb Ilya Dryomov:
> >> On Thu, Jun 23, 2022 at 11:32 AM Peter Lieven wrote:
> >>> Am 22.06.22 um 15:46 schrieb Josh Baergen:
> &g
On Fri, Jul 15, 2022 at 8:49 AM Olivier Nicole wrote:
>
> Hi,
>
> I would like to try Ceph on FreeBSD (because I mostly use FreeBSD), but
> before I invest too much time in it: the current version of Ceph for
> FreeBSD seems quite old. Is it still being maintained or
> not?
Adding Wi
On Wed, Jul 13, 2022 at 10:50 PM Reed Dier wrote:
>
> Hoping this may be trivial to point me towards, but I typically keep a
> background screen running `rbd perf image iostat` that shows all of the rbd
> devices with io, and how busy that disk may be at any given moment.
>
> Recently after upgr
On Fri, Jul 1, 2022 at 10:59 PM Yuri Weinstein wrote:
>
> We've been scraping for octopus PRs for a while now.
>
> I see only two PRs in the final stages of testing:
>
> https://github.com/ceph/ceph/pull/44731 - Venky is reviewing
> https://github.com/ceph/ceph/pull/46912 - Ilya is reviewing
>
>
On Fri, Jul 1, 2022 at 5:48 PM Konstantin Shalygin wrote:
>
> Hi,
>
> Since Jun 28 04:05:58 postfix/smtpd[567382]: NOQUEUE: reject: RCPT from
> unknown[158.69.70.147]: 450 4.7.25 Client host rejected: cannot find your
> hostname, [158.69.70.147]; from=
> helo=
>
> ipaddr was changed from 158.69
On Fri, Jul 1, 2022 at 8:32 AM Ansgar Jazdzewski
wrote:
>
> Hi folks,
>
> I did a little testing with the persistent write-back cache (*1). We
> run Ceph Quincy 17.2.1 with QEMU 6.2.0.
>
> rbd.fio works with the cache, but as soon as we start we get something like
>
> error: internal error: process exite
On Thu, Jun 23, 2022 at 11:32 AM Peter Lieven wrote:
>
> Am 22.06.22 um 15:46 schrieb Josh Baergen:
> > Hey Peter,
> >
> >> I found relatively large allocations in the qemu smaps and checked the
> >> contents. It contained several hundred repetitions of osd and pool names.
> >> We use the defaul
On Wed, Jun 22, 2022 at 11:14 AM Peter Lieven wrote:
>
>
>
> Von meinem iPhone gesendet
>
> > Am 22.06.2022 um 10:35 schrieb Ilya Dryomov :
> >
> > On Tue, Jun 21, 2022 at 8:52 PM Peter Lieven wrote:
> >>
> >> Hi,
> >>
> >&
On Tue, Jun 21, 2022 at 8:52 PM Peter Lieven wrote:
>
> Hi,
>
>
> we noticed that some of our long running VMs (1 year without migration) seem
> to have a very slow memory leak. Taking a dump of the leaked memory revealed
> that it seemed to contain osd and pool information so we concluded that
On Sun, Jun 19, 2022 at 6:13 PM Yuri Weinstein wrote:
>
> rados, rgw, rbd and fs suites ran on the latest sha1
> (https://shaman.ceph.com/builds/ceph/quincy-release/eb0eac1a195f1d8e9e3c472c7b1ca1e9add581c2/)
>
> pls see the summary:
> https://tracker.ceph.com/issues/55974#note-1
>
> seeking final
On Wed, Jun 15, 2022 at 3:21 PM Frank Schilder wrote:
>
> Hi Eugen,
>
> in essence I would like the property "thick provisioned" to be sticky after
> creation and apply to any other operation that would be affected.
>
> To answer the use-case question: this is a disk image on a pool designed for
On Tue, Jun 14, 2022 at 7:21 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/55974
> Release Notes - https://github.com/ceph/ceph/pull/46576
>
> Seeking approvals for:
>
> rados - Neha, Travis, Ernesto, Adam
> rgw - Casey
> fs - Venky,
On Wed, May 25, 2022 at 9:21 AM Sopena Ballesteros Manuel
wrote:
>
> attached,
>
>
> nid001388:~ # ceph auth get client.noir
> 2022-05-25T09:20:00.731+0200 7f81f63f3700 -1 auth: unable to find a keyring
> on
> /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph
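For what it's worth, the keyring search can be bypassed by pointing the CLI at the client's own keyring explicitly; a sketch, assuming a non-default keyring path and placeholder pool/image names:

$ ceph -n client.noir -k /path/to/ceph.client.noir.keyring -s
$ rbd -n client.noir -k /path/to/ceph.client.noir.keyring device map <pool>/<image>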
On Tue, May 24, 2022 at 8:14 PM Sopena Ballesteros Manuel
wrote:
>
> yes dmesg shows the following:
>
> ...
>
> [23661.367449] rbd: rbd12: failed to lock header: -13
> [23661.367968] rbd: rbd2: no lock owners detected
> [23661.369306] rbd: rbd11: no lock owners detected
> [23661.370068] rbd: rbd11
On Tue, May 24, 2022 at 5:20 PM Sopena Ballesteros Manuel
wrote:
>
> Hi Ilya,
>
>
> thank you very much for your prompt response,
>
>
> Any rbd command variation is affected (mapping device included)
>
> We are using a physical machine (no container involved)
>
>
> Below is the output of the runni
On Tue, May 24, 2022 at 3:57 PM Sopena Ballesteros Manuel
wrote:
>
> Dear ceph user community,
>
>
> I am trying to install and configure a node with a ceph cluster. The Linux
> kernel we have does not include the rbd kernel module, hence we installed it
> ourselves:
>
>
> zypper install -y ceph
> I'm just wondering if it's worth waiting a bit for new Pacific
> deployments to try 16.2.8 or not. Thanks!
Hi Steve,
The last blocker PR just merged so it should be a matter of days now.
Thanks,
Ilya
>
> Steve
>
> On Wed, Apr 20, 2022 at 3:37 AM I
On Sat, Apr 30, 2022 at 3:45 PM Denis Polom wrote:
>
> Hi,
>
> I'm setting up RBD mirroring between two Ceph clusters and have an issue
> setting up the rx-tx direction on the primary site.
>
> Issuing the command
>
> rbd mirror pool peer bootstrap import --direction rx-tx --site-name
> primary rbd token
Hi
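For context, the usual two-step bootstrap from the RBD mirroring docs looks roughly like this (a sketch; the site names and token path are placeholders, not taken from the thread):

On the primary cluster:
$ rbd mirror pool peer bootstrap create --site-name primary rbd > /tmp/bootstrap_token

On the secondary cluster:
$ rbd mirror pool peer bootstrap import --direction rx-tx --site-name secondary rbd /tmp/bootstrap_token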
On Wed, Apr 20, 2022 at 6:21 AM Harry G. Coin wrote:
>
> Great news! Any notion when the many pending bug fixes will show up in
> Pacific? It's been a while.
Hi Harry,
The 16.2.8 release is planned within the next week or two.
Thanks,
Ilya
On Mon, Apr 18, 2022 at 9:04 PM David Galloway wrote:
>
> The LRC is upgraded but the same mgr did crash during the upgrade. It is
> running now despite the crash. Adam suspects it's due to earlier breakage.
>
> https://pastebin.com/NWzzsNgk
src/mgr/DaemonServer.cc: 2946: FAILED
ceph_asser
On Fri, Apr 15, 2022 at 3:10 AM David Galloway wrote:
>
> For transparency and posterity's sake...
>
> I tried upgrading the LRC and the first two mgrs upgraded fine but
> reesi004 threw an error.
>
> Apr 14 22:54:36 reesi004 podman[2042265]: 2022-04-14 22:54:36.210874346
> + UTC m=+0.13889786
On Mon, Mar 28, 2022 at 11:48 PM Yuri Weinstein wrote:
>
> We are trying to release v17.2.0 as soon as possible.
> And need to do a quick approval of tests and review failures.
>
> Still outstanding are two PRs:
> https://github.com/ceph/ceph/pull/45673
> https://github.com/ceph/ceph/pull/45604
>
On Fri, Mar 25, 2022 at 4:11 PM Ilya Dryomov wrote:
>
> On Thu, Mar 24, 2022 at 2:04 PM Budai Laszlo wrote:
> >
> > Hi Ilya,
> >
> > Thank you for your answer!
> >
> > On 3/24/22 14:09, Ilya Dryomov wrote:
> >
> >
> > How can w
On Thu, Mar 24, 2022 at 2:04 PM Budai Laszlo wrote:
>
> Hi Ilya,
>
> Thank you for your answer!
>
> On 3/24/22 14:09, Ilya Dryomov wrote:
>
>
> How can we see whether a lock is exclusive or shared? The "rbd lock ls" command
> output looks identical for the two
On Fri, Mar 25, 2022 at 10:11 AM Eugen Block wrote:
>
> Hi,
>
> I was curious and tried the same with debug logs. One thing I noticed
> was that if I use the '-k ' option I get a different error
> message than with '--id user3'. So with '-k' the result is the same:
>
> ---snip---
> pacific:~ # rbd
On Thu, Mar 24, 2022 at 11:06 AM Budai Laszlo wrote:
>
> Hi all,
>
> is there any possibility to turn an exclusive lock into a shared one?
>
> for instance if I map a device with "rbd map testimg --exclusive" then is
> there any way to switch that lock to a shared one so I can map the rbd image
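One straightforward route, if no in-place switch exists, would be to unmap and remap without the flag; a sketch, using the image name from the question:

$ rbd device unmap testimg
$ rbd device map testimg    # without --exclusive the lock is handled cooperatively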
On Fri, Mar 11, 2022 at 12:02 PM Szabo, Istvan (Agoda)
wrote:
>
> Hi,
>
> The OSDs are not full and I don't see a full pool either.
> The message doesn't say which pool it is talking about.
Hi Istvan,
Yes, that's unfortunate. But you should be able to tell which pool
reached quota from
On Fri, Mar 11, 2022 at 8:04 AM Kai Stian Olstad wrote:
>
> Hi
>
> I'm trying to create a namespace in an rbd pool, but get "operation not
> supported".
> This is on a 16.2.6 cluster installed with cephadm on Ubuntu 20.04.3.
>
> The pool is erasure coded and the commands I ran were the following.
>
> cephadm shel
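As a point of reference, RBD keeps namespaces and other metadata in a replicated pool and uses an erasure-coded pool only as a data pool; a sketch with assumed pool names rbd (replicated) and rbd_ec (erasure-coded):

$ rbd namespace create --pool rbd --namespace project1
$ rbd create --pool rbd --namespace project1 --data-pool rbd_ec --size 10G img1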
On Mon, Jan 31, 2022 at 5:07 PM Frank Schilder wrote:
>
> Hi all,
>
> we observed server crashes with these possibly related error messages in the
> log showing up:
>
> Jan 26 10:07:53 sn180 kernel: kernel BUG at include/linux/ceph/decode.h:262!
> Jan 25 23:33:47 sn319 kernel: kernel BUG at inclu
Hi Torkil,
I would recommend sticking to rx-tx to make a potential failback to
the primary cluster easier. There shouldn't be any issue with running
rbd-mirror daemons at both sites either -- it doesn't start replicating
until it is instructed to, either per-pool or per-image.
Thanks,
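A minimal sketch of the two ways to instruct it, with placeholder pool and image names:

$ rbd mirror pool enable <pool> pool                 # per-pool: mirror all eligible images
$ rbd mirror pool enable <pool> image                # per-image mode ...
$ rbd mirror image enable <pool>/<image> snapshot    # ... then enable individual images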
appear to be unrelated.
Thanks,
Ilya
>
> On Mon, Aug 30, 2021 at 6:34 PM Ilya Dryomov wrote:
> >
> > On Tue, Aug 24, 2021 at 11:43 AM Yanhu Cao wrote:
> > >
> > > Any progress on this? We have encountered the same problem, use the
> > >
On Tue, Aug 24, 2021 at 11:43 AM Yanhu Cao wrote:
>
> Any progress on this? We have encountered the same problem, using the
> rbd-nbd option timeout=120.
> ceph version: 14.2.13
> kernel version: 4.19.118-2+deb10u1
Hi Yanhu,
No, we still don't know what is causing this.
If rbd-nbd is being too sl
On Wed, Aug 25, 2021 at 7:02 AM Paul Giralt (pgiralt) wrote:
>
> I upgraded to Pacific 16.2.5 about a month ago and everything was working
> fine. Suddenly for the past few days I’ve started having the tcmu-runner
> container on my iSCSI gateways just disappear. I’m assuming this is because
> t
1] https://cloudbase.it/ceph-for-windows
Thanks,
Ilya
>
> Best regards
> Daniel
>
>
> On Mon, Aug 9, 2021 at 5:43 PM Ilya Dryomov wrote:
>
> > On Mon, Aug 9, 2021 at 5:14 PM Robert W. Eckert
> > wrote:
> > >
> > > I have
On Wed, Aug 18, 2021 at 12:40 PM Torkil Svensgaard wrote:
>
> Hi
>
> I am looking at one way mirroring from cluster A to B cluster B.
>
> As pr [1] I have configured two pools for RBD on cluster B:
>
> 1) Pool rbd_data using default EC 2+2
> 2) Pool rbd using replica 2
>
> I have a peer relationsh
On Fri, Aug 13, 2021 at 9:45 AM Boris Behrens wrote:
>
> Hi Janne,
> thanks for the hint. I was aware of that, but it is good to add that
> knowledge to the question for future Google searchers.
>
> Hi Ilya,
> that fixed it. Do we know why the discard does not work when the partition
> table is not
On Thu, Aug 12, 2021 at 5:03 PM Boris Behrens wrote:
>
> Hi everybody,
>
> we just stumbled over a problem where the rbd image does not shrink when
> files are removed.
> This only happens when the rbd image is partitioned.
>
> * We tested it with centos8/ubuntu20.04 with ext4 and a gpt partitio
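For anyone reproducing this: space is only returned to the cluster once the filesystem issues discards, which can be forced and then verified roughly like this (a sketch with placeholder mount point and image name):

$ fstrim -v /mnt/test
$ rbd du rbd/testimage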
On Mon, Aug 9, 2021 at 5:14 PM Robert W. Eckert wrote:
>
> I have had the same issue with the windows client.
> I had to issue
> ceph config set mon auth_expose_insecure_global_id_reclaim false
> Which allows the other clients to connect.
> I think you need to restart the monitors as well,
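The two related monitor options from the CVE-2021-20288 mitigation can be inspected and changed like this (a sketch; whether a monitor restart is also needed is left to the thread):

$ ceph config get mon auth_expose_insecure_global_id_reclaim
$ ceph config get mon auth_allow_insecure_global_id_reclaim
$ ceph config set mon auth_expose_insecure_global_id_reclaim false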
On Mon, Jul 26, 2021 at 5:25 PM wrote:
>
> I have found the problem. All this was caused by a missing mon_host directive in
> ceph.conf. I had expected userspace to catch this, but it looks like it
> didn't care.
We should probably add an explicit check for that so that the error
message is explic
On Mon, Jul 26, 2021 at 12:39 PM wrote:
>
> Although I appreciate the responses, they have provided zero help solving
> this issue thus far.
> It seems like the kernel module doesn't even get to the stage where it reads
> the attributes/features of the device. It doesn't know where to connect an
On Fri, Jul 23, 2021 at 11:58 PM wrote:
>
> Hi.
>
> I've followed the installation guide and got nautilus 14.2.22 running on el7
> via https://download.ceph.com/rpm-nautilus/el7/x86_64/ yum repo.
> I'm now trying to map a device on an el7 and getting extremely weird errors:
>
> # rbd info test1/b
On Wed, Jul 21, 2021 at 4:30 PM Marc wrote:
>
> Crappy code continues to live on?
>
> This issue has been automatically marked as stale because it has not had
> recent activity. It will be closed in a week if no further activity occurs.
> Thank you for your contributions.
Hi Marc,
Which issue
ceph-win-latest is where the "Ceph 16.0.0 for
Windows x64 - Latest Build" button points to.
Thanks,
Ilya
>
> Thanks,
> Rob
>
>
> -Original Message-
> From: Ilya Dryomov
> Sent: Monday, July 19, 2021 8:04 AM
> To: Lucian Petrut
>
On Tue, Jun 29, 2021 at 4:03 PM Lucian Petrut
wrote:
>
> Hi,
>
> It’s a compatibility issue, we’ll have to update the Windows Pacific build.
Hi Lucian,
Did you get a chance to update the build?
I assume that means the MSI installer at [1]? I see [2] but the MSI
bundle still seems to contain th
On Thu, Jul 15, 2021 at 11:55 PM Robert W. Eckert wrote:
>
> I would like to directly mount cephfs from the windows client, and keep
> getting the error below.
>
>
> PS C:\Program Files\Ceph\bin> .\ceph-dokan.exe -l x
> 2021-07-15T17:41:30.365Eastern Daylight Time 4 -1 monclient(hunting):
> hand
On Thu, Jul 1, 2021 at 10:36 AM Oliver Dzombic wrote:
>
>
>
> Hi,
>
> mapping of rbd volumes fails clusterwide.
Hi Oliver,
Clusterwide -- meaning on more than one client node?
>
> The volumes that are mapped are OK, but new volumes won't map.
>
> Receiving errors like:
>
> (108) Cannot send aft
On Thu, Jul 1, 2021 at 10:50 AM Jan Kasprzak wrote:
>
> Ilya Dryomov wrote:
> : On Thu, Jul 1, 2021 at 8:37 AM Jan Kasprzak wrote:
> : >
> : > # rbd snap unprotect one/one-1312@snap
> : > 2021-07-01 08:28:40.747 7f3cb6ffd700 -1 librbd::SnapshotUnprotectRequest:
>
On Thu, Jul 1, 2021 at 9:48 AM Jan Kasprzak wrote:
>
> Ilya Dryomov wrote:
> : On Thu, Jul 1, 2021 at 8:37 AM Jan Kasprzak wrote:
> : >
> : > Hello, Ceph users,
> : >
> : > How can I figure out why it is not possible to unprotect a snapshot
> : >
On Thu, Jul 1, 2021 at 8:37 AM Jan Kasprzak wrote:
>
> Hello, Ceph users,
>
> How can I figure out why it is not possible to unprotect a snapshot
> in a RBD image? I use this RBD pool for OpenNebula, and somehow there
> is a snapshot in one image, which OpenNebula does not see. So I wanted
ed for ..."
splats in dmesg.
Thanks,
Ilya
>
>
> On Wed, Jun 23, 2021 at 11:25 AM Ilya Dryomov wrote:
> >
> > On Wed, Jun 23, 2021 at 9:59 AM Matthias Ferdinand
> > wrote:
> > >
> > > On Tue, Jun 22, 2021 at 02:36:00PM +02
On Wed, Jun 23, 2021 at 3:36 PM Marc wrote:
>
> From what kernel / ceph version is krbd usage on an OSD node problematic?
>
> Currently I am running Nautilus 14.2.11 and el7 3.10 kernel without any
> issues.
>
> I can remember using a cephfs mount without any issues as well, until some
> specific
On Wed, Jun 23, 2021 at 9:59 AM Matthias Ferdinand wrote:
>
> On Tue, Jun 22, 2021 at 02:36:00PM +0200, Ml Ml wrote:
> > Hello List,
> >
> > all of a sudden I can not mount a specific rbd device anymore:
> >
> > root@proxmox-backup:~# rbd map backup-proxmox/cluster5 -k
> > /etc/ceph/ceph.client.admin.k
On Wed, Jun 9, 2021 at 1:38 PM Wido den Hollander wrote:
>
> Hi,
>
> While doing some benchmarks I have two identical Ceph clusters:
>
> 3x SuperMicro 1U
> AMD Epyc 7302P 16C
> 256GB DDR
> 4x Samsung PM983 1,92TB
> 100Gbit networking
>
> I tested on such a setup with v16.2.4 with fio:
>
> bs=4k
>
On Wed, Jun 9, 2021 at 1:36 PM Peter Lieven wrote:
>
> Am 09.06.21 um 13:28 schrieb Ilya Dryomov:
> > On Wed, Jun 9, 2021 at 11:24 AM Peter Lieven wrote:
> >> Hi,
> >>
> >>
> >> we currently run into an issue where a rbd ls for a namespace ret
On Wed, Jun 9, 2021 at 11:24 AM Peter Lieven wrote:
>
> Hi,
>
>
> we currently run into an issue where an "rbd ls" for a namespace returns ENOENT
> for some of the images in that namespace.
>
>
> /usr/bin/rbd --conf=XXX --id XXX ls
> 'mypool/28ef9470-76eb-4f77-bc1b-99077764ff7c' -l --format=json
>
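For comparison, a plain name listing only reads the namespace directory, while the long listing opens each image, which is presumably where a per-image ENOENT can surface; a sketch:

$ rbd ls mypool --namespace 28ef9470-76eb-4f77-bc1b-99077764ff7c
$ rbd ls -l mypool --namespace 28ef9470-76eb-4f77-bc1b-99077764ff7c --format=json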
On Tue, Jun 8, 2021 at 9:20 PM Phil Merricks wrote:
>
> Hey folks,
>
> I have deployed a 3 node dev cluster using cephadm. Deployment went
> smoothly and all seems well.
>
> If I try to mount a CephFS from a client node, 2/3 mons crash however.
> I've begun picking through the logs to see what I
On Sun, May 16, 2021 at 8:06 PM Markus Kienast wrote:
>
> Am So., 16. Mai 2021 um 19:38 Uhr schrieb Ilya Dryomov :
>>
>> On Sun, May 16, 2021 at 4:18 PM Markus Kienast wrote:
>> >
>> > Am So., 16. Mai 2021 um 15:36 Uhr schrieb Ilya Dryomov
>> > :
On Sun, May 16, 2021 at 4:18 PM Markus Kienast wrote:
>
> Am So., 16. Mai 2021 um 15:36 Uhr schrieb Ilya Dryomov :
>>
>> On Sun, May 16, 2021 at 12:54 PM Markus Kienast wrote:
>> >
>> > Hi Ilya,
>> >
>> > unfortunately I can not find any &q
On Sun, May 16, 2021 at 12:54 PM Markus Kienast wrote:
>
> Hi Ilya,
>
> unfortunately I can not find any "missing primary copy of ..." error in the
> logs of my 3 OSDs.
> The NVMe disks are also brand new and there is not much traffic on them.
>
> The only error keyword I find are those two messa
On Fri, May 14, 2021 at 8:20 AM Rainer Krienke wrote:
>
> Hello,
>
> has the "negative progress bug" also been fixed in 14.2.21? I cannot
> find any info about this in the changelog?
Unfortunately not -- this was a hotfix release driven by rgw and
dashboard CVEs.
Thanks,
Ilya
On Tue, May 11, 2021 at 10:50 AM Konstantin Shalygin wrote:
>
> Hi Ilya,
>
> On 3 May 2021, at 14:15, Ilya Dryomov wrote:
>
> I don't think empty directories matter at this point. You may not have
> had 12 OSDs at any point in time, but the max_osd value appears to hav
On Mon, May 3, 2021 at 12:24 PM Magnus Harlander wrote:
>
> Am 03.05.21 um 11:22 schrieb Ilya Dryomov:
>
> There is a 6th osd directory on both machines, but it's empty
>
> [root@s0 osd]# ll
> total 0
> drwxrwxrwt. 2 ceph ceph 200 2. Mai 16:31 ceph-1
> drwxrwxrw
On Mon, May 3, 2021 at 12:27 PM Magnus Harlander wrote:
>
> Am 03.05.21 um 12:25 schrieb Ilya Dryomov:
>
> ceph osd setmaxosd 10
>
> Bingo! Mount works again.
>
> Very strange things are going on here (-:
>
> Thanx a lot for now!! If I can help to track it down
On Mon, May 3, 2021 at 12:00 PM Magnus Harlander wrote:
>
> Am 03.05.21 um 11:22 schrieb Ilya Dryomov:
>
> max_osd 12
>
> I never had more than 10 OSDs on the two OSD nodes of this cluster.
>
> I was running a 3 osd-node cluster earlier with more than 10
> osds, but t
On Mon, May 3, 2021 at 9:20 AM Magnus Harlander wrote:
>
> Am 03.05.21 um 00:44 schrieb Ilya Dryomov:
>
> On Sun, May 2, 2021 at 11:15 PM Magnus Harlander wrote:
>
> Hi,
>
> I know there is a thread about problems with mounting cephfs with 5.11
> kernels.
>
> .
On Sun, May 2, 2021 at 11:15 PM Magnus Harlander wrote:
>
> Hi,
>
> I know there is a thread about problems with mounting cephfs with 5.11
> kernels.
> I tried everything that's mentioned there, but I still can not mount a cephfs
> from an octopus node.
>
> I verified:
>
> - I can not mount with
On Sun, Apr 25, 2021 at 11:42 AM Ilya Dryomov wrote:
>
> On Sun, Apr 25, 2021 at 12:37 AM Markus Kienast wrote:
> >
> > I am seeing these messages when booting from RBD and booting hangs there.
> >
> > libceph: get_reply osd2 tid 1459933 data 3248128 >
On Sun, Apr 25, 2021 at 12:37 AM Markus Kienast wrote:
>
> I am seeing these messages when booting from RBD and booting hangs there.
>
> libceph: get_reply osd2 tid 1459933 data 3248128 > preallocated
> 131072, skipping
>
> However, Ceph Health is OK, so I have no idea what is going on. I
> reboot
On Fri, Apr 23, 2021 at 1:12 PM Boris Behrens wrote:
>
>
>
> Am Fr., 23. Apr. 2021 um 13:00 Uhr schrieb Ilya Dryomov :
>>
>> On Fri, Apr 23, 2021 at 12:46 PM Boris Behrens wrote:
>> >
>> >
>> >
>> > Am Fr., 23. Apr. 2021 um 12:16 Uhr s
On Fri, Apr 23, 2021 at 12:46 PM Boris Behrens wrote:
>
>
>
> Am Fr., 23. Apr. 2021 um 12:16 Uhr schrieb Ilya Dryomov :
>>
>> On Fri, Apr 23, 2021 at 12:03 PM Boris Behrens wrote:
>> >
>> >
>> >
>> > Am Fr., 23. Apr. 2021 um 11:52 Uhr
On Fri, Apr 23, 2021 at 12:03 PM Boris Behrens wrote:
>
>
>
> Am Fr., 23. Apr. 2021 um 11:52 Uhr schrieb Ilya Dryomov :
>>
>>
>> This snippet confirms my suspicion. Unfortunately without a verbose
>> log from that VM from three days ago (i.e. when it got i
On Fri, Apr 23, 2021 at 9:16 AM Boris Behrens wrote:
>
>
>
> Am Do., 22. Apr. 2021 um 20:59 Uhr schrieb Ilya Dryomov :
>>
>> On Thu, Apr 22, 2021 at 7:33 PM Boris Behrens wrote:
>> >
>> >
>> >
>> > Am Do., 22. Apr. 2021 um 18:30 Uhr s
On Fri, Apr 23, 2021 at 6:57 AM Cem Zafer wrote:
>
> Hi Ilya,
> Sorry, totally my mistake. I just saw that the configuration on mars looks like
> that.
>
> auth_cluster_required = none
> auth_service_required = none
> auth_client_required = none
>
> So I changed none to cephx, which solved the problem.
> Tha
On Thu, Apr 22, 2021 at 10:16 PM Cem Zafer wrote:
>
> This client ceph-common version is 16.2.0, here are the outputs.
>
> indiana@mars:~$ ceph -v
> ceph version 16.2.0 (0c2054e95bcd9b30fdd908a79ac1d8bbc3394442) pacific
> (stable)
>
> indiana@mars:~$ dpkg -l | grep -i ceph-common
> ii ceph-commo
On Thu, Apr 22, 2021 at 9:24 PM Cem Zafer wrote:
>
> Sorry to disturb you again, but changing the value to yes doesn't affect
> anything. Executing a simple ceph command from the client returns the following
> error again. I'm not sure it is related to that parameter.
> Have you any idea what
On Thu, Apr 22, 2021 at 6:00 PM Boris Behrens wrote:
>
>
>
> Am Do., 22. Apr. 2021 um 17:27 Uhr schrieb Ilya Dryomov :
>>
>> On Thu, Apr 22, 2021 at 5:08 PM Boris Behrens wrote:
>> >
>> >
>> >
>> > Am Do., 22. Apr. 2021 um 16:43 Uhr s
On Thu, Apr 22, 2021 at 6:01 PM Cem Zafer wrote:
>
> Thanks Ilya, for pointing me in the right direction.
> So if I change auth_allow_insecure_global_id_reclaim to true, does that mean older
> userspace clients are allowed to connect to the cluster?
Yes, but upgrading all clients and setting it to false is recommend
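A minimal sketch of the follow-up, assuming the standard health warning names:

$ ceph health detail | grep GLOBAL_ID                               # see which clients still reclaim insecurely
$ ceph config set mon auth_allow_insecure_global_id_reclaim false   # once every client is upgraded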
On Thu, Apr 22, 2021 at 5:08 PM Boris Behrens wrote:
>
>
>
> Am Do., 22. Apr. 2021 um 16:43 Uhr schrieb Ilya Dryomov :
>>
>> On Thu, Apr 22, 2021 at 4:20 PM Boris Behrens wrote:
>> >
>> > Hi,
>> >
>> > I have a customer VM that is runni
al_id in a secure fashion. See
https://docs.ceph.com/en/latest/security/CVE-2021-20288/
for details.
Thanks,
Ilya
>
> On Thu, Apr 22, 2021 at 4:49 PM Ilya Dryomov wrote:
>>
>> On Thu, Apr 22, 2021 at 3:24 PM Cem Zafer wrote:
>> >
>> > Hi,
>
On Thu, Apr 22, 2021 at 4:20 PM Boris Behrens wrote:
>
> Hi,
>
> I have a customer VM that is running fine, but I can not make snapshots
> anymore.
> rbd snap create rbd/IMAGE@test-bb-1
> just hangs forever.
Hi Boris,
Run
$ rbd snap create rbd/IMAGE@test-bb-1 --debug-ms=1 --debug-rbd=20
let it
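To capture that output for later inspection, the same command can be pointed at a log file, roughly (a sketch; the path is a placeholder):

$ rbd snap create rbd/IMAGE@test-bb-1 --debug-ms=1 --debug-rbd=20 --log-file=/tmp/rbd-snap-create.log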
On Thu, Apr 22, 2021 at 3:24 PM Cem Zafer wrote:
>
> Hi,
> I have recently added a new host to ceph and copied the /etc/ceph directory to
> the new host. When I execute a simple ceph command such as "ceph -s", I get the
> following error.
>
> 021-04-22T14:50:46.226+0300 7ff541141700 -1 monclient(hunting):
>
On Tue, Apr 20, 2021 at 11:30 AM Dan van der Ster wrote:
>
> On Tue, Apr 20, 2021 at 11:26 AM Ilya Dryomov wrote:
> >
> > On Tue, Apr 20, 2021 at 2:01 AM David Galloway wrote:
> > >
> > > This is the 20th bugfix release in the Nautilus stable series. It
>
On Tue, Apr 20, 2021 at 2:02 AM David Galloway wrote:
>
> This is the first bugfix release in the Pacific stable series. It
> addresses a security vulnerability in the Ceph authentication framework.
> We recommend users to update to this release. For a detailed release
> notes with links & change
On Tue, Apr 20, 2021 at 1:56 AM David Galloway wrote:
>
> This is the 11th bugfix release in the Octopus stable series. It
> addresses a security vulnerability in the Ceph authentication framework.
> We recommend users to update to this release. For a detailed release
> notes with links & changel