Re: [ceph-users] https://ceph-storage.slack.com

2018-10-10 Thread Katie Holly
Me too please. Also, there is https://github.com/rauchg/slackin for allowing new users to join the Slack team. Best regards Katie Holly On 10/10/18 5:31 PM, David Turner wrote: I would like an invite too. drakonst...@gmail.com On Wed, Sep 19, 2018 at 1:02 PM

Re: [ceph-users] https://ceph-storage.slack.com

2018-10-10 Thread Konstantin Shalygin
why would a ceph slack be invite only? Because this is not Telegram. k

Re: [ceph-users] OSD log being spammed with BlueStore stupidallocator dump

2018-10-10 Thread David Turner
Not a resolution, but an idea that you've probably thought of. Disabling logging on any affected OSDs (possibly just all of them) seems like a needed step to be able to keep working with this cluster to finish the upgrade and get it healthier. On Wed, Oct 10, 2018 at 6:37 PM Wido den Hollander
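For what it's worth, a minimal sketch of what "disabling logging" could look like in practice, assuming the dump lines come from the bluestore/bluefs debug channels (the second option is the blunt instrument if they turn out to be emitted regardless of debug level):

  # Option 1: lower the allocator-related debug levels on all OSDs at runtime
  ceph tell osd.* injectargs '--debug_bluestore 0/0 --debug_bluefs 0/0'

  # Option 2: stop file logging entirely for the affected OSDs
  # (ceph.conf on the OSD hosts, then restart the daemons)
  [osd]
      log file = /dev/null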

Re: [ceph-users] OSD log being spammed with BlueStore stupidallocator dump

2018-10-10 Thread Wido den Hollander
On 10/11/2018 12:08 AM, Wido den Hollander wrote: > Hi, > > On a Luminous cluster running a mix of 12.2.4, 12.2.5 and 12.2.8 I'm > seeing OSDs writing heavily to their logfiles spitting out these lines: > > > 2018-10-10 21:52:04.019037 7f90c2f0f700 0 stupidalloc 0x0x55828ae047d0 > dump

[ceph-users] OSD log being spammed with BlueStore stupidallocator dump

2018-10-10 Thread Wido den Hollander
Hi, On a Luminous cluster running a mix of 12.2.4, 12.2.5 and 12.2.8 I'm seeing OSDs writing heavily to their logfiles spitting out these lines: 2018-10-10 21:52:04.019037 7f90c2f0f700 0 stupidalloc 0x0x55828ae047d0 dump 0x15cd2078000~34000 2018-10-10 21:52:04.019038 7f90c2f0f700 0

Re: [ceph-users] bcache, dm-cache support

2018-10-10 Thread Maged Mokhtar
On 10/10/18 21:08, Ilya Dryomov wrote: On Wed, Oct 10, 2018 at 8:48 PM Kjetil Joergensen wrote: Hi, We tested bcache, dm-cache/lvmcache, and one more whose name eludes me with PCIe NVMe on top of large spinning rust drives behind a SAS3 expander - and decided this was not for us. This

Re: [ceph-users] add existing rbd to new tcmu iscsi gateways

2018-10-10 Thread Brady Deetz
Looks like that may have recently been broken. Unfortunately no real logs of use in rbd-target-api.log or rbd-target-gw.log. Is there an increased log level I can enable for whatever web-service is handling this? [root@dc1srviscsi01 ~]# rbd -p vmware_ssd_metadata --data-pool vmware_ssd --size 2T
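For context, the general form of what is being attempted, creating an image whose data objects live in a separate (e.g. erasure-coded) pool, is sketched below; the pool and image names are placeholders, not completions of the truncated command above:

  # 2 TiB image: metadata in one pool, data objects in another
  rbd create --pool <metadata_pool> --data-pool <data_pool> --size 2T <image_name>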

Re: [ceph-users] tcmu iscsi (failover not supported)

2018-10-10 Thread Brady Deetz
Thanks Jason, That got us running. We'll see how it goes. On Wed, Oct 10, 2018 at 2:41 PM Jason Dillaman wrote: > The latest master branch version on shaman should be functional: > > [1] https://shaman.ceph.com/repos/ceph-iscsi-config/ > [2] https://shaman.ceph.com/repos/ceph-iscsi-cli > [3]

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-10 Thread Jason Dillaman
Can you add "debug = true" to your "iscsi-gateway.cfg" and restart the rbd-target-api on osd03 to see if that provides additional details of the failure? Also, if you don't mind getting your hands dirty, you could temporarily apply this patch [1] to "/usr/bin/rbd-target-api" to see if it can catch

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-10 Thread Steven Vacaroaia
Yes, I am! [root@osd01 ~]# uname -a Linux osd01.tor.medavail.net 4.18.11-1.el7.elrepo.x86_64 [root@osd03 latest]# uname -a Linux osd03.tor.medavail.net 4.18.11-1.el7.elrepo.x86_64 On Wed, 10 Oct 2018 at 16:22, Jason Dillaman wrote: > Are you running the same kernel version on both nodes? >

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-10 Thread Jason Dillaman
Are you running the same kernel version on both nodes? On Wed, Oct 10, 2018 at 4:18 PM Steven Vacaroaia wrote: > > so, it seems OSD03 is having issues when creating disks ( I can create target > and hosts ) - here is an excerpt from api.log > Please note I can create disk on the other node > >

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-10 Thread Steven Vacaroaia
so, it seems OSD03 is having issues when creating disks (I can create targets and hosts) - here is an excerpt from api.log. Please note I can create disks on the other node. 2018-10-10 16:03:03,369 DEBUG [lun.py:381:allocate()] - LUN.allocate starting, listing rbd devices 2018-10-10 16:03:03,381

Re: [ceph-users] tcmu iscsi (failover not supported)

2018-10-10 Thread Jason Dillaman
The latest master branch version on shaman should be functional: [1] https://shaman.ceph.com/repos/ceph-iscsi-config/ [2] https://shaman.ceph.com/repos/ceph-iscsi-cli [3] https://shaman.ceph.com/repos/tcmu-runner/ On Wed, Oct 10, 2018 at 3:39 PM Brady Deetz wrote: > > Here's where we are now. >

Re: [ceph-users] tcmu iscsi (failover not supported)

2018-10-10 Thread Brady Deetz
Here's where we are now. After cherry-picking that patch into ceph-iscsi-config tags/v2.6 and cleaning up the merge conflicts, the rbd-target-gw service would not start. With the release of ceph-iscsi-config v2.6 (no cherry-picked commits) and tcmu-runner v1.3.0, the originally described errors

Re: [ceph-users] bcache, dm-cache support

2018-10-10 Thread Ilya Dryomov
On Wed, Oct 10, 2018 at 8:48 PM Kjetil Joergensen wrote: > > Hi, > > We tested bcache, dm-cache/lvmcache, and one more whose name eludes me with > PCIe NVMe on top of large spinning rust drives behind a SAS3 expander - and > decided this was not for us. > > This was probably jewel with

Re: [ceph-users] https://ceph-storage.slack.com

2018-10-10 Thread Gregory Farnum
On Wed, Oct 10, 2018 at 11:23 AM Ronny Aasen wrote: > > On 18.09.2018 21:15, Alfredo Daniel Rezinovsky wrote: > > Can anyone add me to this slack? > > > > with my email alfrenov...@gmail.com > > > > Thanks. > > > why would a ceph slack be invite only? > Also is the slack bridged to matrix? room

Re: [ceph-users] bcache, dm-cache support

2018-10-10 Thread Kjetil Joergensen
Hi, We tested bcache, dm-cache/lvmcache, and one more whose name eludes me with PCIe NVMe on top of large spinning rust drives behind a SAS3 expander - and decided this was not for us. This was probably jewel with filestore, and our primary reason for trying to go down this path was that

Re: [ceph-users] tcmu iscsi (failover not supported)

2018-10-10 Thread Mike Christie
On 10/10/2018 01:13 PM, Brady Deetz wrote: > ceph-iscsi-config v2.6 https://github.com/ceph/ceph-iscsi-config.git > Ignore that. ceph-iscsi-config 2.6 enabled explicit ALUA in anticipation > of the tcmu-runner support. We are about to release 2.7 which matches > tcmu-runner

Re: [ceph-users] https://ceph-storage.slack.com

2018-10-10 Thread Ronny Aasen
On 18.09.2018 21:15, Alfredo Daniel Rezinovsky wrote: Can anyone add me to this slack? with my email alfrenov...@gmail.com Thanks. why would a ceph slack be invite only? Also, is the slack bridged to Matrix? Room ID? Kind regards, Ronny Aasen

Re: [ceph-users] Does anyone use interactive CLI mode?

2018-10-10 Thread Brady Deetz
I run 2 clusters and have never purposely executed the interactive CLI. I say remove the code bloat. On Wed, Oct 10, 2018 at 9:20 AM John Spray wrote: > Hi all, > > Since time immemorial, the Ceph CLI has had a mode where when run with > no arguments, you just get an interactive prompt that

Re: [ceph-users] tcmu iscsi (failover not supported)

2018-10-10 Thread Brady Deetz
Thanks for the response. I am using gwcli for configuration. We are not using ansible. I built the following from git releases because CentOS mirrors are far behind. rtslib-fb v2.1.fb69 https://github.com/open-iscsi/rtslib-fb.git targetcli-fb v2.1.fb49

Re: [ceph-users] Namespaces and RBD

2018-10-10 Thread Jason Dillaman
On Wed, Oct 10, 2018 at 11:57 AM Florian Florensa wrote: > > Hello everyone, > > I noticed sometime ago the namespaces appeared in RBD documentation, > and by searching it looks like it was targeted for mimic, so i wanted > to know if anyone had any experiences with it, and if it is going to > be

Re: [ceph-users] tcmu iscsi (failover not supported)

2018-10-10 Thread Mike Christie
On 10/10/2018 12:40 PM, Mike Christie wrote: > On 10/09/2018 05:09 PM, Brady Deetz wrote: >> I'm trying to replace my old single point of failure iscsi gateway with >> the shiny new tcmu-runner implementation. I've been fighting a Windows >> initiator all day. I haven't tested any other

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-10 Thread Mike Christie
On 10/10/2018 12:52 PM, Mike Christie wrote: > On 10/10/2018 08:21 AM, Steven Vacaroaia wrote: >> Hi Jason, >> Thanks for your prompt responses >> >> I have used same iscsi-gateway.cfg file - no security changes - just >> added prometheus entry >> There is no iscsi-gateway.conf but the

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-10 Thread Mike Christie
On 10/10/2018 08:21 AM, Steven Vacaroaia wrote: > Hi Jason, > Thanks for your prompt responses > > I have used same iscsi-gateway.cfg file - no security changes - just > added prometheus entry > There is no iscsi-gateway.conf but the gateway.conf object is created > and has correct entries > >

[ceph-users] Inconsistent PG, repair doesn't work

2018-10-10 Thread Brett Chancellor
Hi all, I have an inconsistent PG. I've tried running a repair and manual deep scrub, but neither operation seems to actually do anything. I've also tried stopping the primary OSD, removing the object, and restarting the OSD. The system copies the object back, but the inconsistent PG ERR
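For anyone following along, the usual diagnostics for an inconsistent PG look like this (the PG ID is a placeholder, and the reporter may already have run some of these):

  # show which objects/shards the scrub flagged as inconsistent
  rados list-inconsistent-obj <pg_id> --format=json-pretty

  # re-check and attempt an automatic repair
  ceph pg deep-scrub <pg_id>
  ceph pg repair <pg_id>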

Re: [ceph-users] tcmu iscsi (failover not supported)

2018-10-10 Thread Mike Christie
On 10/09/2018 05:09 PM, Brady Deetz wrote: > I'm trying to replace my old single point of failure iscsi gateway with > the shiny new tcmu-runner implementation. I've been fighting a Windows > initiator all day. I haven't tested any other initiators, as Windows is > currently all we use iscsi for.

[ceph-users] Namespaces and RBD

2018-10-10 Thread Florian Florensa
Hello everyone, I noticed some time ago that namespaces appeared in the RBD documentation, and from searching it looks like they were targeted for Mimic, so I wanted to know if anyone has any experience with them, and whether they are going to be available in Mimic and when. Regards, Florian
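For reference, the namespace CLI as it eventually shipped looks roughly like the sketch below; the pool, namespace and image names are placeholders, and availability in a particular release should be checked against that release's notes rather than this post:

  # create a namespace inside an existing pool and put an image in it
  rbd namespace create --pool mypool --namespace project-a
  rbd create --pool mypool --namespace project-a --size 10G image1
  rbd namespace list --pool mypool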

Re: [ceph-users] https://ceph-storage.slack.com

2018-10-10 Thread David Turner
I would like an invite too. drakonst...@gmail.com On Wed, Sep 19, 2018 at 1:02 PM Gregory Farnum wrote: > Done. :) > > On Tue, Sep 18, 2018 at 12:15 PM Alfredo Daniel Rezinovsky < > alfredo.rezinov...@ingenieria.uncuyo.edu.ar> wrote: > >> Can anyone add me to this slack? >> >> with my email

Re: [ceph-users] HEALTH_WARN 2 osd(s) have {NOUP, NODOWN, NOIN, NOOUT} flags set

2018-10-10 Thread David Turner
There is a newer [1] feature to be able to set flags per OSD instead of cluster-wide. This way you can prevent a problem host from marking its OSDs down while the rest of the cluster is capable of doing so. [2] These commands ought to clear up your status. [1]
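For illustration, the per-OSD flag commands look like the sketch below (osd.12 is only an example ID, not one of the reporter's OSDs):

  # set flags on a single OSD
  ceph osd add-noout osd.12
  ceph osd add-nodown osd.12

  # clear them again
  ceph osd rm-noup osd.12
  ceph osd rm-nodown osd.12
  ceph osd rm-noin osd.12
  ceph osd rm-noout osd.12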

Re: [ceph-users] Does anyone use interactive CLI mode?

2018-10-10 Thread Sergey Malinin
All uncommon tasks can easily be done using basic shell scripting, so I don't see any practical use for such an interface. > On 10.10.2018, at 17:19, John Spray wrote: > > Hi all, > > Since time immemorial, the Ceph CLI has had a mode where when run with > no arguments, you just get an

Re: [ceph-users] Does anyone use interactive CLI mode?

2018-10-10 Thread Mark Johnston
On Wed, 2018-10-10 at 15:19 +0100, John Spray wrote: > Since time immemorial, the Ceph CLI has had a mode where when run with > no arguments, you just get an interactive prompt that lets you run > commands without "ceph" at the start. > > I recently discovered that we actually broke this in

Re: [ceph-users] Does anyone use interactive CLI mode?

2018-10-10 Thread David Turner
I know that it existed, but I've never bothered using it. In applications like Python, where you can get a different reaction by interacting with it line by line and setting up an environment, it is very helpful. Ceph, however, doesn't have any such environment variables that would make this more

[ceph-users] Does anyone use interactive CLI mode?

2018-10-10 Thread John Spray
Hi all, Since time immemorial, the Ceph CLI has had a mode where when run with no arguments, you just get an interactive prompt that lets you run commands without "ceph" at the start. I recently discovered that we actually broke this in Mimic[1], and it seems that nobody noticed! So the
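For readers who have never used it, the mode in question looks roughly like this (output omitted; exit with Ctrl-D):

  $ ceph          # no arguments drops into an interactive prompt
  ceph> health
  ceph> osd tree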

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-10 Thread Steven Vacaroaia
Hi Jason, Thanks for your prompt responses. I have used the same iscsi-gateway.cfg file - no security changes - just added the prometheus entry. There is no iscsi-gateway.conf, but the gateway.conf object is created and has correct entries. iscsi-gateway.cfg is identical and contains the following: [config]

Re: [ceph-users] Best version and OS for CephFS

2018-10-10 Thread Daniel Carrasco
Ok, thanks. Then I'll use standby-replay mode (typo in the other mail). Greetings!! On Wed, Oct 10, 2018 at 13:06, Sergey Malinin () wrote: > Standby MDS is required for HA. It can be configured in standby-replay > mode for faster failover. Otherwise, replaying the journal is incurred >

Re: [ceph-users] Best version and OS for CephFS

2018-10-10 Thread Sergey Malinin
Standby MDS is required for HA. It can be configured in standby-replay mode for faster failover. Otherwise, a journal replay is incurred, which can take somewhat longer. > On 10.10.2018, at 13:57, Daniel Carrasco wrote: > > Thanks for your response. > > I'll point in that direction. > I
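For the releases under discussion (Luminous/Mimic), standby-replay was typically enabled per daemon in ceph.conf, roughly as sketched below (the daemon section name is a placeholder):

  # ceph.conf section for the standby daemon
  [mds.node-b]
      mds standby replay = true
      mds standby for rank = 0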

Re: [ceph-users] Best version and OS for CephFS

2018-10-10 Thread Daniel Carrasco
Thanks for your response. I'll point in that direction. I also need fast recovery in case the MDS dies, so is a standby MDS recommended, or is recovery fast enough to be useful? Greetings! On Wed, Oct 10, 2018 at 12:26, Sergey Malinin () wrote: > > > On 10.10.2018, at 10:49, Daniel

Re: [ceph-users] Best version and OS for CephFS

2018-10-10 Thread Sergey Malinin
> On 10.10.2018, at 10:49, Daniel Carrasco wrote: > > Which is the best configuration to avoid those MDS problems? Single active MDS with lots of RAM.

[ceph-users] Best version and OS for CephFS

2018-10-10 Thread Daniel Carrasco
Hello, I'm trying to create a simple cluster to achieve HA for a webpage: - Three nodes with MDS, OSD, MON, and MGR - Replication factor of three (one copy on every node) - Two active and a backup MDS to tolerate the failure of one server - CephFS mounted using the kernel driver - One disk by
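As a point of reference for the layout sketched above, a kernel-driver CephFS mount from a client typically looks like this (the monitor addresses and secret path are placeholders):

  mount -t ceph 10.0.0.1:6789,10.0.0.2:6789,10.0.0.3:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret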