Re: [ceph-users] TCMU Runner: Could not check lock ownership. Error: Cannot send after transport endpoint shutdown

2019-10-22 Thread Mike Christie
runs VMs on each > of the LUNs) > > > Ok, I'll update this tomorrow with the logs you asked for ... > > ---- > *From:* Mike Christie > *Sent:* Tuesday, 22 October 2019 19:43:40 > *To:* Ki

Re: [ceph-users] TCMU Runner: Could not check lock ownership. Error: Cannot send after transport endpoint shutdown

2019-10-22 Thread Mike Christie
Looks like it has the fix: commit dd7dd51c6cafa8bbcd3ca0eef31fb378b27ff499 Author: Mike Christie Date: Mon Jan 14 17:06:27 2019 -0600 "Allow some commands to run while taking lock", so we should not be seeing it. Could you turn on tcmu-runner debugging? Open the file /etc/tcmu/tcmu.conf and set: log_level = 5
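
For reference, the change described above amounts to the following; the config path and value are taken from the message, while the restart step is an assumption that tcmu-runner runs as a systemd service (newer releases may pick up log_level changes without a restart, and restarting a live gateway can briefly disrupt I/O):

    # In /etc/tcmu/tcmu.conf, raise logging to debug:
    log_level = 5

    # Apply the change if it is not picked up automatically:
    systemctl restart tcmu-runner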

Re: [ceph-users] ceph iscsi question

2019-10-17 Thread Mike Christie
On 10/17/2019 10:52 AM, Mike Christie wrote: > On 10/16/2019 01:35 AM, 展荣臻(信泰) wrote: >> hi all, >> we deploy ceph with ceph-ansible. osds, mons and the iscsi daemons run in >> docker. >> I create the iscsi target according to >> https://docs.ceph.com/docs/lumin

Re: [ceph-users] ceph iscsi question

2019-10-17 Thread Mike Christie
On 10/16/2019 01:35 AM, 展荣臻(信泰) wrote: > hi all, > we deploy ceph with ceph-ansible. osds, mons and the iscsi daemons run in > docker. > I create the iscsi target according to > https://docs.ceph.com/docs/luminous/rbd/iscsi-target-cli/. > I discovered and logged in to the iscsi target on another host, as

Re: [ceph-users] tcmu-runner: mismatched sizes for rbd image size

2019-10-15 Thread Mike Christie
runner starts up without any errors ;) > > > ---- > *From:* Mike Christie > *Sent:* Thursday, 3 October 2019 00:20:51 > *To:* Kilian Ries; dilla...@redhat.com > *Cc:* ceph-users@lists.ceph.com > *Subject

Re: [ceph-users] tcmu-runner: mismatched sizes for rbd image size

2019-10-02 Thread Mike Christie
On 10/02/2019 02:15 PM, Kilian Ries wrote: > Ok i just compared my local python files and the git commit you sent me > - it really looks like i have the old files installed. All the changes > are missing in my local files. > > > > Where can i get a new ceph-iscsi-config package that has the
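
A quick way to confirm which package the installed Python files actually came from on an RPM-based gateway; the module path shown is an assumption and may differ by distro and Python version:

    rpm -q ceph-iscsi-config
    rpm -ql ceph-iscsi-config | grep ceph_iscsi_config
    rpm -V ceph-iscsi-config    # flags files that differ from the packaged versions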

Re: [ceph-users] reproducible rbd-nbd crashes

2019-08-15 Thread Mike Christie
On 08/14/2019 06:55 PM, Mike Christie wrote: > On 08/14/2019 02:09 PM, Mike Christie wrote: >> On 08/14/2019 07:35 AM, Marc Schöchlin wrote: >>>>> 3. I wonder if we are hitting a bug with PF_MEMALLOC Ilya hit with krbd. >>>>> He removed t

Re: [ceph-users] reproducible rbd-nbd crashes

2019-08-14 Thread Mike Christie
On 08/14/2019 02:09 PM, Mike Christie wrote: > On 08/14/2019 07:35 AM, Marc Schöchlin wrote: >>>> 3. I wonder if we are hitting a bug with PF_MEMALLOC Ilya hit with krbd. >>>> He removed that code from the krbd. I will ping him on that. >> >> Interesting. I

Re: [ceph-users] reproducible rbd-nbd crashes

2019-08-14 Thread Mike Christie
On 08/14/2019 07:35 AM, Marc Schöchlin wrote: >>> 3. I wonder if we are hitting a bug with PF_MEMALLOC Ilya hit with krbd. >>> He removed that code from the krbd. I will ping him on that. > > Interesting. I activated coredumps for those processes - probably we can > find something interesting

Re: [ceph-users] reproducible rbd-nbd crashes

2019-08-13 Thread Mike Christie
On 08/13/2019 07:04 PM, Mike Christie wrote: > On 07/31/2019 05:20 AM, Marc Schöchlin wrote: >> Hello Jason, >> >> it seems that there is something wrong in the rbd-nbd implementation. >> (added this information also at https://tracker.ceph.com/issues/40822)

Re: [ceph-users] reproducible rbd-nbd crashes

2019-08-13 Thread Mike Christie
On 07/31/2019 05:20 AM, Marc Schöchlin wrote: > Hello Jason, > > it seems that there is something wrong in the rbd-nbd implementation. > (added this information also at https://tracker.ceph.com/issues/40822) > > The problem does not seem to be related to kernel releases, filesystem types or > the

Re: [ceph-users] tcmu-runner: "Acquired exclusive lock" every 21s

2019-08-06 Thread Mike Christie
On 08/06/2019 11:28 AM, Mike Christie wrote: > On 08/06/2019 07:51 AM, Matthias Leopold wrote: >> >> >> Am 05.08.19 um 18:31 schrieb Mike Christie: >>> On 08/05/2019 05:58 AM, Matthias Leopold wrote: >>>> Hi, >>>> >>>> I'm s

Re: [ceph-users] tcmu-runner: "Acquired exclusive lock" every 21s

2019-08-06 Thread Mike Christie
On 08/06/2019 07:51 AM, Matthias Leopold wrote: > > > Am 05.08.19 um 18:31 schrieb Mike Christie: >> On 08/05/2019 05:58 AM, Matthias Leopold wrote: >>> Hi, >>> >>> I'm still testing my 2 node (dedicated) iSCSI gateway with ceph 12.2.12 >>> b

Re: [ceph-users] tcmu-runner: "Acquired exclusive lock" every 21s

2019-08-05 Thread Mike Christie
On 08/05/2019 05:58 AM, Matthias Leopold wrote: > Hi, > > I'm still testing my 2 node (dedicated) iSCSI gateway with ceph 12.2.12 > before I dare to put it into production. I installed latest tcmu-runner > release (1.5.1) and (like before) I'm seeing that both nodes switch > exclusive locks for
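
One common cause of the lock bouncing between gateways is an initiator multipath policy that spreads I/O across both paths instead of failing over. For Linux initiators the Ceph iSCSI docs recommend a failover-style multipath.conf device section roughly like the following (reproduced from memory, so verify against the documentation for your release):

    devices {
        device {
            vendor                 "LIO-ORG"
            hardware_handler       "1 alua"
            path_grouping_policy   "failover"
            path_selector          "queue-length 0"
            failback               60
            path_checker           tur
            prio                   alua
            prio_args              exclusive_pref_bit
            fast_io_fail_tmo       25
            no_path_retry          queue
        }
    }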

Re: [ceph-users] reproducable rbd-nbd crashes

2019-07-24 Thread Mike Christie
On 07/23/2019 12:28 AM, Marc Schöchlin wrote: >>> For testing purposes i set the timeout to unlimited ("nbd_set_ioctl >>> /dev/nbd0 0", on already mounted device). >>> >> I re-executed the problem procedure and discovered that the >>> >> compression-procedure crashes not at the same file, but

Re: [ceph-users] reproducable rbd-nbd crashes

2019-07-22 Thread Mike Christie
On 07/19/2019 02:42 AM, Marc Schöchlin wrote: > We have ~500 heavy load rbd-nbd devices in our xen cluster (rbd-nbd 12.2.5, > kernel 4.4.0+10, centos clone) and ~20 high load krbd devices (kernel > 4.15.0-45, ubuntu 16.04) - we never experienced problems like this. For this setup, do you have

Re: [ceph-users] reproducable rbd-nbd crashes

2019-07-22 Thread Mike Christie
On 07/22/2019 06:00 AM, Marc Schöchlin wrote: >> With older kernels no timeout would be set for each command by default, >> so if you were not running that tool then you would not see the nbd >> disconnect+io_errors+xfs issue. You would just see slow IOs. >> >> With newer kernels, like 4.15,
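
For completeness, newer rbd-nbd builds expose the nbd timeout directly at map time, which avoids relying on the kernel default or an external ioctl tool; the option name should be checked with rbd-nbd --help for the installed version, and rbd/vm-disk-1 is a placeholder image:

    # map with an explicit 120s per-command timeout instead of the kernel default
    rbd-nbd map --timeout 120 rbd/vm-disk-1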

Re: [ceph-users] reproducable rbd-nbd crashes

2019-07-19 Thread Mike Christie
On 07/19/2019 02:42 AM, Marc Schöchlin wrote: > Hello Jason, > > Am 18.07.19 um 20:10 schrieb Jason Dillaman: >> On Thu, Jul 18, 2019 at 1:47 PM Marc Schöchlin wrote: >>> Hello cephers, >>> >>> rbd-nbd crashes in a reproducible way here. >> I don't see a crash report in the log below. Is it

Re: [ceph-users] ceph-iscsi: problem when discovery auth is disabled, but gateway receives auth requests

2019-04-23 Thread Mike Christie
On 04/18/2019 06:24 AM, Matthias Leopold wrote: > Hi, > > the Ceph iSCSI gateway has a problem when receiving discovery auth > requests when discovery auth is not enabled. Target discovery fails in > this case (see below). This is especially annoying with oVirt (KVM > management platform) where

Re: [ceph-users] 答复: CEPH ISCSI LIO multipath change delay

2019-03-21 Thread Mike Christie
On 03/21/2019 11:27 AM, Maged Mokhtar wrote: > > Though I do not recommend changing it, if there is a need to lower > fast_io_fail_tmo, then the osd_heartbeat_interval + osd_heartbeat_grace sum > needs to be lowered as well; their default sum is 25 sec, which I assume is > why fast_io_fail_tmo is
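
To make the relationship concrete, the two knobs on the Ceph side look like this; the values are illustrative, not the shipped defaults, and per the message above their sum should stay at or below the initiator's fast_io_fail_tmo if that is lowered:

    # /etc/ceph/ceph.conf (OSD side)
    [osd]
    osd heartbeat interval = 5
    osd heartbeat grace = 20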

Re: [ceph-users] CEPH ISCSI Gateway

2019-03-09 Thread Mike Christie
On 03/07/2019 09:22 AM, Ashley Merrick wrote: > Been reading into the gateway, and noticed it’s been mentioned a few > times that it can be installed on OSD servers. > > I am guessing therefore there would be no issues like those sometimes mentioned > when using kRBD on an OSD node, apart from the extra

Re: [ceph-users] ceph-iscsi iSCSI Login negotiation failed

2018-12-05 Thread Mike Christie
On 12/05/2018 09:43 AM, Steven Vacaroaia wrote: > Hi, > I have a strange issue. > I configured 2 identical iSCSI gateways but one of them is complaining > about negotiations although gwcli reports the correct auth and status > (logged-in) > > Any help will be truly appreciated > > Here are

Re: [ceph-users] Using FC with LIO targets

2018-10-30 Thread Mike Christie
On 10/28/2018 03:18 AM, Frédéric Nass wrote: > Hello Mike, Jason, > > Assuming we adapt the current LIO configuration scripts and put QLogic HBAs > in our SCSI targets, could we use FC instead of iSCSI as a SCSI transport > protocol with LIO? Would this still work with multipathing and ALUA?

Re: [ceph-users] tcmu iscsi (failover not supported)

2018-10-10 Thread Mike Christie
tcmu-runner 1.4.0. > You need this patch which sets the failover type back to implicit to match tcmu-runner 1.4.0 and also makes it configurable for future versions: commit 8d66492b8c7134fb37b72b5e8e77d7c8109220d9 Author: Mike Christie Date: Mon Jul 23 15:45:09 2018 -0500 Allow alu

Re: [ceph-users] tcmu iscsi (failover not supported)

2018-10-10 Thread Mike Christie
On 10/10/2018 12:40 PM, Mike Christie wrote: > On 10/09/2018 05:09 PM, Brady Deetz wrote: >> I'm trying to replace my old single point of failure iscsi gateway with >> the shiny new tcmu-runner implementation. I've been fighting a Windows >> initiator all day. I haven't tested

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-10 Thread Mike Christie
On 10/10/2018 12:52 PM, Mike Christie wrote: > On 10/10/2018 08:21 AM, Steven Vacaroaia wrote: >> Hi Jason, >> Thanks for your prompt responses >> >> I have used same iscsi-gateway.cfg file - no security changes - just >> added prometheus entry

Re: [ceph-users] ceph-iscsi upgrade issue

2018-10-10 Thread Mike Christie
On 10/10/2018 08:21 AM, Steven Vacaroaia wrote: > Hi Jason, > Thanks for your prompt responses > > I have used same iscsi-gateway.cfg file - no security changes - just > added prometheus entry > There is no iscsi-gateway.conf but the gateway.conf object is created > and has correct entries > >

Re: [ceph-users] tcmu iscsi (failover not supported)

2018-10-10 Thread Mike Christie
On 10/09/2018 05:09 PM, Brady Deetz wrote: > I'm trying to replace my old single point of failure iscsi gateway with > the shiny new tcmu-runner implementation. I've been fighting a Windows > initiator all day. I haven't tested any other initiators, as Windows is > currently all we use iscsi for.

Re: [ceph-users] Ceph ISCSI Gateways on Ubuntu

2018-09-24 Thread Mike Christie
On 09/24/2018 05:47 AM, Florian Florensa wrote: > Hello there, > > I am still in the process of preparing a deployment with iSCSI gateways > on Ubuntu, but the latest Ubuntu LTS ships with kernel 4.15, > and I don't see support for iscsi. > What kernel are people using for this? > -

Re: [ceph-users] Upgrade Ceph 13.2.0 -> 13.2.1 and Windows iSCSI clients breakup

2018-07-30 Thread Mike Christie
On 07/28/2018 03:59 PM, Wladimir Mutel wrote: > Dear all, > > I want to share some experience of upgrading my experimental 1-host > Ceph cluster from v13.2.0 to v13.2.1. > First, I fetched new packages and installed them using 'apt > dist-upgrade', which went smooth as usual. > Then I

Re: [ceph-users] chkdsk /b fails on Ceph iSCSI volume

2018-07-16 Thread Mike Christie
On 07/15/2018 08:08 AM, Wladimir Mutel wrote: > Hi, > > I cloned an NTFS filesystem with bad blocks from a USB HDD onto a Ceph RBD volume > (using ntfsclone, so the copy has sparse regions), and decided to clean > the bad blocks within the copy. I ran chkdsk /b from Windows and it fails on > free space

Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-18 Thread Mike Christie
On 06/15/2018 12:21 PM, Wladimir Mutel wrote: > Jason Dillaman wrote: > [1] http://docs.ceph.com/docs/master/rbd/iscsi-initiator-win/ >>> I don't use either MPIO or MCS on Windows 2008 R2 or Windows 10 >>> initiator (not Win2016 but I hope there is not much difference). I try >>> to

Re: [ceph-users] iSCSI to a Ceph node with 2 network adapters - how to ?

2018-06-01 Thread Mike Christie
On 06/01/2018 02:01 AM, Wladimir Mutel wrote: > Dear all, > > I am experimenting with Ceph setup. I set up a single node > (Asus P10S-M WS, Xeon E3-1235 v5, 64 GB RAM, 8x3TB SATA HDDs, > Ubuntu 18.04 Bionic, Ceph packages from >

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-15 Thread Mike Christie
On 03/15/2018 02:32 PM, Maxim Patlasov wrote: > On Thu, Mar 15, 2018 at 12:48 AM, Mike Christie <mchri...@redhat.com> wrote: > > ... > > It looks like there is a bug. > > 1. A regression was added when I stopped kil

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-15 Thread Mike Christie
On 03/14/2018 04:28 PM, Maxim Patlasov wrote: > On Wed, Mar 14, 2018 at 12:05 PM, Michael Christie > wrote: > > On 03/14/2018 01:27 PM, Michael Christie wrote: > > On 03/14/2018 01:24 PM, Maxim Patlasov wrote: > >> On Wed,

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-11 Thread Mike Christie
--- > shadowlin > > > > *From:* Jason Dillaman <jdill...@redhat.com> > *Sent:* 2018-03-11 07:46 > *Subject:* Re: Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD > Exclusive Lock > *To:* "shadow_lin"<shadow_...@163.com> >

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-08 Thread Mike Christie
On 03/08/2018 12:44 PM, Mike Christie wrote: > stuck/queued then your osd_request_timeout value might be too short. For Sorry, I meant too long.
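
A small sketch of where that timeout is set for a krbd-backed target; this assumes a kernel new enough to accept osd_request_timeout as a map option (seconds, 0 disables it), and rbd/lun1 is a placeholder image name:

    rbd map rbd/lun1 -o osd_request_timeout=30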

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-08 Thread Mike Christie
h request on the wire but then the iscsi connection goes down, will the ceph request always get sent to the OSD before the initiator side failover timeouts have fired and it starts using a different target node? > Best regards, > > On Mar 8, 2018 11:54 PM, "Mike Christie"

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-08 Thread Mike Christie
target_core_rbd? > Thanks. > > 2018-03-07 > > shadowlin > > ---- > > *From:* Mike Christie <mchri...@redhat.com>

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-06 Thread Mike Christie
On 03/06/2018 01:17 PM, Lazuardi Nasution wrote: > Hi, > > I want to do load balanced multipathing (multiple iSCSI gateway/exporter > nodes) of iSCSI backed with RBD images. Should I disable exclusive lock > feature? What if I don't disable that feature? I'm using TGT (manual > way) since I get
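
For anyone following along, checking and disabling the feature looks roughly like this (rbd/lun0 is a placeholder image; dependent features such as object-map, fast-diff and journaling usually have to be disabled first, and whether disabling exclusive-lock is actually safe for active/active multipath is exactly what the reply goes on to discuss):

    rbd info rbd/lun0 | grep features
    rbd feature disable rbd/lun0 fast-diff object-map   # if enabled
    rbd feature disable rbd/lun0 exclusive-lock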

Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-02 Thread Mike Christie
On 03/02/2018 01:24 AM, Joshua Chen wrote: > Dear all, > I wonder how we could support VM systems with ceph storage (block > device)? My colleagues are waiting for my answer for vmware (vSphere 5). We were having difficulties supporting older versions, because they will drop down to using SCSI-2

Re: [ceph-users] ceph iscsi kernel 4.15 - "failed with 500"

2018-02-14 Thread Mike Christie
On 02/13/2018 01:09 PM, Steven Vacaroaia wrote: > Hi, > > I noticed a new ceph kernel (4.15.0-ceph-g1c778f43da52) was made available > so I have upgraded my test environment > ... > > It will be appreciated if someone can provide instructions / steps for > upgrading the kernel without

Re: [ceph-users] iSCSI over RBD

2018-01-19 Thread Mike Christie
On Thu, Jan 4, 2018 at 10:55 AM, Joshua Chen > <csc...@asiaa.sinica.edu.tw> wrote: > > I had the same problem before, mine is CentOS, and when > I

Re: [ceph-users] iSCSI over RBD

2018-01-05 Thread Mike Christie
create iqn.1994-05.com.redhat:15dbed23be9e-ovirt1 > > create iqn.1994-05.com.redhat:a7c1ec3c43f7-ovirt2 > > create iqn.1994-05.com.redhat:67669afedddf-ovirt3 > > create > iqn.1994-05.com.redha

Re: [ceph-users] iSCSI over RBD

2018-01-03 Thread Mike Christie
On 12/25/2017 03:13 PM, Joshua Chen wrote: > Hello folks, > I am trying to share my ceph rbd images through the iscsi protocol. > > I am trying the iscsi-gateway > http://docs.ceph.com/docs/master/rbd/iscsi-overview/ > > > Now > > systemctl start rbd-target-api > is working and I could run gwcli >
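
Once rbd-target-api is up, the basic gwcli flow from the iscsi-target-cli docs linked above looks roughly like this; the target IQN, gateway names, IPs, pool and image are all placeholders, and the top-level path is /iscsi-targets on recent ceph-iscsi while older releases use /iscsi-target:

    # gwcli
    /> cd /iscsi-targets
    /iscsi-targets> create iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
    /> cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/gateways
    /iscsi-target...-igw/gateways> create ceph-gw-1 10.172.19.21
    /iscsi-target...-igw/gateways> create ceph-gw-2 10.172.19.22
    /> cd /disks
    /disks> create pool=rbd image=disk_1 size=90G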

Re: [ceph-users] CEPH luminous - Centos kernel 4.14 qfull_time not supported

2017-12-22 Thread Mike Christie
On 12/20/2017 03:21 PM, Steven Vacaroaia wrote: > Hi, > > I apologize for creating a new thread (I already mentioned my issue in > another one) but I am hoping someone will be able to > provide clarification / instructions > > Looks like the patch for including qfull_time is missing from

Re: [ceph-users] rbd iscsi gateway question

2017-04-10 Thread Mike Christie
On 04/10/2017 01:21 PM, Timofey Titovets wrote: > JFYI: Today we got a totally stable Ceph + ESXi setup "without hacks" and > it passes stress tests. > > 1. Don't try to pass RBD directly to LIO, this setup is unstable > 2. Instead, use Qemu + KVM (I use proxmox to create a VM) > 3. Attach

Re: [ceph-users] rbd iscsi gateway question

2017-04-10 Thread Mike Christie
On 04/06/2017 08:46 AM, David Disseldorp wrote: > On Thu, 6 Apr 2017 14:27:01 +0100, Nick Fisk wrote: > ... >>> I'm not too sure what you're referring to WRT the spiral of death, but we did >>> patch some LIO issues encountered when a command was aborted while >>> outstanding at the LIO backstore

Re: [ceph-users] rbd iscsi gateway question

2017-04-10 Thread Mike Christie
On 04/06/2017 03:22 AM, yipik...@gmail.com wrote: > On 06/04/2017 09:42, Nick Fisk wrote: >> >> I assume Brady is referring to the death spiral LIO gets into with >> some initiators, including vmware, if an IO takes longer than about >> 10s. I haven’t heard of anything, and can’t see any changes,

Re: [ceph-users] new Open Source Ceph based iSCSI SAN project

2016-10-17 Thread Mike Christie
On 10/17/2016 02:40 PM, Mike Christie wrote: > For the (non target_mode approach), everything that is needed for basic Oops. Meant to write for the non target_mod_rbd approach.

Re: [ceph-users] new Open Source Ceph based iSCSI SAN project

2016-10-17 Thread Mike Christie
aware of David Disseldorp & Mike Christie's efforts to upstream > the patches from a while back. I understand there will be a move > away from the SUSE target_mod_rbd to support more generic device > handling but do not know what the current status of this work is. We > have m

Re: [ceph-users] ceph + vmware

2016-07-21 Thread Mike Christie
On 07/21/2016 11:41 AM, Mike Christie wrote: > On 07/20/2016 02:20 PM, Jake Young wrote: >> >> For starters, STGT doesn't implement VAAI properly and you will need to >> disable VAAI in ESXi. >> >> LIO does seem to implement VAAI properly, but performance is n
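
The ESXi side of "disable VAAI" is a set of host advanced settings; these option names are quoted from memory of the standard VMware guidance, so confirm them on the ESXi version in use:

    esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0
    esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0
    esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0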

Re: [ceph-users] ceph + vmware

2016-07-21 Thread Mike Christie
/* * Code from QEMU Block driver for RADOS (Ceph) ported to a TCMU handler * by Mike Christie. * * Copyright (C) 2010-2011 Christian Brunner <c...@muc.de>, * Josh Durgin <josh.dur...@dreamhost.com> * * This work is licensed under the terms of the GN

Re: [ceph-users] ceph + vmware

2016-07-20 Thread Mike Christie
On 07/20/2016 11:52 AM, Jan Schermer wrote: > >> On 20 Jul 2016, at 18:38, Mike Christie <mchri...@redhat.com> wrote: >> >> On 07/20/2016 03:50 AM, Frédéric Nass wrote: >>> >>> Hi Mike, >>> >>> Thanks for the update on the RHC

Re: [ceph-users] ceph + vmware

2016-07-20 Thread Mike Christie
On 07/20/2016 03:50 AM, Frédéric Nass wrote: > > Hi Mike, > > Thanks for the update on the RHCS iSCSI target. > > Will the RHCS 2.1 iSCSI target be compliant with the VMware ESXi client? (or is > it too early to say / announce) No HA support for sure. We are looking into non-HA support though. > >

Re: [ceph-users] ceph + vmware

2016-07-11 Thread Mike Christie
On 07/08/2016 02:22 PM, Oliver Dzombic wrote: > Hi, > > does anyone have experience with how to connect vmware with ceph in a smart way? > > iSCSI multipath did not really work well. Are you trying to export rbd images from multiple iscsi targets at the same time or just one target? For the HA/multiple

Re: [ceph-users] CentOS 7 iscsi gateway using lrbd

2016-04-29 Thread Mike Christie
On 04/29/2016 11:44 AM, Ming Lin wrote: > On Tue, Jan 19, 2016 at 1:34 PM, Mike Christie <mchri...@redhat.com> wrote: >> Everyone is right - sort of :) >> >> It is that target_core_rbd module that I made that was rejected >> upstream, along with modifications

Re: [ceph-users] CentOS 7 iscsi gateway using lrbd

2016-02-15 Thread Mike Christie
Dominik Zalewski <dzalew...@optlink.co.uk> wrote: > > Thanks Mike. > > Would you not recommend using iscsi and ceph under Red Hat-based > distros until new code is in place? > > Dominik > > On 21 January

Re: [ceph-users] CentOS 7 iscsi gateway using lrbd

2016-01-20 Thread Mike Christie
properly and so we will not fall into that problem. > 3. Can you still use something like bcache over the krbd? Not initially. I had been doing active/active across nodes by default, so you cannot use bcache and krbd as is like that. > > > >> -Original Message- >> From:

Re: [ceph-users] CentOS 7 iscsi gateway using lrbd

2016-01-19 Thread Mike Christie
the right thing. On 01/19/2016 05:45 AM, Василий Ангапов wrote: > So is it a different approach than the one used here by Mike Christie: > http://www.spinics.net/lists/target-devel/msg10330.html ? > It seems confusing because it also implements the target_core_rbd > module. Or not? >

Re: [ceph-users] tgt and krbd

2015-03-17 Thread Mike Christie
On 03/15/2015 08:42 PM, Mike Christie wrote: On 03/15/2015 07:54 PM, Mike Christie wrote: On 03/09/2015 11:15 AM, Nick Fisk wrote: Hi Mike, I was using bs_aio with the krbd and still saw a small caching effect. I'm not sure if it was on the ESXi or tgt/krbd page cache side, but I

Re: [ceph-users] tgt and krbd

2015-03-15 Thread Mike Christie
On 03/09/2015 11:15 AM, Nick Fisk wrote: Hi Mike, I was using bs_aio with the krbd and still saw a small caching effect. I'm not sure if it was on the ESXi or tgt/krbd page cache side, but I was definitely seeing the IO's being coalesced into larger ones on the krbd I am not sure what you

Re: [ceph-users] tgt and krbd

2015-03-15 Thread Mike Christie
On 03/15/2015 07:54 PM, Mike Christie wrote: On 03/09/2015 11:15 AM, Nick Fisk wrote: Hi Mike, I was using bs_aio with the krbd and still saw a small caching effect. I'm not sure if it was on the ESXi or tgt/krbd page cache side, but I was definitely seeing the IO's being coalesced

Re: [ceph-users] tgt and krbd

2015-03-06 Thread Mike Christie
On 03/06/2015 06:51 AM, Jake Young wrote: On Thursday, March 5, 2015, Nick Fisk <n...@fisk.me.uk> wrote: Hi All, Just a heads up after a day’s experimentation. I believe tgt with its default settings has a small
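
One way to take the target-side write cache out of the picture when chasing that caching effect is to turn it off in tgt's targets.conf; this is only a sketch (the IQN and device are placeholders, and write-cache support and exact syntax depend on the tgt version):

    <target iqn.2015-03.com.example:krbd-lun0>
        bs-type aio
        <backing-store /dev/rbd0>
            write-cache off
        </backing-store>
    </target>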

Re: [ceph-users] rbd: I/O Errors in low memory situations

2015-02-19 Thread Mike Christie
On 02/18/2015 06:05 PM, Sebastian Köhler [Alfahosting GmbH] wrote: Hi, yesterday we had the problem that one of our cluster clients remounted an rbd device in read-only mode. We found this[1] stack trace in the logs. We investigated further and found similar traces on all other machines

Re: [ceph-users] ISCSI LIO hang after 2-3 days of working

2015-02-05 Thread Mike Christie
Not sure if there are multiple problems. On 02/05/2015 04:46 AM, reistlin87 wrote: Feb 3 13:17:01 is-k13bi32e2s6vdi-002 CRON[10237]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Feb 3 13:25:01 is-k13bi32e2s6vdi-002 CRON[10242]: (root) CMD (command -v debian-sa1 > /dev/null

Re: [ceph-users] Ceph, LIO, VMWARE anyone?

2015-01-28 Thread Mike Christie
. Hope that helps Nick -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mike Christie Sent: 28 January 2015 03:06 To: Zoltan Arnold Nagy; Jake Young Cc: Nick Fisk; ceph-users Subject: Re: [ceph-users] Ceph, LIO, VMWARE anyone? Oh yeah, I

Re: [ceph-users] Ceph, LIO, VMWARE anyone?

2015-01-27 Thread Mike Christie
describing your results? If so, were you running oracle or something like that? Just wondering. On 01/27/2015 08:58 PM, Mike Christie wrote: I do not know about perf, but here is some info on what is safe and general info. - If you are not using VAAI then it will use older style RESERVE/RELEASE

Re: [ceph-users] Ceph, LIO, VMWARE anyone?

2015-01-27 Thread Mike Christie
I do not know about perf, but here is some info on what is safe and general info. - If you are not using VAAI then it will use older style RESERVE/RELEASE commands only. If you are using VAAI ATS, and doing active/active then you need something, like the lock/sync talked about in the
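
To see which of those paths a given ESXi host will take, the VAAI/ATS status of a device can be checked from the host; naa.xxxx is a placeholder device identifier:

    esxcli storage core device vaai status get -d naa.xxxx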

Re: [ceph-users] tgt / rbd performance

2014-12-15 Thread Mike Christie
On 12/13/2014 09:39 AM, Jake Young wrote: On Friday, December 12, 2014, Mike Christie <mchri...@redhat.com> wrote: On 12/11/2014 11:39 AM, ano nym wrote: there is a ceph pool on a hp dl360g5 with 25 sas 10k (sda-sdy) on a msa70 which gives me

Re: [ceph-users] tgt / rbd performance

2014-12-12 Thread Mike Christie
On 12/11/2014 11:39 AM, ano nym wrote: there is a ceph pool on a hp dl360g5 with 25 sas 10k (sda-sdy) on a msa70 which gives me about 600 MB/s continuous write speed with rados write bench. tgt on the server with rbd backend uses this pool. mounting local(host) with iscsiadm, sdz is the

Re: [ceph-users] Poor RBD performance as LIO iSCSI target

2014-12-08 Thread Mike Christie
it resolved. Nick -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of David Moreau Simard Sent: 18 November 2014 19:58 To: Mike Christie Cc: ceph-users@lists.ceph.com; Christopher Spearman Subject: Re: [ceph-users] Poor RBD performance as LIO iSCSI

Re: [ceph-users] Poor RBD performance as LIO iSCSI target

2014-12-08 Thread Mike Christie
Oh yeah, for the iscsi fio full write test, did you experiment with bs and numjobs? For just 10 GB iscsi, I think numjobs > 1 (around 4 is when I stop seeing benefits) and bs < 1MB (around 64K to 256K) works better. On 12/08/2014 05:22 PM, Mike Christie wrote: Some distros have LIO setup
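
A sketch of the kind of sweep being suggested, using fio against the iSCSI block device (the device path is a placeholder, and writing to a raw device is destructive to any data on it):

    for bs in 64k 256k 1m; do
        for jobs in 1 4; do
            fio --name=seqwrite-$bs-$jobs --filename=/dev/sdz --rw=write \
                --bs=$bs --numjobs=$jobs --iodepth=16 --ioengine=libaio \
                --direct=1 --size=10g --group_reporting
        done
    done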

Re: [ceph-users] Poor RBD performance as LIO iSCSI target

2014-11-13 Thread Mike Christie
On 11/13/2014 10:17 AM, David Moreau Simard wrote: Running into weird issues here as well in a test environment. I don't have a solution either but perhaps we can find some things in common.. Setup in a nutshell: - Ceph cluster: Ubuntu 14.04, Kernel 3.16.7, Ceph 0.87-1 (OSDs with separate

Re: [ceph-users] Poor RBD performance as LIO iSCSI target

2014-10-28 Thread Mike Christie
On 10/27/2014 04:24 PM, Christopher Spearman wrote: - What was tested with bad performance (Reads ~25-50MB/s - Writes ~25-50MB/s) * RBD set up as target using LIO * RBD -> LVM -> LIO target * RBD -> RAID0/1 -> LIO target - What was tested with good performance (Reads ~700-800MB/s - Writes