[ceph-users] Firefly 0.80.9 OSD issues with connect claims to be...wrong node

2015-06-04 Thread Alex Gorbachev
Hello, seeing issues with OSDs stalling and error messages such as: 2015-06-04 06:48:17.119618 7fc932d59700 0 -- 10.80.4.15:6820/3501 >> 10.80.4.30:6811/3003603 pipe(0xb6b4000 sd=19 :33085 s=1 pgs=311 cs=4 l=0 c=0x915c6e0).connect claims to be 10.80.4.30:6811/4106 not 10.80.4.30:6811/3003603 -

Re: [ceph-users] New Ceph cluster - cannot add additional monitor

2015-06-14 Thread Alex Gorbachev
I wonder if your issue is related to: http://tracker.ceph.com/issues/5195 I had to add the new monitor to the local ceph.conf file and push that with ceph-deploy --overwrite-conf config push host to all cluster hosts and I had to issue ceph mon add host ip on one of the existing cluster monitors
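
A minimal sketch of that sequence, with hypothetical host names (mon3) and IP address; adjust to your cluster:

  # Push the updated ceph.conf (listing the new monitor) to every cluster host
  ceph-deploy --overwrite-conf config push node1 node2 node3
  # Register the new monitor from a host that already runs a monitor
  ceph mon add mon3 10.80.4.50:6789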

Re: [ceph-users] OSD crashes

2015-07-03 Thread Alex Gorbachev
it correctly... Jan On 03 Jul 2015, at 10:16, Alex Gorbachev a...@iss-integration.com wrote: Hello, we are experiencing severe OSD timeouts, OSDs are not taken out and we see the following in syslog on Ubuntu 14.04.2 with Firefly 0.80.9. Thank you for any advice. Alex Jul 3 03:42:06

Re: [ceph-users] Redundant networks in Ceph

2015-06-27 Thread Alex Gorbachev
Nick -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Alex Gorbachev Sent: 27 June 2015 19:02 To: ceph-users@lists.ceph.com Subject: [ceph-users] Redundant networks in Ceph The current network design in Ceph (http://ceph.com/docs

Re: [ceph-users] Redundant networks in Ceph

2015-06-28 Thread Alex Gorbachev
. Nick -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Alex Gorbachev Sent: 27 June 2015 19:02 To: ceph-users@lists.ceph.com Subject: [ceph-users] Redundant networks in Ceph The current network design in Ceph (http

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-08-17 Thread Alex Gorbachev
What about https://github.com/Frontier314/EnhanceIO? Last commit 2 months ago, but no external contributors :( The nice thing about EnhanceIO is there is no need to change device name, unlike bcache, flashcache etc. Best regards, Alex On Thu, Jul 23, 2015 at 11:02 AM, Daniel Gryniewicz

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-08-18 Thread Alex Gorbachev
IE, should we be focusing on IOPS? Latency? Finding a way to avoid journal overhead for large writes? Are there specific use cases where we should specifically be focusing attention? general iscsi? S3? databases directly on RBD? etc. There's tons of different areas that we can work on

[ceph-users] Slow responding OSDs are not OUTed and cause RBD client IO hangs

2015-08-22 Thread Alex Gorbachev
Hello, this is an issue we have been suffering from and researching along with a good number of other Ceph users, as evidenced by the recent posts. In our specific case, these issues manifest themselves in a RBD - iSCSI LIO - ESXi configuration, but the problem is more general. When there is an

Re: [ceph-users] Slow responding OSDs are not OUTed and cause RBD client IO hangs

2015-08-24 Thread Alex Gorbachev
on this currently, so hopefully in the future there will be a direct RBD interface into LIO and it will all work much better. Either tgt or SCST seem to be pretty stable in testing. Nick -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Alex

Re: [ceph-users] Slow responding OSDs are not OUTed and cause RBD client IO hangs

2015-08-24 Thread Alex Gorbachev
clusterwide IO to a crawl. I am trying to envision this situation in production and how would one find out what is slowing everything down without guessing. Regards, Alex Jan On 24 Aug 2015, at 18:26, Alex Gorbachev a...@iss-integration.com wrote: This can be tuned in the iSCSI initiation

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-08-18 Thread Alex Gorbachev
with it... So I haven't tested it heavily. Bcache should be the obvious choice if you are in control of the environment. At least you can cry on LKML's shoulder when you lose data :-) Jan On 18 Aug 2015, at 01:49, Alex Gorbachev a...@iss-integration.com wrote: What about https://github.com

Re: [ceph-users] How to improve single thread sequential reads?

2015-08-16 Thread Alex Gorbachev
Hi Nick, On Thu, Aug 13, 2015 at 4:37 PM, Nick Fisk n...@fisk.me.uk wrote: -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Nick Fisk Sent: 13 August 2015 18:04 To: ceph-users@lists.ceph.com Subject: [ceph-users] How to improve single

Re: [ceph-users] Bad performances in recovery

2015-08-20 Thread Alex Gorbachev
Just to update the mailing list, we ended up going back to default ceph.conf without any additional settings than what is mandatory. We are now reaching speeds we never reached before, both in recovery and in regular usage. There was definitely something we set in the ceph.conf bogging

Re: [ceph-users] OSD crashes

2015-07-22 Thread Alex Gorbachev
...@schermer.cz wrote: What’s the value of /proc/sys/vm/min_free_kbytes on your system? Increase it to 256M (better do it if there’s lots of free memory) and see if it helps. It can also be set too high, hard to find any formula how to set it correctly... Jan On 03 Jul 2015, at 10:16, Alex
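
As a rough sketch of the check and the suggested change (256 MB = 262144 KB; persist it in /etc/sysctl.conf only if it proves helpful):

  cat /proc/sys/vm/min_free_kbytes
  sysctl -w vm.min_free_kbytes=262144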

Re: [ceph-users] Deadly slow Ceph cluster revisited

2015-07-17 Thread Alex Gorbachev
May I suggest checking also the error counters on your network switch? Check speed and duplex. Is bonding in use? Is flow control on? Can you swap the network cable? Can you swap a NIC with another node and does the problem follow? Hth, Alex On Friday, July 17, 2015, Steve Thompson

Re: [ceph-users] Ceph RBD LIO ESXi Advice?

2015-11-09 Thread Alex Gorbachev
to be helping a lot, it could be just the superior switch response on a higher end switch. Using blk_mq scheduler, it's been reported to improve performance on random IO. Good luck! -- Alex Gorbachev Storcium On Sun, Nov 8, 2015 at 5:07 PM, Timofey Titovets <nefelim...@gmail.com> wrote:

Re: [ceph-users] Block Storage Image Creation Process

2015-07-11 Thread Alex Gorbachev
Hi Jiwan, On Sat, Jul 11, 2015 at 4:44 PM, Jiwan N jiwan.ningle...@gmail.com wrote: Hi Ceph-Users, I am quite new to Ceph Storage (storage tech in general). I have been investigating Ceph to understand the precise process clearly. *Q: What actually happens When I create a block image of

Re: [ceph-users] Real world benefit from SSD Journals for a more read than write cluster

2015-07-11 Thread Alex Gorbachev
FWIW. Based on the excellent research by Mark Nelson ( http://ceph.com/community/ceph-performance-part-2-write-throughput-without-ssd-journals/) we have dropped SSD journals altogether, and instead went for the battery protected controller writeback cache. Benefits: - No negative force

Re: [ceph-users] 1 hour until Ceph Tech Talk

2015-08-29 Thread Alex Gorbachev
Hi Patrick, On Thu, Aug 27, 2015 at 12:00 PM, Patrick McGarry pmcga...@redhat.com wrote: Just a reminder that our Performance Ceph Tech Talk with Mark Nelson will be starting in 1 hour. If you are unable to attend there will be a recording posted on the Ceph YouTube channel and linked from

Re: [ceph-users] ESXi/LIO/RBD repeatable problem, hang when cloning VM

2015-09-04 Thread Alex Gorbachev
On Thu, Sep 3, 2015 at 3:20 AM, Nicholas A. Bellinger <n...@linux-iscsi.org> wrote: > (RESENDING) > > On Wed, 2015-09-02 at 21:14 -0400, Alex Gorbachev wrote: >> e have experienced a repeatable issue when performing the following: >> >> Ceph backend with

Re: [ceph-users] ESXi/LIO/RBD repeatable problem, hang when cloning VM

2015-09-03 Thread Alex Gorbachev
> >> On 03 Sep 2015, at 03:14, Alex Gorbachev <a...@iss-integration.com> wrote: >> >> e have experienced a repeatable issue when performing the following: >> >> Ceph backend with no issues, we can repeat any time at will in lab and >> production. Clonin

[ceph-users] OSD crash

2015-09-08 Thread Alex Gorbachev
Hello, We have run into an OSD crash this weekend with the following dump. Please advise what this could be. Best regards, Alex 2015-09-07 14:55:01.345638 7fae6c158700 0 -- 10.80.4.25:6830/2003934 >> 10.80.4.15:6813/5003974 pipe(0x1dd73000 sd=257 :6830 s=2 pgs=14271 cs=251 l=0

[ceph-users] ESXi/LIO/RBD repeatable problem, hang when cloning VM

2015-09-02 Thread Alex Gorbachev
We have experienced a repeatable issue when performing the following: Ceph backend with no issues, we can repeat any time at will in lab and production. Cloning an ESXi VM to another VM on the same datastore on which the original VM resides. Practically instantly, the LIO machine becomes

Re: [ceph-users] ESXi/LIO/RBD repeatable problem, hang when cloning VM

2015-09-03 Thread Alex Gorbachev
On Thu, Sep 3, 2015 at 3:20 AM, Nicholas A. Bellinger <n...@linux-iscsi.org> wrote: > (RESENDING) > > On Wed, 2015-09-02 at 21:14 -0400, Alex Gorbachev wrote: >> e have experienced a repeatable issue when performing the following: >> >> Ceph backend with

Re: [ceph-users] Potential OSD deadlock?

2015-10-04 Thread Alex Gorbachev
We had multiple issues with 4TB drives and delays. Here is the configuration that works for us fairly well on Ubuntu (but we are about to significantly increase the IO load so this may change). NTP: always use NTP and make sure it is working - Ceph is very sensitive to time being precise
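
One way to confirm both points, assuming ntpd is the time daemon in use:

  ntpq -p                              # peers should show low offset/jitter
  ceph health detail | grep -i skew    # Ceph flags monitor clock skew here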

Re: [ceph-users] OSD crash

2015-09-22 Thread Alex Gorbachev
Hi Brad, This occurred on a system under moderate load - has not happened since and I do not know how to reproduce. Thank you, Alex On Tue, Sep 22, 2015 at 7:29 PM, Brad Hubbard <bhubb...@redhat.com> wrote: > - Original Message - > > > From: "Alex Gorbachev"

Re: [ceph-users] Diffrent OSD capacity & what is the weight of item

2015-09-24 Thread Alex Gorbachev
Please review http://docs.ceph.com/docs/master/rados/operations/crush-map/ regarding weights Best regards, Alex On Wed, Sep 23, 2015 at 3:08 AM, wikison wrote: > Hi, > I have four storage machines to build a ceph storage cluster as > storage nodes. Each of them is
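
A short illustration of adjusting a CRUSH weight, where the OSD id and weight (roughly capacity in TB) are only examples:

  ceph osd crush reweight osd.7 3.64
  ceph osd tree    # verify the new weight took effect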

Re: [ceph-users] rbd merge-diff error

2015-12-08 Thread Alex Gorbachev
Hi Josh, On Mon, Dec 7, 2015 at 6:50 PM, Josh Durgin <jdur...@redhat.com> wrote: > On 12/07/2015 03:29 PM, Alex Gorbachev wrote: > >> When trying to merge two results of rbd export-diff, the following error >> occurs: >> >> iss@lab2-b1:~$ rbd export-diff --fro

[ceph-users] rbd merge-diff error

2015-12-07 Thread Alex Gorbachev
found this link http://tracker.ceph.com/issues/12911 but not sure if the patch should have already been in hammer or how to get it? System: ceph version 0.94.5 (9764da52395923e0b32908d83a9f7304401fee43) Ubuntu 14.04.3 kernel 4.2.1-040201-generic Thank you -- Alex Gorbachev Storcium

Re: [ceph-users] rbd merge-diff error

2015-12-09 Thread Alex Gorbachev
Great, thanks Josh! Using stdin/stdout merge-diff is working. Thank you for looking into this. -- Alex Gorbachev Storcium On Wed, Dec 9, 2015 at 2:25 PM, Josh Durgin <jdur...@redhat.com> wrote: > This is the problem: > > http://tracker.ceph.com/issues/14030 > > As a wor
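
A sketch of the stdin/stdout form referred to here, with hypothetical diff file names (rbd merge-diff accepts "-" for the first diff and for the result):

  cat snap1_to_snap2.diff | rbd merge-diff - snap2_to_snap3.diff - > snap1_to_snap3.diff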

Re: [ceph-users] rbd merge-diff error

2015-12-09 Thread Alex Gorbachev
rt04.bck Merging image diff: 13% complete...failed. rbd: merge-diff error I am not sure how to run gdb in such scenario with stdin/stdout Thanks, Alex > > > Josh > > > On 12/08/2015 11:11 PM, Josh Durgin wrote: > >> On 12/08/2015 10:44 PM, Alex Gorbachev wrote: >&g

Re: [ceph-users] rbd merge-diff error

2015-12-09 Thread Alex Gorbachev
More oddity: retrying several times, the merge-diff sometimes works and sometimes does not, using the same source files. On Wed, Dec 9, 2015 at 10:15 PM, Alex Gorbachev <a...@iss-integration.com> wrote: > Hi Josh, looks like I celebrated too soon: > > On Wed, Dec 9, 2015 at 2:25

[ceph-users] Snapshot creation time

2015-12-11 Thread Alex Gorbachev
Is there any way to obtain a snapshot creation time? rbd snap ls does not list it. Thanks! -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

[ceph-users] Kernel 4.1.x RBD very slow on writes

2015-12-18 Thread Alex Gorbachev
, 479 MB/s -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

[ceph-users] Monitors - proactive questions about quantity, placement and protection

2015-12-11 Thread Alex Gorbachev
h as http://blog.widodh.nl/2014/03/safely-backing-up-your-ceph-monitors ? Our cluster has 8 racks right now, and I would love to place a MON at the top of the rack (maybe on SDN switches in the future - why not?). Thank you for helping answer these questions. -- Alex Gorbachev

Re: [ceph-users] [Scst-devel] Problem compiling SCST 3.1 with kernel 4.2.8

2015-12-20 Thread Alex Gorbachev
Sorry, one last comment on issue #1 (slow with SCST iSCSI but fast qla2xxx FC with Ceph RBD): > tly work fine in combination with SCST so I'd recommend to continue >>> testing with a recent kernel. I'm running myself kernel 4.3.0 since some >>> time on my laptop and development workstation. >>

Re: [ceph-users] network failover with public/custer network - is that possible

2015-11-28 Thread Alex Gorbachev
e should be helpful as well to add robustness to the Ceph networking backend. Best regards, Alex > > Thanks for feedback and regards . Götz > > > -- -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

[ceph-users] RBD export format for start and end snapshots

2016-01-12 Thread Alex Gorbachev
). Is this the best way to determine snapshots and are letters "s" and "t" going to stay the same? Best regards, -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Must host bucket name be the same with hostname ?

2016-06-11 Thread Alex Gorbachev
ket name be the same with hostname. > > > > > > Or host bucket name does no matter? > > > > > > > > Best regards, > > > > Xiucai > > -- > Christian Balzer Network/Systems Engineer > ch...@gol.com

Re: [ceph-users] Disaster recovery and backups

2016-06-11 Thread Alex Gorbachev
estore. Best regards, Alex > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- -- Alex Gorbachev Storcium ___ ceph-u

Re: [ceph-users] RBD export format for start and end snapshots

2016-01-13 Thread Alex Gorbachev
On Tue, Jan 12, 2016 at 12:09 PM, Josh Durgin <jdur...@redhat.com> wrote: > On 01/12/2016 06:10 AM, Alex Gorbachev wrote: > >> Good day! I am working on a robust backup script for RBD and ran into a >> need to reliably determine start and end snapshots for differential &

Re: [ceph-users] pg is stuck stale (osd.21 still removed)

2016-01-13 Thread Alex Gorbachev
but maybe the following links will help you make some progress: http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/ https://www.mail-archive.com/ceph-users@lists.ceph.com/msg17820.html https://ceph.com/community/incomplete-pgs-oh-my/ Good

Re: [ceph-users] ceph osd network configuration

2016-01-25 Thread Alex Gorbachev
is quite successfully. Ubuntu with 4.1+ kernel seems to work really well for all types of bonding and multiple bonds. HTH, Alex -- -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] SSD Journal

2016-01-31 Thread Alex Gorbachev
Jan, I believe the block device (vs. filesystem) OSD layout is addressed in the Newstore/Bluestore: http://tracker.ceph.com/projects/ceph/wiki/NewStore_(new_osd_backend) -- Alex Gorbachev Storcium On Thu, Jan 28, 2016 at 4:32 PM, Jan Schermer <j...@schermer.cz> wrote: > You can't run

Re: [ceph-users] ceph osd network configuration

2016-01-26 Thread Alex Gorbachev
speed links >10Gb and also with multiple bonds > > > On 26 Jan 2016, at 06:32, Alex Gorbachev <a...@iss-integration.com> wrote: > > > > On Saturday, January 23, 2016, 名花 <louisfang2...@gmail

Re: [ceph-users] Real world benefit from SSD Journals for a more read than write cluster

2016-03-10 Thread Alex Gorbachev
Reviving an old thread: On Sunday, July 12, 2015, Lionel Bouton <lionel+c...@bouton.name> wrote: > On 07/12/15 05:55, Alex Gorbachev wrote: > > FWIW. Based on the excellent research by Mark Nelson > > ( > http://ceph.com/community/ceph-performance-part-2-write-through

[ceph-users] Pacemaker Resource Agents for Ceph by Andreas Kurz

2016-05-15 Thread Alex Gorbachev
by clients' IO load. https://github.com/akurz/resource-agents/blob/SCST/heartbeat/SCSTLogicalUnit https://github.com/akurz/resource-agents/blob/SCST/heartbeat/SCSTTarget https://github.com/akurz/resource-agents/blob/SCST/heartbeat/iscsi-scstd -- Alex Gorbachev http://www.iss-integration.com

Re: [ceph-users] Starting a cluster with one OSD node

2016-05-13 Thread Alex Gorbachev
_ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Starting a cluster with one OSD node

2016-05-15 Thread Alex Gorbachev
> On Friday, May 13, 2016, Mike Jacobacci wrote: > Hello, > > I have a quick and probably dumb question… We would like to use Ceph > for our storage, I was thinking of a cluster with 3 Monitor and OSD > nodes. I was wondering if it was a bad idea to

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-07-27 Thread Alex Gorbachev
eing the below output - this means that discard is being sent to the backing (RBD) device, correct? Including the ceph-users list to see if there is a reason RBD is not processing this discard/unmap. Thank you, -- Alex Gorbachev Storcium Jul 26 08:23:38 e1 kernel: [ 858.324715] [20426]: scst: scs

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-07-27 Thread Alex Gorbachev
with UNMAP) - blkdiscard does release the space -- Alex Gorbachev Storcium On Wed, Jul 27, 2016 at 11:55 AM, Alex Gorbachev <a...@iss-integration.com> wrote: > Hi Vlad, > > On Mon, Jul 25, 2016 at 10:44 PM, Vladislav Bolkhovitin <v...@vlnb.net> wrote: >> Hi, >> &

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-07-30 Thread Alex Gorbachev
> > On Wednesday, July 27, 2016, Vladislav Bolkhovitin <v...@vlnb.net> wrote: >> >> >> Alex Gorbachev wrote on 07/27/2016 10:33 AM: >> > One other experiment: just running blkdiscard against the RBD block >> > device completely clears it, to th

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-07-30 Thread Alex Gorbachev
Hi Vlad, On Wednesday, July 27, 2016, Vladislav Bolkhovitin <v...@vlnb.net> wrote: > > Alex Gorbachev wrote on 07/27/2016 10:33 AM: > > One other experiment: just running blkdiscard against the RBD block > > device completely clears it, to the point where the rbd-diff met

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-02 Thread Alex Gorbachev
On Tue, Aug 2, 2016 at 9:56 AM, Ilya Dryomov <idryo...@gmail.com> wrote: > On Tue, Aug 2, 2016 at 3:49 PM, Alex Gorbachev <a...@iss-integration.com> > wrote: >> On Mon, Aug 1, 2016 at 11:03 PM, Vladislav Bolkhovitin <v...@vlnb.net> wrote: >>> Alex Gorbache

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-02 Thread Alex Gorbachev
On Mon, Aug 1, 2016 at 11:03 PM, Vladislav Bolkhovitin <v...@vlnb.net> wrote: > Alex Gorbachev wrote on 08/01/2016 04:05 PM: >> Hi Ilya, >> >> On Mon, Aug 1, 2016 at 3:07 PM, Ilya Dryomov <idryo...@gmail.com> wrote: >>> On Mon, Aug 1, 2016 at 7:55 PM

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-01 Thread Alex Gorbachev
root@e1:/var/log# rbd diff spin1/testdis|awk '{ SUM += $2 } END { print SUM/1024 " KB" }' 819200 KB root@e1:/var/log# blkdiscard -o 0 -l 4096 /dev/rbd28 root@e1:/var/log# rbd diff spin1/testdis|awk '{ SUM += $2 } END { print SUM/1024 " KB" }' 782336 KB -- Alex Gorbache

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-03 Thread Alex Gorbachev
On Wed, Aug 3, 2016 at 9:59 AM, Alex Gorbachev <a...@iss-integration.com> wrote: > On Tue, Aug 2, 2016 at 10:49 PM, Vladislav Bolkhovitin <v...@vlnb.net> wrote: >> Alex Gorbachev wrote on 08/02/2016 07:56 AM: >>> On Tue, Aug 2, 2016 at 9:56 AM, Ilya Dryomov <idryo

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-13 Thread Alex Gorbachev
On Mon, Aug 8, 2016 at 7:56 AM, Ilya Dryomov <idryo...@gmail.com> wrote: > On Sun, Aug 7, 2016 at 7:57 PM, Alex Gorbachev <a...@iss-integration.com> > wrote: >>> I'm confused. How can a 4M discard not free anything? It's either >>> going to hit an e

Re: [ceph-users] Is anyone seeing issues with task_numa_find_cpu?

2016-07-19 Thread Alex Gorbachev
erm kernel: https://lkml.org/lkml/2016/7/12/919 https://lkml.org/lkml/2016/7/12/297 -- Alex Gorbachev Storcium > > 2016-07-05 11:47 GMT+03:00 Nick Fisk <n...@fisk.me.uk>: >>> -Original Message- >>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] O

Re: [ceph-users] ceph + vmware

2016-07-11 Thread Alex Gorbachev
ts are open source, there are several options for those. Currently running 3 VMware clusters with 15 hosts total, and things are quite decent. Regards, Alex Gorbachev Storcium > > Thank you ! > > -- > Mit freundlichen Gruessen / Best regards > > Oliver Dzombic > IP-Inter

[ceph-users] Is anyone seeing issues with task_numa_find_cpu?

2016-06-28 Thread Alex Gorbachev
turned off CFQ and blk-mq/scsi-mq and are using just the noop scheduler. Does the ceph kernel code somehow use the fair scheduler code block? Thanks -- Alex Gorbachev Storcium Jun 28 09:46:41 roc04r-sca090 kernel: [137912.684974] CPU: 30 PID: 10403 Comm: ceph-osd Not tainted 4.4.13-040413
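
For reference, a per-device way to inspect and set the noop scheduler (the device name is an example):

  cat /sys/block/sda/queue/scheduler
  echo noop > /sys/block/sda/queue/scheduler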

Re: [ceph-users] Is anyone seeing issues with task_numa_find_cpu?

2016-06-28 Thread Alex Gorbachev
016 at 17:59, Tim Bishop <tim-li...@bishnet.net> wrote: > > Yes - I noticed this today on Ubuntu 16.04 with the default kernel. No > useful information to add other than it's not just you. > > Tim. > > On Tue, Jun 28, 2016 at 11:05:40AM -0400, Alex Gorbachev wrote: >

Re: [ceph-users] suse_enterprise_storage3_rbd_LIO_vmware_performance_bad

2016-07-04 Thread Alex Gorbachev
Hi Nick, On Fri, Jul 1, 2016 at 2:11 PM, Nick Fisk wrote: > However, there are a number of pain points with iSCSI + ESXi + RBD and they > all mainly centre on write latency. It seems VMFS was designed around the > fact that Enterprise storage arrays service writes in

Re: [ceph-users] Backing up RBD snapshots to a different cloud service

2016-07-10 Thread Alex Gorbachev
project, which offers excellent deduplication. HTH, Alex > > > Any advice is greatly appreciated. > > Thanks, > Brendan > -- -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Is anyone seeing issues with task_numa_find_cpu?

2016-07-04 Thread Alex Gorbachev
the default kernel. No >>>> useful information to add other than it's not just you. >>>> >>>> Tim. >>>> >>>> On Tue, Jun 28, 2016 at 11:05:40AM -0400, Alex Gorbachev wrote: >>>> >>>> After upgrading to kernel 4.4

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-04 Thread Alex Gorbachev
On Wed, Aug 3, 2016 at 10:54 AM, Alex Gorbachev <a...@iss-integration.com> wrote: > On Wed, Aug 3, 2016 at 9:59 AM, Alex Gorbachev <a...@iss-integration.com> > wrote: >> On Tue, Aug 2, 2016 at 10:49 PM, Vladislav Bolkhovitin <v...@vlnb.net> wrote: >>> Alex

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-07 Thread Alex Gorbachev
next thing. Thank you for your input, it is very practical and helpful long term. Alex > > -- -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-07 Thread Alex Gorbachev
> I'm confused. How can a 4M discard not free anything? It's either > going to hit an entire object or two adjacent objects, truncating the > tail of one and zeroing the head of another. Using rbd diff: > > $ rbd diff test | grep -A 1 25165824 > 25165824 4194304 data > 29360128 4194304 data >

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-01 Thread Alex Gorbachev
Hi Ilya, On Mon, Aug 1, 2016 at 3:07 PM, Ilya Dryomov <idryo...@gmail.com> wrote: > On Mon, Aug 1, 2016 at 7:55 PM, Alex Gorbachev <a...@iss-integration.com> > wrote: >> RBD illustration showing RBD ignoring discard until a certain >> threshold - why is that?

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-03 Thread Alex Gorbachev
On Tue, Aug 2, 2016 at 10:49 PM, Vladislav Bolkhovitin <v...@vlnb.net> wrote: > Alex Gorbachev wrote on 08/02/2016 07:56 AM: >> On Tue, Aug 2, 2016 at 9:56 AM, Ilya Dryomov <idryo...@gmail.com> wrote: >>> On Tue, Aug 2, 2016 at 3:49 PM, Alex Gorbachev <a..

Re: [ceph-users] Is anyone seeing issues with task_numa_find_cpu?

2016-07-02 Thread Alex Gorbachev
tanding issue that's only just been >>> resolved, another user chimed in on the lkml thread a couple of days >>> ago as well and again his trace had ceph-osd in it as well. >>> >>> https://lkml.org/lkml/headers/2016/6/21/491 >>> >>> Campbell >&

[ceph-users] Anyone using LVM or ZFS RAID1 for boot drives?

2017-02-12 Thread Alex Gorbachev
about the time when journals fail Any other solutions? Thank you for sharing. -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-21 Thread Alex Gorbachev
re interesting like CephFS/Ganesha. Thanks for your very valuable info on analysis and hw build. Alex > > > > On 21.08.2016 at 09:31, Nick Fisk <n...@fisk.me.uk> wrote: > > >> -Original Message- > >> From: Alex Gorbachev [mailt

Re: [ceph-users] Is anyone seeing issues with task_numa_find_cpu?

2016-08-20 Thread Alex Gorbachev
On Tue, Jul 19, 2016 at 12:04 PM, Alex Gorbachev <a...@iss-integration.com> wrote: > On Mon, Jul 18, 2016 at 4:41 AM, Василий Ангапов <anga...@gmail.com> wrote: >> Guys, >> >> This bug is hitting me constantly, may be once per several days. Does >> anyone kn

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-20 Thread Alex Gorbachev
Hi Nick, On Thu, Jul 21, 2016 at 8:33 AM, Nick Fisk wrote: >> -Original Message- >> From: w...@globe.de [mailto:w...@globe.de] >> Sent: 21 July 2016 13:23 >> To: n...@fisk.me.uk; 'Horace Ng' >> Cc: ceph-users@lists.ceph.com >> Subject: Re: [ceph-users]

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-22 Thread Alex Gorbachev
is the best it’s been for a long time and I’m reluctant to fiddle any > further. > > > > But as mentioned above, thick vmdk’s with vaai might be a really good fit. > Any chance thin vs. thick difference could be related to discards? I saw zillions of them in recent testing. > &g

Re: [ceph-users] udev rule to set readahead on Ceph RBD's

2016-08-23 Thread Alex Gorbachev
; underneath a RBD device. We needed high sequential Write and Read performance > on those RBD devices since we were storing large files on there. > > Different approach, kind of similar result. Question: what scheduler were you guys using to facilitate the readahead on the RBD client
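
A minimal udev rule of the kind this thread discusses, with an example value; the runtime equivalent is writing to /sys/block/rbdN/queue/read_ahead_kb:

  # /etc/udev/rules.d/80-rbd-readahead.rules
  KERNEL=="rbd[0-9]*", ENV{DEVTYPE}=="disk", ATTR{queue/read_ahead_kb}="4096"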

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-09-03 Thread Alex Gorbachev
Hi Nick, On Sun, Aug 21, 2016 at 3:19 PM, Nick Fisk <n...@fisk.me.uk> wrote: > *From:* Alex Gorbachev [mailto:a...@iss-integration.com] > *Sent:* 21 August 2016 15:27 > *To:* Wilhelm Redbrake <w...@globe.de> > *Cc:* n...@fisk.me.uk; Horace Ng <hor...@hkisl.net&

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-09-03 Thread Alex Gorbachev
On Saturday, September 3, 2016, Alex Gorbachev <a...@iss-integration.com> wrote: > Hi Nick, > > On Sun, Aug 21, 2016 at 3:19 PM, Nick Fisk <n...@fisk.me.uk> wrote: > >> *From:* Alex Gorbachev [mailto:a..

[ceph-users] Ubuntu latest ceph-deploy fails to install hammer

2016-09-09 Thread Alex Gorbachev
-recommends install -o Dpkg::Options::=--force-confnew ceph-osd ceph-mds ceph-mon radosgw -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
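
The snippet above is the tail of a manual install; a hedged reconstruction for Ubuntu 14.04 (trusty), assuming the upstream Hammer repository is the one wanted:

  wget -qO- https://download.ceph.com/keys/release.asc | apt-key add -
  echo deb https://download.ceph.com/debian-hammer/ trusty main > /etc/apt/sources.list.d/ceph.list
  apt-get update
  apt-get -y --no-install-recommends install -o Dpkg::Options::=--force-confnew ceph-osd ceph-mds ceph-mon radosgw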

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-09-11 Thread Alex Gorbachev
On Sun, Sep 4, 2016 at 4:48 PM, Nick Fisk <n...@fisk.me.uk> wrote: > > > > > *From:* Alex Gorbachev [mailto:a...@iss-integration.com] > *Sent:* 04 September 2016 04:45 > *To:* Nick Fisk <n...@fisk.me.uk> > *Cc:* Wilhelm Redbrake <w...@globe.de>; Horace

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-09-11 Thread Alex Gorbachev
-- Alex Gorbachev Storcium On Sun, Sep 11, 2016 at 12:54 PM, Nick Fisk <n...@fisk.me.uk> wrote: > > > > > *From:* Alex Gorbachev [mailto:a...@iss-integration.com] > *Sent:* 11 September 2016 16:14 > > *To:* Nick Fisk <n...@fisk.me.uk> > *Cc:* Wilhelm R

Re: [ceph-users] Ubuntu latest ceph-deploy fails to install hammer

2016-09-09 Thread Alex Gorbachev
Confirmed - older version of ceph-deploy is working fine. Odd as there is a large number of Hammer users out there. Thank you for the explanation and fix. -- Alex Gorbachev Storcium On Fri, Sep 9, 2016 at 12:15 PM, Vasu Kulkarni <vakul...@redhat.com> wrote: > There is a known issue wi

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-09-10 Thread Alex Gorbachev
-on-nfs-vs.html ) Alex > > From: Alex Gorbachev [mailto:a...@iss-integration.com] > Sent: 04 September 2016 04:45 > To: Nick Fisk <n...@fisk.me.uk> > Cc: Wilhelm Redbrake <w...@globe.de>; Horace Ng <hor...@hkisl.net>; > ceph-users <ceph-users@lists.cep

Re: [ceph-users] Ceph + VMWare

2016-10-06 Thread Alex Gorbachev
On Wed, Oct 5, 2016 at 2:32 PM, Patrick McGarry wrote: > Hey guys, > > Starting to buckle down a bit in looking at how we can better set up > Ceph for VMWare integration, but I need a little info/help from you > folks. > > If you currently are using Ceph+VMWare, or are

Re: [ceph-users] [Scst-devel] Thin Provisioning and Ceph RBD's

2016-08-18 Thread Alex Gorbachev
On Sat, Aug 13, 2016 at 4:51 PM, Alex Gorbachev <a...@iss-integration.com> wrote: > On Sat, Aug 13, 2016 at 12:36 PM, Alex Gorbachev <a...@iss-integration.com> > wrote: >> On Mon, Aug 8, 2016 at 7:56 AM, Ilya Dryomov <idryo...@gmail.com> wrote: >>> On Sun,

[ceph-users] Storcium has been certified by VMWare

2016-08-26 Thread Alex Gorbachev
continue to promote, improve, support, and deploy open source storage and compute solutions for healthcare and business applications. http://www.vmware.com/resources/compatibility/detail.php?deviceCategory=san=41781=san=1=41781=0=1_interval=10=Partner=Asc -- Alex Gorbachev Storcium

Re: [ceph-users] Monitoring Overhead

2016-10-25 Thread Alex Gorbachev
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > > > > > ___ > > ceph-users mailing list > > ceph-users@lists.ceph.com > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > > > -- > Christian Balzer Network/Systems Engineer > ch...@gol.com Global OnLine Japan/Rakuten > Communications > http://www.gol.com/ > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] High ops/s with kRBD and "--object-size 32M"

2016-11-28 Thread Alex Gorbachev
On Mon, Nov 28, 2016 at 2:59 PM Ilya Dryomov wrote: > On Mon, Nov 28, 2016 at 6:20 PM, Francois Blondel > wrote: > > Hi *, > > > > I am currently testing different scenarios to try to optimize sequential > > read and write speeds using Kernel RBD. > >
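
For context, a test image with 32 MB objects can be created as below (pool/image names and size are examples; older rbd releases use --order 25, i.e. 2^25 bytes, instead of --object-size):

  rbd create --size 102400 --object-size 32M rbd/seqtest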

Re: [ceph-users] 10Gbit switch advice for small ceph cluster upgrade

2016-10-30 Thread Alex Gorbachev
__ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] is Ceph suitable for small scale deployments?

2016-12-05 Thread Alex Gorbachev
Hi Joakim, On Mon, Dec 5, 2016 at 1:35 PM wrote: > Hello, > > I have a question regarding if Ceph is suitable for small scale > deployments. > > Lets say I have two machines, connected with gbit lan. > > I want to share data between them, like an ordinary NFS > share, but with

[ceph-users] Reusing journal partitions when using ceph-deploy/ceph-disk --dmcrypt

2016-12-04 Thread Alex Gorbachev
Referencing http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-July/003293.html When using --dmcrypt with ceph-deploy/ceph-disk, the journal device is not allowed to be an existing partition. You have to specify the entire block device, on which the tools create a partition equal to osd
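
A sketch of the whole-device form that ceph-deploy expects with --dmcrypt (host and device names are examples):

  # data on /dev/sdb, journal partition created by the tool on /dev/sdc
  ceph-deploy osd prepare --dmcrypt osdnode1:/dev/sdb:/dev/sdc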

Re: [ceph-users] Reusing journal partitions when using ceph-deploy/ceph-disk --dmcrypt

2016-12-05 Thread Alex Gorbachev
Hi Pierre, On Mon, Dec 5, 2016 at 3:41 AM, Pierre BLONDEAU <pierre.blond...@unicaen.fr> wrote: > Le 05/12/2016 à 05:14, Alex Gorbachev a écrit : >> Referencing >> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-July/003293.html >> >> When using --d

Re: [ceph-users] Enjoy the leap second mon skew tonight..

2016-12-31 Thread Alex Gorbachev
On Sat, Dec 31, 2016 at 5:38 PM Tyler Bishop wrote: > Enjoy the leap second guys.. lol your cluster gonna be skewed. > > Yep, pager went off right at dinner :) > > _ > > ___ >

Re: [ceph-users] Preconditioning an RBD image

2017-03-24 Thread Alex Gorbachev
(e.g. areca), all SSD OSDs whenever these can be affordable, or start experimenting with cache pools. Does not seem like SSDs are getting any cheaper, just new technologies like 3DXP showing up. > > On 03/21/17 23:22, Alex Gorbachev wrote: > > I wanted to share the recent experience, in

Re: [ceph-users] How to think a two different disk's technologies architecture

2017-03-24 Thread Alex Gorbachev
you very much ! > > -- > Alejandrito > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > > > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Moving data from EC pool with replicated cache tier to replicated pool

2017-03-15 Thread Alex Gorbachev
sers mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

[ceph-users] Preconditioning an RBD image

2017-03-21 Thread Alex Gorbachev
. Regards, Alex Storcium -- -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] osd_disk_thread_ioprio_priority help

2017-03-14 Thread Alex Gorbachev
On Mon, Mar 13, 2017 at 6:09 AM, Florian Haas wrote: > On Mon, Mar 13, 2017 at 11:00 AM, Dan van der Ster > wrote: >>> I'm sorry, I may have worded that in a manner that's easy to >>> misunderstand. I generally *never* suggest that people use CFQ on >>>
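
For reference, the options under discussion as they would appear in ceph.conf; they only take effect when the OSD data disks use the CFQ scheduler, and the values shown are the commonly cited ones rather than a recommendation:

  [osd]
  osd_disk_thread_ioprio_class = idle
  osd_disk_thread_ioprio_priority = 7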

[ceph-users] Socket errors, CRC, lossy con messages

2017-04-10 Thread Alex Gorbachev
oubleshooting here - dump historic ops on OSD, wireshark the links or anything else? 3. Christian, if you are looking at this, what would be your red flags in atop? Thank you. -- Alex Gorbachev Storcium ___ ceph-users mailing list ceph-users@lists.
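
The OSD-side dump mentioned here goes through the admin socket on the OSD host (the OSD id is an example):

  ceph daemon osd.12 dump_historic_ops
  ceph daemon osd.12 dump_ops_in_flight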
