Re: [ceph-users] ceph all-nvme mysql performance tuning

2017-11-29 Thread Zoltan Arnold Nagy
On 2017-11-27 14:02, German Anders wrote: 4x 2U servers: 1x 82599ES 10-Gigabit SFI/SFP+ Network Connection 1x Mellanox ConnectX-3 InfiniBand FDR 56Gb/s Adapter (dual port) so I assume you are using IPoIB as the cluster network for the replication... 1x OneConnect 10Gb NIC (quad-port) - in

[ceph-users] cache tiering deprecated in RHCS 2.0

2016-10-22 Thread Zoltan Arnold Nagy
Hi, The 2.0 release notes for Red Hat Ceph Storage deprecate cache tiering. What does this mean for Jewel, and especially going forward? Can someone shed some light on why cache tiering is not meeting the original expectations technically? Thanks, Zoltan

[ceph-users] radosgw ignores rgw_frontends? (10.2.2)

2016-07-28 Thread Zoltan Arnold Nagy
Hi, I just did a test deployment using ceph-deploy rgw create after which I've added [client.rgw.c11n1] rgw_frontends = “civetweb port=80” to the config. Using show-config I can see that it’s there: root@c11n1:~# ceph --id rgw.c11n1 --show-config | grep civet debug_civetweb = 1/10 rgw_fronte
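For reference, this is the whole setup in a nutshell (a minimal sketch; the section name and port are exactly what I added, and whether the running daemon honours it is the question):

    [client.rgw.c11n1]
    rgw_frontends = "civetweb port=80"

    # what I used to check it:
    ceph --id rgw.c11n1 --show-config | grep rgw_frontends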

Re: [ceph-users] Lessons learned upgrading Hammer -> Jewel

2016-07-16 Thread Zoltan Arnold Nagy
I’ve also upgraded our Hammer cluster to Jewel last weekend, except for RGW. Reading the initial user stories on the list, where you had to run some special script to create the default built-in zone, discouraged me from doing so. So on .2, the rgw upgrade went without any hitch just by upgrading r

Re: [ceph-users] multiple journals on SSD

2016-07-08 Thread Zoltan Arnold Nagy
Hi Christian, On 08 Jul 2016, at 02:22, Christian Balzer wrote: Hello, On Thu, 7 Jul 2016 23:19:35 +0200 Zoltan Arnold Nagy wrote: Hi Nick, How large NVMe drives are you running per 12 disks? In my current setup I have 4xP3700 per 36 disks but I feel like I could get by with 2… Just looking for

Re: [ceph-users] multiple journals on SSD

2016-07-07 Thread Zoltan Arnold Nagy
Hi Nick, How large NVMe drives are you running per 12 disks? In my current setup I have 4xP3700 per 36 disks but I feel like I could get by with 2… Just looking for community experience :-) Cheers, Zoltan > On 07 Jul 2016, at 10:45, Nick Fisk wrote: > > Just to add if you really want to go w
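For context, a rough sketch of how one NVMe ends up hosting several journals with ceph-disk (device names are placeholders; ceph-disk carves a fresh journal partition out of the NVMe for every OSD it prepares):

    ceph-disk prepare /dev/sdb /dev/nvme0n1   # OSD data on sdb, journal partition on nvme0n1
    ceph-disk prepare /dev/sdc /dev/nvme0n1
    ceph-disk prepare /dev/sdd /dev/nvme0n1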

[ceph-users] is it time already to move from hammer to jewel?

2016-07-06 Thread Zoltan Arnold Nagy
Hey, Those out there who are running production clusters: have you already upgraded to Jewel? I usually wait until .2 is out (which it is now for Jewel), but I'm just looking for largish deployment experiences in the field before I pull the trigger over the weekend. It’s a largish upgrade going from

Re: [ceph-users] Another cluster completely hang

2016-06-29 Thread Zoltan Arnold Nagy
Just losing one disk doesn’t automagically delete it from CRUSH, but in the output you had 10 disks listed, so there must be something else going on - did you delete the disk from the crush map as well? Ceph waits 300 secs by default, AFAIK, to mark an OSD out, after which it will start to recover. > On
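For reference, that 300-second window corresponds to mon_osd_down_out_interval; a minimal sketch of where it lives (the value shown is the default):

    [mon]
    # seconds before a down OSD is marked out and recovery/backfill starts
    mon osd down out interval = 300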

[ceph-users] multiple, independent rgws on the same ceph cluster

2016-05-31 Thread Zoltan Arnold Nagy
Hi, Is there a way to accomplish the $subject? I’ve tried to set rgw_zone, since from Jewel all rgw pools are prefixed with the zone name; however, this seems to depend on the metadata in an .rgw.zone pool, which of course doesn’t exist in the case of independent rgw installations. Is there a way to m
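A rough sketch of what I’m attempting (the instance names below are made up for illustration; as said, this alone doesn’t behave as hoped because the zone metadata doesn’t exist):

    [client.rgw.instance-a]
    rgw_zone = zone-a

    [client.rgw.instance-b]
    rgw_zone = zone-b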

[ceph-users] pg repair behavior? (Was: Re: getting rid of misplaced objects)

2016-02-15 Thread Zoltan Arnold Nagy
> http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/#pgs-inconsistent > > Bryan > > On 2/11/16, 11:21 AM, "ceph-users on behalf of Zoltan Arnold Nagy" > > wrote: > >> Hi, >> >> Are there any tips and tricks ar

Re: [ceph-users] Dell Ceph Hardware recommendations

2016-02-11 Thread Zoltan Arnold Nagy
That PDF specifically calls for P3700 NVMe SSDs, not the consumer 750. You usually need high-endurance drives. I’m using 1x400GB Intel P3700 per 9 OSDs (so 4xP3700 per 36-disk chassis). > On 11 Feb 2016, at 17:56, Michael wrote: > > Alex Leake writes: > >> >> Hello Michael, >> >> I mainta

[ceph-users] getting rid of misplaced objects

2016-02-11 Thread Zoltan Arnold Nagy
Hi, Are there any tips and tricks around getting rid of misplaced objects? I did check the archive but didn’t find anything. Right now my cluster looks like this: pgmap v43288593: 16384 pgs, 4 pools, 45439 GB data, 10383 kobjects 109 TB used, 349 TB / 458 TB avail
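For anyone chasing a similar state, the usual commands for finding the PGs behind the misplaced/degraded counts are (nothing here is specific to my cluster):

    ceph -s                      # overall object and PG summary
    ceph health detail           # lists the affected PGs
    ceph pg dump_stuck unclean   # PGs that are not active+clean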

Re: [ceph-users] Ceph and hadoop (fstab insted of CephFS)

2016-02-06 Thread Zoltan Arnold Nagy
y hadoop feature would work badly if replication is > disabled "from the hadoop side". > > Thanks! > From: Zoltan Arnold Nagy <mailto:zol...@linux.vnet.ibm.com> > Sent: Friday, 05 February 2016, 02:21 p.m. > To: Jose M > Subject: Re: [ceph-users] Ceph a

Re: [ceph-users] Ceph and hadoop (fstab insted of CephFS)

2016-02-06 Thread Zoltan Arnold Nagy
hy I'm trying to use ceph here, well it's because we > were given an infrastructure with the possibility to use a big ceph storage > that's working really really well (but as an object store, and wasn't used > until now with hadoop). > > > From: Zoltan Arn

Re: [ceph-users] Ceph and hadoop (fstab insted of CephFS)

2016-02-04 Thread Zoltan Arnold Nagy
Might be totally wrong here, but it’s not layering them, it’s replacing hdfs:// URLs with ceph:// URLs, so all the mapreduce/spark/hbase/whatever on top can use CephFS directly, which is not a bad thing to do (if it works) :-) > On 02 Feb 2016, at 16:50, John Spray wrote: > > On Tue, Feb 2, 201
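For the curious, "replacing hdfs:// with ceph://" is roughly a core-site.xml change on the Hadoop side; the properties below follow the CephFS Hadoop plugin documentation (shown here as name = value for brevity, and the monitor address is a placeholder):

    fs.default.name = ceph://mon-host:6789/
    fs.ceph.impl    = org.apache.hadoop.fs.ceph.CephFileSystem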

Re: [ceph-users] Optimal OSD count for SSDs / NVMe disks

2016-02-04 Thread Zoltan Arnold Nagy
One option you left out: you could put the journals on NVMe plus use the leftover space for a writeback bcache device which caches those 5 OSDs. This is exactly what I’m testing at the moment - 4xNVMe + 20 disks per box. Or just use the NVMe itself as a bcache cache device (don’t partition it) and l
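For reference, a minimal sketch of that second variant (whole NVMe as a writeback bcache cache device); the device names and cache-set UUID are placeholders:

    make-bcache -C /dev/nvme0n1                            # create the cache device
    make-bcache -B /dev/sdb                                # one backing device per OSD disk
    echo <cset-uuid> > /sys/block/bcache0/bcache/attach    # attach backing to cache set
    echo writeback > /sys/block/bcache0/bcache/cache_mode  # switch from the writethrough default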

Re: [ceph-users] Ceph + Libvirt + QEMU-KVM

2016-01-28 Thread Zoltan Arnold Nagy
This has recently been fixed and will be available in Mitaka (after a lot of people fighting this for years). Details: https://review.openstack.org/#/c/205282/ > On 28 Jan 2016, at 12:09, Mihai Gheorghe wrote: > > As far as i know, snapshotting with

Re: [ceph-users] ceph osd network configuration

2016-01-26 Thread Zoltan Arnold Nagy
Have you had any problems with kernels prior to 4.1? We’ve been using LACP in our environment without problems on Ubuntu 14.04.X (.13, .16 and .19 kernels). > On 26 Jan 2016, at 06:32, Alex Gorbachev wrote: > > > > On Saturday, January 23, 2016, 名花 > wrote: >
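For reference, the bonding stanza in /etc/network/interfaces looks roughly like this (a sketch only; it needs the ifenslave package, and the interface names and address are placeholders rather than our actual config):

    auto bond0
    iface bond0 inet static
        address 10.0.0.10
        netmask 255.255.255.0
        bond-mode 802.3ad      # LACP
        bond-miimon 100
        bond-lacp-rate 1
        bond-slaves eth0 eth1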

Re: [ceph-users] Ceph monitors 100% full filesystem, refusing start

2016-01-20 Thread Zoltan Arnold Nagy
0/2016 08:01 PM, Zoltan Arnold Nagy wrote: >> Wouldn’t actually blowing away the other monitors then recreating them >> from scratch solve the issue? >> >> Never done this, just thinking out loud. It would grab the osdmap and >> everything from the other monitor and for

Re: [ceph-users] Ceph monitors 100% full filesystem, refusing start

2016-01-20 Thread Zoltan Arnold Nagy
ote: > > On 01/20/2016 04:22 PM, Zoltan Arnold Nagy wrote: >> Hi Wido, >> >> So one out of the 5 monitors is running fine then? Did that one have more space >> for its leveldb? >> > > Yes. That was at 99% full and by cleaning some stuff in /var/cache a

Re: [ceph-users] Ceph monitors 100% full filesystem, refusing start

2016-01-20 Thread Zoltan Arnold Nagy
Hi Wido, So one out of the 5 monitors is running fine then? Did that one have more space for its leveldb? > On 20 Jan 2016, at 16:15, Wido den Hollander wrote: > > Hello, > > I have an issue with a (not in production!) Ceph cluster which I'm > trying to resolve. > > On Friday the network links
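For anyone hitting a similarly bloated monitor store later, the usual compaction knobs are sketched below (they assume the mon can still start and has a little free space to work with, which is exactly what was missing here; <id> is a placeholder):

    ceph tell mon.<id> compact      # trigger an online compaction of the mon store

    # or compact automatically at daemon startup, via ceph.conf:
    [mon]
    mon compact on start = true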

[ceph-users] jemalloc-enabled packages on trusty?

2016-01-20 Thread Zoltan Arnold Nagy
Hi, Has anyone published prebuilt debs for trusty from hammer with jemalloc compiled in instead of tcmalloc, or does everybody need to compile it themselves? :-) Cheers, Zoltan
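For anyone who does end up compiling it themselves, a rough sketch of the approach I’d expect (assuming hammer’s autotools build exposes a --with-jemalloc switch; worth confirming with ./configure --help first):

    sudo apt-get install libjemalloc-dev
    ./autogen.sh
    ./configure --with-jemalloc   # instead of the default tcmalloc
    make -j$(nproc)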

Re: [ceph-users] write speed , leave a little to be desired?

2015-12-11 Thread Zoltan Arnold Nagy
It’s very unfortunate that you guys are using the EVO drives. As we’ve discussed numerous times on the ML, they are not very suitable for this task. I think that 200-300MB/s is actually not bad (without knowing anything about the hardware setup, as you didn’t give details…) coming from those driv

Re: [ceph-users] Performance question

2015-11-24 Thread Zoltan Arnold Nagy
You are talking about 20 “GIG” (what is that? GB/s? Gb/s? I assume the latter) and then about 40Mbit/s. Am I the only one who cannot parse this? :-) > On 24 Nov 2015, at 17:27, Marek Dohojda wrote: > > 7 total servers, 20 GIG pipe between servers, both reads and writes. The > network itself
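For what it’s worth, a quick unit sanity check, taking the numbers at face value:

    20 Gbit/s ≈ 2.5 GB/s of raw line rate
    40 Mbit/s ≈ 5 MB/s (while 40 MB/s would be 320 Mbit/s)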

Re: [ceph-users] All SSD Pool - Odd Performance

2015-11-22 Thread Zoltan Arnold Nagy
It would have been more interesting if you had tweaked only one option, as now we can’t be sure which change had what impact… :-) > On 22 Nov 2015, at 04:29, Udo Lembke wrote: > > Hi Sean, > Haomai is right, that qemu can have huge performance differences. > > I have done two tests to the sam

Re: [ceph-users] Ceph, LIO, VMWARE anyone?

2015-01-23 Thread Zoltan Arnold Nagy
from VMWare and the target software; so the obvious choice is to leave VMWare's path selection at the default which is Fixed and picks the first target in ASCII-betical order. That means I am actually functioning in Active/Passive mode. Jake On Fri, Jan 23, 2015 at 8:46 AM, Zoltan Arno

Re: [ceph-users] Ceph, LIO, VMWARE anyone?

2015-01-23 Thread Zoltan Arnold Nagy
Just to chime in: it will look fine, feel fine, but underneath it's quite easy to get VMFS corruption. Happened in our tests. Also if you're running LIO, from time to time expect a kernel panic (haven't tried with the latest upstream, as I've been using Ubuntu 14.04 on my "export" hosts for the