Re: [ceph-users] Redhat Storage Ceph Storage 1.3 released

2015-07-02 Thread Stefan Priebe - Profihost AG
Hi, On 01.07.2015 at 23:35 Loic Dachary wrote: Hi, The details of the differences between the Hammer point releases and the RedHat Ceph Storage 1.3 can be listed as described at http://www.spinics.net/lists/ceph-devel/msg24489.html ("reconciliation between hammer and v0.94.1.2"). The

Re: [ceph-users] file/directory invisible through ceph-fuse

2015-07-02 Thread flisky
On 2015-07-02 00:16, Gregory Farnum wrote: How reproducible is this issue for you? Ideally I'd like to get logs from both clients and the MDS server while this is happening, with mds and client debug set to 20. And also to know if dropping kernel caches and re-listing the directory resolves
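(For reference, a minimal sketch of how such logs are usually gathered, assuming a Hammer-era deployment with default admin socket locations; the <id>, socket path and directory below are placeholders, adjust to your setup:)

    # raise debug levels at runtime (revert afterwards, logs grow quickly)
    ceph daemon mds.<id> config set debug_mds 20
    ceph daemon mds.<id> config set debug_ms 1
    # on the ceph-fuse client, via its admin socket
    ceph --admin-daemon /var/run/ceph/ceph-client.<id>.asok config set debug_client 20
    # drop kernel dentry/inode caches, then re-list the affected directory
    sync; echo 2 > /proc/sys/vm/drop_caches
    ls -la /mnt/cephfs/<affected-dir>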

[ceph-users] Timeout mechanism in ceph client tick

2015-07-02 Thread Z Zhang
Hi Guys, Reading through the Ceph client code, there is a timeout mechanism in tick when doing a mount. Recently we saw some client requests to the MDS take a long time to get a reply while running massive tests against CephFS. If we want the CephFS user to see a timeout instead of waiting indefinitely for the reply, can we
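(A sketch of the knob that appears to govern the mount path, assuming the standard client_mount_timeout option; whether per-request timeouts after the mount can be surfaced to the user is a separate question:)

    # ceph.conf on the client side -- sketch only, value is illustrative
    [client]
        client mount timeout = 60   ; fail the mount after 60s instead of the much larger default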

[ceph-users] metadata server rejoin time

2015-07-02 Thread Matteo Dacrema
Hi all, I'm using CephFS on Hammer and I have 1.5 million files, 2 metadata servers in active/standby configuration with 8 GB of RAM, 20 clients with 2 GB of RAM each, and 2 OSD nodes with 4 80GB OSDs and 4GB of RAM. I've noticed that if I kill the active metadata server the second one took
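(One commonly suggested way to shorten failover, sketched on the assumption that the standby can run in standby-replay mode so its cache is already warm when it takes over; not a guaranteed fix for a long rejoin phase, and the daemon name below is hypothetical:)

    # ceph.conf -- sketch only
    [mds.b]
        mds standby replay = true
        mds standby for rank = 0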

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-07-02 Thread German Anders
output from iostat: *CEPHOSD01:*
Device:       rrqm/s  wrqm/s    r/s     w/s   rMB/s   wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdc(ceph-0)     0.00    0.00   1.00  389.00    0.00   35.98   188.96    60.32  120.12   16.00  120.39   1.26  49.20
sdd(ceph-1)

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-07-02 Thread Emmanuel Florac
On Wed, 1 Jul 2015 17:13:03 -0300 German Anders gand...@despegar.com wrote: Hi cephers, Is there anyone out there who has implemented EnhanceIO in a production environment? Any recommendation? Any perf output to share showing the difference between using it and not? I've tried EnhanceIO back when it

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-07-02 Thread German Anders
The idea is to cache RBD at the host level. It could also be possible to cache at the OSD level. We have high iowait and we need to lower it a bit, since we are already getting the maximum out of our SAS disks, 100-110 IOPS per disk (3TB OSDs). Any advice? Flashcache? On Thursday, July 2, 2015, Jan Schermer

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-07-02 Thread Jan Schermer
I think I posted my experience here ~1 month ago. My advice for EnhanceIO: don’t use it. But you didn’t exactly say what you want to cache - do you want to cache the OSD filestore disks? RBD devices on hosts? RBD devices inside guests? Jan On 02 Jul 2015, at 11:29, Emmanuel Florac

Re: [ceph-users] xattrs vs omap

2015-07-02 Thread Jan Schermer
Does anyone have a known-good set of parameters for ext4? I want to try it as well but I'm a bit worried about what happens if I get it wrong. Thanks Jan On 02 Jul 2015, at 09:40, Nick Fisk n...@fisk.me.uk wrote: -Original Message- From: ceph-users
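(For what it's worth, a sketch of the ext4-related settings the docs of that era describe; treat the exact values as assumptions to be tested rather than a known-good recipe:)

    # ceph.conf -- sketch, not battle-tested
    [osd]
        osd mkfs type = ext4
        osd mount options ext4 = user_xattr,rw,noatime,nodiratime
        ; older docs also suggest "filestore xattr use omap = true" for ext4,
        ; though I believe newer releases handle xattr spill-over to omap automatically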

[ceph-users] Fwd: unable to read magic from mon data

2015-07-02 Thread Ben Jost
Hi, after a power loss all 3 monitors crashed at the same time. When I try to start the monitors, all three report: === mon.2 === Starting Ceph mon.2 on cephmon172... 2015-07-01 18:09:09.312039 7fe3f923c840 -1 unable to read magic from mon data failed: 'ulimit -n 32768; /usr/bin/ceph-mon -i 2
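(A few sanity checks that are usually suggested first; the paths below assume the default mon data location and are illustrative only:)

    # is the mon data directory there and populated?
    ls -l /var/lib/ceph/mon/ceph-2/
    # it should contain at least a keyring and a store.db directory
    # also verify the filesystem holding it came back cleanly after the power loss
    mount | grep /var/lib/ceph
    dmesg | grep -i -e ext4 -e xfs | tail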

Re: [ceph-users] One of our nodes has logs saying: wrongly marked me down

2015-07-02 Thread Tuomas Juntunen
Thanks, I'll test these values, and also raise the osd heartbeat grace to 60 seconds instead of 20; hopefully that will help with the latency during deep scrub. I changed shards to 6 and shard threads to 2, so it matches the physical cores on the server, not including hyperthreading. Br, T
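(Roughly the corresponding ceph.conf entries, as I understand the options being discussed; the values are the ones from this thread, not a general recommendation:)

    [osd]
        osd heartbeat grace = 60
        osd op num shards = 6
        osd op num threads per shard = 2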

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-07-02 Thread Jan Schermer
And those disks are spindles? Looks like there's simply too few of them…. Jan On 02 Jul 2015, at 13:49, German Anders gand...@despegar.com wrote: output from iostat: CEPHOSD01: Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await

[ceph-users] How to use different Ceph interfaces?

2015-07-02 Thread Hadi Montakhabi
How could I use Ceph's different interfaces? Here is what I understand so far. Please correct me if I am mistaken. 1. Ceph Object Gateway (http://docs.ceph.com/docs/master/radosgw/): I am assuming the only way to use it is to utilize the API. 2. Ceph Block Device: Here is where I am not very clear. Is
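(For the block device part, a minimal sketch of the usual CLI path, assuming an existing pool named rbd and a hypothetical image name; librbd via QEMU/libvirt is the other common route:)

    # create a 10 GB image and map it through the kernel rbd driver
    rbd create --size 10240 rbd/myimage
    rbd map rbd/myimage            # shows up as /dev/rbd0 (or similar)
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /mnt/myimage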

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-07-02 Thread Lionel Bouton
On 07/02/15 13:49, German Anders wrote: output from iostat: CEPHOSD01: Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sdc(ceph-0) 0.00 0.00 1.00 389.00 0.00 35.98 188.96 60.32 120.12

Re: [ceph-users] any recommendation of using EnhanceIO?

2015-07-02 Thread German Anders
yeah 3TB SAS disks *German Anders* Storage System Engineer Leader *Despegar* | IT Team *office* +54 11 4894 3500 x3408 *mobile* +54 911 3493 7262 *mail* gand...@despegar.com 2015-07-02 9:04 GMT-03:00 Jan Schermer j...@schermer.cz: And those disks are spindles? Looks like there’s simply too

Re: [ceph-users] Redhat Storage Ceph Storage 1.3 released

2015-07-02 Thread Loic Dachary
Hi, On 02/07/2015 08:16, Stefan Priebe - Profihost AG wrote: Hi, On 01.07.2015 at 23:35 Loic Dachary wrote: Hi, The details of the differences between the Hammer point releases and the RedHat Ceph Storage 1.3 can be listed as described at

Re: [ceph-users] Redhat Storage Ceph Storage 1.3 released

2015-07-02 Thread Loic Dachary
Hi, On 02/07/2015 10:16, Vickey Singh wrote: Thanks Loic / Ken. I am a bit confused; we are running open-source Ceph Firefly in production and planning to upgrade to the stable Hammer release. Questions: - Which exact Hammer release is currently the stable release that we can upgrade to

Re: [ceph-users] One of our nodes has logs saying: wrongly marked me down

2015-07-02 Thread Tuomas Juntunen
Just reporting back on my findings. After making these changes the flapping occurred just once during the night. To fix it further I changed the heartbeat grace to 120 secs. Also matched osd_op_threads and filestore_op_threads to the core count. Br, T From: ceph-users

Re: [ceph-users] Node reboot -- OSDs not logging off from cluster

2015-07-02 Thread Johannes Formann
Hi, When rebooting one of the nodes (e.g. for a kernel upgrade) the OSDs do not seem to shut down correctly. Clients hang and ceph osd tree shows the OSDs of that node still up. Repeated runs of ceph osd tree show them going down after a while. For instance, here OSD.7 is still up, even
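(Independent of why the shutdown message is getting lost, the usual precaution for a planned reboot is sketched below; this is standard practice rather than a fix for the reporting issue itself:)

    ceph osd set noout        # prevent rebalancing while the node is down
    # reboot the node, then once its OSDs are back up:
    ceph osd unset noout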

Re: [ceph-users] Where does 130IOPS come from?

2015-07-02 Thread Wido den Hollander
On 07/02/2015 05:53 PM, Steffen Tilsch wrote: Hello Cephers, Whenever I read about HDDs for OSDs it is said that they will deliver around 130 IOPS. Where does this number come from and how was it measured (random/seq, how big were the IOs, which queue-depth, at what latency), or is it more a
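(The number is essentially back-of-the-envelope mechanics for a 7200 rpm drive doing small random I/O, something like the arithmetic below, so it says little about sequential or large-block workloads:)

    average rotational latency  ~ (60 s / 7200 rpm) / 2      ≈ 4.2 ms
    average seek time (typical 7200 rpm SATA/SAS)            ≈ 8-9 ms
    per-random-IO service time                               ≈ 12-13 ms
    => roughly 1 / 0.0125 s ≈ 75-100 IOPS, somewhat more with NCQ / short seeks,
       which is where the oft-quoted 100-150 ("~130") IOPS figure comes from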

Re: [ceph-users] Ceph Journal Disk Size

2015-07-02 Thread Shane Gibson
Lionel - thanks for the feedback ... inline below ... On 7/2/15, 9:58 AM, Lionel Bouton lionel+c...@bouton.name wrote: Ouch. These spinning disks are probably a bottleneck: there is regular advice on this list to use one DC SSD for 4 OSDs. You would probably
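(The journal sizing rule of thumb from the docs, for reference; the throughput figure below is an assumption you would replace with your own measurement:)

    journal size >= 2 * (expected OSD throughput * filestore max sync interval)
    e.g. 100 MB/s per OSD, filestore max sync interval = 5 s
         -> 2 * 100 MB/s * 5 s = 1000 MB ≈ 1 GB of journal per OSD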

[ceph-users] Degraded in the negative?

2015-07-02 Thread Jan Schermer
Interesting. Any idea why degraded could be negative? :) 2015-07-02 17:27:11.551959 mon.0 [INF] pgmap v23198138: 36032 pgs: 35468 active+clean, 551 active+recovery_wait, 13 active+recovering; 13005 GB data, 48944 GB used, 21716 GB / 70660 GB avail; 11159KB/s rd, 129MB/s wr, 5059op/s;

Re: [ceph-users] Ceph Journal Disk Size

2015-07-02 Thread Shane Gibson
I'd def be happy to share what numbers I can get out of it. I'm still a neophyte w/ Ceph, and learning how to operate it, set it up ... etc... My limited performance testing to date has been with the stock XFS filesystems built by ceph-disk for the OSDs, basic PG/CRUSH map stuff - and using dd across
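(If it helps, the tool most people on the list reach for before dd-level testing is rados bench; a sketch, assuming a throwaway pool named "test" that you create and delete yourself:)

    ceph osd pool create test 128        # hypothetical scratch pool, 128 PGs
    rados bench -p test 60 write --no-cleanup
    rados bench -p test 60 seq           # read back what the write phase left behind
    rados -p test cleanup                # remove the benchmark objects afterwards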

[ceph-users] Ceph Monitor Memory Sizing

2015-07-02 Thread Nate Curry
I was reading the documentation on the website in regards to the recommended memory for the monitors. It says that there should be 1GB of RAM per daemon instance. Does the daemon instance refer to the number of OSDs? I am planning on setting up 4 hosts with 16 OSDs each initially. Would I need

Re: [ceph-users] Ceph Journal Disk Size

2015-07-02 Thread Nate Curry
Are you using the 4TB disks for the journal? *Nate Curry* IT Manager ISSM *Mosaic ATM* mobile: 240.285.7341 office: 571.223.7036 x226 cu...@mosaicatm.com On Thu, Jul 2, 2015 at 12:16 PM, Shane Gibson shane_gib...@symantec.com wrote: I'd def be happy to share what numbers I can get out of it.