Re: [ceph-users] osd crash

2016-12-01 Thread VELARTIS Philipp Dürhammer
I cannot start it either... From: Nick Fisk [mailto:n...@fisk.me.uk] Sent: Thursday, 01 December 2016 13:15 To: VELARTIS Philipp Dürhammer; ceph-us...@ceph.com Subject: RE: osd crash Are you using Ubuntu 16.04 (guessing from your kernel version)? There was a NUMA bug in early kernels; try updating
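
If that is the cause, a minimal sketch of the suggested fix, assuming Ubuntu 16.04 (the HWE package is the stock Ubuntu one, not something named in this thread):

    # Confirm the running kernel first
    uname -r
    # Pull in a newer kernel line via the hardware-enablement stack, then reboot
    sudo apt-get update
    sudo apt-get install --install-recommends linux-generic-hwe-16.04
    sudo reboot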

[ceph-users] osd crash - disk hangs

2016-12-01 Thread VELARTIS Philipp Dürhammer
Hello! Tonight I had an OSD crash. See the dump below. The OSD is also still mounted. What's the cause? A bug? What should I do next? I can't run lsof or ps ax because it hangs. Thank you! Dec 1 00:31:30 ceph2 kernel: [17314369.493029] divide error: [#1] SMP Dec 1 00:31:30 ceph2 kernel:
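
A hedged first-response sketch for a crashed OSD whose mount hangs local tools (the OSD id is a placeholder; run these from a node that still responds):

    dmesg | tail -n 50          # look for the divide error and any hung-task traces
    ceph osd tree | grep down   # confirm which OSD the cluster has marked down
    ceph osd out osd.<id>       # let recovery start while the box is investigated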

[ceph-users] osd crash

2016-12-01 Thread VELARTIS Philipp Dürhammer
Hello! Tonight I had an OSD crash. See the dump below. The OSD is also still mounted. What's the cause? A bug? What should I do next? Thank you! Dec 1 00:31:30 ceph2 kernel: [17314369.493029] divide error: [#1] SMP Dec 1 00:31:30 ceph2 kernel: [17314369.493062] Modules linked in: act_police

[ceph-users] changing ceph config - but still same mount options

2016-03-20 Thread VELARTIS Philipp Dürhammer
Hi, earlier I tested with: osd mount options xfs = "rw,noatime,nobarrier,inode64,logbsize=256k,logbufs=8,allocsize=4M" (added inode64) and then changed to osd mount options xfs = "rw,noatime,nobarrier,logbsize=256k,logbufs=8,allocsize=4M" but after reboot it still mounts with inode64, as I
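
Worth noting: on kernels 3.7 and later, inode64 is the XFS default, so it appears in the mount options even when nothing requests it. A quick check of what the OSD partitions are actually mounted with:

    # Effective options as the kernel sees them (not what ceph.conf requested)
    grep xfs /proc/mounts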

[ceph-users] rbd read speed only 1/4 of write speed

2014-12-16 Thread VELARTIS Philipp Dürhammer
Hello, read speed inside our VMs (most of them Windows) is only ¼ of the write speed. Write speed is about 450-500 MB/s and read is only about 100 MB/s. Our network is 10Gbit for OSDs and 10Gbit for MONs. We have 3 servers with 15 OSDs each
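
One way to tell whether the gap is in the cluster or in the guests: benchmark reads and writes at the RADOS layer and compare. A minimal sketch, assuming a pool named rbd and defaults otherwise:

    rados bench -p rbd 60 write -t 16 --no-cleanup   # leave objects behind for the read tests
    rados bench -p rbd 60 seq -t 16                  # sequential reads
    rados bench -p rbd 60 rand -t 16                 # random reads
    rados -p rbd cleanup                             # remove the benchmark objects

If RADOS-level reads are fast, the bottleneck is more likely guest-side (for example read-ahead settings inside the VMs).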

[ceph-users] how to check real rados read speed

2014-10-29 Thread VELARTIS Philipp Dürhammer
Hi, with ceph -w I can see Ceph writes, reads, and IO. But the reads shown seem to be only reads which are not served from the OSD or monitor cache. As we have 128GB of RAM in every Ceph server, our monitors and OSDs are set to use a lot of RAM. Monitoring only very rarely shows some Ceph reads... but a lot
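
A hedged way to separate the layers: compare what ceph -w reports against what the disks actually serve. A read visible in ceph -w but not in iostat was answered from the OSD node's page cache; a read visible in neither never left the client at all. Device names below are placeholders:

    ceph -w                      # client-visible ops, cluster-wide
    iostat -x 5 /dev/sd[b-p]     # actual per-disk reads on one OSD node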

Re: [ceph-users] write performance per disk

2014-07-06 Thread VELARTIS Philipp Dürhammer
...@lists.ceph.com] on behalf of Mark Nelson [mark.nel...@inktank.com] Sent: Friday, 04 July 2014 16:10 To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] write performance per disk On 07/03/2014 08:11 AM, VELARTIS Philipp Dürhammer wrote: Hi, I have a ceph cluster setup (with 45

Re: [ceph-users] write performance per disk

2014-07-04 Thread VELARTIS Philipp Dürhammer
Philipp Dürhammer; ceph-users@lists.ceph.com Subject: Re: AW: [ceph-users] write performance per disk On 07/03/2014 04:32 PM, VELARTIS Philipp Dürhammer wrote: Hi, ceph.conf: osd journal size = 15360 rbd cache = true rbd cache size = 2147483648 rbd cache max
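
The snippet cuts off mid-option; purely as an illustration of how that family of settings reads in ceph.conf (the max-dirty values below are invented for the example, not taken from this thread):

    [client]
    rbd cache = true
    rbd cache size = 2147483648        # 2 GiB, as quoted above
    rbd cache max dirty = 1610612736   # illustrative value only (~75% of the cache)
    rbd cache max dirty age = 1.0      # illustrative value only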

[ceph-users] write performance per disk

2014-07-03 Thread VELARTIS Philipp Dürhammer
Hi, I have a Ceph cluster setup (45 SATA disks, journals on the same disks) and get only 450 MB/s sequential writes (the maximum when playing with thread counts in rados bench) with a replica count of 2. That is roughly ~20 MB/s of writes per disk (which is also what I see in atop); theoretically, with replica 2 and having journals on disk
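
The arithmetic behind that per-disk figure, as a sketch (assuming replica 2 and journals co-located on the data disks, as described):

    # 450 MB/s aggregate client writes x 2 replicas = 900 MB/s of object data
    # 900 MB/s / 45 disks                           = ~20 MB/s of data per disk (the atop figure)
    # Journals on the same disks double the raw writes again, to ~40 MB/s per spindle,
    # plus the seek cost of interleaving the journal and data streams.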

Re: [ceph-users] write performance per disk

2014-07-03 Thread VELARTIS Philipp Dürhammer
-users-boun...@lists.ceph.com] On behalf of Wido den Hollander Sent: Thursday, 03 July 2014 15:22 To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] write performance per disk On 07/03/2014 03:11 PM, VELARTIS Philipp Dürhammer wrote: Hi, I have a ceph cluster setup (with 45 sata

Re: [ceph-users] Can we map OSDs from different hosts (servers) to a Pool in Ceph

2014-06-12 Thread VELARTIS Philipp Dürhammer
Hi, will Ceph support mixing different disk pools (for example spinners and SSDs) a little better (more safely) in the future? Thank you, Philipp. On Wed, Jun 11, 2014 at 5:18 AM, Davide Fanciola dfanci...@gmail.com wrote: Hi, we have a similar setup where we have SSD and HDD in the same hosts.
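
Part of what made mixed hosts fragile at the time: by default an OSD re-registers under its host bucket on every start, which silently pulls hand-placed OSDs out of a custom ssd/sata hierarchy. The usual guard, sketched here (the option is real; disabling it means you manage CRUSH placement yourself):

    [osd]
    osd crush update on start = false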

[ceph-users] someone using btrfs with ceph

2014-05-28 Thread VELARTIS Philipp Dürhammer
Is anyone using btrfs in production? I know people say it's still not stable, but do we really use that many of its features with Ceph? And Facebook also uses it in production. It would be a big speed gain.

[ceph-users] SSD and SATA Pool CRUSHMAP

2014-05-26 Thread VELARTIS Philipp Dürhammer
Hi, what is the best way to implement an SSD pool and a SATA pool (both across the same 3 servers)? We have 3 servers with 15 SATAs and 4 SSDs each. I would prefer a fast, small SSD pool plus a big SATA pool over cache tiering or SSDs as journals, as 15 SATAs per server with writeback cache is OK from
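
A sketch of how this was typically wired up at the time (bucket and rule names are illustrative; this predates CRUSH device classes, so each media type gets its own root):

    # Separate CRUSH roots, one per media type
    ceph osd crush add-bucket ssd-root root
    ceph osd crush add-bucket sata-root root
    # ...move the per-host ssd/sata buckets under the matching root, then:
    ceph osd crush rule create-simple ssd-rule ssd-root host
    ceph osd crush rule create-simple sata-rule sata-root host
    # Point each pool at its rule (crush_ruleset was the setting's name back then)
    ceph osd pool set ssd-pool crush_ruleset <rule-id>
    ceph osd pool set sata-pool crush_ruleset <rule-id>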