cannot start it also...
From: Nick Fisk [mailto:n...@fisk.me.uk]
Sent: Thursday, 01 December 2016 13:15
To: VELARTIS Philipp Dürhammer; ceph-us...@ceph.com
Subject: RE: osd crash
Are you using Ubuntu 16.04? (Guessing from your kernel version.) There was a
NUMA bug in early kernels; try updating.
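A minimal sketch of the update path, assuming a stock Ubuntu 16.04 install
(the HWE kernel package name is the standard one, not something from this
thread):

  # show the running kernel version
  uname -r
  # install the newer hardware-enablement kernel series
  sudo apt-get update
  sudo apt-get install --install-recommends linux-generic-hwe-16.04
  sudo reboot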
Hello!
Tonight I had an OSD crash; see the dump below. This OSD is also still
mounted. What's the cause? A bug? What should I do next? I can't run lsof or
ps ax because they hang.
Thank you!
Dec 1 00:31:30 ceph2 kernel: [17314369.493029] divide error: [#1] SMP
Dec 1 00:31:30 ceph2 kernel: [17314369.493062] Modules linked in: act_police
Hi, before, I tested with:
osd mount options xfs =
"rw,noatime,nobarrier,inode64,logbsize=256k,logbufs=8,allocsize=4M" (added
inode64)
and then changed to
osd mount options xfs =
"rw,noatime,nobarrier,logbsize=256k,logbufs=8,allocsize=4M"
but after a reboot it still mounts with inode64, as I
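For what it's worth, a quick way to check the options an OSD filesystem is
actually mounted with (the paths assume the default /var/lib/ceph layout;
note that recent kernels mount XFS with inode64 by default, which would
explain it persisting):

  # show effective mount options for the OSD data directories
  mount | grep /var/lib/ceph/osd
  # inspect the filesystem geometry directly (example OSD mount point)
  xfs_info /var/lib/ceph/osd/ceph-0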
Hello,
Read speed inside our VMs (most of them Windows) is only about a quarter of
the write speed: writes run at about 450-500 MB/s, while reads reach only
about 100 MB/s.
Our network is 10 Gbit for the OSDs and 10 Gbit for the MONs. We have 3
servers with 15 OSDs each.
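To separate VM/RBD effects from raw cluster throughput, it may help to compare
rados bench writes against sequential reads (the pool name rbd is an
assumption; the write pass must keep its objects so the read pass has
something to fetch):

  # write for 60 seconds and keep the objects for the read test
  rados bench -p rbd 60 write --no-cleanup
  # sequential read of the objects written above
  rados bench -p rbd 60 seq
  # remove the benchmark objects afterwards
  rados -p rbd cleanup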
Hi,
With ceph -w I can see Ceph writes, reads, and IO.
But the reads shown seem to be only those which are not served from the OSD or
monitor cache.
As we have 128 GB in every Ceph server, our monitors and OSDs are set to use a
lot of RAM.
Monitoring only very rarely shows some Ceph reads... but a lot
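One way to check whether reads are being absorbed by OSD-side caching is to
watch a single OSD's perf counters while the VMs read (osd.0 is just an
example id; the command runs on the host carrying that OSD):

  # dump live performance counters for one OSD via its admin socket
  ceph daemon osd.0 perf dump
  # cluster-wide I/O summary, the same figures ceph -w prints periodically
  ceph -s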
...@lists.ceph.com] on behalf of Mark Nelson [mark.nel...@inktank.com]
Sent: Friday, 04 July 2014 16:10
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] write performance per disk
On 07/03/2014 08:11 AM, VELARTIS Philipp Dürhammer wrote:
Hi,
I have a ceph cluster setup (with 45
Philipp Dürhammer; ceph-users@lists.ceph.com
Subject: Re: AW: [ceph-users] write performance per disk
On 07/03/2014 04:32 PM, VELARTIS Philipp Dürhammer wrote:
Hi,
Ceph.conf:
osd journal size = 15360
rbd cache = true
rbd cache size = 2147483648
rbd cache max
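For context, a fuller client-side cache section usually looks something like
this (the dirty thresholds below are illustrative values, not taken from this
thread):

  [client]
  rbd cache = true
  rbd cache size = 2147483648          # 2 GiB of cache per client
  rbd cache max dirty = 1610612736     # writeback flush ceiling, ~75% of cache
  rbd cache target dirty = 1073741824  # start flushing at ~50% of cache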
Hi,
I have a Ceph cluster setup (45 SATA disks, journals on the same disks) and
get only 450 MB/s sequential writes (the maximum when playing around with
thread counts in rados bench) with a replica count of 2.
That is about ~20 MB/s of writes per disk (which is also what I see in atop).
Theoretically, with replica 2 and journals on disk
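Filling in the arithmetic this message starts, under the assumption that atop
is counting only data-partition writes:

  # 450 MB/s client writes x 2 replicas = 900 MB/s of data across 45 disks,
  #   i.e. ~20 MB/s of data per disk, matching the atop figure
  # co-located journals write everything twice, so raw I/O is ~40 MB/s per
  #   disk, still well below what a SATA spinner can stream sequentially
  # more client parallelism may move the ceiling (thread count illustrative):
  rados bench -p rbd 60 write -t 64 --no-cleanup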
-users-boun...@lists.ceph.com] on behalf of Wido den Hollander
Sent: Thursday, 03 July 2014 15:22
To: ceph-users@lists.ceph.com
An: ceph-users@lists.ceph.com
Betreff: Re: [ceph-users] write performance per disk
On 07/03/2014 03:11 PM, VELARTIS Philipp Dürhammer wrote:
Hi,
I have a ceph cluster setup (with 45 sata
Hi,
Will Ceph support mixing different disk pools (for example spinners and SSDs)
a little better (more safely) in the future?
Thank you,
Philipp
On Wed, Jun 11, 2014 at 5:18 AM, Davide Fanciola dfanci...@gmail.com wrote:
Hi,
We have a similar setup, with SSDs and HDDs in the same hosts.
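One common way to keep such pools apart, sketched here with made-up rule and
pool names (the device-class syntax needs Luminous or later; older releases
need a hand-edited CRUSH hierarchy instead, as in the sketch further below):

  # one CRUSH rule per device class, then bind a pool to each
  ceph osd crush rule create-replicated ssd_rule default host ssd
  ceph osd crush rule create-replicated hdd_rule default host hdd
  ceph osd pool create fastpool 128 128 replicated ssd_rule
  ceph osd pool create bigpool 1024 1024 replicated hdd_rule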
Is anyone using btrfs in production?
I know people say it's still not stable, but do we really use that many of its
features with Ceph? Facebook also uses it in production. It would be a big
speed gain.
Hi,
What is the best way to implement an SSD pool and a SATA pool (both spanning
the same 3 servers)?
We have 3 servers with 15 SATAs and 4 SSDs each.
I would rather have a fast, small SSD pool and a big SATA pool than cache
tiering or SSDs as journals, as 15 SATAs per server with writeback cache is OK
from
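For the 2014-era releases in this thread there were no device classes yet, so
the usual approach was a parallel CRUSH hierarchy (a sketch; the bucket, host,
and OSD names are invented):

  # add a separate root for the SSDs and a per-host SSD bucket under it
  ceph osd crush add-bucket ssd-root root
  ceph osd crush add-bucket ceph1-ssd host
  ceph osd crush move ceph1-ssd root=ssd-root
  # place each SSD OSD into its host's SSD bucket (osd.45 is an example id)
  ceph osd crush set osd.45 1.0 root=ssd-root host=ceph1-ssd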