---
DT Netsolution GmbH - Taläckerstr. 30-D-70437 Stuttgart
Geschäftsführer: Daniel Schwager, Stefan Hörz - HRB Stuttgart 19870
Tel: +49-711-849910-32, Fax: -932 - Mailto:daniel.schwa...@dtnet.de
> -O
Maybe something like this?
192.168.135.31:6789:/ /cephfs ceph
name=cephfs,secretfile=/etc/ceph/client.cephfs,noatime 0 0
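For a quick test before touching fstab, the equivalent one-off mount should be
something like this (same monitor, mount point and secret file as in the line
above):

# one-off kernel-client mount, mirroring the fstab entry
mount -t ceph 192.168.135.31:6789:/ /cephfs -o name=cephfs,secretfile=/etc/ceph/client.cephfs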
Best regards
Daniel
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Lazuardi Nasution
Sent: Wednesday, August 03, 2016 6:10 PM
Hi,
> ok...OSD stop. Any reason why the OSD stops? (I assume that if the journal
> disk fails, the OSD should keep working without a journal. Isn't it?)
No. In my understanding - if a journal fails, all OSDs attached to this journal
HDD fail as well.
E.g. if you have 4 OSDs with the 4 journals located on one
Hi ceph-users,
any idea how to fix my cluster? OSD.21 was removed, but some (stale) PGs are
still pointing to OSD.21...
I don't know how to proceed... Help is very welcome!
Best regards
Daniel
> -Original Message-
> From: Daniel Schwager
> Sent: Friday, January 08, 2016 3:10 PM
>
Well, ok - I found the solution:
ceph health detail
HEALTH_WARN 50 pgs stale; 50 pgs stuck stale
pg 34.225 is stuck inactive since forever, current state
creating, last acting []
pg 34.225 is stuck unclean since forever, current state
One more - I tried to recreate the pg, but now this pg is "stuck inactive":
root@ceph-admin:~# ceph pg force_create_pg 34.225
pg 34.225 now creating, ok
root@ceph-admin:~# ceph health detail
HEALTH_WARN 49 pgs stale; 1 pgs stuck inactive; 49 pgs stuck stale; 1
Hi,
we had a HW problem with OSD.21 today. The OSD daemon was down and "smartctl"
told me about some hardware errors.
I decided to remove the HDD:
ceph osd out 21
ceph osd crush remove osd.21
ceph auth del osd.21
ceph osd rm osd.21
But afterwards I saw
Hi,
I think the root CA (COMODO RSA Certification Authority) is not available on
your Linux host. Connecting to https://ceph.com/ with Google Chrome works fine.
regards
Danny
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Dietmar Maurer
Hi Cristian,
We will try to report back, but I'm not sure our use case is relevant.
We are trying to use every dirty trick to speed up the VMs.
we have the same use-case.
The second pool is for the test machines and has the journal in RAM,
so this part is very volatile. We don't really
Hello Mike,
there is also another way:
* for CONF 2,3: replace the 200GB SSD with an 800GB one and add another 1-2
SSDs to each node.
* make a tier1 read-write cache on the SSDs
* you can also add journal partitions on them if you wish - then data
will move from SSD to SSD before settling down on HDD
* on HDD
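A rough sketch of the tiering commands (the pool names hotpool/coldpool are
invented here, not from your setup):

# attach an SSD pool as a writeback cache tier in front of the HDD pool
ceph osd tier add coldpool hotpool
ceph osd tier cache-mode hotpool writeback
ceph osd tier set-overlay coldpool hotpool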
Hi.
If you are like me, you have the journals for your OSDs with rotating
media stored separately on an SSD. If you are even more like me, you
happen to use Intel 530 SSDs in some of your hosts. If so, please do
check your S.M.A.R.T. statistics regularly, because these SSDs really
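A sketch of such a check (the device name is just an example; attribute names
vary per vendor):

# watch the SSD's wear/endurance attributes
smartctl -A /dev/sdb | egrep -i 'wearout|host_writes|wear_leveling'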
Hi Ramakrishna,
we use the physical path (containing the serial number) to a disk to avoid
complexity and wrong mappings... This path will never change:
/etc/ceph/ceph.conf
[osd.16]
devs = /dev/disk/by-id/scsi-SATA_ST4000NM0033-9Z_Z1Z0SDCY-part1
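To find these stable by-id paths on a host, something like this works:

# list the persistent device names and the sdX devices they point to
ls -l /dev/disk/by-id/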
Hi,
is there a possibility to see which rbd device (used by a KVM hypervisor)
produces high load on a ceph cluster? ceph -w shows only the total usage -
but I don't see which client or rbd device is responsible for this load.
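Newer Ceph releases (Nautilus and later) can break this down per image via the
rbd_support mgr module; a sketch, assuming a pool named "rbd":

# per-image IO statistics, served by the mgr
rbd perf image iostat --pool rbd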
best regards
Danny
Hi,
is there a way to query the used space of an RBD image created with format 2
(used for KVM)?
Also, if I create a linked clone based on this image, how do I get the
additional, individual used space of this clone?
In ZFS, I can query this kind of information by calling zfs info .. (2).
rbd
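The usual trick is to sum the extents reported by "rbd diff"; a sketch with an
invented image name:

# sum the allocated extents of a format-2 image; run against a clone it
# reports only the clone's own changes relative to its parent
rbd diff rbd/myimage | awk '{ sum += $2 } END { print sum/1024/1024 " MB" }'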
Hi,
I recently had an OSD disk die, and I'm wondering what are the
current best practices for replacing it. I think I've thoroughly removed
the old disk, both physically and logically, but I'm having trouble figuring
out how to add the new disk into ceph.
I did this today (one disk
Loic,
root@ceph-node3:~# smartctl -a /dev/sdd | less
=== START OF INFORMATION SECTION ===
Device Model: ST4000NM0033-9ZM170
Serial Number: Z1Z5LGBX
..
admin@ceph-admin:~/cluster1$ emacs -nw ceph.conf
Try to create e.g. 20 (small) rbd devices, put them all in an LVM VG, and
create a logical volume (RAID0) with
20 stripes and e.g. a stripe size of 1MB (better bandwidth) or 4KB (better IO)
- or use md-raid0 (it's maybe 10% faster - but not that flexible):
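A sketch of that setup (image count, sizes and names invented for
illustration):

# create and map 20 small rbd images, then stripe them into one RAID0-style LV
for i in $(seq 1 20); do
  rbd create stripe$i --size 10240
  rbd map stripe$i
done
vgcreate vg_rbd /dev/rbd/rbd/stripe*
lvcreate -i 20 -I 1024 -l 100%FREE -n lv_stripe vg_rbd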
BTW - we use this approach for VMware
Hi,
I created a 1TB rbd image formatted with VMFS (VMware) for an ESX server - but
with a wrong order (25 instead of 22 ...). The rbd man page tells me that for
export/import/cp, rbd will use the order of the source image.
Is there a way to change the order of a rbd image by doing some conversion?
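As far as I know the order is fixed at creation time, so the usual workaround
is an export/import pipe where the target order is given explicitly (untested
sketch, image names invented):

# re-import with the desired order (2^22 = 4MB objects)
rbd export pool/vmfs-image - | rbd import --image-format 2 --order 22 - pool/vmfs-image-new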
set up a bigger value for read_ahead_kb? I tested with a 256 MB read-ahead cache (
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of ???
Sent: Friday, February 07, 2014 10:55 AM
To: Konrad Gutkowski
Cc: ceph-users@lists.ceph.com
Subject: Re:
I'm sorry, but I did not understand you :)
Sorry (-: My finger touched the RETURN key too fast...
Try setting a bigger value for the read-ahead cache, maybe 256 MB?
echo 262144 > /sys/block/vda/queue/read_ahead_kb
Also try the fio performance tool - it will show more detailed information.
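For example, a quick fio run against the same device (parameters are just an
illustration):

# 30s sequential buffered read test - read-ahead only applies to buffered IO
fio --name=seqread --filename=/dev/vda --rw=read --bs=4M --runtime=30 --time_based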
Hello Bradley, in addition to your question, I'm interested in the following:
5) Can I change all 'type' IDs when adding a new type host-slow to
distinguish between OSDs with the journal on the same HDD / a separate SSD? E.g.
from
type 0 osd
type 1 host
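The type list lives in the CRUSH map, so editing it goes through the usual
decompile/edit/recompile cycle (sketch):

ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
# edit the "type" section of crush.txt, e.g. insert "type 2 host-slow"
# and renumber the following types consistently
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new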
Hi,
Anyone else from the ceph community willing to join?
I will also visit my first ceph day (-: See you all in Frankfurt!
best regards
Danny
Hi all,
my monitor3 is not able to rejoin the cluster (containing mon1, mon2 and mon3 -
running stable emperor).
I tried to recreate/inject a new monmap into all 3 mons - but only mon1 and
mon2 are up and joined.
Now, with debugging enabled on mon3, I get the following:
2014-01-30 08:51:03.823669
OK - found the problem:
mon_status
{ "name": "ceph-mon3",
  ..
  "mons": [
    { "rank": 2,
      "name": "mon.ceph-mon3",          <-- NAME is wrong
      "addr": "192.168.135.33:6789\/0"}]}}
In the documentation http://ceph.com/docs/master/man/8/monmaptool/
the creation of the monmap is described.
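Following that man page, rebuilding the monmap with the plain host names (no
"mon." prefix) looks roughly like this - the mon1/mon2 addresses and the fsid
are assumptions here, only mon3's address appears above:

# recreate the monmap with the correct monitor names and inject it into mon3
monmaptool --create --clobber --fsid <cluster-fsid> \
  --add ceph-mon1 192.168.135.31:6789 \
  --add ceph-mon2 192.168.135.32:6789 \
  --add ceph-mon3 192.168.135.33:6789 /tmp/monmap
ceph-mon -i ceph-mon3 --inject-monmap /tmp/monmap   # with mon3 stopped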
Hi,
just a small question: creating a new OSD, I use e.g.
ceph-deploy osd create ceph-node1:sdg:/dev/sdb5
Question: What happens if the mapping of my disks changes (e.g. because of
adding new disks to the server), so that
sdg becomes sdh
sdb becomes sdc
Is this handled (how?) by
Hi,
The low points are all ~35Mbytes/sec and the high points are all
~60Mbytes/sec. This is very reproducible.
It occurred to me that just stopping the OSDs selectively would allow me to
see if there was a change when one
was ejected, but at no time was there a change to the graph...
Hi,
The problem is: now I want to delete or rename the pool '-help'.
Maybe you can try using a double hyphen (--) [1], e.g. something (not tested)
like
ceph osd pool rename -- -help aaa
ceph osd pool delete -- -help
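If the double hyphen works for the rename, deleting afterwards is the easy
part ("junkpool" is an arbitrary new name; still untested):

ceph osd pool rename -- -help junkpool
# recent releases want the name twice plus a confirmation flag
ceph osd pool delete junkpool junkpool --yes-i-really-really-mean-it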
regards
Danny
[1]
Hi Robert,
What is the easiest way to replace a failed disk / OSD?
It looks like the documentation here is not really compatible with
ceph-deploy:
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
I found the following thread useful:
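For the re-add itself, a rough ceph-deploy sketch (host and device names
invented):

# wipe the replacement disk and bring it up as a new OSD
ceph-deploy disk zap ceph-node3:sdd
ceph-deploy osd create ceph-node3:sdd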