[ceph-users] OSD disk concern

2017-04-18 Thread gjprabu
Hi Team, the Ceph OSD disk allocation guidance suggests: "We recommend using a dedicated drive for the operating system and software, and one drive for each Ceph OSD Daemon you run on the host." We have only SSD disks; is it advisable to run the OS and an OSD on the same disk?

Re: [ceph-users] Ceph with Clos IP fabric

2017-04-18 Thread Richard Hesse
I'd suggest running your Ceph network on the "public" interface /24 you have assigned to that rack. Assign loopback IPs from another (not otherwise routed) network to your MONs. I have no experience with the RBD client, but with RGW you can assign the same IP to all of them and let your ToR
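As an illustration only (addresses are documentation-range placeholders, not from the thread), that layout maps to a ceph.conf along these lines:
    [global]
    # the rack's routed /24 carries OSD/client traffic
    public network = 192.0.2.0/24
    # MON loopback /32s, advertised into the fabric but not otherwise routed
    mon host = 198.51.100.1, 198.51.100.2, 198.51.100.3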

Re: [ceph-users] Adding a new rack to crush map without pain?

2017-04-18 Thread Richard Hesse
Most ceph clusters are set up once and then maintained. Even if new OSD nodes are added, it's not a frequent enough operation to warrant automation. Yes, ceph does provide hooks for automatically updating the CRUSH map (crush location hook), but it's up to you to properly write, debug, and
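For reference, the hook is wired up per daemon in ceph.conf; the path below is a hypothetical example:
    [osd]
    # the script must print a CRUSH location such as "root=default rack=rack1 host=node1"
    crush location hook = /usr/local/bin/ceph-crush-location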

Re: [ceph-users] SSD Primary Affinity

2017-04-18 Thread Anthony D'Atri
I get digests, so please forgive me if this has been covered already. > Assuming production level, we would keep a pretty close 1:2 SSD:HDD ratio, 1:4-5 is common but depends on your needs and the devices in question, i.e. assuming LFF drives and that you aren’t using crummy journals. > First

[ceph-users] PHP client for RGW Admin Ops API

2017-04-18 Thread Wido den Hollander
Hi, I wanted to share a PHP client for the RGW Admin Ops API [0] which has been developed at my company. There is a proper Python [1] client for the API, but we were unable to find one for PHP, so we wrote it: https://github.com/PCextreme/rgw-admin-php The client works with PHP 7 and is a
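Whichever client is used, the Admin Ops API needs an RGW user with admin capabilities; a typical setup (user name and cap list are examples) is:
    radosgw-admin user create --uid=admin-api --display-name="Admin API"
    radosgw-admin caps add --uid=admin-api \
        --caps="users=read,write;buckets=read,write;usage=read;metadata=read"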

Re: [ceph-users] slow requests and short OSD failures in small cluster

2017-04-18 Thread Peter Maloney
On 04/18/17 11:44, Jogi Hofmüller wrote: > Hi, > > On Tuesday, 18.04.2017 at 13:02 +0200, mj wrote: >> On 04/18/2017 11:24 AM, Jogi Hofmüller wrote: >>> This might have been true for hammer and older versions of ceph. >>> From >>> what I can tell now, every snapshot taken reduces performance

Re: [ceph-users] Ceph extension - how to equilibrate ?

2017-04-18 Thread Peter Maloney
On 04/18/17 16:31, pascal.pu...@pci-conseil.net wrote: > > Hello, > > I need some advice: I will soon extend my Jewel ceph cluster with a > fourth node. > > Currently, we have 3 nodes of 12 OSDs each, with 4TB drives (36 x 4TB). > > I will add a new node with 12 x 8TB drives (will add 12 new OSDs => 48

[ceph-users] Ceph extension - how to equilibrate ?

2017-04-18 Thread pascal.pu...@pci-conseil.net
Hello, I need some advice: I will soon extend my Jewel ceph cluster with a fourth node. Currently, we have 3 nodes of 12 OSDs each, with 4TB drives (36 x 4TB). I will add a new node with 12 x 8TB drives (which will add 12 new OSDs => 48 OSDs). So, how do I rebalance this simply? How can I just unplug 3 x drives
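Not an answer from the thread, but for orientation: CRUSH weights normally track raw capacity (an 8TB OSD gets roughly twice the weight of a 4TB one), and the distribution can be inspected and tuned with commands like:
    ceph osd df tree                       # per-OSD CRUSH weight, size and utilisation
    ceph osd crush reweight osd.36 7.27    # example only: ~8TB expressed in TiB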

Re: [ceph-users] Creating journal on needed partition

2017-04-18 Thread Vincent Godin
Hi, if you're using ceph-deploy, just run the command: ceph-deploy osd prepare --overwrite-conf {your_host}:/dev/sdaa:/dev/sdaf2
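If udev does not activate the prepared OSD automatically, the matching activate step (partition numbers here are examples, not from the thread) would look like:
    ceph-deploy osd activate {your_host}:/dev/sdaa1:/dev/sdaf2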

Re: [ceph-users] 回复: Re: ceph activation error

2017-04-18 Thread gjprabu
It's not working in the VM; I checked on an individual node, and once I zapped the disk it worked. On Tue, 18 Apr 2017 15:41:53 +0530, xu xu gor...@gmail.com wrote: is it resolved? I find in the "ceph-deploy osd activate cphosd1:/home/osd1 cphosd2:/home/osd2 cphosd3:/home/osd3" log output
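For reference, zapping and re-preparing a disk with ceph-deploy looks roughly like this (host and device are placeholders):
    ceph-deploy disk zap cphosd1:/dev/sdb
    ceph-deploy osd prepare cphosd1:/dev/sdb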

Re: [ceph-users] Adding a new rack to crush map without pain?

2017-04-18 Thread Adam Tygart
Ceph has the ability to use a script to figure out where in the crushmap this disk should go (on OSD start): http://docs.ceph.com/docs/master/rados/operations/crush-map/#ceph-crush-location-hook -- Adam On Tue, Apr 18, 2017 at 7:53 AM, Matthew Vernon wrote: > On 17/04/17
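A minimal sketch of such a hook (the rack mapping is invented for illustration); ceph-osd invokes it with --cluster/--id/--type arguments and expects a CRUSH location on stdout:
    #!/bin/sh
    # invoked as: ceph-crush-location --cluster ceph --id 12 --type osd
    HOST=$(hostname -s)
    case "$HOST" in
      node0[1-4]) RACK=rack1 ;;
      node0[5-8]) RACK=rack2 ;;
      *)          RACK=rack-unknown ;;
    esac
    echo "root=default rack=$RACK host=$HOST"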

Re: [ceph-users] Creating journal on needed partition

2017-04-18 Thread Nikita Shalnov
Hi, Chris. Thank you for your help. I found some useful stuff in these files and I coped with the task. If someone is interested in how, below is a little summary: 1. Create a new OSD: ceph-disk prepare /dev/sdaa /dev/sdaf. Now we have 4 partitions on sdaf, but we want to use the second partition
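The rest of the summary is truncated above; as a rough sketch only (OSD id, device and steps are illustrative, not necessarily the author's exact procedure), repointing an OSD journal at a chosen partition usually goes along these lines:
    systemctl stop ceph-osd@12                 # stop the OSD that owns the journal
    ceph-osd -i 12 --flush-journal             # make it safe to drop the old journal
    ln -sf /dev/disk/by-partuuid/<uuid-of-sdaf2> /var/lib/ceph/osd/ceph-12/journal
    ceph-osd -i 12 --mkjournal                 # create the journal on the new partition
    systemctl start ceph-osd@12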

Re: [ceph-users] Adding a new rack to crush map without pain?

2017-04-18 Thread Matthew Vernon
On 17/04/17 21:16, Richard Hesse wrote: > I'm just spitballing here, but what if you set osd crush update on start > = false ? Ansible would activate the OSD's but not place them in any > particular rack, working around the ceph.conf problem you mentioned. > Then you could place them in your CRUSH
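With "osd crush update on start = false", placement is then done explicitly; bucket and host names below are examples only:
    ceph osd crush add-bucket rack3 rack       # create the new rack bucket
    ceph osd crush move rack3 root=default     # attach it under the default root
    ceph osd crush move node13 rack=rack3      # put the new host (and its OSDs) in the rack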

Re: [ceph-users] slow requests and short OSD failures in small cluster

2017-04-18 Thread Jogi Hofmüller
Hi, On Tuesday, 18.04.2017 at 13:02 +0200, mj wrote: > > On 04/18/2017 11:24 AM, Jogi Hofmüller wrote: > > This might have been true for hammer and older versions of ceph. > > From > > what I can tell now, every snapshot taken reduces performance of > > the > > entire cluster :( > > Really?

[ceph-users] librbd::ImageCtx: error reading immutable metadata: (2) No such file or directory

2017-04-18 Thread Frode Nordahl
Hello all, A while ago I came across a Ceph cluster with a RBD volume missing the header object describing the characteristics of the volume, making it impossible to attach or perform any operations on said volume. As a courtesy to anyone else encountering the same situation I would like to
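The write-up itself is cut off here; as background, the symptom typically shows up like this (pool and image names are placeholders):
    rbd info rbd/myvolume                 # fails with: error reading immutable metadata: (2) No such file or directory
    rados -p rbd ls | grep rbd_header     # the image's rbd_header.* object is not listed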

Re: [ceph-users] slow requests and short OSD failures in small cluster

2017-04-18 Thread Lionel Bouton
On 18/04/2017 at 11:24, Jogi Hofmüller wrote: > Hi, > > thanks for all your comments so far. > > On Thursday, 13.04.2017 at 16:53 +0200, Lionel Bouton wrote: >> Hi, >> >> On 13/04/2017 at 10:51, Peter Maloney wrote: >>> Ceph snapshots really slow things down. > I can confirm that now :(

Re: [ceph-users] slow requests and short OSD failures in small cluster

2017-04-18 Thread mj
On 04/18/2017 11:24 AM, Jogi Hofmüller wrote: This might have been true for hammer and older versions of ceph. From what I can tell now, every snapshot taken reduces performance of the entire cluster :( Really? Can others confirm this? Is this a 'well-known fact'? (unknown only to us,

[ceph-users] 回复: Re: ceph activation error

2017-04-18 Thread xu xu
Is it resolved? I find in the "ceph-deploy osd activate cphosd1:/home/osd1 cphosd2:/home/osd2 cphosd3:/home/osd3" log output ... [cphosd1][WARNIN] activate_dir: Creating symlink /var/lib/ceph/osd/ceph-0 -> /home/osd1 .. and in the "journalctl -xe" log output .. Mar 4 09:26:33 localhost

Re: [ceph-users] Ceph with Clos IP fabric

2017-04-18 Thread Jan Marquardt
On 17.04.17 at 22:12, Richard Hesse wrote: > A couple of questions: > > 1) What is your rack topology? Are all ceph nodes in the same rack > communicating with the same top of rack switch? The cluster is planned for one rack with two ToR / cluster-internal switches. The cluster will be accessed

Re: [ceph-users] slow requests and short OSD failures in small cluster

2017-04-18 Thread Jogi Hofmüller
Hi, thanks for all your comments so far. On Thursday, 13.04.2017 at 16:53 +0200, Lionel Bouton wrote: > Hi, > > On 13/04/2017 at 10:51, Peter Maloney wrote: > > Ceph snapshots really slow things down. I can confirm that now :( > We use rbd snapshots on Firefly (and Hammer now) and I
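Not from the thread, but when chasing this kind of slowdown it helps to know how many snapshots each image carries; a quick count (pool name is a placeholder):
    for img in $(rbd ls rbd); do
      n=$(rbd snap ls "rbd/$img" | tail -n +2 | wc -l)
      echo "$img: $n snapshots"
    done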

Re: [ceph-users] Socket errors, CRC, lossy con messages

2017-04-18 Thread Ilya Dryomov
On Mon, Apr 17, 2017 at 1:42 PM, Alex Gorbachev wrote: > On Thu, Apr 13, 2017 at 4:24 AM, Ilya Dryomov wrote: >> On Thu, Apr 13, 2017 at 5:39 AM, Alex Gorbachev >> wrote: >>> On Wed, Apr 12, 2017 at 10:51 AM, Ilya Dryomov

Re: [ceph-users] Ceph OSD network with IPv6 SLAAC networks?

2017-04-18 Thread Simon Leinen
Félix Barbeira writes: > We are implementing an IPv6 native ceph cluster using SLAAC. We have > some legacy machines that are not capable of using IPv6, only IPv4 due > to some reasons (yeah, I know). I'm wondering what could happen if I > use an additional IPv4 on the radosgw in addition to the

Re: [ceph-users] librbd: deferred image deletion

2017-04-18 Thread Ricardo Dias
Hi Song, Next time, please post the question to the ceph-devel or ceph-users mailing list. I'm also copying the answer to your question to the ceph-users mailing list. The purpose of "--delay SEC" is to protect the image from being accidentally removed from the trash before SEC has expired, and to allow
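For context, the deferred-deletion (trash) workflow this refers to looks roughly like the following; image name, ID and delay are examples:
    rbd trash mv rbd/myimage --delay 3600     # move to trash, protected for 3600 s
    rbd trash ls rbd                          # list trashed images with their IDs
    rbd trash restore --pool rbd <image-id>   # bring an image back from the trash
    rbd trash rm --pool rbd <image-id>        # permanent removal (honours the delay)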