[ceph-users] symlink to journal not created as it should with ceph-deploy prepare. (jewel)

2016-05-26 Thread Stefan Eriksson
Hi, when we deploy new OSDs we see an issue where the journal symlink to the external path we provided with ceph-deploy is not created; instead, ceph-deploy creates a new local journal on the OSD itself. Here is the log: Running Ceph 10.2.1 on CentOS 7. ceph-deploy osd prepare
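
A quick way to check the result on the OSD host is to look at the journal link in the OSD's data directory (a sketch; osd.38 and /journals/osd.38 are borrowed from the prepare command in the thread below, and the default data path is assumed):

    ls -l /var/lib/ceph/osd/ceph-38/journal
    # expected: a symlink pointing at the external path, e.g. /journals/osd.38
    # reported bug: a plain local journal file in the data directory instead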

[ceph-users] ceph-deploy prepare doesn't mount OSD

2016-05-17 Thread Stefan Eriksson
Hi, I'm running hammer 0.94.7 (CentOS 7) and have issues with deploying new OSDs: they don't mount after initiation. I run this command: ceph-deploy osd prepare ceph01-osd02:sdj:/journals/osd.38 Everything seems fine, but I get this in the OSD log: 2016-05-17 11:40:41.298846
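
For context, "ceph-deploy osd prepare" only prepares the disk; the mount and daemon start normally happen via udev or an explicit activate step. A sketch of the manual follow-up, mirroring the prepare command above (the data partition name sdj1 is an assumption):

    ceph-deploy osd activate ceph01-osd02:sdj1:/journals/osd.38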

Re: [ceph-users] why was osd pool default size changed from 2 to 3.

2015-10-24 Thread Stefan Eriksson
> On 23.10.2015 at 20:53, Gregory Farnum wrote: >> On Fri, Oct 23, 2015 at 8:17 AM, Stefan Eriksson <ste...@eriksson.cn> wrote: >> >> Nothing changed to make two copies less secure. 3 copies is just so >> much more secure and is the number that all the companies p

[ceph-users] why was osd pool default size changed from 2 to 3.

2015-10-23 Thread Stefan Eriksson
Hi, I have been looking for info about "osd pool default size" and the reason it's 3 by default. I see it got changed from 2 to 3 in v0.82. Here it's 2: http://docs.ceph.com/docs/v0.81/rados/configuration/pool-pg-config-ref/ and in v0.82 it's 3.
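
For anyone who wants the pre-v0.82 behaviour back, the default is still configurable in ceph.conf before pools are created (a sketch; 2 was the default before v0.82):

    [global]
    osd pool default size = 2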

Re: [ceph-users] v0.94.4 Hammer released

2015-10-20 Thread Stefan Eriksson
A change like the one below, where we have to change ownership, was not added to a point release for hammer, right? On 2015-10-20 at 20:06, Udo Lembke wrote: Hi, have you changed the ownership as described in Sage's mail about "v9.1.0 Infernalis release candidate released"? #. Fix the
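
For reference, the ownership step being discussed comes from the Infernalis (v9.1.0) upgrade notes rather than a hammer point release; hammer daemons still run as root. The Infernalis step is roughly (run on each host while the daemons are stopped):

    chown -R ceph:ceph /var/lib/ceph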

Re: [ceph-users] add new monitor doesn't update ceph.conf in hammer with ceph-deploy.

2015-10-20 Thread Stefan Eriksson
Thanks! I'll do that. Should I add a bug report to get this mentioned in the documentation? On 2015-10-20 at 17:25, LOPEZ Jean-Charles wrote: And I forgot: yes, update both lines, mon_initial_members and mon_host, with the new mon node information. JC On Oct 20, 2015, at 07:54, Stefan Eriksson <
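
The two lines in question live in the [global] section of ceph.conf; after editing, the file is typically pushed back out to the nodes. A sketch, with hypothetical hostnames and addresses:

    [global]
    mon_initial_members = ceph01-osd01, ceph01-osd02, ceph01-osd03
    mon_host = 10.0.0.1,10.0.0.2,10.0.0.3

    ceph-deploy --overwrite-conf config push ceph01-osd01 ceph01-osd02 ceph01-osd03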

[ceph-users] add new monitor doesn't update ceph.conf in hammer with ceph-deploy.

2015-10-20 Thread Stefan Eriksson
Hi, I'm using ceph-deploy with hammer and recently added a new monitor. I used this: http://docs.ceph.com/docs/hammer/rados/deployment/ceph-deploy-mon/ But it doesn't say anything about adding conf manually to

Re: [ceph-users] can't get cluster to become healthy. "stale+undersized+degraded+peered"

2015-09-17 Thread Stefan Eriksson
few OSDs for the number of replicas you are requesting > Cheers > G. > On 09/17/2015 02:59 AM, Stefan Eriksson wrote: >> I have a completely new cluster for testing and it's three servers which all >> are monitors and hosts for OSD; they each have

Re: [ceph-users] can't get cluster to become healthy. "stale+undersized+degraded+peered"

2015-09-17 Thread Stefan Eriksson
the default config. Since this is a fresh installation you can delete all default pools, check the cluster state for no objects and a clean state, set up ceph.conf based on your cluster and push it to all nodes, and recreate the default pools if needed. On Thu, Sep 17, 2015 at 12:01 AM, Stefan Eriksson <ste..
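
A sketch of those steps, assuming the stock rbd pool and a small PG count suitable for a three-OSD test cluster:

    ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
    # adjust ceph.conf to match the cluster, then:
    ceph-deploy --overwrite-conf config push ceph01-osd01 ceph01-osd02 ceph01-osd03
    ceph osd pool create rbd 64 64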

[ceph-users] can't get cluster to become healthy. "stale+undersized+degraded+peered"

2015-09-16 Thread Stefan Eriksson
I have a completely new cluster for testing: three servers which are all monitors and OSD hosts, and they each have one disk. The issue is that ceph status shows: 64 stale+undersized+degraded+peered health: HEALTH_WARN clock skew detected on mon.ceph01-osd03

[ceph-users] feature to automatically set journal file name as osd.{osd-num} with ceph-deploy.

2015-09-13 Thread Stefan Eriksson
I ran this as a test: "ceph-deploy osd prepare ceph01-osd02:sdb:/mnt/" And the output is: "[ceph01-osd02][WARNIN] ceph-disk: Error: Journal /mnt/ is neither a block device nor regular file" It would be great if we could provide just a directory as the journal, and when ceph-deploy detects this, it
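
Until something like this exists, the workaround is to pass a full file path rather than the directory, picking the osd.{num} name by hand, as in the prepare command from the 2016-05-17 thread above (a sketch; NN stands for the manually chosen OSD number):

    ceph-deploy osd prepare ceph01-osd02:sdb:/mnt/osd.NN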

Re: [ceph-users] Using OS disk (SSD) as journal for OSD

2015-09-12 Thread Stefan Eriksson
Hi, thanks for the reply. Some follow-ups. On 2015-09-12 at 17:30, Christian Balzer wrote: Hello, On Sat, 12 Sep 2015 17:11:04 +0200 Stefan Eriksson wrote: Hi, I'm reading the documentation about creating new OSDs and I see: "The foregoing example assumes a disk dedicated to one

[ceph-users] Using OS disk (SSD) as journal for OSD

2015-09-12 Thread Stefan Eriksson
Hi, I'm reading the documentation about creating new OSDs and I see: "The foregoing example assumes a disk dedicated to one Ceph OSD Daemon, and a path to an SSD journal partition. We recommend storing the journal on a separate drive to maximize throughput. You may dedicate a single drive
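
The "path to an SSD journal partition" part maps onto ceph-deploy like the following sketch, where /dev/sda5 stands in for a hypothetical spare partition on the OS SSD:

    ceph-deploy osd prepare ceph01-osd02:sdb:/dev/sda5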