Re: [ceph-users] v0.94.6 Hammer released

2016-03-22 Thread Chris Dunlop
On Wed, Mar 23, 2016 at 01:22:45AM +0100, Loic Dachary wrote: > On 23/03/2016 01:12, Chris Dunlop wrote: >> On Wed, Mar 23, 2016 at 01:03:06AM +0100, Loic Dachary wrote: >>> On 23/03/2016 00:39, Chris Dunlop wrote: >>>> "The old OS'es" that were being supported up to v0.94.5 include debian

[ceph-users] root and non-root user for ceph/ceph-deploy

2016-03-22 Thread yang
Hi, everyone, In my ceph cluster, I first deployed ceph using ceph-deploy as the root user, and I didn't set up anything else after the deployment. To my surprise, the cluster auto-starts after my host reboots: everything is OK, the mon is running, and the OSD devices are mounted by themselves and also running properly.
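
For anyone wanting to move away from root, ceph-deploy can also be driven from a dedicated unprivileged account; a minimal sketch, assuming a user "cephdeploy" with passwordless sudo already exists on every node (the user and host names below are placeholders only):

    # run from the admin node as the non-root deploy user
    ceph-deploy --username cephdeploy new mon1
    ceph-deploy --username cephdeploy install mon1 osd1 osd2
    ceph-deploy --username cephdeploy mon create-initial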

Re: [ceph-users] Need help for PG problem

2016-03-22 Thread Dotslash Lu
Hello Gonçalo, Thanks for the reminder. I was just setting up the cluster for testing, so don't worry, I can just remove the pool. And since the replication count and the number of pools are related to pg_num, I'll consider them carefully before deploying any data. > On Mar 23, 2016,

Re: [ceph-users] Need help for PG problem

2016-03-22 Thread David Wang
Hi Zhang, From the ceph health detail output, I suggest the NTP servers be calibrated. Can you share the crush map output? 2016-03-22 18:28 GMT+08:00 Zhang Qiang : > Hi Reddy, > It's over a thousand lines, I pasted it on gist: >
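
For reference, a short sketch of how the clock skew and the crush map can be inspected (the file names below are arbitrary):

    # check the skew the monitors are complaining about
    ceph status
    ntpq -p

    # dump and decompile the crush map into readable form
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt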

Re: [ceph-users] Periodic evicting & flushing

2016-03-22 Thread Christian Balzer
Hello, On Tue, 22 Mar 2016 12:28:22 -0400 Maran wrote: > Hey guys, > > I'm trying to wrap my head about the Ceph Cache Tiering to discover if > what I want is achievable. > > My cluster exists of 6 OSD nodes with normal HDD and one cache tier of > SSDs. > One cache tier being what, one node?
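
For anyone following the cache-tiering discussion, the usual wiring looks roughly like the sketch below, assuming a base pool "cold" on the HDDs and a cache pool "hot" on the SSDs (pool names and thresholds are placeholders, not recommendations):

    ceph osd tier add cold hot
    ceph osd tier cache-mode hot writeback
    ceph osd tier set-overlay cold hot
    # flushing and eviction are driven by the cache pool's settings
    ceph osd pool set hot target_max_bytes 100000000000
    ceph osd pool set hot cache_target_dirty_ratio 0.4
    ceph osd pool set hot cache_target_full_ratio 0.8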

Re: [ceph-users] v0.94.6 Hammer released

2016-03-22 Thread Loic Dachary
On 23/03/2016 01:12, Chris Dunlop wrote: > Hi Loïc, > > On Wed, Mar 23, 2016 at 01:03:06AM +0100, Loic Dachary wrote: >> On 23/03/2016 00:39, Chris Dunlop wrote: >>> "The old OS'es" that were being supported up to v0.94.5 include debian >>> wheezy. It would be quite surprising and unexpected

Re: [ceph-users] v0.94.6 Hammer released

2016-03-22 Thread Chris Dunlop
Hi Loïc, On Wed, Mar 23, 2016 at 01:03:06AM +0100, Loic Dachary wrote: > On 23/03/2016 00:39, Chris Dunlop wrote: >> "The old OS'es" that were being supported up to v0.94.5 include debian >> wheezy. It would be quite surprising and unexpected to drop support for an >> OS in the middle of a

Re: [ceph-users] v0.94.6 Hammer released

2016-03-22 Thread Loic Dachary
Hi Chris, On 23/03/2016 00:39, Chris Dunlop wrote: > Hi Loïc, > > On Wed, Mar 23, 2016 at 12:14:27AM +0100, Loic Dachary wrote: >> On 22/03/2016 23:49, Chris Dunlop wrote: >>> Hi Stable Release Team for v0.94, >>> >>> Let's try again... Any news on a release of v0.94.6 for debian wheezy >>>

Re: [ceph-users] v0.94.6 Hammer released

2016-03-22 Thread Chris Dunlop
Hi Loïc, On Wed, Mar 23, 2016 at 12:14:27AM +0100, Loic Dachary wrote: > On 22/03/2016 23:49, Chris Dunlop wrote: >> Hi Stable Release Team for v0.94, >> >> Let's try again... Any news on a release of v0.94.6 for debian wheezy >> (bpo70)? > > I don't think publishing a debian wheezy backport

Re: [ceph-users] v0.94.6 Hammer released

2016-03-22 Thread Loic Dachary
On 22/03/2016 23:49, Chris Dunlop wrote: > Hi Stable Release Team for v0.94, > > Let's try again... Any news on a release of v0.94.6 for debian wheezy (bpo70)? I don't think publishing a debian wheezy backport for v0.94.6 is planned. Maybe it's a good opportunity to initiate a community

Re: [ceph-users] Need help for PG problem

2016-03-22 Thread Goncalo Borges
Hi Zhang... If I can add some more info, changing the number of PGs is a heavy operation, and as far as I know, you should NEVER decrease PGs. From the notes in pgcalc (http://ceph.com/pgcalc/): "It's also important to know that the PG count can be increased, but NEVER decreased without destroying /
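
For completeness, pg_num can only be raised, and pgp_num is normally raised to match afterwards; a minimal sketch with a placeholder pool name and value:

    ceph osd pool set mypool pg_num 1024
    ceph osd pool set mypool pgp_num 1024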

Re: [ceph-users] v0.94.6 Hammer released

2016-03-22 Thread Chris Dunlop
Hi Stable Release Team for v0.94, Let's try again... Any news on a release of v0.94.6 for debian wheezy (bpo70)? Cheers, Chris On Thu, Mar 17, 2016 at 12:43:15PM +1100, Chris Dunlop wrote: > Hi Chen, > > On Thu, Mar 17, 2016 at 12:40:28AM +, Chen, Xiaoxi wrote: >> It’s already there, in

Re: [ceph-users] Infernalis .rgw.buckets.index objects becoming corrupted in on RHEL 7.2 during recovery

2016-03-22 Thread Brandon Morris, PMP
I was able to get this back to HEALTH_OK by doing the following: 1. Allow ceph-objectstore-tool to run over a weekend attempting to export the PG. Looking at timestamps it took approximately 6 hours to complete successfully. 2. Import the PG into an unused OSD and start it up+out. 3. Allow the cluster
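
For readers hitting similar index corruption, the export/import step above is normally done with ceph-objectstore-tool while the OSDs involved are stopped; a rough sketch, where the OSD ids, paths and PG id are placeholders only:

    # on the source OSD host, with the OSD stopped
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
        --journal-path /var/lib/ceph/osd/ceph-12/journal \
        --pgid 11.2f --op export --file /tmp/pg.11.2f.export

    # on the target OSD host, also stopped, then restart the OSD
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-99 \
        --journal-path /var/lib/ceph/osd/ceph-99/journal \
        --op import --file /tmp/pg.11.2f.export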

Re: [ceph-users] CephFS Advice

2016-03-22 Thread Gregory Farnum
On Tue, Mar 22, 2016 at 9:37 AM, John Spray wrote: > On Tue, Mar 22, 2016 at 2:37 PM, Ben Archuleta wrote: >> Hello All, >> >> I have experience using Lustre but I am new to the Ceph world, I have some >> questions to the Ceph users out there. >> >> I am

Re: [ceph-users] Need help for PG problem

2016-03-22 Thread Zhang Qiang
I got it: the suggested pg_num is the total, so I need to divide it by the number of replicas. Thanks Oliver, your answer is very thorough and helpful! On 23 March 2016 at 02:19, Oliver Dzombic wrote: > Hi Zhang, > > yeah I saw your answer already. > > At very first,

Re: [ceph-users] recorded data digest != on disk

2016-03-22 Thread Gregory Farnum
On Tue, Mar 22, 2016 at 1:19 AM, Max A. Krasilnikov wrote: > Hello! > > I have 3-node cluster running ceph version 0.94.6 > (e832001feaf8c176593e0325c8298e3f16dfb403) > on Ubuntu 14.04. When scrubbing I get error: > > -9> 2016-03-21 17:36:09.047029 7f253a4f6700 5 -- op

[ceph-users] Need help for PG problem

2016-03-22 Thread Oliver Dzombic
Hi Zhang, yeah I saw your answer already. At very first, you should make sure that there is no clock skew. This can cause some side effects. According to http://docs.ceph.com/docs/master/rados/operations/placement-groups/ you have to use: Total PGs = (OSDs * 100) / pool size
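
Plugging in the numbers from this thread (20 OSDs, replication size 2), that works out to:

    echo $(( 20 * 100 / 2 ))   # 1000, rounded up to the next power of two: 1024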

[ceph-users] Teuthology installation issue CentOS 6.5 (Python 2.6)

2016-03-22 Thread Mick McCarthy
Hello All, I'm experiencing some issues installing Teuthology on CentOS 6.5. I've tried installing it in a number of ways: * Within a python virtual environment * Using "pip install teuthology" directly. The installation fails in both cases. a) In a python virtual environment (using
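
For what it's worth, a typical clean attempt inside a virtualenv looks like the sketch below; note that CentOS 6.5 ships Python 2.6, and many of teuthology's dependencies expect 2.7 or newer, so a newer interpreter may be needed first (paths are placeholders):

    virtualenv ./teuthology-venv
    source ./teuthology-venv/bin/activate
    pip install --upgrade pip setuptools
    pip install teuthology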

Re: [ceph-users] About the NFS on RGW

2016-03-22 Thread Ilya Dryomov
On Tue, Mar 22, 2016 at 1:12 PM, Xusangdi wrote: > Hi Matt & Cephers, > > I am looking for advice on setting up a file system based on Ceph. As CephFS > is not yet production ready (or I missed some breakthroughs?), the new NFS on > RadosGW should be a promising alternative,

Re: [ceph-users] CephFS Advice

2016-03-22 Thread John Spray
On Tue, Mar 22, 2016 at 2:37 PM, Ben Archuleta wrote: > Hello All, > > I have experience using Lustre but I am new to the Ceph world, I have some > questions to the Ceph users out there. > > I am thinking about deploying a Ceph storage cluster that lives in multiple > location

Re: [ceph-users] Qemu+RBD recommended cache mode and AIO settings

2016-03-22 Thread Ilya Dryomov
On Tue, Mar 22, 2016 at 4:48 PM, Jason Dillaman wrote: >> Hi Jason, >> >> Le 22/03/2016 14:12, Jason Dillaman a écrit : >> > >> > We actually recommend that OpenStack be configured to use writeback cache >> > [1]. If the guest OS is properly issuing flush requests, the cache

Re: [ceph-users] Qemu+RBD recommended cache mode and AIO settings

2016-03-22 Thread Jason Dillaman
> Hi Jason, > > Le 22/03/2016 14:12, Jason Dillaman a écrit : > > > > We actually recommend that OpenStack be configured to use writeback cache > > [1]. If the guest OS is properly issuing flush requests, the cache will > > still provide crash-consistency. By default, the cache will
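
The behaviour described here is controlled by the librbd cache options on the client (hypervisor) side; a minimal ceph.conf sketch, shown only to illustrate the defaults under discussion rather than as a tuning recommendation:

    # in /etc/ceph/ceph.conf on the client side
    [client]
        rbd cache = true
        rbd cache writethrough until flush = true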

Re: [ceph-users] Qemu+RBD recommended cache mode and AIO settings

2016-03-22 Thread Loris Cuoghi
Hi Jason, Le 22/03/2016 14:12, Jason Dillaman a écrit : We actually recommend that OpenStack be configured to use writeback cache [1]. If the guest OS is properly issuing flush requests, the cache will still provide crash-consistency. By default, the cache will automatically start up in

[ceph-users] Ceph Advice

2016-03-22 Thread Ben Archuleta
Hello All, I have experience using Lustre but I am new to the Ceph world, I have some questions to the Ceph users out there. I am thinking about deploying a Ceph storage cluster that lives in multiple locations, "Building A" and "Building B"; this cluster will be comprised of two Dell servers

[ceph-users] CephFS Advice

2016-03-22 Thread Ben Archuleta
Hello All, I have experience using Lustre but I am new to the Ceph world, I have some questions to the Ceph users out there. I am thinking about deploying a Ceph storage cluster that lives in multiple locations, "Building A" and "Building B"; this cluster will be comprised of two Dell servers

Re: [ceph-users] About the NFS on RGW

2016-03-22 Thread Matt Benjamin
Hi Xusangdi, NFS on RGW is not intended as an alternative to CephFS. The basic idea is to expose the S3 namespace using Amazon's prefix+delimiter convention (delimiter currently limited to '/'). We use opens for atomicity, which implies NFSv4 (or 4.1). In addition to limitations by design,
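
To illustrate the prefix+delimiter convention mentioned above, this is roughly how the same namespace looks through a plain S3 listing, where the common prefixes are what the NFS gateway would present as directories; the endpoint, bucket and prefix below are placeholders:

    aws s3api list-objects --endpoint-url http://rgw.example.com \
        --bucket mybucket --prefix 'photos/2016/' --delimiter '/'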

Re: [ceph-users] Qemu+RBD recommended cache mode and AIO settings

2016-03-22 Thread Jason Dillaman
> > I've been looking on the internet regarding two settings which might > > influence > > performance with librbd. > > > > When attaching a disk with Qemu you can set a few things: > > - cache > > - aio > > > > The default for libvirt (in both CloudStack and OpenStack) for 'cache' is > > 'none'.

Re: [ceph-users] Qemu+RBD recommended cache mode and AIO settings

2016-03-22 Thread Loris Cuoghi
Hi Wido, Le 22/03/2016 13:52, Wido den Hollander a écrit : Hi, I've been looking on the internet regarding two settings which might influence performance with librbd. When attaching a disk with Qemu you can set a few things: - cache - aio The default for libvirt (in both CloudStack and

[ceph-users] Qemu+RBD recommended cache mode and AIO settings

2016-03-22 Thread Wido den Hollander
Hi, I've been looking on the internet regarding two settings which might influence performance with librbd. When attaching a disk with Qemu you can set a few things: - cache - aio The default for libvirt (in both CloudStack and OpenStack) for 'cache' is 'none'. Is that still the recommended value
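
For context, both knobs end up on the Qemu -drive line (libvirt's cache/io driver attributes map onto them); a hedged example with placeholder pool/image names. Note that aio=native is generally only meaningful with direct I/O (cache=none), and as far as I understand the aio setting matters little for librbd-backed drives, since librbd issues its own asynchronous I/O:

    qemu-system-x86_64 ... \
        -drive file=rbd:rbd/vm-disk:id=admin,format=raw,if=virtio,cache=writeback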

[ceph-users] About the NFS on RGW

2016-03-22 Thread Xusangdi
Hi Matt & Cephers, I am looking for advice on setting up a file system based on Ceph. As CephFS is not yet production ready (or I missed some breakthroughs?), the new NFS on RadosGW should be a promising alternative, especially for large files, which is what we are most interested in. However,

Re: [ceph-users] Fresh install - all OSDs remain down and out

2016-03-22 Thread Markus Goldberg
Hi desmond, this seems to be a lot of work for 90 OSDs, and possibly prone to typing mistakes. Every disk change needs extra editing too. This weighting was done automatically in former versions. Do you know why and where this changed, or was I doing something wrong at some point? Markus Am 21.03.2016 um
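
For what it's worth, the automatic weighting is normally done by the OSDs themselves (each OSD sets its own crush weight from its size when it starts), and individual disks can still be adjusted by hand; a hedged sketch with a placeholder OSD id and weight:

    # ceph.conf, [osd] section -- let OSDs set their crush weight on start (the default)
    #   osd crush update on start = true
    # manual adjustment of a single OSD's weight (roughly its size in TiB)
    ceph osd crush reweight osd.12 3.64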

Re: [ceph-users] Need help for PG problem

2016-03-22 Thread Zhang Qiang
Hi Reddy, It's over a thousand lines, I pasted it on gist: https://gist.github.com/dotSlashLu/22623b4cefa06a46e0d4 On Tue, 22 Mar 2016 at 18:15 M Ranga Swami Reddy wrote: > Hi, > Can you please share the "ceph health detail" output? > > Thanks > Swami > > On Tue, Mar 22,

Re: [ceph-users] Need help for PG problem

2016-03-22 Thread Oliver Dzombic
Hi Zhang, are you sure that all your 20 OSDs are up and in? Please provide the complete output of ceph -s, or better, with the detail flag. Thank you :-) -- Mit freundlichen Gruessen / Best regards Oliver Dzombic IP-Interactive mailto:i...@ip-interactive.de Anschrift: IP Interactive UG (

Re: [ceph-users] Need help for PG problem

2016-03-22 Thread M Ranga Swami Reddy
Hi, Can you please share the "ceph health detail" output? Thanks Swami On Tue, Mar 22, 2016 at 3:32 PM, Zhang Qiang wrote: > Hi all, > > I have 20 OSDs and 1 pool, and, as recommended by the > doc(http://docs.ceph.com/docs/master/rados/operations/placement-groups/), I >

[ceph-users] Need help for PG problem

2016-03-22 Thread Zhang Qiang
Hi all, I have 20 OSDs and 1 pool, and, as recommended by the doc( http://docs.ceph.com/docs/master/rados/operations/placement-groups/), I configured pg_num and pgp_num to 4096, size 2, min size 1. But ceph -s shows: HEALTH_WARN 534 pgs degraded 551 pgs stuck unclean 534 pgs undersized too many
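
A quick back-of-the-envelope check of why the warning fires, using the numbers above (4096 PGs, size 2, 20 OSDs); the threshold quoted in the comment is the default as I recall it, so treat it as approximate:

    echo $(( 4096 * 2 / 20 ))   # ~409 PG copies per OSD, far above the default warn threshold of ~300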

Re: [ceph-users] How to enable civetweb log in Infernails (or Jewel)

2016-03-22 Thread Mika c
Hi Cephers, I didn't notice the user had already been changed from root to ceph. After changing the directory caps, the problem is now fixed. Thank you all. Best wishes, Mika 2016-03-22 16:50 GMT+08:00 Mika c : > Hi Cephers, > Setting of "rgw frontends = >
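
For anyone else hitting this after the switch of the daemon user from root to ceph, the usual fix is along these lines (the paths are the defaults; adjust if yours differ):

    sudo chown -R ceph:ceph /var/lib/ceph /var/log/ceph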

[ceph-users] recorded data digest != on disk

2016-03-22 Thread Max A. Krasilnikov
Hello! I have 3-node cluster running ceph version 0.94.6 (e832001feaf8c176593e0325c8298e3f16dfb403) on Ubuntu 14.04. When scrubbing I get error: -9> 2016-03-21 17:36:09.047029 7f253a4f6700 5 -- op tracker -- seq: 48045, time: 2016-03-21 17:36:09.046984, event: all_read, op:
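
In case it is useful to others, the inconsistent PG can be located and repaired roughly as sketched below; the PG id is a placeholder, and repair should be used with care since it generally trusts the primary copy, so verifying which replica is actually correct first is the safer route:

    ceph health detail | grep inconsistent
    ceph pg repair 3.45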

Re: [ceph-users] ZFS or BTRFS for performance?

2016-03-22 Thread Mike Almateia
20-Mar-16 23:23, Schlacta, Christ wrote: What do you use as an interconnect between your OSDs and your clients? Two Mellanox 10Gb SFP NICs, dual port each = 4 x 10Gbit/s ports on each server. On each server the ports are bonded in pairs, so we have 2 bonds, one for the Cluster net and one for the Storage net. Client servers