Re: [ceph-users] why no likely() and unlikely() used in Ceph's source code?

2014-09-15 Thread Tim Zhang
ok, thank you all. 2014-09-16 0:52 GMT+08:00 Yehuda Sadeh : > I agree with Greg. When dealing with the latencies that we deal with due > to different IO operations (networking, storage), it's mostly not worth the > trouble. I think the main reason we didn't actually put it to use is that > we for

Re: [ceph-users] does CephFS still have no fsck utility?

2014-09-15 Thread brandon li
Great to know you are working on it! I am new to the mailing list. Is there any reference to last year's discussion that I can look into, or any bug number I can watch to keep track of the development? Thanks, Brandon

[ceph-users] purpose of different default pools created by radosgw instance

2014-09-15 Thread pragya jain
Hi all! As the documentation says, Ceph creates some default pools for a radosgw instance. These pools are: * .rgw.root * .rgw.control * .rgw.gc * .rgw.buckets * .rgw.buckets.index * .log * .intent-log * .usage * .users * .users.ema
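
A quick way to see which of these pools actually exist on a given cluster, and which gateway functions they are mapped to, is sketched below. The dot-prefixed pool names are the radosgw defaults of that era, and the exact zone layout printed by radosgw-admin varies by release.

    # list pools; the gateway's default pools are the dot-prefixed ones
    rados lspools | grep '^\.'
    # show which pool each gateway function (control, gc, buckets, index, logs, ...) uses
    radosgw-admin zone get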

[ceph-users] How to fix pgs unclean

2014-09-15 Thread livemoon
Hi, I am new to Ceph, and I have a problem with my cluster status. The following is my ceph status:
# ceph --version
ceph version 0.72.1 (4d923861868f6a15dcb33fef7f50f674997322de)
# ceph -w
    cluster 323e974d-ea51-4d10-94e5-8b1ae7a41429
     health HEALTH_WARN 305 pgs degraded; 448 pgs stuck uncl
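
For anyone hitting the same warning, the commands below are a minimal sketch of how to narrow down why PGs stay degraded or unclean; on a small test cluster the usual culprit is a pool replica count higher than the number of hosts CRUSH can choose from.

    ceph health detail | head -20     # which PGs are affected, and why
    ceph pg dump_stuck unclean        # the stuck PGs and the OSDs they map to
    ceph osd tree                     # how many hosts/OSDs CRUSH has to work with
    ceph osd pool get rbd size        # replica count of a pool (rbd as an example)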

Re: [ceph-users] does CephFS still have no fsck utility?

2014-09-15 Thread Gregory Farnum
CephFS has far fewer metadata structures than traditional filesystems do; about the only things that could go wrong without users noticing directly are: 1) the data gets corrupted, or 2) files somehow get removed from folders. Data corruption is something RADOS is responsible for

Re: [ceph-users] does CephFS still have no fsck utility?

2014-09-15 Thread brandon li
Thanks for the reply, Greg. With a traditional file system background, I have to admit it will take me some time to get used to the way CephFS works. I'll consider it part of my learning curve. :-) One of the concerns I have is that, without tools like fsck, how could we know the file system is stil

Re: [ceph-users] does CephFS still have no fsck utility?

2014-09-15 Thread Gregory Farnum
On Mon, Sep 15, 2014 at 3:23 PM, brandon li wrote: > If it's true, are there any other tools I can use to check and repair the > file system? Not much, no. That said, you shouldn't really need an fsck unless the underlying RADOS store went through some catastrophic event. Is there anything in part

[ceph-users] does CephFS still have no fsck utility?

2014-09-15 Thread brandon li
If it's true, are there any other tools I can use to check and repair the file system? Thanks, Brandon

Re: [ceph-users] Crushmap ruleset for rack aware PG placement

2014-09-15 Thread Amit Vijairania
Thanks Sage! We will test this and share our observations. Regards, Amit Amit Vijairania | 415.610.9908 --*-- On Mon, Sep 15, 2014 at 8:28 AM, Sage Weil wrote: > Hi Amit, > > On Mon, 15 Sep 2014, Amit Vijairania wrote: >> Hello! >> >> In a two (2) rack Ceph cluster, with 15 hosts per rack

Re: [ceph-users] Cephfs upon Tiering

2014-09-15 Thread Gregory Farnum
On Mon, Sep 15, 2014 at 6:32 AM, Berant Lemmenes wrote: > Greg, > > So is the consensus that the appropriate way to implement this scenario is > to have the fs created on the EC backing pool vs. the cache pool but that > the UI check needs to be tweaked to distinguish between this scenario and > j
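
For context, the setup under discussion is an erasure-coded base pool fronted by a replicated cache tier, with the filesystem pointed at the base pool rather than the cache pool. A minimal sketch of the Firefly-era tiering commands (the pool names cephfs-ec and cephfs-cache are placeholders; the filesystem-creation command itself differs between releases):

    ceph osd tier add cephfs-ec cephfs-cache          # attach the cache pool to the EC base pool
    ceph osd tier cache-mode cephfs-cache writeback   # serve writes from the cache tier
    ceph osd tier set-overlay cephfs-ec cephfs-cache  # redirect client I/O through the cache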

Re: [ceph-users] OSD troubles on FS+Tiering

2014-09-15 Thread Gregory Farnum
The pidfile bug is already fixed in master/giant branches. As for the crashing, I'd try killing all the osd processes and turning them back on again. It might just be some daemon restart failed, or your cluster could be sufficiently overloaded that the node disks are going unresponsive and they're
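
For anyone in the same situation, a hedged sketch of the restart-and-check loop on one storage node (sysvinit packaging assumed; osd.12 is a placeholder id):

    /etc/init.d/ceph restart osd.12           # restart a single OSD daemon
    /etc/init.d/ceph restart osd              # or restart every OSD on this node
    dmesg | egrep -i 'hung task|blocked for'  # look for disks going unresponsive
    ceph -s                                   # watch whether PGs start recovering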

Re: [ceph-users] Dumpling cluster can't resolve peering failures, ceph pg query blocks, auth failures in logs

2014-09-15 Thread Gregory Farnum
Not sure, but have you checked the clocks on their nodes? Extreme clock drift often results in strange cephx errors. -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Sun, Sep 14, 2014 at 11:03 PM, Florian Haas wrote: > Hi everyone, > > [Keeping this on the -users list for no
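
A quick way to check for the clock drift Greg mentions (host names below are placeholders; any time-sync tool works, ntpq is just an example):

    for h in mon1 mon2 mon3; do echo -n "$h: "; ssh $h date +%s.%N; done   # compare wall clocks
    ntpq -p                               # NTP peer/sync status, run on each node
    ceph health detail | grep -i skew     # the monitors also report detected clock skew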

Re: [ceph-users] why no likely() and unlikely() used in Ceph's source code?

2014-09-15 Thread Yehuda Sadeh
I agree with Greg. When dealing with the latencies that we deal with due to different IO operations (networking, storage), it's mostly not worth the trouble. I think the main reason we didn't actually put it to use is that we forgot we've had this macro defined, and it really wasn't worth the troub

Re: [ceph-users] why no likely() and unlikely() used in Ceph's source code?

2014-09-15 Thread Gregory Farnum
I don't know where the file came from, but likely/unlikely markers are the kind of micro-optimization that isn't worth the cost in Ceph dev resources right now. -Greg On Monday, September 15, 2014, Tim Zhang wrote: > Hey guys, > After reading ceph source code, I find that there is a file named >

[ceph-users] Ceph Different Configurations for RAS (Reliability, Availability and Serviceability)

2014-09-15 Thread Hossein Zabolzadeh
Hi there, I am new to Ceph and, in general, to cloud storage. I want to know whether there are different Ceph configurations for different DC storage needs. By DC storage needs I mean, for example, a reliability-focused or a high-performance storage system. In other words, is there a different Ceph configuration if I want
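
Much of this kind of tuning is per-pool rather than cluster-wide. A small sketch of how one cluster can serve both goals through pool settings (pool names, PG counts and the k/m profile below are arbitrary examples, not recommendations):

    # durability-oriented pool: 3 replicas, writes acknowledged with at least 2 copies
    ceph osd pool create critical 1024 1024 replicated
    ceph osd pool set critical size 3
    ceph osd pool set critical min_size 2

    # capacity-oriented pool: erasure coding with 4 data + 2 coding chunks
    ceph osd erasure-code-profile set ec42 k=4 m=2
    ceph osd pool create bulk 1024 1024 erasure ec42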

Re: [ceph-users] Crushmap ruleset for rack aware PG placement

2014-09-15 Thread Sage Weil
Hi Amit, On Mon, 15 Sep 2014, Amit Vijairania wrote: > Hello! > > In a two (2) rack Ceph cluster, with 15 hosts per rack (10 OSD per > host / 150 OSDs per rack), is it possible to create a ruleset for a > pool such that the Primary and Secondary PGs/replicas are placed in > one rack and Tertiary
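
The quoted reply is truncated before the actual rule, but one common way to express this placement is a rule with two take/emit passes. A hedged sketch, assuming rack buckets named rack1 and rack2 exist in the map and the pool uses 3 replicas; note that with such a rule every primary lands in rack1:

    rule two_plus_one_rack {
            ruleset 2
            type replicated
            min_size 3
            max_size 3
            step take rack1
            step chooseleaf firstn 2 type host   # primary + secondary from rack1
            step emit
            step take rack2
            step chooseleaf firstn 1 type host   # tertiary from rack2
            step emit
    }

    # dry-run the rule before injecting the new map
    crushtool -c crushmap.txt -o crushmap.bin
    crushtool -i crushmap.bin --test --rule 2 --num-rep 3 --show-mappings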

Re: [ceph-users] Cache pool stats

2014-09-15 Thread Jean-Charles Lopez
Hi Andrei, "ceph daemon osd.x perf dump" will show you the stats. JC On Monday, September 15, 2014, Andrei Mikhailovsky wrote: > Hi > > Does anyone know how to check the basic cache pool stats for the > information like how well the cache layer is working for a recent or > historic time frame? Thi
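
Building on JC's pointer, a minimal sketch of pulling cache-tier counters from a running OSD (osd.0 is a placeholder; the command must be run on the node hosting that OSD, and counter names vary a bit between releases). The counters are cumulative, so two dumps taken some time apart can be diffed to get a recent-window view.

    ceph daemon osd.0 perf dump                  # all perf counters as JSON
    ceph daemon osd.0 perf dump | grep -i tier   # cache-tier activity (promotions, flushes, evictions)
    ceph df detail                               # per-pool object counts and usage, cache pool included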

Re: [ceph-users] why no likely() and unlikely() used in Ceph's source code?

2014-09-15 Thread Christian Balzer
On Mon, 15 Sep 2014 22:48:07 +0800 Tim Zhang wrote: > the ceph-dev list always rejects my mail, saying it is spam because it > includes HTML code; actually it is not. > Actually it is, like this very mail from you. I would think/hope that there is a configuration option in Gmail to turn that o

Re: [ceph-users] why no likely() and unlikely() used in Ceph's source code?

2014-09-15 Thread Tim Zhang
The ceph-dev list always rejects my mail, saying it is spam because it includes HTML code; actually it is not. 2014-09-15 19:43 GMT+08:00 Marco Garcês : > Perhaps this question belongs in ceph-dev? > > Marco Garcês | #sysadmin | Maputo - Mozambique | [Phone] +258 84 4105579 | [Skype]

Re: [ceph-users] Bcache / Enhanceio with osds

2014-09-15 Thread Mark Nelson
On 09/15/2014 07:35 AM, Andrei Mikhailovsky wrote: From: "Mark Nelson" To: ceph-users@lists.ceph.com Sent: Monday, 15 September, 2014 1:13:01 AM Subject: Re: [ceph-users] Bcache / Enhanceio with os

Re: [ceph-users] Bcache / Enhanceio with osds

2014-09-15 Thread Andrei Mikhailovsky
----- Original Message ----- > From: "Mark Nelson" > To: ceph-users@lists.ceph.com > Sent: Monday, 15 September, 2014 1:13:01 AM > Subject: Re: [ceph-users] Bcache / Enhanceio with osds > On 09/14/2014 05:11 PM, Andrei Mikhailovsky wrote: > > Hello guys, > > > > Was wondering if anyone uses or d

[ceph-users] OSD troubles on FS+Tiering

2014-09-15 Thread Kenneth Waegeman
Hi, I have some strange OSD problems. Before the weekend I started some rsync tests over CephFS, on a cache pool with an underlying EC KV pool. Today the cluster is completely degraded:
[root@ceph003 ~]# ceph status
    cluster 82766e04-585b-49a6-a0ac-c13d9ffd0a7d
     health HEALTH_WARN 19 pg

[ceph-users] why no likely() and unlikely() used in Ceph's source code?

2014-09-15 Thread Tim Zhang
Hey guys, after reading the Ceph source code, I find that there is a file named common/likely.h which implements the likely() and unlikely() macros that optimize branch prediction for the CPU. But there isn't any place using these two macros, and I am curious about why the developer of
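
A quick way to verify that claim against a source tree (the path below assumes a checkout of the Ceph repository; likely()/unlikely() are thin wrappers around GCC's __builtin_expect hint):

    # call sites of likely()/unlikely() outside the header that defines them
    grep -rnE '\b(un)?likely\(' src/ --include='*.cc' --include='*.h' | grep -v 'common/likely.h'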

Re: [ceph-users] OSDs are crashing with "Cannot fork" or "cannot create thread" but plenty of memory is left

2014-09-15 Thread Christian Eichelmann
Hi all, I have no idea why running out of file handles should produce an "out of memory" error, but well. I've increased the ulimit as you told me, and nothing changed. I've noticed that the osd init script sets the max open file handles explicitly, so I was setting the corresponding option in my ce
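
The option referred to is presumably "max open files" in ceph.conf, which the init script applies when the daemon starts. Since "cannot fork" / "cannot create thread" can also come from process and thread ceilings rather than file handles, a hedged checklist for a node in this state (the value shown is only an example):

    # ceph.conf fragment
    [global]
        max open files = 131072

    # limits actually applied to a running OSD
    cat /proc/$(pidof ceph-osd | awk '{print $1}')/limits
    # process/thread ceilings that also surface as fork or thread-creation failures
    sysctl kernel.pid_max kernel.threads-max
    ulimit -u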

[ceph-users] OSDs crashing on CephFS and Tiering

2014-09-15 Thread Kenneth Waegeman
Hi, I have some strange OSD problems. Before the weekend I started some rsync tests over CephFS, on a cache pool with an underlying EC KV pool. Today the cluster is completely degraded:
[root@ceph003 ~]# ceph status
    cluster 82766e04-585b-49a6-a0ac-c13d9ffd0a7d
     health HEALTH_WARN 19

[ceph-users] Cache pool stats

2014-09-15 Thread Andrei Mikhailovsky
Hi, does anyone know how to check the basic cache pool stats, for information like how well the cache layer is working over a recent or historic time frame? Things like cache hit ratio would be very helpful. Thanks, Andrei

Re: [ceph-users] Cephfs upon Tiering

2014-09-15 Thread Berant Lemmenes
Greg, So is the consensus that the appropriate way to implement this scenario is to have the fs created on the EC backing pool vs. the cache pool but that the UI check needs to be tweaked to distinguish between this scenario and just trying to use a EC pool alone? I'm also interested in the scena

Re: [ceph-users] why no likely() and unlikely() used in Ceph's source code?

2014-09-15 Thread Marco Garcês
Perhaps this question belongs in ceph-dev? Marco Garcês | #sysadmin | Maputo - Mozambique | [Phone] +258 84 4105579 | [Skype] marcogarces On Mon, Sep 15, 2014 at 12:28 PM, Tim Zhang wrote: > Hey guys, > After reading ceph source code, I find that there is a file named > common/likely.h and it

[ceph-users] best libleveldb version?

2014-09-15 Thread Alexandre DERUMIER
Hi, I would like to know which libleveldb version should be used with Firefly. I'm using Debian wheezy, which provides a really old libleveldb (I don't use it), and in wheezy-backports 1.17 is provided. But in the Inktank repositories, I see that 1.9 is provided for some distros. So, what is the best/tested ve

[ceph-users] Crushmap ruleset for rack aware PG placement

2014-09-15 Thread Amit Vijairania
Hello! In a two (2) rack Ceph cluster, with 15 hosts per rack (10 OSD per host / 150 OSDs per rack), is it possible to create a ruleset for a pool such that the Primary and Secondary PGs/replicas are placed in one rack and Tertiary PG/replica is placed in the other rack? root standard { id -1 #

[ceph-users] Ceph RBD kernel module support for Cache Tiering

2014-09-15 Thread Amit Vijairania
Hello! We are using the Ceph RBD kernel module, on RHEL 7.0, with Ceph "Firefly" 0.80.5. Does the RBD kernel module support cache tiering in Firefly? If not, when will the RBD kernel module support cache tiering (Linux kernel version and Ceph version)? Regards, Amit Vijairania | Cisco Systems, Inc. --*--
