Re: [ceph-users] Write IO Problem

2015-03-23 Thread Alexandre DERUMIER
Hi, >>dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc >> >>1073741824 bytes (1.1 GB) copied, 2.53986 s, 423 MB/s How much do you get with O_DSYNC? (The Ceph journal uses O_DSYNC, and some SSDs are pretty slow with dsync.) http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-t
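
For anyone reproducing the O_DSYNC test the linked post describes, a minimal sketch (the device name is a placeholder and the first command overwrites it, so only point it at a scratch SSD):

  # per-I/O sync, roughly what the Ceph journal does
  dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync
  # the buffered/fdatasync run quoted above, for comparison
  dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc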

Re: [ceph-users] Issue with free Inodes

2015-03-23 Thread Kamil Kuramshin
Yes, I read it, and I do not understand what you mean when you say *verify this*? All 3335808 inodes are definitely files and directories created by the Ceph OSD process: *tune2fs 1.42.5 (29-Jul-2012)* Filesystem volume name: Last mounted on: /var/lib/ceph/tmp/mnt.05NAJ3 Filesystem UUID: e4dcc

Re: [ceph-users] Write IO Problem

2015-03-23 Thread Christian Balzer
Hello, If you had used "performance" or "slow" in your subject, future generations would be able to find this thread and what it is about more easily. ^_- Also, check the various "SSD" + "performance" threads in the ML archives. On Fri, 20 Mar 2015 14:13:19 + Rottmann Jonas wrote: > Hi, > > W

[ceph-users] Does crushtool --test --simulate do what cluster should do?

2015-03-23 Thread Robert LeBlanc
I'm trying to create a CRUSH ruleset and I'm using crushtool to test the rules, but it doesn't seem to be mapping things correctly. I have two roots, one for spindles and another for SSDs. I have two rules, one for each root. The output of crushtool on rule 0 shows objects being mapped to SSD OSDs when

Re: [ceph-users] CRUSH Map Adjustment for Node Replication

2015-03-23 Thread Christian Balzer
Georgios, it really depends on how busy and powerful your cluster is, as Robert wrote. If in doubt, lower the backfill value as pointed out by Robert. Look at the osd_scrub_load_threshold and with new enough versions of Ceph at the osd_scrub_sleep setting, this is very helpful in keeping deep sc

Re: [ceph-users] OSD + Flashcache + udev + Partition uuid

2015-03-23 Thread Noah Mehl
Ah, I see now. Has anyone used CacheCade from LSI for both the read and write cache to SSD? I don’t know if you can attach a CacheCade device to a JBOD, but if you could it would probably perform reall

Re: [ceph-users] CRUSH Map Adjustment for Node Replication

2015-03-23 Thread Robert LeBlanc
I don't believe that you can set the schedule of the deep scrubs. People that want that kind of control disable deep scrubs and run a script to scrub all PGs. For the other options, you should look through http://ceph.com/docs/master/rados/configuration/osd-config-ref/ and find what you feel might

Re: [ceph-users] CRUSH Map Adjustment for Node Replication

2015-03-23 Thread Dimitrakakis Georgios
Robert, thanks for the info! How can I find out and modify when the next deep scrub is scheduled, the number of backfill processes, and their priority? Best regards, George Robert LeBlanc wrote: >You just need to change your rule from > >step chooseleaf firstn 0 type osd

Re: [ceph-users] CRUSH Map Adjustment for Node Replication

2015-03-23 Thread Robert LeBlanc
You just need to change your rule from step chooseleaf firstn 0 type osd to step chooseleaf firstn 0 type host There will be data movement as it will want to move about half the objects to the new host. There will be data generation as you move from size 1 to size 2. As far as I know a deep scr
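
A sketch of one way to apply that rule change (file names are arbitrary; review the decompiled map before injecting it back):

  ceph osd getcrushmap -o crush.bin
  crushtool -d crush.bin -o crush.txt
  # edit crush.txt: "step chooseleaf firstn 0 type osd" -> "... type host"
  crushtool -c crush.txt -o crush.new
  ceph osd setcrushmap -i crush.new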

Re: [ceph-users] CephFS questions

2015-03-23 Thread John Spray
On 23/03/2015 19:00, Bogdan SOLGA wrote: * is the http://ceph.com/docs/master/cephfs/ page referring to the current release version (Giant) or to the HEAD (Hammer) version? if it's referring to Giant -- are there any major improvements and fixes for CephFS included in the (upcoming

[ceph-users] CRUSH Map Adjustment for Node Replication

2015-03-23 Thread Georgios Dimitrakakis
Hi all! I had a Ceph cluster with 10 OSDs, all of them in one node. Since the cluster was built from the beginning with just one OSD node, the crushmap defaulted to replication across OSDs. Here is the relevant part from my crushmap: # rules rule replicated_ruleset { rules

Re: [ceph-users] OSD + Flashcache + udev + Partition uuid

2015-03-23 Thread Nick Fisk
Just to add, the main reason it seems to make a difference is the metadata updates which lie on the actual OSD. When you are doing small block writes, these metadata updates seem to take almost as long as the actual data, so although the writes are getting coalesced, the actual performance isn't mu

Re: [ceph-users] OSD + Flashcache + udev + Partition uuid

2015-03-23 Thread Nick Fisk
> -----Original Message----- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Brendan Moloney > Sent: 23 March 2015 21:02 > To: Noah Mehl > Cc: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] OSD + Flashcache + udev + Partition uuid > > This would be in additi

[ceph-users] ERROR: missing keyring, cannot use cephx for authentication

2015-03-23 Thread Jesus Chavez (jeschave)
Hi all, I did an HA failover test shutting down 1 node, and I see that only 1 OSD came up after reboot: [root@geminis ceph]# df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/rhel-root 50G 4.5G 46G 9% / devtmpfs 126G 0 126G 0% /dev tmpfs

Re: [ceph-users] OSD + Flashcache + udev + Partition uuid

2015-03-23 Thread Brendan Moloney
This would be in addition to having the journal on SSD. The journal doesn't help at all with small random reads and has a fairly limited ability to coalesce writes. In my case, the SSDs we are using for journals should have plenty of bandwidth/IOPs/space to spare, so I want to see if I can ge

Re: [ceph-users] OSD + Flashcache + udev + Partition uuid

2015-03-23 Thread Noah Mehl
We deployed with just putting the journal on an SSD directly, why would this not work for you? Just wondering really :) Thanks! ~Noah > On Mar 23, 2015, at 4:36 PM, Brendan Moloney wrote: > > I have been looking at the options for SSD caching for a bit now. Here is my > take on the current

Re: [ceph-users] OSD + Flashcache + udev + Partition uuid

2015-03-23 Thread Brendan Moloney
I have been looking at the options for SSD caching for a bit now. Here is my take on the current options: 1) bcache - Seems to have lots of reliability issues mentioned on mailing list with little sign of improvement. 2) flashcache - Seems to be no longer (or minimally?) developed/maintained,

Re: [ceph-users] CRUSH decompile fails

2015-03-23 Thread Robert LeBlanc
OK, sorry for all the quick e-mails, but I got it to compile. For some reason there are a few errors from decompiling the CRUSH map. 1. The decompiled map has "alg straw2", which is not valid; removing the 2 lets it compile. 2. The hosts have weight 0.000, which I don't think prevents the map from c

Re: [ceph-users] CRUSH decompile fails

2015-03-23 Thread Sage Weil
On Mon, 23 Mar 2015, Robert LeBlanc wrote: > Ok, so the decompile error is because I didn't download the CRUSH map > (found that out using hexdump), but I still can't compile an > unmodified CRUSH map. > > [root@nodezz ~]# crushtool -d map.crush -o map > [root@nodezz ~]# crushtool -c map -o map.cr

Re: [ceph-users] CRUSH decompile fails

2015-03-23 Thread Robert LeBlanc
Ok, so the decompile error is because I didn't download the CRUSH map (found that out using hexdump), but I still can't compile an unmodified CRUSH map. [root@nodezz ~]# crushtool -d map.crush -o map [root@nodezz ~]# crushtool -c map -o map.crush map:105 error: parse error at '' For some reason i

Re: [ceph-users] CephFS questions

2015-03-23 Thread Bogdan SOLGA
Hello, John! Thank you very much for your reply and for the provided information! As a follow-up to your email, a few other questions have arisen: - is the http://ceph.com/docs/master/cephfs/ page referring to the current release version (Giant) or to the HEAD (Hammer) version? if it's r

[ceph-users] CRUSH decompile fails

2015-03-23 Thread Robert LeBlanc
I was trying to decompile and edit the CRUSH map to adjust the CRUSH rules. My first attempt created a map that would decompile, but I could not recompile the CRUSH map even if I didn't modify it. When trying to download the CRUSH map fresh, the decompile now fails. [root@nodezz ~]# ceph osd getmap -o map.c

Re: [ceph-users] RADOS Gateway Maturity

2015-03-23 Thread Jerry Lam
Hi Chris and Craig, Thank you for sharing your experience with me about S3 API RADOS gateway! Jerry Lam Senior Software Developer, Big Data Ontario Institute for Cancer Research MaRS Centre 661 University Avenue Suite 510 Toronto, Ontario Canada M5G 0A3 Email: jerry@oicr.on.ca Toll-free:

Re: [ceph-users] Multiple OSDs in each node with replica 2

2015-03-23 Thread Robert LeBlanc
I don't have a fresh cluster on hand to double check, but the default is to select a different host for each replica. You can adjust that to fit your needs; we are using cabinet as the selection criterion so that we can lose an entire cabinet of storage and still function. In order to store multipl

Re: [ceph-users] Uneven CPU usage on OSD nodes

2015-03-23 Thread Somnath Roy
Yes, we are also facing a similar issue under load (and after running for some time). This is tcmalloc behavior. You can try setting the following env variable to a bigger value, say 128MB or so: TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES. This env variable is supposed to alleviate the issue, but what we foun
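
A sketch of how that can be tried (where the variable has to be set depends on how your OSDs are started -- init script, environment file, etc.):

  # 128 MB tcmalloc thread cache, exported before the OSD daemons start
  export TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728
  service ceph restart osd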

Re: [ceph-users] arm cluster install

2015-03-23 Thread hp cre
Yann, Thanks for the info. It's been a great help. On 23 Mar 2015 14:44, "Yann Dupont - Veille Techno" < veilletechno-i...@univ-nantes.fr> wrote: > On 22/03/2015 22:44, hp cre wrote: > >> >> Hello Yann, >> >> Thanks for your reply. Unfortunately, I found it by chance during a >> search, since y

Re: [ceph-users] ceph cluster on docker containers

2015-03-23 Thread Pavel V. Kaygorodov
Hi! I'm using a ceph cluster packed into a number of Docker containers. There are two things which you need to know: 1. Ceph OSDs use FS attributes, which may not be supported by the filesystem inside a Docker container, so you need to mount an external directory inside the container to store the OSD data
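
As an illustration of point 1, a hedged sketch of such a bind mount (image name and host paths are placeholders, not a recommendation of a particular image):

  docker run -d --net=host \
      -v /srv/ceph/osd.0:/var/lib/ceph/osd/ceph-0 \
      -v /etc/ceph:/etc/ceph \
      my-ceph-osd-image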

Re: [ceph-users] who is using radosgw with civetweb?

2015-03-23 Thread Axel Dunkel
Sage, we use apache as a filter for security and additional functionality reasons. I do like the idea, but we'd need some kind of interface to filter/modify/process requests. Best regards Axel Dunkel -----Original Message----- From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-ow...

Re: [ceph-users] Ceph Hammer OSD Shard Tuning Test Results

2015-03-23 Thread Vu Pham
>>> This would be a good thing to bring up in the meeting on Wednesday. >yes ! > Yes, we can discuss details on Wed's call. > >>>I wonder how much effect flow-control and header/data crc had. >yes. I know that Somnath also disabled crc for his bench > I disabled ceph's header/data crc for bot

Re: [ceph-users] Deploy ceph

2015-03-23 Thread kefu chai
Hi Harry, your question is more related to ceph-users, so I am replying to ceph-users. On Wed, Mar 18, 2015 at 12:02 AM, harryxiyou wrote: > Hi all, > > I wanna deploy Ceph and I see the doc here > (http://docs.ceph.com/docs/dumpling/start/quick-start-preflight/). I > wonder how could I insta

Re: [ceph-users] The project of ceph client file system porting from Linux to AIX

2015-03-23 Thread Ketor D
Hi Dennis, I am interested in your project. I wrote a Win32 cephfs client, https://github.com/ceph/ceph-dokan, but ceph-dokan runs in user mode. I see you port code from the kernel cephfs; are you planning to write a kernel-mode AIX cephfs? Thanks! 2015-03-04 17:59 GMT+08:00 Dennis

[ceph-users] Write IO Problem

2015-03-23 Thread Rottmann Jonas
Hi, We have a huge write IO problem in our pre-production Ceph cluster. First, our hardware: 4 OSD nodes with: Supermicro X10 board 32GB DDR4 RAM 2x Intel Xeon E5-2620 LSI SAS 9300-8i host bus adapter Intel Corporation 82599EB 10-Gigabit 2x Intel SSDSA2CT040G3 in software RAID 1 for system disks

Re: [ceph-users] Ceph User Teething Problems

2015-03-23 Thread Lincoln Bryant
Hi David, I also see only the RBD pool getting created by default in 0.93. With regards to resizing placement groups, I believe you can use: ceph osd pool set [pool name] pg_num ceph osd pool set [pool name] pgp_num Be forewarned, this will trigger data migration. Cheers, Lincol
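
For example (pool name and target count are placeholders; raise pg_num first, then pgp_num, and grow in modest steps to limit the rebalancing load):

  ceph osd pool set rbd pg_num 256
  ceph osd pool set rbd pgp_num 256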

[ceph-users] Multiple OSDs in each node with replica 2

2015-03-23 Thread Azad Aliyar
I have a doubt. In a scenario (3 nodes x 4 OSDs each x 2 replicas) I tested with a node down, and as long as you have space available all objects were there. Is it possible for all replicas of an object to be saved on the same node? Is it possible to lose any? Is there a mechanism that prevents replica

Re: [ceph-users] Mapping users to different rgw pools

2015-03-23 Thread Steffen W Sørensen
My vague understanding is that this is mapped through the zone associated with the specific user. So define your desired pools and the zones mapping to the pools, and assign users to the desired regions+zones, and thus to different pools per user. > On 13/03/2015 at 07.48, Sreenath BH wrote: > > Hi all,

[ceph-users] Ceph's Logo

2015-03-23 Thread Amy Wilson
Hello, My name is Amy Wilson. I am the Director of eBrand Business (a 3D logo design and professional CGI animation studio). I've just visited the Ceph website and, I have to say, you have a really nice business over there. My congratulations!

[ceph-users] pool has data but rados ls empty

2015-03-23 Thread jipeng song
Hi: there is a pool whose data is not empty (checked using 'rados df' or 'rados stats -p'), but I cannot list the objects in that pool (using 'rados ls -p' or the Python API). How did this happen? The pool was created with a normal command. By the way, reading and writing using the C code works fine.

[ceph-users] Ceph courseware development opportunity

2015-03-23 Thread Golden Ink
We are looking for someone to develop a course on a Ceph implementation in a large computer manufacturer hybrid or public cloud. The project would involve meeting with internal engineers to discover latest developments and applying their style and standards to the courseware. Project timeframe i

[ceph-users] Calamari Deployment

2015-03-23 Thread JESUS CHAVEZ ARGUELLES
Does anybody know how to successfully install Calamari on RHEL 7? I have tried the Vagrant thing without success and it seems like a nightmare; there is some kind of issue when you do vagrant up where it seems not to find the VM path... Regards Jesus Chavez SYSTEMS ENGINEER-C.SALES jesch...@cisco.co

Re: [ceph-users] More writes on filestore than on journal ?

2015-03-23 Thread Olivier Bonvalet
Hi, On Monday 23 March 2015 at 07:29 -0700, Gregory Farnum wrote: > On Mon, Mar 23, 2015 at 6:21 AM, Olivier Bonvalet wrote: > > Hi, > > > > I'm still trying to find why there are many more write operations on > > the filestore since Emperor/Firefly than with Dumpling. > > Do you have any history aro

Re: [ceph-users] Uneven CPU usage on OSD nodes

2015-03-23 Thread f...@univ-lr.fr
Hi Greg, the low-/high-CPU behavior is absolutely persistent while a host is up, with no oscillation. But rebooting a node can make its behavior switch between low- and high-CPU, as seen this morning after checking that the BIOS settings (especially NUMA) were the same on 2 hosts. Hosts are identical, pupp

Re: [ceph-users] More writes on filestore than on journal ?

2015-03-23 Thread Gregory Farnum
On Mon, Mar 23, 2015 at 6:21 AM, Olivier Bonvalet wrote: > Hi, > > I'm still trying to find why there are many more write operations on > the filestore since Emperor/Firefly than with Dumpling. Do you have any history around this? It doesn't sound familiar, although I bet it's because of the WBThrottl

Re: [ceph-users] Ceph in Production: best practice to monitor OSD up/down status

2015-03-23 Thread Gregory Farnum
On Mon, Mar 23, 2015 at 7:17 AM, Saverio Proto wrote: > Hello, > > thanks for the answers. > > This was exactly what I was looking for: > > mon_osd_down_out_interval = 900 > > I was not waiting long enough to see my cluster recovering by itself. > That's why I tried to increase min_size, because I

Re: [ceph-users] Ceph in Production: best practice to monitor OSD up/down status

2015-03-23 Thread Saverio Proto
Hello, thanks for the answers. This was exactly what I was looking for: mon_osd_down_out_interval = 900 I was not waiting long enough to see my cluster recovering by itself. That's why I tried to increase min_size, because I did not understand what min_size was for. Now that I know what min_
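
For reference, a sketch of how this option is usually set -- in ceph.conf for persistence, or injected into running monitors (the 900 is just the value discussed here):

  [mon]
      mon osd down out interval = 900

  ceph tell mon.* injectargs '--mon-osd-down-out-interval 900'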

Re: [ceph-users] Can't Start OSD

2015-03-23 Thread Gregory Farnum
On Sun, Mar 22, 2015 at 11:22 AM, Somnath Roy wrote: > You should have replicated copies on other OSDs (disks), so no need to > worry about data loss. Add a new drive and follow the steps in the > following link (either 1 or 2) Except that's not the case if you only had one copy o

Re: [ceph-users] How does crush selects different osds using hash(pg) in diferent iterations

2015-03-23 Thread Gregory Farnum
On Sat, Mar 21, 2015 at 10:46 AM, shylesh kumar wrote: > Hi , > > I was going through this simplified crush algorithm given in ceph website. > > def crush(pg): >all_osds = ['osd.0', 'osd.1', 'osd.2', ...] >result = [] ># size is the number of copies; primary+replicas >while len(res

Re: [ceph-users] Issue with free Inodes

2015-03-23 Thread Christian Balzer
On Mon, 23 Mar 2015 15:26:07 +0300 Kamil Kuramshin wrote: > Yes, I understand that. > > The initial purpose of my first email was just advice for newcomers. My > fault was that I selected ext4 as the backend for the SSD disks. > But I did not foresee that the inode count can reach its limit before

Re: [ceph-users] Uneven CPU usage on OSD nodes

2015-03-23 Thread Gregory Farnum
On Mon, Mar 23, 2015 at 4:31 AM, f...@univ-lr.fr wrote: > Hi Somnath, > > Thank you, please find my answers below > > Somnath Roy wrote on 22/03/15 18:16: > > Hi Frederick, > > Need some information here. > > > > 1. Just to clarify, you are saying it is happening in 0.87.1 and not in > Firef

Re: [ceph-users] SSD Hardware recommendation

2015-03-23 Thread Christian Balzer
On Mon, 23 Mar 2015 11:51:56 +0100 (CET) Alexandre DERUMIER wrote: > >> the combination of all the > >>things mentioned before in the Ceph/FS stack caused a 12x amplification > >>(instead of 2x) _before_ hitting the SSD. > > oh, ok, pretty strange. > > BTW, is it through ceph-fs ? or rbd/rados

Re: [ceph-users] Ceph in Production: best practice to monitor OSD up/down status

2015-03-23 Thread Gregory Farnum
On Sun, Mar 22, 2015 at 2:55 AM, Saverio Proto wrote: > Hello, > > I started to work with CEPH a few weeks ago; I might ask a very newbie > question, but I could not find an answer in the docs or in the ML > archive for this. > > Quick description of my setup: > I have a ceph cluster with two server

Re: [ceph-users] add stop_scrub command for ceph

2015-03-23 Thread Sage Weil
On Mon, 23 Mar 2015, Xinze Chi wrote: > hi ceph: > > Currently, there is no command which can stop scrubbing while > a PG is doing a scrub or deep > scrub. What about adding a command to support it? I think this is very > useful for system administrators. > > I have added an issue to track it.

Re: [ceph-users] More writes on blockdevice than on filestore ?

2015-03-23 Thread Olivier Bonvalet
Erg... I sent too fast. Bad title; please read «More writes on blockdevice than on filestore». On Monday 23 March 2015 at 14:21 +0100, Olivier Bonvalet wrote: > Hi, > > I'm still trying to find why there are many more write operations on > the filestore since Emperor/Firefly than with Dumpling. > > So

[ceph-users] More writes on filestore than on journal ?

2015-03-23 Thread Olivier Bonvalet
Hi, I'm still trying to find why there are many more write operations on the filestore since Emperor/Firefly than with Dumpling. So, I added monitoring of all perf counter values from the OSDs. From what I see, «filestore.ops» reports an average of 78 operations per second. But block device monitoring r
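
A sketch of how those counters can be read per OSD (the OSD id and socket path are placeholders; counter names vary somewhat between releases):

  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump > perf.json
  # the "filestore" section holds the ops/bytes counters referenced above
  python -m json.tool perf.json | less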

[ceph-users] Rados Gateway and keystone

2015-03-23 Thread ghislain.chevalier
Hi All, I just want to be sure about the Keystone configuration for the Rados Gateway. I read the documentation http://ceph.com/docs/master/radosgw/keystone/ and http://ceph.com/docs/master/radosgw/config-ref/?highlight=keystone but I didn't catch whether, after having configured the rados gateway (ceph.conf
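
For what it's worth, the client section usually ends up looking roughly like this (all values are placeholders; the linked keystone page is the authoritative list of options):

  [client.radosgw.gateway]
      rgw keystone url = http://keystone-host:35357
      rgw keystone admin token = SECRET_TOKEN
      rgw keystone accepted roles = Member, admin
      rgw keystone token cache size = 500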

[ceph-users] Ceph cache tier

2015-03-23 Thread Yujian Peng
Hi all, I have a ceph cluster (0.80.7) in production. Now I am hitting an IOPS bottleneck, so I want to add a cache tier with SSDs to provide better I/O performance. Here is the procedure: 1. Create a cache pool 2. Set up a cache tier: ceph osd tier add cold-storage hot-storage 3. Set cach
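
The usual command sequence, as a sketch (pool names follow the example above; hit_set and target-size settings still need tuning for the workload):

  ceph osd tier add cold-storage hot-storage
  ceph osd tier cache-mode hot-storage writeback
  ceph osd tier set-overlay cold-storage hot-storage
  ceph osd pool set hot-storage hit_set_type bloom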

Re: [ceph-users] arm cluster install

2015-03-23 Thread Yann Dupont - Veille Techno
On 22/03/2015 22:44, hp cre wrote: Hello Yann, Thanks for your reply. Unfortunately, I found it by chance during a search; since you didn't include me in the reply, I never got it in my email. Well, that wasn't intended, but that's because I replied to the list, which is usually the w

Re: [ceph-users] Issue with free Inodes

2015-03-23 Thread Kamil Kuramshin
Yes, I understand that. The initial purpose of my first email was just advice for newcomers. My fault was that I selected ext4 as the backend for the SSD disks. But I did not foresee that the inode count can reach its limit before the free space :) And maybe there should be some sort of warning n

[ceph-users] PG calculator queries

2015-03-23 Thread Sreenath BH
Hi, consider the following values for a pool: Size = 3 OSDs = 400 %Data = 100 Target PGs per OSD = 200 (this is the default) The PG calculator generates the number of PGs for this pool as 32768. Questions: 1. The Ceph documentation recommends around 100 PGs/OSD, whereas the calculator takes 200 as defau
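
For reference, the arithmetic behind that figure, as I understand the calculator's logic: Total PGs ~= (OSDs x Target PGs per OSD x %Data) / Size = (400 x 200 x 1.0) / 3 ~= 26667, rounded up to the next power of two, which gives 32768.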

Re: [ceph-users] Uneven CPU usage on OSD nodes

2015-03-23 Thread f...@univ-lr.fr
Hi Somnath, Thank you, please find my answers below. Somnath Roy wrote on 22/03/15 18:16: Hi Frederick, Need some information here. 1. Just to clarify, you are saying it is happening in 0.87.1 and not in Firefly? That's a possibility, others running similar hardware (and possibly

Re: [ceph-users] CephFS questions

2015-03-23 Thread John Spray
On 22/03/2015 08:29, Bogdan SOLGA wrote: Hello, everyone! I have a few questions related to the CephFS part of Ceph: * is it production ready? Like it says at http://ceph.com/docs/master/cephfs/: " CephFS currently lacks a robust ‘fsck’ check and repair function. Please use caution when st

Re: [ceph-users] Issue with free Inodes

2015-03-23 Thread Thomas Foster
You could fix this by changing your block size when formatting the mount point with the mkfs -b command. I had this same issue when dealing with the filesystem using GlusterFS, and the solution is to either use a filesystem that allocates inodes automatically or change the block size when you build
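
For ext4 specifically, the inode count is normally fixed at mkfs time, and the usual knobs are the bytes-per-inode ratio or an explicit inode count rather than the block size alone; a sketch (device is a placeholder, and reformatting destroys the data on it):

  mkfs.ext4 -i 8192 /dev/sdX1      # one inode per 8 KiB of space, i.e. more inodes
  mkfs.ext4 -N 8000000 /dev/sdX1   # or request an absolute number of inodes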

Re: [ceph-users] "store is getting too big" on monitors

2015-03-23 Thread Joao Eduardo Luis
On 02/17/2015 11:13 AM, Mohamed Pakkeer wrote: Hi Joao, We followed your instruction to create the store dump ceph-kvstore-tool /var/lib/ceph/mon/ceph-FOO/store.db list > store.dump' for above store's location, let's call it $STORE: for m in osdmap pgmap; do for k in first_committed last_c

Re: [ceph-users] Ceph in Production: best practice to monitor OSD up/down status

2015-03-23 Thread Xabier Elkano
On 22/03/15 at 10:55, Saverio Proto wrote: > Hello, > > I started to work with CEPH a few weeks ago; I might ask a very newbie > question, but I could not find an answer in the docs or in the ML > archive for this. > > Quick description of my setup: > I have a ceph cluster with two servers. Eac

Re: [ceph-users] Issue with free Inodes

2015-03-23 Thread Kamil Kuramshin
In my case there was a cache pool for an EC pool serving RBD images, the object size is 4MB, and the client was a kernel-rbd client. Each SSD disk is a 60G disk, 2 disks per node, 6 nodes in total = 12 OSDs in total. On 23.03.2015 12:00, Christian Balzer wrote: Hello, This is rather confusing, as cache-t

Re: [ceph-users] Giant 0.87 update on CentOs 7

2015-03-23 Thread HEWLETT, Paul (Paul)** CTR **
Hi Steffen, We have recently encountered the errors described below. Initially one must set check_obsoletes=1 in the yum priorities.conf file. However, subsequent yum updates cause problems. The solution we use is to disable the EPEL repo by default: yum-config-manager --disable epel and
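
In practice that means something like the following (the priorities.conf path comes from yum-plugin-priorities and may differ on your system):

  # /etc/yum/pluginconf.d/priorities.conf
  [main]
  enabled = 1
  check_obsoletes = 1

  # keep EPEL off unless explicitly requested
  yum-config-manager --disable epel
  yum --enablerepo=epel install <package>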

Re: [ceph-users] OSD Force Removal

2015-03-23 Thread Stéphane DUGRAVOT
----- Original Message ----- Thanks Stephane, the thing is that those steps need to be run on the node where the OSD lives. I don't have that node any more since the operating system got corrupted, so I couldn't make it work :( Jesus, On my test cluster, what I have done is (experimental...

Re: [ceph-users] Issue with free Inodes

2015-03-23 Thread Christian Balzer
Hello, This is rather confusing, as cache tiers are just normal OSDs/pools and thus should have Ceph objects of around 4MB in size by default. This matches what I see with ext4 here (normal OSD, not a cache tier): --- size: /dev/sde1 2.7T 204G 2.4T 8% /var/lib/ceph/osd/ceph-0 ino

[ceph-users] add stop_scrub command for ceph

2015-03-23 Thread Xinze Chi
Hi Ceph: Currently, there is no command which can stop scrubbing while a PG is doing a scrub or deep scrub. What about adding a command to support it? I think this is very useful for system administrators. I have added an issue to track it: http://tracker.ceph.com/issues/11202. -- Regards, xi
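
For completeness, the existing flags only prevent new scrubs from being scheduled; they do not abort one that is already running, which is exactly the gap the tracker issue describes:

  ceph osd set noscrub
  ceph osd set nodeep-scrub
  # and later
  ceph osd unset noscrub
  ceph osd unset nodeep-scrub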

Re: [ceph-users] Finding out how much data is in the journal

2015-03-23 Thread Haomai Wang
On Mon, Mar 23, 2015 at 3:04 PM, Josef Johansson wrote: > > On 23 Mar 2015, at 03:58, Haomai Wang wrote: > > On Mon, Mar 23, 2015 at 2:53 AM, Josef Johansson wrote: > > Hi all! > > Trying to figure out how much my journals are used, using SSDs as journals > and SATA-drives as storage, I dive int
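
A sketch of where to look (the OSD id and socket path are placeholders; the journal_* counter names vary a bit between releases):

  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump | \
      python -m json.tool | grep journal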

[ceph-users] Issue with free Inodes

2015-03-23 Thread Kamil Kuramshin
Recently I got a problem with OSDs based on SSD disks used in a cache tier for an EC pool. superuser@node02:~$ df -i Filesystem Inodes IUsed *IFree* IUse% Mounted on <...> /dev/sdb1 3335808 3335808 *0* 100% /var/lib/ceph/osd/ceph-45 /dev/sda1 3

Re: [ceph-users] SSD Hardware recommendation

2015-03-23 Thread Christian Balzer
Hello, Again, refer to my original, old mail: http://lists.opennebula.org/pipermail/ceph-users-ceph.com/2014-October/043949.html I was strictly looking at the SMART values, in the case of these Intel DC S3700 SSDs the "Host_Writes_32MiB" values. Which, according to what the name implies and what al

Re: [ceph-users] Finding out how much data is in the journal

2015-03-23 Thread Josef Johansson
> On 23 Mar 2015, at 03:58, Haomai Wang wrote: > > On Mon, Mar 23, 2015 at 2:53 AM, Josef Johansson > wrote: >> Hi all! >> >> Trying to figure out how much my journals are used, using SSDs as journals >> and SATA-drives as storage, I dive into perf dump. >> But I can