[ceph-users] Replace corrupt journal

2015-01-12 Thread Sahlstrom, Claes
Hi, I have a problem starting a couple of OSDs because the journal is corrupt. Is there any way to replace the journal while keeping the rest of the OSD intact? -1 2015-01-11 16:02:54.475138 7fb32df86900 -1 journal Unable to read past sequence 8188178 but header indicates the journal

[ceph-users] Ceph MeetUp Berlin

2015-01-12 Thread Robert Sander
Hi, the next MeetUp in Berlin takes place on January 26 at 18:00 CET. Our host is Deutsche Telekom; they will give a short presentation about their OpenStack / Ceph based production system. Please RSVP at http://www.meetup.com/Ceph-Berlin/events/218939774/ Regards -- Robert Sander Heinlein

Re: [ceph-users] NUMA and ceph ... zone_reclaim_mode

2015-01-12 Thread Dan van der Ster
(resending to list) Hi Kyle, I'd like to +10 this old proposal of yours. Let me explain why... A couple months ago we started testing a new use-case with radosgw -- this new user is writing millions of small files and has been causing us some headaches. Since starting these tests, the relevant

[ceph-users] error adding OSD to crushmap

2015-01-12 Thread Luis Periquito
Hi all, I've been trying to add a few new OSDs, and as I manage everything with puppet, I was adding them manually via the CLI. At one point it adds the OSD to the crush map using: # ceph osd crush add 6 0.0 root=default but I get Error ENOENT: osd.6 does not exist. create it before updating the
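A minimal sketch of the ordering that avoids this error (the osd id 6 and root=default come from the post; the exact id returned depends on the cluster):

    ceph osd create                          # register the new osd id in the cluster map first
    ceph osd crush add 6 0.0 root=default    # the original command, which now succeeds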

[ceph-users] NUMA zone_reclaim_mode

2015-01-12 Thread Dan Van Der Ster
(apologies if you receive this more than once... apparently I cannot reply to a 1 year old message on the list). Dear all, I'd like to +10 this old proposal of Kyle's. Let me explain why... A couple months ago we started testing a new use-case with radosgw -- this new user is writing millions

Re: [ceph-users] NUMA zone_reclaim_mode

2015-01-12 Thread Sage Weil
On Mon, 12 Jan 2015, Dan Van Der Ster wrote: Moving forward, I think it would be good for Ceph to at least document this behaviour, but better would be to also detect when zone_reclaim_mode != 0 and warn the admin (like MongoDB does). This line from the commit which disables it in the kernel is
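For reference, checking and disabling zone_reclaim_mode is straightforward (a sketch; persisting it in /etc/sysctl.conf is an assumption about the admin's setup):

    cat /proc/sys/vm/zone_reclaim_mode                      # non-zero enables reclaim before remote allocation
    sysctl -w vm.zone_reclaim_mode=0                        # disable at runtime
    echo 'vm.zone_reclaim_mode = 0' >> /etc/sysctl.conf     # keep it disabled across reboots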

Re: [ceph-users] cephfs modification time

2015-01-12 Thread Gregory Farnum
What versions of all the Ceph pieces are you using? (Kernel client/ceph-fuse, MDS, etc) Can you provide more details on exactly what the program is doing on which nodes? -Greg On Fri, Jan 9, 2015 at 5:15 PM, Lorieri lori...@gmail.com wrote: first 3 stat commands shows blocks and size changing,

[ceph-users] the performance issue for cache pool

2015-01-12 Thread lidc...@redhat.com
Hi everyone: I used writeback mode for the cache pool: ceph osd tier add sas ssd ceph osd tier cache-mode ssd writeback ceph osd tier set-overlay sas ssd and I also set the dirty ratio and full ratio: ceph osd pool set ssd cache_target_dirty_ratio .4 ceph osd pool
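A sketch of the complete writeback-tier setup being described, using the poster's pool names sas (data) and ssd (cache); the cache_target_full_ratio value is an assumption since the message is cut off:

    ceph osd tier add sas ssd                              # attach ssd as a tier of sas
    ceph osd tier cache-mode ssd writeback                 # absorb writes in the cache, flush later
    ceph osd tier set-overlay sas ssd                      # route client IO through the cache
    ceph osd pool set ssd cache_target_dirty_ratio 0.4
    ceph osd pool set ssd cache_target_full_ratio 0.8      # assumed value, not from the post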

[ceph-users] unsubscribe

2015-01-12 Thread Don Doerner
unsubscribe Regards, -don-

[ceph-users] Problem with Rados gateway

2015-01-12 Thread Walter Valenti
Scenario: Openstack Juno RDO on CentOS 7. Ceph version: Giant. On CentOS 7 the old fastcgi module is no longer available, but there is mod_fcgid. The apache VH is the following: VirtualHost *:8080 ServerName rdo-ctrl01 DocumentRoot /var/www/radosgw RewriteEngine On RewriteRule ^/([a-zA-Z0-9-_.]*)([/]?.*)
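A minimal mod_fcgid vhost of the kind typically used in front of radosgw at the time (a sketch only, not the poster's exact config; the s3gw.fcgi wrapper name is an assumption):

    <VirtualHost *:8080>
        ServerName rdo-ctrl01
        DocumentRoot /var/www/radosgw
        RewriteEngine On
        # hand everything to the FastCGI wrapper, preserving the Authorization header for S3 auth
        RewriteRule ^/(.*) /s3gw.fcgi?%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
        AddHandler fcgid-script .fcgi
        <Directory /var/www/radosgw>
            Options +ExecCGI
            AllowOverride All
            Require all granted
        </Directory>
        AllowEncodedSlashes On
    </VirtualHost>

The wrapper itself is normally a short shell script that execs radosgw with the gateway's client name.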

Re: [ceph-users] rbd directory listing performance issues

2015-01-12 Thread Shain Miley
Hi, I am just wondering if anyone has any thoughts on the questions below... I would like to order some additional hardware ASAP... and the order that I place may change depending on the feedback that I receive. Thanks again, Shain Sent from my iPhone On Jan 9, 2015, at 2:45 PM, Shain Miley

Re: [ceph-users] ceph on peta scale

2015-01-12 Thread Gregory Farnum
On Mon, Jan 12, 2015 at 3:55 AM, Zeeshan Ali Shah zas...@pdc.kth.se wrote: Thanks Greg, no, I am more into a large-scale RADOS system, not the filesystem. However, for geographically distributed datacentres, especially when the network fluctuates, how is that handled? From what I have read, it seems Ceph needs a big pipe of

[ceph-users] reset osd perf counters

2015-01-12 Thread Shain Miley
Is there a way to 'reset' the osd perf counters? The numbers for osd 73 through osd 83 look really high compared to the rest of the numbers I see here. I was wondering if I could clear the counters out, so that I have a fresh set of data to work with. root@cephmount1:/var/log/samba# ceph
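For context, per-OSD latency numbers like these usually come from ceph osd perf; a full counter dump is available over the admin socket on the OSD's host (osd.73 is taken from the post):

    ceph osd perf                    # fs_commit_latency / fs_apply_latency in ms, per OSD
    ceph daemon osd.73 perf dump     # all perf counters for one OSD, via its admin socket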

Re: [ceph-users] ceph on peta scale

2015-01-12 Thread Zeeshan Ali Shah
Thanks Greg, no, I am more into a large-scale RADOS system, not the filesystem. However, for geographically distributed datacentres, especially when the network fluctuates, how is that handled? From what I have read, it seems Ceph needs a big pipe of network /Zee On Fri, Jan 9, 2015 at 7:15 PM, Gregory Farnum g...@gregs42.com

[ceph-users] SSD Journal Best Practice

2015-01-12 Thread lidc...@redhat.com
Hi everyone: I plan to use an SSD journal to improve performance. I have one 1.2T SSD disk per server. What is the best practice for an SSD journal? There are three choices for deploying an SSD journal: 1. all OSDs use the same SSD partition: ceph-deploy osd create
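A sketch of the deployment variants usually meant by these choices (host and device names are assumptions):

    # 1) several OSDs share one SSD; ceph-deploy/ceph-disk carves a journal partition per OSD:
    ceph-deploy osd create node1:sdb:/dev/sdf node1:sdc:/dev/sdf
    # 2) pre-partition the SSD yourself and hand each OSD an explicit journal partition:
    ceph-deploy osd create node1:sdb:/dev/sdf1 node1:sdc:/dev/sdf2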

[ceph-users] How to get ceph-extras packages for centos7

2015-01-12 Thread lei shi
Hi experts, could you guide me on how to get the ceph-extras packages for CentOS 7? I am trying to install Giant on CentOS 7 manually; however, the latest extras packages in the repository are only for CentOS 6.4. BTW, is QEMU aware of Giant? Should I get a dedicated build for Giant? Thanks in

Re: [ceph-users] CRUSH question - failing to rebalance after failure test

2015-01-12 Thread Christopher Kunz
Hi, [redirecting back to list] Oh, it could be that... can you include the output from 'ceph osd tree'? That's a more concise view that shows up/down, weight, and in/out. Thanks! sage root@cepharm17:~# ceph osd tree # id weight type name up/down reweight -1 0.52 root

Re: [ceph-users] NUMA zone_reclaim_mode

2015-01-12 Thread Dan Van Der Ster
On 12 Jan 2015, at 17:08, Sage Weil s...@newdream.net wrote: On Mon, 12 Jan 2015, Dan Van Der Ster wrote: Moving forward, I think it would be good for Ceph to at least document this behaviour, but better would be to also detect when zone_reclaim_mode != 0 and warn the

Re: [ceph-users] reset osd perf counters

2015-01-12 Thread Gregory Farnum
perf reset on the admin socket. I'm not sure what version it went into; you can check the release logs if it doesn't work on whatever you have installed. :) -Greg On Mon, Jan 12, 2015 at 2:26 PM, Shain Miley smi...@npr.org wrote: Is there a way to 'reset' the osd perf counters? The numbers
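A sketch of the admin-socket call being referred to, run on the host that carries the OSD (osd.73 is from the earlier message; as noted, whether the reset subcommand exists depends on the installed version):

    ceph daemon osd.73 perf reset all                                     # newer releases
    ceph --admin-daemon /var/run/ceph/ceph-osd.73.asok perf reset all     # same call with an explicit socket path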

Re: [ceph-users] cephfs modification time

2015-01-12 Thread Gregory Farnum
Zheng, this looks like a kernel client issue to me, or else something funny is going on with the cap flushing and the timestamps (note how the reading client's ctime is set to an even second, while the mtime is ~.63 seconds later and matches what the writing client sees). Any ideas? -Greg On Mon,
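A sketch of the comparison being discussed, run first on the writing client and then on the reading client (the mount point and file name are assumptions):

    stat -c 'size=%s blocks=%b mtime=%y ctime=%z' /mnt/cephfs/shared.log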

[ceph-users] Ceph erasure-coded pool

2015-01-12 Thread Don Doerner
All, I wish to experiment with erasure-coded pools in Ceph. I've got some questions: 1. Is FIREFLY a reasonable release to be using to try EC pools? When I look at various bits of development info, it appears that the work is complete in FIREFLY, but I thought I'd ask :) 2. It looks,
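For what it's worth, creating an EC pool on Firefly looks roughly like this (the profile name, k/m values and PG counts are assumptions, not from the post):

    ceph osd erasure-code-profile set ecdemo k=4 m=2 ruleset-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure ecdemo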

Re: [ceph-users] ceph on peta scale

2015-01-12 Thread Robert van Leeuwen
however, for geographically distributed datacentres, especially when the network fluctuates, how is that handled? It seems Ceph needs a big pipe of network Ceph isn't really suited for WAN-style distribution. Some users have high-enough and consistent-enough bandwidth (with low enough latency) to do

Re: [ceph-users] SSD Journal Best Practice

2015-01-12 Thread lidc...@redhat.com
For the first choice: ceph-deploy osd create ceph-node:sdb:/dev/ssd ceph-node:sdc:/dev/ssd I find ceph-deploy will create the partitions automatically, and each partition is 5G by default. So the first choice and the second choice are almost the same. Compared to a filesystem, I prefer a block device to get
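The 5G default comes from the osd journal size setting (in MB), so it can be raised in ceph.conf before deploying; a sketch, with 10 GB chosen only as an example:

    [osd]
    osd journal size = 10240    # MB per journal; the default is 5120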

Re: [ceph-users] Replace corrupt journal

2015-01-12 Thread Sage Weil
On Sun, 11 Jan 2015, Sahlstrom, Claes wrote: Hi, I have a problem starting a couple of OSDs because the journal is corrupt. Is there any way to replace the journal while keeping the rest of the OSD intact? It is risky at best... I would not recommend it! The safe route is to

Re: [ceph-users] Replace corrupt journal

2015-01-12 Thread Sahlstrom, Claes
Thanks for the reply, I have had some more time to mess around with this now. I understand that the best thing is to let it rebuild the entire OSD, but since I am currently only using one replica and 2 of 3 machines had problems, I ended up in a bad situation. With OSDs down on 2 machines and
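For the record, the risky shortcut being discussed amounts to writing a fresh, empty journal in place and accepting that anything that only lived in the old journal is lost (a sketch; the osd id is an assumption):

    service ceph stop osd.12      # make sure the OSD is not running
    ceph-osd -i 12 --mkjournal    # lay down a new, empty journal for this OSD
    service ceph start osd.12     # let it rejoin and recover from its peers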

[ceph-users] Caching

2015-01-12 Thread Samuel Terburg - Panther-IT BV
I have a couple of questions about caching: I have 5 VM-Hosts serving 20 VMs. I have 1 Ceph pool where the VM-Disks of those 20 VMs reside as RBD Images. 1) Can I use multiple caching tiers on the same data pool? I would like to use a local SSD OSD on each VM-Host that can serve as