Re: [ceph-users] mds isn't working anymore after osd's running full

2014-08-20 Thread Jasper Siero
Unfortunately that doesn't help. I restarted both the active and the standby MDS, but that doesn't change the state of the MDS. Is there a way to force the MDS to look at epoch 1832 (or earlier) instead of 1833 ("need osdmap epoch 1833, have 1832")? Thanks, Jasper
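
For context, a minimal sketch (assuming shell access to the MDS host; names and paths are illustrative) for confirming the restart actually happened and comparing the epochs on each side:

    # Current osdmap epoch according to the cluster (should be >= 1833)
    ceph osd stat
    # State of the mdsmap and which daemon is active/standby
    ceph mds dump
    # Verify the ceph-mds process start time is recent, i.e. it really restarted
    ps -o lstart,cmd -C ceph-mds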

Re: [ceph-users] some pgs active+remapped, Ceph can not recover itself.

2014-08-20 Thread debian Only
Thanks, Lewis. And I take the suggestion that it is better to use similarly sized OSDs. 2014-08-20 9:24 GMT+07:00 Craig Lewis cle...@centraldesktop.com: I believe you need to remove the authorization for osd.4 and osd.6 before re-creating them. When I re-format disks, I migrate data off of the
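
For reference, a sketch of the cleanup Craig describes, assuming the same OSD ids and that data has already been migrated off:

    # Drop the old authentication keys so the re-created OSDs can register cleanly
    ceph auth del osd.4
    ceph auth del osd.6
    # Remove the OSDs from the CRUSH map and the OSD map before re-creating them
    ceph osd crush remove osd.4
    ceph osd crush remove osd.6
    ceph osd rm 4
    ceph osd rm 6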

Re: [ceph-users] Problem when building/running cuttlefish from source on Ubuntu 14.04 Server

2014-08-20 Thread NotExist
Hello Gregory: I'm doing some performance comparisons between different combinations of environments, so I have to try such an old version. Thanks for your kind help! The solution you provided does work! I think I was relying on ceph-disk too much, so I didn't notice this.

[ceph-users] Starting Ceph OSD

2014-08-20 Thread Pons
Hi All, We see two of our OSDs reported as down by the ceph osd tree command. We tried starting them using the following commands, but ceph osd tree still reports them as down. Please see below for the commands used. command: sudo start ceph-osd id=osd.0 output: ceph-osd (ceph/osd.0)
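
One thing worth checking, assuming an Ubuntu/upstart install as the "start" command suggests: the upstart job typically expects the bare numeric id rather than the osd.N form. A minimal sketch:

    # Upstart form: pass the numeric id, not "osd.0"
    sudo start ceph-osd id=0
    # Sysvinit form, for comparison
    sudo /etc/init.d/ceph start osd.0
    # Then confirm the daemon is running and re-check the tree
    ceph osd tree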

Re: [ceph-users] RadosGW problems

2014-08-20 Thread Marco Garcês
Hello, Yehuda, I know I was using the correct fastcgi module; it was the one from the Ceph repositories. I had also disabled all other modules in Apache. I tried to create a second Swift user using the provided instructions, only to get the following: # radosgw-admin user create --uid=marcogarces
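
For comparison, the usual radosgw-admin sequence for adding a Swift subuser (the display name below is illustrative, and the actual error output is truncated above):

    # Create the RGW user
    radosgw-admin user create --uid=marcogarces --display-name="Marco Garces"
    # Add a Swift subuser and generate its secret key
    radosgw-admin subuser create --uid=marcogarces --subuser=marcogarces:swift --access=full
    radosgw-admin key create --subuser=marcogarces:swift --key-type=swift --gen-secret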

[ceph-users] Serious performance problems with small file writes

2014-08-20 Thread Hugo Mills
We have a ceph system here, and we're seeing performance regularly descend into unusability for periods of minutes at a time (or longer). This appears to be triggered by writing large numbers of small files. Specifications: ceph 0.80.5; 6 machines running 3 OSDs each (one 4 TB rotational HD

Re: [ceph-users] Serious performance problems with small file writes

2014-08-20 Thread Dan Van Der Ster
Hi, Do you get slow requests during the slowness incidents? What about monitor elections? Are your MDSs using a lot of CPU? Did you try tuning anything in the MDS? (I think the default config is still conservative, and there are options to cache more entries, etc…) What about iostat on the OSDs
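
As an illustration of the kind of MDS tuning Dan mentions (the value below is only an example, not a recommendation; the default in this era was around 100,000 inodes):

    [mds]
        mds cache size = 500000    # number of inodes/dentries the MDS will cache

After editing ceph.conf, the MDS needs a restart for the setting to take effect.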

Re: [ceph-users] Serious performance problems with small file writes

2014-08-20 Thread Dan Van Der Ster
Hi, On 20 Aug 2014, at 16:55, German Anders gand...@despegar.com wrote: Hi Dan, How are you? I want to know: how did you disable indexing on the /var/lib/ceph OSDs? # grep ceph /etc/updatedb.conf PRUNEPATHS = /afs /media /net /sfs /tmp /udev /var/cache/ccache
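
For completeness, the change being described is just appending the OSD data directory to the PRUNEPATHS line in /etc/updatedb.conf so that updatedb/mlocate never walks the filestores (the ellipsis stands for the existing entries, which are truncated in the quote above; some distributions quote the value):

    # /etc/updatedb.conf -- append the OSD data directory to the existing list
    PRUNEPATHS = "... /var/lib/ceph"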

Re: [ceph-users] Serious performance problems with small file writes

2014-08-20 Thread Hugo Mills
Hi, Dan, Some questions below I can't answer immediately, but I'll spend tomorrow morning irritating people by triggering these events (I think I have a reproducer -- unpacking a 1.2 GiB tarball with 25 small files in it) and giving you more details. For the ones I can answer right now:

Re: [ceph-users] mds isn't working anymore after osd's running full

2014-08-20 Thread Gregory Farnum
After restarting your MDS, it still says it has epoch 1832 and needs epoch 1833? I think you didn't really restart it. If the epoch numbers have changed, can you restart it with debug mds = 20, debug objecter = 20, debug ms = 1 in the ceph.conf and post the resulting log file somewhere? -Greg
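
For reference, the debug settings Greg lists would go in the [mds] section of ceph.conf on the MDS host, after which the daemon is restarted and the resulting log (typically /var/log/ceph/ceph-mds.*.log) collected:

    [mds]
        debug mds = 20
        debug objecter = 20
        debug ms = 1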

[ceph-users] Best Practice to Copy/Move Data Across Clusters

2014-08-20 Thread Larry Liu
Hi guys, Has anyone done copying/moving data between clusters? If yes, what are the best practices for you? Thanks

Re: [ceph-users] Best Practice to Copy/Move Data Across Clusters

2014-08-20 Thread Brian Rak
We do it with rbd volumes. We're using rbd export/import and netcat to transfer them across clusters. This was the most efficient solution that did not require one cluster to have access to the other cluster (though it does require some way of starting the process on the different machines).
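
As a rough sketch of that approach (pool/image names, host, and port are illustrative; netcat flags vary between variants):

    # On the destination cluster: listen and feed the stream into rbd import
    nc -l 4444 | rbd import - mypool/myimage
    # On the source cluster: export the image to stdout and pipe it over the network
    rbd export mypool/myimage - | nc dest-host 4444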

Re: [ceph-users] Translating a RadosGW object name into a filename on disk

2014-08-20 Thread Craig Lewis
Looks like I need to upgrade to Firefly to get ceph-kvstore-tool before I can proceed. I am getting some hits just from grepping the LevelDB store, but so far nothing has panned out. Thanks for the help! On Tue, Aug 19, 2014 at 10:27 AM, Gregory Farnum g...@inktank.com wrote: It's been a while

Re: [ceph-users] Translating a RadosGW object name into a filename on disk

2014-08-20 Thread Sage Weil
On Wed, 20 Aug 2014, Craig Lewis wrote: Looks like I need to upgrade to Firefly to get ceph-kvstore-tool before I can proceed. I am getting some hits just from grepping the LevelDB store, but so far nothing has panned out. FWIW if you just need the tool, you can wget the .deb and 'dpkg -x
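
Along the lines Sage suggests, something like the following (the package name and URL are illustrative and would need checking against the actual repository; ceph-kvstore-tool is assumed to ship in the ceph-test package):

    # Download the Firefly package that contains the tool and unpack it locally,
    # without installing it or touching the running cluster
    wget http://ceph.com/debian-firefly/pool/main/c/ceph/ceph-test_0.80.5-1trusty_amd64.deb
    dpkg -x ceph-test_0.80.5-1trusty_amd64.deb ./extracted
    ls ./extracted/usr/bin/ceph-kvstore-tool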

Re: [ceph-users] Serious performance problems with small file writes

2014-08-20 Thread Andrei Mikhailovsky
Hugo, I would look at setting up a cache pool made of 4-6 SSDs to start with. So, if you have 6 OSD servers, stick at least one SSD disk in each server for the cache pool. It should greatly reduce the OSDs' stress from writing a large number of small files. Your cluster should become more
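
For orientation, the Firefly cache-tiering commands involved look roughly like this (pool names and PG counts are illustrative, and steering the cache pool onto the SSDs additionally requires a suitable CRUSH rule, which is not shown here):

    # Create the pool that will act as the cache tier
    ceph osd pool create cache-pool 128 128
    # Attach it in front of the existing data pool in writeback mode
    ceph osd tier add data cache-pool
    ceph osd tier cache-mode cache-pool writeback
    ceph osd tier set-overlay data cache-pool
    # In practice, flushing/eviction limits such as target_max_bytes also need to be set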