Re: [ceph-users] ceph-deploy issues rhel6

2013-06-18 Thread Gary Lowell
Hi Derek - If you are still having problems with ceph-deploy, please forward the ceph.log file to me and I can start trying to figure out what's gone wrong. Thanks, Gary On Jun 12, 2013, at 7:09 PM, Derek Yarnell de...@umiacs.umd.edu wrote: Hi, I am trying to run ceph-deploy on a very

Re: [ceph-users] Need help with Ceph error

2013-06-18 Thread Sreejith Keeriyattil
Hi

    root@xtream:~# service ceph start
    === mds.a ===
    Starting Ceph mds.a on xtream...already running
    === osd.0 ===
    Mounting xfs on xtream:/var/lib/ceph/osd/ceph-0
    2013-06-18 04:26:16.373075

Re: [ceph-users] Recommended versions of Qemu/KVM to run Ceph Cuttlefish

2013-06-18 Thread Jens Kristian Søgaard
Hi Alex, What versions of Qemu are recommended for this? I would go with version 1.4.2 (I don't know what the official recommendation is). which is the implementation of using asynchronous flushing in Qemu. That's only in 1.4.3 and 1.5 if I use the upstream As far as I know, it is in

Re: [ceph-users] ceph iscsi questions

2013-06-18 Thread Leen Besselink
On Tue, Jun 18, 2013 at 09:52:53AM +0200, Kurt Bauer wrote: Hi, Da Chun schrieb: Hi List, I want to deploy a ceph cluster with the latest cuttlefish, and export it with an iscsi interface to my applications. Some questions here: 1. Which Linux distro and release would you recommend? I

Re: [ceph-users] ceph iscsi questions

2013-06-18 Thread Da Chun
Thanks for sharing! Kurt. Yes. I have read the article you mentioned. But I also read another one: http://www.hastexo.com/resources/hints-and-kinks/turning-ceph-rbd-images-san-storage-devices. It uses LIO, which is the current standard Linux kernel SCSI target. There is another doc in the

Re: [ceph-users] ceph iscsi questions

2013-06-18 Thread Leen Besselink
On Tue, Jun 18, 2013 at 11:13:15AM +0200, Leen Besselink wrote: On Tue, Jun 18, 2013 at 09:52:53AM +0200, Kurt Bauer wrote: Hi, Da Chun schrieb: Hi List, I want to deploy a ceph cluster with the latest cuttlefish, and export it with an iscsi interface to my applications. Some

Re: [ceph-users] Python APIs

2013-06-18 Thread Giuseppe 'Gippa' Paterno'
Hi John, apologies for the late reply. The librados seems quite interesting ... Actually no. I'll write up an API doc for you soon. sudo apt-get install python-ceph import rados I wonder if I can make python calls to interact with the object store (say: cephfs.open() mkdir() ) directly
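
For reference, a minimal sketch of what the python-rados binding shipped in python-ceph looks like in practice. The pool name "data" and the ceph.conf path are assumptions for illustration, not something taken from this thread; calls like cephfs.open() or mkdir() would go through the separate libcephfs binding rather than rados.

    import rados

    # Connect using the cluster configuration file and the default client keyring.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Open an I/O context on an existing pool and do a simple write/read.
    ioctx = cluster.open_ioctx('data')
    ioctx.write_full('hello_object', b'hello from librados')
    print(ioctx.read('hello_object'))

    ioctx.close()
    cluster.shutdown()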

[ceph-users] Single Cluster / Reduced Failure Domains

2013-06-18 Thread harri
Hi, I wondered what best practice is recommended for reducing failure domains for a virtual server platform. If I wanted to run multiple virtual server clusters then would it be feasible to serve storage from 1 x large Ceph cluster? I am concerned that, in the unlikely event the whole Ceph

Re: [ceph-users] Another osd is filled too full and taken off after manually taking one osd out

2013-06-18 Thread Leen Besselink
On Tue, Jun 18, 2013 at 08:13:39PM +0800, Da Chun wrote: Hi List, My ceph cluster has two osds on each node. One has 15g capacity, and the other 10g. It's interesting that, after I took the 15g osd out of the cluster, the cluster started to rebalance, and finally the 10g osd on the same node
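
As a rough back-of-the-envelope sketch (not Ceph code, and the data volume is invented for illustration): if the CRUSH weights are not proportional to capacity, taking out the larger OSD pushes its whole share onto the smaller one, which is how a 10g OSD can end up too full.

    # Hypothetical figures: two OSDs on one node (15G and 10G) holding 12G of data.
    DATA_GB = 12.0

    def expected_use(weights, data_gb=DATA_GB):
        """Approximate share of data each OSD receives, proportional to its weight."""
        total = sum(weights.values())
        return {osd: round(data_gb * w / total, 2) for osd, w in weights.items()}

    print(expected_use({'osd.0': 1.0, 'osd.1': 1.0}))      # equal weights: ~6G each
    print(expected_use({'osd.0': 0.0, 'osd.1': 1.0}))      # osd.0 taken out: all 12G land on the 10G disk
    print(expected_use({'osd.0': 0.015, 'osd.1': 0.010}))  # weights derived from capacity (15G/10G expressed in TB)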

Re: [ceph-users] ceph iscsi questions

2013-06-18 Thread Kurt Bauer
Da Chun schrieb: Thanks for sharing! Kurt. Yes. I have read the article you mentioned. But I also read another one: http://www.hastexo.com/resources/hints-and-kinks/turning-ceph-rbd-images-san-storage-devices. It uses LIO, which is the current standard Linux kernel SCSI target. That

Re: [ceph-users] How to remove /var/lib/ceph/osd/ceph-2?

2013-06-18 Thread Da Chun
Thanks! Craig. umount works. About the time skew, I saw in the log that the time difference should be less than 50ms. I set up one of my nodes as the time server, and the others sync the time with it. I don't know why the system time still changes frequently especially after reboot. Maybe it's
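
If it helps to confirm the sync is actually holding, here is a small sketch that compares a node's clock offset against the chosen time server with the ~50ms monitor limit mentioned above. It uses the third-party ntplib package and a made-up hostname, both assumptions rather than anything from this thread.

    import ntplib  # third-party package: pip install ntplib

    TIME_SERVER = 'node1.example.com'  # hypothetical: the node acting as time server
    DRIFT_LIMIT = 0.050                # seconds; roughly the 50ms skew the monitors complain about

    response = ntplib.NTPClient().request(TIME_SERVER, version=3)
    print('offset from %s: %.3f s' % (TIME_SERVER, response.offset))
    if abs(response.offset) > DRIFT_LIMIT:
        print('clock skew exceeds the monitor drift limit; check ntpd/ntpdate on this node')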

Re: [ceph-users] ceph iscsi questions

2013-06-18 Thread Leen Besselink
On Tue, Jun 18, 2013 at 02:38:19PM +0200, Kurt Bauer wrote: Da Chun schrieb: Thanks for sharing! Kurt. Yes. I have read the article you mentioned. But I also read another one: http://www.hastexo.com/resources/hints-and-kinks/turning-ceph-rbd-images-san-storage-devices. It uses

Re: [ceph-users] Backporting the kernel client

2013-06-18 Thread Travis Rhoden
I built the 3.10-rc rbd module for a 3.8 kernel yesterday, and only had one thing to add (I know I'm reviving an old thread). There is one folder missing from the original list of files to use: include/linux/crush/*. That would bring everything to: include/keys/ceph-type.h, include/linux/ceph/*

Re: [ceph-users] ceph-deploy issues rhel6

2013-06-18 Thread Sage Weil
Derek - Please also try the latest ceph-deploy and cuttlefish branches, which fixed several issues with el6 distros. 'git pull' for the latest ceph-deploy (clone from github and ./bootstrap if you were using the package) and install with ./ceph-deploy install --dev=cuttlefish

Re: [ceph-users] More data corruption issues with RBD (Ceph 0.61.2)

2013-06-18 Thread Guido Winkelmann
Am Dienstag, 18. Juni 2013, 07:58:50 schrieb Sage Weil: On Tue, 18 Jun 2013, Guido Winkelmann wrote: Am Donnerstag, 13. Juni 2013, 01:58:08 schrieb Josh Durgin: Which filesystem are the OSDs using? BTRFS Which kernel version? There was a recent bug (fixed in 3.9 or 3.8) that

Re: [ceph-users] More data corruption issues with RBD (Ceph 0.61.2)

2013-06-18 Thread Mike Lowe
I think the bug Sage is talking about was fixed in 3.8.0 On Jun 18, 2013, at 11:38 AM, Guido Winkelmann guido-c...@thisisnotatest.de wrote: Am Dienstag, 18. Juni 2013, 07:58:50 schrieb Sage Weil: On Tue, 18 Jun 2013, Guido Winkelmann wrote: Am Donnerstag, 13. Juni 2013, 01:58:08 schrieb Josh

Re: [ceph-users] Single Cluster / Reduced Failure Domains

2013-06-18 Thread Gregory Farnum
On Tuesday, June 18, 2013, harri wrote: Hi, I wondered what best practice is recommended for reducing failure domains for a virtual server platform. If I wanted to run multiple virtual server clusters then would it be feasible to serve storage from 1 x large Ceph cluster?

Re: [ceph-users] Single Cluster / Reduced Failure Domains

2013-06-18 Thread Leen Besselink
On Tue, Jun 18, 2013 at 09:02:12AM -0700, Gregory Farnum wrote: On Tuesday, June 18, 2013, harri wrote: Hi, I wondered what best practice is recommended for reducing failure domains for a virtual server platform. If I wanted to run multiple virtual server clusters

Re: [ceph-users] ceph-deploy issues rhel6

2013-06-18 Thread Sage Weil
Hi Derek, Are you sure the package is installed on the target? (Did you ceph-deploy install hostname) It is probably caused by /var/lib/ceph/mon not existing? sage On Tue, 18 Jun 2013, Derek Yarnell wrote: If you are still having problems with ceph-deploy, please forward the

[ceph-users] New User Q: General config, massive temporary OSD loss

2013-06-18 Thread Edward Huyer
Hi, I'm an admin for the School of Interactive Games and Media at RIT, and looking into using ceph to reorganize/consolidate the storage my department is using. I've read a lot of documentation and comments/discussion on the web, but I'm not 100% sure what I'm looking at doing is a good use of

[ceph-users] Repository Mirroring

2013-06-18 Thread Joe Ryner
I would like to make a local mirror or your yum repositories. Do you support any of the standard methods of syncing aka rsync? Thanks, Joe -- Joe Ryner Center for the Application of Information Technologies (CAIT) Production Coordinator P: (309) 298-1804 F: (309) 298-2806

Re: [ceph-users] Single Cluster / Reduced Failure Domains

2013-06-18 Thread harri
Thanks Greg, The concern I have is an all eggs in one basket approach to storage design. Is it feasible, however unlikely, that a single Ceph cluster could be brought down (obviously yes)? And what if you wanted to operate different storage networks? It feels right to build virtual

Re: [ceph-users] ceph-deploy issues rhel6

2013-06-18 Thread Derek Yarnell
On 6/18/13 10:29 AM, Sage Weil wrote: Derek - Please also try the latest ceph-deploy and cuttlefish branches, which fixed several issues with el6 distros. 'git pull' for the latest ceph-deploy (clone from github and ./bootstrap if you were using the package) and install with

Re: [ceph-users] New User Q: General config, massive temporary OSD loss

2013-06-18 Thread Gregory Farnum
On Tue, Jun 18, 2013 at 10:34 AM, Edward Huyer erh...@rit.edu wrote: Hi, I’m an admin for the School of Interactive Games and Media at RIT, and looking into using ceph to reorganize/consolidate the storage my department is using. I’ve read a lot of documentation and comments/discussion on the

Re: [ceph-users] Single Cluster / Reduced Failure Domains

2013-06-18 Thread harri
Spell check fail, that of course should have read CRUSH map. Sent from Samsung Mobile Original message From: harri ha...@madshark.co.uk Date: 18/06/2013 19:21 (GMT+00:00) To: Gregory Farnum g...@inktank.com Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Single

Re: [ceph-users] Single Cluster / Reduced Failure Domains

2013-06-18 Thread Gregory Farnum
On Tue, Jun 18, 2013 at 11:21 AM, harri ha...@madshark.co.uk wrote: Thanks Greg, The concern I have is an all eggs in one basket approach to storage design. Is it feasible, however unlikely, that a single Ceph cluster could be brought down (obviously yes)? And what if you wanted to operate

Re: [ceph-users] ceph-deploy issues rhel6

2013-06-18 Thread Derek Yarnell
Hi, So the first error below is that /var/run/ceph isn't created when installing the ceph RPM(s). This is because of line 440 in ceph.spec.in using the %ghost directive[1] for the file install. My reading of the behavior is that the file, or directory in this case, will be included in the

Re: [ceph-users] New User Q: General config, massive temporary OSD loss

2013-06-18 Thread Gregory Farnum
[ Please stay on the list. :) ] On Tue, Jun 18, 2013 at 12:54 PM, Edward Huyer erh...@rit.edu wrote: First questions: Are there obvious flaws or concerns with the following configuration I should be aware of? Does it even make sense to try to use ceph here? Anything else I should know,

Re: [ceph-users] New User Q: General config, massive temporary OSD loss

2013-06-18 Thread Edward Huyer
[ Please stay on the list. :) ] Doh. Was trying to get Outlook to quote properly, and forgot to hit Reply-all. :) The specifics of what data will migrate where will depend on how you've set up your CRUSH map, when you're updating the CRUSH locations, etc, but if you move an OSD then it

[ceph-users] ceph-deploy problems on weird /dev device names?

2013-06-18 Thread Sage Weil
I remember seeing a few reports of problems from users with strange block device names in /dev (sdaa*, c0d1p2* etc.) and have a bug open (http://tracker.ceph.com/issues/5345), but looking at the code I don't immediately see the problem, and I don't have any machines that have this problem.
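
For what it's worth, this is not the ceph-deploy code, just a sketch of why those names are awkward to split into a base device and a partition number: cciss/ida-style names use a 'p' separator and contain digits in the base name, so a whole-disk name like c0d1 still looks like "device plus partition 1" to a naive letters-then-digits pattern.

    import re

    def split_partition(dev):
        # cciss/ida style: /dev/cciss/c0d1p2 -> ('/dev/cciss/c0d1', 2)
        m = re.match(r'^(.*c\d+d\d+)p(\d+)$', dev)
        if m:
            return m.group(1), int(m.group(2))
        # sd-style, including two-letter devices: /dev/sdaa1 -> ('/dev/sdaa', 1)
        m = re.match(r'^(.*?[a-z])(\d+)$', dev)
        if m:
            return m.group(1), int(m.group(2))
        return dev, None

    print(split_partition('/dev/sdaa1'))         # ('/dev/sdaa', 1)
    print(split_partition('/dev/cciss/c0d1p2'))  # ('/dev/cciss/c0d1', 2)
    print(split_partition('/dev/cciss/c0d1'))    # ('/dev/cciss/c0d', 1) -- whole disk misread as a partition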

Re: [ceph-users] ceph-deploy issues rhel6

2013-06-18 Thread Sage Weil
On Tue, 18 Jun 2013, Derek Yarnell wrote: Hi, So the first error below is that /var/run/ceph isn't created when installing the ceph RPM(s). This is because of line 440 in ceph.spec.in using the %ghost directive[1] for the file install. My reading of the behavior is that the file

Re: [ceph-users] ceph-deploy problems on weird /dev device names?

2013-06-18 Thread Cameron Bahar
We solved this problem at ParaScale by enabling users to enter any fancy device names in the device discovery logic so that HP servers like the DL185 which use older Compaq RAID controllers would work. This is common. Best, Cameron -- On Tue, Jun 18, 2013 at 1:43 PM, Sage Weil s...@inktank.com

Re: [ceph-users] ceph-users Digest, Vol 4, Issue 24

2013-06-18 Thread yangpengtao.slyt
Running /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway gives the error:

    2013-06-19 09:19:55.148536 7f120aa0d820 0 librados: client.radosgw.gateway authentication error (95) Operation not supported
    2013-06-19 09:19:55.148923 7f120aa0d820 -1 Couldn't init storage provider (RADOS)

How
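
One way to narrow this down (a sketch, and the keyring path /etc/ceph/keyring.radosgw.gateway is an assumption, not something from this message) is to try the same identity through python-rados. If this also fails with the authentication error, the problem is the key, its caps, or the cephx settings rather than radosgw itself.

    import rados

    # Same identity radosgw runs as (client.radosgw.gateway) with an assumed keyring path.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='radosgw.gateway',
                          conf=dict(keyring='/etc/ceph/keyring.radosgw.gateway'))
    try:
        cluster.connect()
    except rados.Error as err:
        print('librados-level authentication failure: %s' % err)
    else:
        print('authenticated ok; pools: %s' % cluster.list_pools())
        cluster.shutdown()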

[ceph-users] mount error 12

2013-06-18 Thread Luke Jing Yuan
Dear all, I am trying to mount cephfs to 2 different mount points (each should have their respective pools and keys). While the first mount works (after using set_layout to get it to the right pool), the second attempt failed with mount error 12 = Cannot allocate memory. Did I miss some steps

Re: [ceph-users] ceph-deploy issues rhel6

2013-06-18 Thread Derek Yarnell
On 6/18/13 5:31 PM, Sage Weil wrote:
1) Remove the %ghost directive and allow RPM to install the directory. Potentially leaving orphaned pid/state files after the package is removed.
2) Or the directory needs to be created in the %post section. If it is created in the %post section and the

Re: [ceph-users] ceph-deploy issues rhel6

2013-06-18 Thread Sage Weil
On Wed, 19 Jun 2013, Derek Yarnell wrote: On 6/18/13 5:31 PM, Sage Weil wrote: 1) Remove the %ghost directive and allow RPM to install the directory. Potentially leaving orphaned pid/state files after the package is removed. 2) Or the directory needs to be created in the %post section.