Re: [ceph-users] The project of ceph client file system porting from Linux to AIX

2015-03-04 Thread McNamara, Bradley
I'd like to see a Solaris client.

[ceph-users] Double-mounting of RBD

2014-12-17 Thread McNamara, Bradley
I have a somewhat interesting scenario. I have an RBD of 17TB formatted using XFS. I would like it accessible from two different hosts, one mapped/mounted read-only, and one mapped/mounted as read-write. Both are shared using Samba 4.x. One Samba server gives read-only access to the world
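A rough sketch of what the two sides might look like (pool, image, and mount points are hypothetical); note that XFS is not a clustered file system, so the read-only host can see stale or inconsistent data unless writes are quiesced:
    # read-write host
    rbd map rbd/share17tb
    mount /dev/rbd0 /srv/share
    # read-only host: map read-only and skip XFS log recovery
    rbd map --read-only rbd/share17tb
    mount -o ro,norecovery,nouuid /dev/rbd0 /srv/share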

Re: [ceph-users] work with share disk

2014-10-31 Thread McNamara, Bradley
CephFS, yes, but it's not considered production-ready. You can also use an RBD volume and place OCFS2 on it and share it that way, too.
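A minimal sketch of the RBD-plus-OCFS2 route (pool, image, and size are made up, and a working OCFS2/o2cb cluster across the nodes is assumed):
    rbd create shared-disk --pool rbd --size 102400   # 100 GB; size is in MB
    rbd map rbd/shared-disk                           # on every node that mounts it
    mkfs.ocfs2 -N 4 -L shared /dev/rbd0               # run once; -N = max node slots
    mount -t ocfs2 /dev/rbd0 /mnt/shared              # on each node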

Re: [ceph-users] Upgraded now MDS won't start

2014-09-11 Thread McNamara, Bradley
…and data pools to eliminate the HEALTH_WARN issue.

[ceph-users] Upgraded now MDS won't start

2014-09-10 Thread McNamara, Bradley
Hello. This is my first real issue in several months of running Ceph. Here's the situation: I've been running an Emperor cluster and all was good, but since I'm on Ubuntu 13.10 and Ceph 0.72.2, I decided it was time to upgrade. I decided to first upgrade Ceph to 0.80.4, which was the

Re: [ceph-users] osd pool default pg num problem

2014-05-23 Thread McNamara, Bradley
The other thing to note is that it appears you're trying to decrease the pg_num/pgp_num parameters, which is not supported. In order to decrease those settings, you'll need to delete and recreate the pools. All new pools created will use the settings defined in the ceph.conf file.
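A hedged sketch of the delete-and-recreate route (pool name and PG counts are invented; deleting a pool destroys its data, so copy anything important off first):
    ceph osd pool delete mypool mypool --yes-i-really-really-mean-it
    ceph osd pool create mypool 256 256        # pg_num and pgp_num for the new pool
    # or set the defaults picked up by new pools in ceph.conf:
    # [global]
    # osd pool default pg num = 256
    # osd pool default pgp num = 256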

Re: [ceph-users] NFS over CEPH - best practice

2014-05-12 Thread McNamara, Bradley
The underlying file system on the RBD needs to be a clustered file system, like OCFS2 or GFS2, and a cluster needs to be created between the two (or more) iSCSI target servers to manage the clustered file system.

Re: [ceph-users] CEPH placement groups and pool sizes

2014-05-12 Thread McNamara, Bradley
The formula was designed to be used on a per-pool basis. That said, when looking at the number of PGs from a system-wide perspective, one does not want too many total PGs. So it's a balancing act, and it has been suggested that it's better to have slightly more PGs than you
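The rule of thumb usually quoted, shown purely as an illustration (numbers invented):
    # per-pool target ≈ (number of OSDs × 100) / replica count, rounded up to a power of two
    # e.g. 30 OSDs, size 3:  30 × 100 / 3 = 1000  →  1024 PGs
    # then sanity-check the sum of PGs across all pools so the per-OSD total stays reasonable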

Re: [ceph-users] cluster_network ignored

2014-04-24 Thread McNamara, Bradley
Do you have all of the cluster IPs defined in the hosts file on each OSD server? As I understand it, the mons do not use a cluster network, only the OSD servers.
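A minimal ceph.conf sketch of the split (subnets are hypothetical):
    [global]
        public network  = 192.168.10.0/24   # mons, clients, and OSD front side
        cluster network = 192.168.20.0/24   # OSD replication/heartbeat traffic only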

Re: [ceph-users] RBD Cloning

2014-04-24 Thread McNamara, Bradley
I believe any kernel greater than 3.9 supports format 2 RBDs. I'm sure someone will correct me if this is a misstatement. Brad
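Creating a format 2 image looks roughly like this (pool and image names are hypothetical):
    rbd create --image-format 2 --size 10240 rbd/myimage   # size in MB
    rbd info rbd/myimage                                    # 'format: 2' confirms it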

Re: [ceph-users] Live database files on Ceph

2014-04-04 Thread McNamara, Bradley
Take a look at Proxmox VE. It has full support for Ceph, is supported, and uses KVM/QEMU.

[ceph-users] PG Calculations

2014-03-13 Thread McNamara, Bradley
There was a very recent thread discussing PG calculations, and it made me doubt my cluster setup. So, Inktank, please provide some clarification. I followed the documentation and interpreted it to mean that the PG and PGP calculation is done on a per-pool basis. The
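For context, a quick way to inspect and adjust the per-pool values (pool name and counts are made up; pg_num can only be raised, never lowered):
    ceph osd pool get data pg_num
    ceph osd pool set data pg_num 512
    ceph osd pool set data pgp_num 512   # pgp_num follows pg_num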

Re: [ceph-users] mon servers

2014-03-06 Thread McNamara, Bradley
I'm confused... The bug tracker says this was resolved ten days ago. Also, I actually used ceph-deploy on 2/12/2014 to add two monitors to my cluster, and it worked; the documentation also says it can be done. However, I believe that I added the new mons to the ceph.conf in the
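For what it's worth, a sketch of the ceph-deploy steps (hostnames hypothetical; older ceph-deploy releases spell this 'mon create' rather than 'mon add'):
    ceph-deploy mon add mon2
    ceph-deploy mon add mon3
    ceph quorum_status --format json-pretty   # confirm the new mons joined quorum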

[ceph-users] Crush Maps

2014-02-06 Thread McNamara, Bradley
I have a test cluster that is up and running. It consists of three mons and three OSD servers, with each OSD server having eight OSDs and two SSDs for journals. I'd like to move from the flat CRUSH map to a CRUSH map with typical depth using most of the predefined types. I have the current
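The usual decompile/edit/recompile cycle, sketched with made-up file names:
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt    # decompile to editable text
    # edit crushmap.txt: add host/rack/row buckets and adjust the rules
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new         # expect data movement afterwards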

Re: [ceph-users] Performance issues running vmfs on top of Ceph

2014-02-04 Thread McNamara, Bradley
Just for clarity, since I didn't see it explained: how are you accessing Ceph from ESXi? Is it via iSCSI or NFS? Thanks. Brad McNamara

Re: [ceph-users] PG's and Pools

2014-01-29 Thread McNamara, Bradley

[ceph-users] PG's and Pools

2014-01-28 Thread McNamara, Bradley
I finally have my first test cluster up and running. No data on it yet. The config is: three mons and three OSD servers. Each OSD server has eight 4 TB SAS drives and two SSD journal drives. The cluster is healthy, so I started playing with PG and PGP values. By the provided
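Applying the usual rule of thumb to this layout (3 servers × 8 OSDs = 24 OSDs, assuming a replica count of 3 and using the default rbd pool as the example):
    # 24 × 100 / 3 = 800, rounded up to the next power of two = 1024
    ceph osd pool set rbd pg_num 1024
    ceph osd pool set rbd pgp_num 1024   # raise pgp_num after pg_num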

Re: [ceph-users] OCFS2 or GFS2 for cluster filesystem?

2013-07-11 Thread McNamara, Bradley
Correct me if I'm wrong (I'm new to this), but I think the distinction between the two methods is that 'qemu-img create -f rbd' creates an RBD for either a VM to boot from or for mounting within a VM, whereas the OP wants a single RBD, formatted with a cluster file system, to use as a
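The qemu-img form referred to above looks roughly like this (pool and image names are hypothetical):
    qemu-img create -f rbd rbd:mypool/vm-disk 10G   # an RBD image intended for one guest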

Re: [ceph-users] Mounting a shared block device on multiple hosts

2013-05-29 Thread McNamara, Bradley
Instead of using ext4 for the file system, you need to use a clustered file system on the RBD device.
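For example, with GFS2 (cluster and file system names are hypothetical, and a working corosync/dlm stack is assumed) the RBD would be formatted like this instead of with mkfs.ext4:
    mkfs.gfs2 -p lock_dlm -t mycluster:rbdfs -j 2 /dev/rbd0   # -j = one journal per node
    mount -t gfs2 /dev/rbd0 /mnt/shared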

Re: [ceph-users] Might Be Spam -RE: Mounting a shared block device on multiple hosts

2013-05-29 Thread McNamara, Bradley
Hello Bradley, please excuse my ignorance; I am new to Ceph, and what I thought was a good understanding of file