Object Store.
On Thu, Mar 28, 2013 at 11:51 AM, Sebastien Han
sebastien@enovance.com wrote:
Hi,
It depends: what do you want to use from Ceph? Object store? Block device?
Distributed filesystem?
Sébastien Han
Cloud Engineer
Always give 100%. Unless you're giving blood.
On Wed, 27 Mar 2013, Matthieu Patou wrote:
On 03/27/2013 10:41 AM, Marco Aroldi wrote:
Hi list,
I'm trying to create an active/active Samba cluster on top of CephFS.
I would like to ask if Ceph fully supports CTDB at this time.
If I'm not wrong, Ceph (even CephFS) does not support exporting a block
On 03/28/2013 07:41 AM, Sage Weil wrote:
On Wed, 27 Mar 2013, Matthieu Patou wrote:
On 03/27/2013 10:41 AM, Marco Aroldi wrote:
Hi list,
I'm trying to create an active/active Samba cluster on top of CephFS.
I would like to ask if Ceph fully supports CTDB at this time.
If I'm not wrong, Ceph (even
Thanks for the answer,
I haven't yet looked at the samba.git clone, sorry. I will.
Just a quick report on my test environment:
* cephfs mounted with kernel driver re-exported from 2 samba nodes
* If node B goes down, everything works like a charm: node A does
IP takeover and brings up the node
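For anyone reproducing this, a minimal sketch of that setup; the monitor
address, secret file, and mount point below are assumptions, not Marco's
actual values:

  # mount CephFS with the kernel client on each Samba node
  mount -t ceph 192.168.0.10:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret
  # then point the Samba share at /mnt/cephfs on both nodes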
Not sure how much of a difference it makes at this point, but I also
tend to use -i size=2048.
Well, while running through the Ceph and XFS ML, I came across those
options several times.
Looks like you guys are using AGPL V3. I don't actually know too much
about that license other than that it's
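For context on the -i size=2048 remark above: it is an mkfs.xfs flag that
enlarges inodes so XFS can keep more xattrs inline, which helps Ceph OSDs.
A minimal sketch, with the device name being an assumption:

  # format an OSD data disk with 2 KB inodes; /dev/sdb1 is hypothetical
  mkfs.xfs -f -i size=2048 /dev/sdb1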
Hi,
I get the same behavior on a newly created cluster as well, no changes to
the cluster config at all.
I stop osd.1; after 20 seconds it gets marked down, but it never gets
marked out.
ceph version 0.59 (cbae6a435c62899f857775f66659de052fb0e759)
-martin
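For reference (I can't say whether it is the cause here): the down-to-out
transition is controlled by the monitor option 'mon osd down out interval'
(default 300 seconds), so a down OSD should normally be marked out after
about five minutes. A sketch of setting it explicitly in ceph.conf; the
value is just an example:

  [mon]
      mon osd down out interval = 300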
On 28.03.2013 19:48, John Wilkins wrote:
Hi John,
my ceph.conf is a bit further down in this email.
-martin
On 28.03.2013 23:21, John Wilkins wrote:
Martin,
Would you mind posting your Ceph configuration file too? I don't see
any value set for mon_host:
On Thu, Mar 28, 2013 at 1:04 PM, Martin Mailand mar...@tuxadero.com wrote:
Hi John,
I made the changes and restarted the cluster; nothing changed.
ceph --admin-daemon /var/run/ceph/ceph-osd.2.asok config show|grep mon_host
mon_host: ,
-martin
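For comparison, mon_host is normally filled in from ceph.conf; a minimal
sketch of the relevant section, with placeholder addresses and names
rather than Martin's actual values:

  [global]
      mon host = 192.168.0.1,192.168.0.2,192.168.0.3
      mon initial members = a,b,c

An empty 'mon_host: ,' like the output above suggests the daemon found no
monitor addresses in its config.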
On 28.03.2013 23:45, John Wilkins wrote:
Martin,
I'm just speculating: since I just rewrote the networking section and
I have a large bucket (about a million objects) and it takes a few days
to delete it. Watching ceph -w, I only see 8 to 30 op/s. What's going on?
Thanks.
The command:
radosgw-admin bucket rm --bucket=testbucket --purge-objects
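For scale: at the observed 8 to 30 op/s, deleting a million objects works
out to roughly 1,000,000 / 30 ≈ 9 hours at best and 1,000,000 / 8 ≈ 35
hours at worst, before any other overhead, so "a few days" is in the
expected range if the gateway is removing objects essentially one at a
time.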
On Thu, 28 Mar 2013, ronnie sahlberg wrote:
Disable the recovery lock file from ctdb completely.
And disable fcntl locking from samba.
To be blunt, unless your cluster filesystem is called GPFS,
locking is probably completely broken and should be avoided.
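If you want to follow that advice, a sketch under the assumption of a
Debian-style layout (the paths below are not from this thread):

  # /etc/default/ctdb -- leave CTDB_RECOVERY_LOCK unset to disable the
  # recovery lock file entirely
  # CTDB_RECOVERY_LOCK=/mnt/cephfs/ctdb/.recovery.lock

  # smb.conf -- stop Samba from mapping client locks to fcntl() locks
  [global]
      posix locking = no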
Ha!
On Thu, Mar 28, 2013 at 8:46
On Thu, Mar 28, 2013 at 6:09 PM, Sage Weil s...@inktank.com wrote:
On Thu, 28 Mar 2013, ronnie sahlberg wrote:
Disable the recovery lock file from ctdb completely.
And disable fcntl locking from samba.
To be blunt, unless your cluster filesystem is called GPFS,
locking is probably completely
The ctdb package comes with a tool, ping_pong, that is used to test
and exercise fcntl() locking (a usage sketch follows below).
I think a good test is using this tool and then randomly powercycling
nodes in your fs cluster,
making sure that
1. fcntl() locking is still coherent and correct
2. it always recovers within 20 seconds for
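A usage sketch for that test, assuming a three-node cluster and a file on
the shared filesystem (the path is hypothetical): run it on every node
with the lock count set to the number of nodes plus one, then powercycle
nodes and watch the locks/second rate it reports:

  # on each of the 3 nodes
  ping_pong /mnt/cephfs/pingpong.dat 4

If the rate stays steady and no node reports locking errors, fcntl()
locking is behaving coherently.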
This is pretty cool, Sébastien.
On 03/28/2013 02:34 AM, Sebastien Han wrote:
Hello everybody,
Quite recently, François Charlier and I worked together on the Puppet
modules for Ceph on behalf of our employer, eNovance. In fact, François
started to work on them last summer; back then he achieved