We want to standardize the locations for ceph data directories, configs,
etc. We'd also like to allow a single host to run OSDs that participate
in multiple ceph clusters. We'd like names that are easy to deal with (i.e., avoid UUIDs if we can).
The metavariables are:
cluster = ceph (by default)
I feel it's up to the sysadmin to mount / symlink the correct storage devices
on the correct paths - ceph should not be concerned that some volumes might
need to sit together.
Rgds,
Bernard
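For illustration, a minimal sketch of what that admin-side work might look like, assuming the /var/lib/ceph/$type/$id layout discussed later in the thread; the device names and paths here are hypothetical:

    # hypothetical devices; the sysadmin decides what storage lands where
    mkfs.xfs /dev/sdb1
    mkdir -p /var/lib/ceph/osd/0
    mount /dev/sdb1 /var/lib/ceph/osd/0
    # or keep the data elsewhere and symlink it into the standard location
    ln -s /srv/fastdisk/osd.1 /var/lib/ceph/osd/1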
On 05 Apr 2012, at 09:12, Andrey Korolyov wrote:
Right, but probably we need journal separation at the
In the ceph case, such layout breakage may be necessary in almost all installations (except testing), compared to almost all general-purpose server software, which needs division like that only in very specific setups.
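As a rough sketch of the kind of journal separation being discussed, a per-OSD ceph.conf entry of this era might look like the following; the device path is hypothetical and the exact option names should be checked against the running version:

    [osd.0]
            osd data = /var/lib/ceph/osd/0
            ; assumption: journal on a dedicated SSD partition
            osd journal = /dev/ssd0p1
            osd journal size = 1000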
On Thu, Apr 5, 2012 at 11:28 AM, Bernard Grymonpon bern...@openminds.be wrote:
I
On 04/05/2012 08:18 AM, Sage Weil wrote:
Should we make cephx auth the default? Currently it is still 'none', but
this code is 2 years old now and I haven't seen an auth-related bug in at
least half a year.
I'd say yes. I haven't seen any cephx related issues either.
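For reference, a minimal sketch of turning cephx on in ceph.conf as it worked around this time, assuming the old single "auth supported" option (later versions split this into several auth_* options):

    [global]
            auth supported = cephx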
I don't see any reason
I assume most OSD nodes will normally run a single OSD, so this would not apply
to most nodes.
Only in specific cases (where multiple OSDs run on a single node) would this come up, and these specific cases might even require the journals to be split over multiple devices (multiple ssd-disks
On 04/05/2012 10:38 AM, Bernard Grymonpon wrote:
I assume most OSD nodes will normally run a single OSD, so this would not apply
to most nodes.
Only in specific cases (where multiple OSDs run on a single node) would this come up, and these specific cases might even require the
On 05 Apr 2012, at 14:34, Wido den Hollander wrote:
On 04/05/2012 10:38 AM, Bernard Grymonpon wrote:
I assume most OSD nodes will normally run a single OSD, so this would not
apply to most nodes.
Only in specific cases (where multiple OSDs run on a single node) would this come up, and
Hi,
On 04/04/2012 08:26 PM, Borodin Vladimir wrote:
Hello Wido,
Yesterday I built from git:
$ ceph -v
ceph version 0.44.1-149-ge80126e (commit:e80126ea689e9a972fbf09e8848fc4a2ade13c59)
That commit should contain the new heartbeat code.
The messages are a bit different but the problem
On 23/11/2011 16:13, Sage Weil wrote:
On Wed, 23 Nov 2011, Alexandre Oliva wrote:
On Nov 22, 2011, Sage Weil s...@newdream.net wrote:
On Tue, 22 Nov 2011, Christian Brunner wrote:
- compression: I'm using lzo compression right now, as my CPUs in the OSD nodes were idle most of the time
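The excerpt does not say how the lzo compression is applied; one common way to get it underneath an OSD is btrfs's per-mount compression option, sketched here purely as an example (device and mount point are hypothetical):

    # assumption: btrfs-backed OSD data directory mounted with lzo compression
    mount -o compress=lzo /dev/sdb1 /var/lib/ceph/osd/0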
Hi folks,
I created a pool without adding any OSDs to it in the crushmap. As expected, all PGs remain in the creating state since no OSDs take care of them.
However, I do expect that such a pool should still be removable. Currently, removing such a pool hangs forever, as the PG removal has to wait
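A rough sketch of the situation being described, using a hypothetical pool name and the CLI syntax of this era (exact arguments may differ by version):

    ceph osd pool create emptypool 8    # no OSDs mapped by CRUSH, PGs stay in 'creating'
    ceph osd pool delete emptypool      # hangs, waiting on PG removal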
A recent change made the rbd_client_list be protected by a spinlock. Unfortunately, in rbd_put_client() the lock is taken before possibly dropping the last reference to an rbd_client, and dropping the last reference eventually calls flush_workqueue(), which can sleep.
The problem was
Yes, I'm still running XFS with mdraid under the OSDs. Do you
recommend another configuration?
Right now I have:
$ ceph osd dump -o - | grep osd
dumped osdmap epoch 2311
max_osd 51
osd.47 down in weight 1 up_from 2284 up_thru 2301 down_at 2310 last_clean_interval [1782,2276)
On Thu, 5 Apr 2012, Yann Dupont wrote:
On 23/11/2011 16:13, Sage Weil wrote:
On Wed, 23 Nov 2011, Alexandre Oliva wrote:
On Nov 22, 2011, Sage Weil s...@newdream.net wrote:
On Tue, 22 Nov 2011, Christian Brunner wrote:
- compression: I'm using lzo compression right now,
On Thu, 5 Apr 2012, Bernard Grymonpon wrote:
On 05 Apr 2012, at 14:34, Wido den Hollander wrote:
On 04/05/2012 10:38 AM, Bernard Grymonpon wrote:
I assume most OSD nodes will normally run a single OSD, so this would not
apply to most nodes.
Only in specific cases (where multiple
On Thu, 5 Apr 2012, Henry C Chang wrote:
Hi folks,
I created a pool without adding any OSDs to it in the crushmap. As expected, all PGs remain in the creating state since no OSDs take care of them.
However, I do expect that such a pool should still be removable. Currently, removing such a
On 05/04/2012 17:11, Sage Weil wrote:
Yes, it's safe. We fall back to a manual copy if we see EINVAL from the
ioctl.
sage
Great, thanks for the answer.
cheers,
--
Yann Dupont - Service IRTS, DSI Université de Nantes
Tel : 02.53.48.49.20 - Mail/Jabber : yann.dup...@univ-nantes.fr
--
On Thu, 5 Apr 2012, Bernard Grymonpon wrote:
On 05 Apr 2012, at 17:17, Sage Weil wrote:
On Thu, 5 Apr 2012, Bernard Grymonpon wrote:
On 05 Apr 2012, at 14:34, Wido den Hollander wrote:
On 04/05/2012 10:38 AM, Bernard Grymonpon wrote:
I assume most OSD nodes will normally run a
hi all
I am new to ceph and am trying to give it a spin for testing. On both the master and stable branches, the configure script gives the following WARNING (CentOS 6):
config.status: executing depfiles commands
config.status: executing libtool commands
=== configuring in src/leveldb
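For context, the typical from-git build sequence in this era looked roughly like the following; this is an assumption based on the tree's autotools setup, not something taken from this message:

    git clone git://github.com/ceph/ceph.git
    cd ceph
    ./autogen.sh
    ./configure
    make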
On Thu, Apr 5, 2012 at 10:58 AM, Feiyi Wang fwa...@gmail.com wrote:
hi all
I am new to ceph and am trying to give it a spin for testing. On both the master and stable branches, the configure script gives the following WARNING (CentOS 6):
config.status: executing depfiles commands
config.status: executing
On Thu, 5 Apr 2012, Alex Elder wrote:
A recent change made the rbd_client_list be protected by a spinlock. Unfortunately, in rbd_put_client() the lock is taken before possibly dropping the last reference to an rbd_client, and dropping the last reference eventually calls
On Thu, 5 Apr 2012, Tommi Virtanen wrote:
As I think it is a very specific scenario where a machine would be participating in multiple Ceph clusters, I'd vote for:
/var/lib/ceph/$type/$id
I really want to avoid having two different cases, two different code
paths to test, a more rare
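Spelled out, the single-code-path layout being argued for would look something like this on disk; the IDs shown are hypothetical:

    /var/lib/ceph/mon/a
    /var/lib/ceph/osd/0
    /var/lib/ceph/osd/1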