Hi Markus,
On 24/03/2015 14:47, Markus Goldberg wrote:
Hi,
this is ceph version 0.93
I can't create an image in an rbd-erasure-pool:
root@bd-0:~#
root@bd-0:~# ceph osd pool create bs3.rep 4096 4096 replicated
pool 'bs3.rep' created
root@bd-0:~# rbd create --size 1000 --pool bs3.rep test
On Tue, Mar 24, 2015 at 12:13 AM, Christian Balzer ch...@gol.com wrote:
On Tue, 24 Mar 2015 09:41:04 +0300 Kamil Kuramshin wrote:
Yes, I read it, and I do not understand what you mean when you say *verify
this*? All 3,335,808 inodes are definitely files and directories created by the
ceph OSD process:
I cannot reproduce the snapshot issue with BTRFS on the 3.17 kernel. My
test cluster had 48 BTRFS OSDs for four months without an issue
since going to 3.17. The only concern I have is potential slowness over
time. We are not using compression. We are going to production in one month
and
Hello,
On Tue, 24 Mar 2015 07:43:00 + Rottmann Jonas wrote:
Hi,
First of all, thank you for your detailed answer.
My Ceph version is Hammer, sorry, I should have mentioned that.
Yes, we have 2 Intel 320s for the OS; the thought process behind this was
that the OS disk is not that
On Tue, 24 Mar 2015 07:24:05 -0600 Robert LeBlanc wrote:
I cannot reproduce the snapshot issue with BTRFS on the 3.17 kernel.
Good to know.
I shall give that a spin on one of my test cluster nodes then, once a
kernel over 3.16 actually shows up in Debian sid. ^o^
Christian
My
test cluster
Hi,
this is ceph version 0.93
I can't create an image in an rbd-erasure-pool:
root@bd-0:~#
root@bd-0:~# ceph osd pool create bs3.rep 4096 4096 replicated
pool 'bs3.rep' created
root@bd-0:~# rbd create --size 1000 --pool bs3.rep test
root@bd-0:~#
root@bd-0:~# ceph osd pool create bs3.era 4096
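(A note in case it helps: as far as I know, RBD images cannot be created directly in an erasure-coded pool in this release, since RBD needs omap and partial overwrites; the usual workaround is a replicated cache tier in front of the EC pool. A rough, untested sketch, with pool names and PG counts only as placeholders:

ceph osd pool create bs3.era 4096 4096 erasure
ceph osd pool create bs3.cache 1024 1024 replicated
ceph osd tier add bs3.era bs3.cache
ceph osd tier cache-mode bs3.cache writeback
ceph osd tier set-overlay bs3.era bs3.cache
rbd create --size 1000 --pool bs3.era test

With the overlay in place, writes to the EC pool go through the replicated cache tier, which is what makes RBD work on top of it.)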
- Original Message -
From: Greg Meier greg.me...@nyriad.com
To: ceph-users@lists.ceph.com
Sent: Tuesday, March 24, 2015 4:24:16 PM
Subject: [ceph-users] Auth URL not found when using object gateway
Hi,
I'm having trouble setting up an object gateway on an existing cluster. The
Hi,
I'm having trouble setting up an object gateway on an existing cluster. The
cluster I'm trying to add the gateway to is running on an Ubuntu 12.04 (Precise)
virtual machine.
The cluster is up and running, with a monitor, two OSDs, and a metadata
server. It returns HEALTH_OK and active+clean, so I am
Hi,
is there any way to use ceph-deploy with lvm ?
Stefan
Excuse my typo. Sent from my mobile phone.
- Original Message -
Hi Markus,
On 24/03/2015 14:47, Markus Goldberg wrote:
Hi,
this is ceph version 0.93
I can't create an image in an rbd-erasure-pool:
root@bd-0:~#
root@bd-0:~# ceph osd pool create bs3.rep 4096 4096 replicated
pool 'bs3.rep' created
root@bd-0:~# rbd
Hi,
Sreenath BH wrote:
consider following values for a pool:
Size = 3
OSDs = 400
%Data = 100
Target PGs per OSD = 200 (This is default)
The PG calculator generates number of PGs for this pool as : 32768.
Questions:
1. The Ceph documentation recommends around 100 PGs/OSD, whereas
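For reference, the arithmetic behind that number, as I understand the PG calculator's formula, is roughly:

# total PGs = (OSDs * target PGs per OSD * %data) / size, rounded up to the next power of two
# (400 * 200 * 1.00) / 3 = 26666.7  ->  next power of two = 32768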
Is there an enumerated list of issues with snapshots on cache pools?
We currently have snapshots on a cache tier and haven't seen any
issues (development cluster). I just want to know what we should be
looking for.
On Tue, Mar 24, 2015 at 9:21 AM, Stéphane DUGRAVOT
I'm not sure why crushtool --test --simulate doesn't match what the
cluster actually does, but the cluster seems to be executing the rules
even though crushtool doesn't. Just kind of stinks that you have to
test the rules on actual data.
Should I create a ticket for this?
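For anyone else following along, the offline check I mean looks roughly like this (the rule number, replica count, and input range are just placeholders for your own setup):

ceph osd getcrushmap -o crushmap.bin
crushtool -i crushmap.bin --test --show-mappings --rule 1 --num-rep 3 --min-x 0 --max-x 9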
On Mon, Mar 23, 2015 at
Hi Loic and Markus,
By the way, Inktank does not support snapshots of a pool with cache tiering:
* https://download.inktank.com/docs/ICE%201.2%20-%20Cache%20and%20Erasure%20Coding%20FAQ.pdf
Hi,
You seem to be talking about pool snapshots rather than RBD snapshots. But in
the linked
http://tracker.ceph.com/issues/11224
On Tue, Mar 24, 2015 at 12:11 PM, Gregory Farnum g...@gregs42.com wrote:
On Tue, Mar 24, 2015 at 10:48 AM, Robert LeBlanc rob...@leblancnet.us wrote:
I'm not sure why crushtool --test --simulate doesn't match what the
cluster actually does, but the cluster
On Tue, Mar 24, 2015 at 12:09 PM, Brendan Moloney molo...@ohsu.edu wrote:
Hi Loic and Markus,
By the way, Inktank does not support snapshots of a pool with cache tiering:
* https://download.inktank.com/docs/ICE%201.2%20-%20Cache%20and%20Erasure%20Coding%20FAQ.pdf
Hi,
You seem to be
Yes, I read it, and I do not understand what you mean when you say *verify this*?
All 3,335,808 inodes are definitely files and directories created by the ceph
OSD process:
tune2fs 1.42.5 (29-Jul-2012)
Filesystem volume name: <none>
Last mounted on: /var/lib/ceph/tmp/mnt.05NAJ3
Filesystem UUID:
Hi,
dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1073741824 bytes (1.1 GB) copied, 2.53986 s, 423 MB/s
How much do you get with O_DSYNC? (The Ceph journal uses O_DSYNC, and some SSDs are
pretty slow with dsync.)
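Something along these lines is what I have in mind (the output file is just a placeholder; with oflag=direct,dsync every write has to hit the disk, so expect much lower numbers than the fdatasync run above):

dd if=/dev/zero of=tempfile bs=4k count=10000 oflag=direct,dsync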
On Tue, 24 Mar 2015 09:41:04 +0300 Kamil Kuramshin wrote:
Yes, I read it, and I do not understand what you mean when you say *verify
this*? All 3,335,808 inodes are definitely files and directories created by
the ceph OSD process:
What I mean is how/why did Ceph create 3+ million files, where in the tree
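Concretely, something like the following is what I would look at (the OSD mount point is only an example; a filestore OSD keeps its objects under current/):

df -i /var/lib/ceph/osd/ceph-0
find /var/lib/ceph/osd/ceph-0/current -type f | wc -l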
This was excellent advice. It should be on some official Ceph
troubleshooting page. It takes a while for the monitors to deal with new
info, but it works.
Thanks again!
--Greg
On Wed, Mar 18, 2015 at 5:24 PM, Sage Weil s...@newdream.net wrote:
On Wed, 18 Mar 2015, Greg Chavez wrote:
We have
On Tue, Mar 24, 2015 at 10:48 AM, Robert LeBlanc rob...@leblancnet.us wrote:
I'm not sure why crushtool --test --simulate doesn't match what the
cluster actually does, but the cluster seems to be executing the rules
even though crushtool doesn't. Just kind of stinks that you have to
test the
Hi Experts,
After initially setting up Ceph with 3 OSDs, I am now facing an issue:
the cluster reports healthy but sometimes (or often) fails to access the pools,
while at other times access comes back to normal automatically.
For example:
[ceph@gcloudcon ceph-cluster]$ rados -p volumes ls
No, 262144 ops total in 18 seconds.
Oh ok ;)
rbd bench-write is clearly doing something VERY differently from rados
bench (and given its output was also written by somebody else), maybe some
Ceph dev can enlighten us?
Maybe rbd_cache is merging 4k blocks into 4M RADOS objects?
does
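For comparison, the two invocations I have in mind are roughly these (pool and image names are placeholders):

rados -p rbd bench 30 write -b 4096 -t 16
rbd bench-write test --io-size 4096 --io-threads 16 --io-total 1073741824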
On Tue, 24 Mar 2015 08:36:40 +0100 (CET) Alexandre DERUMIER wrote:
No, 262144 ops total in 18 seconds.
Oh ok ;)
rbd bench-write is clearly doing something VERY differently from
rados bench (and given its output was also written by somebody
else), maybe some Ceph dev can enlighten us?
On Tue, 24 Mar 2015 07:56:33 +0100 (CET) Alexandre DERUMIER wrote:
Hi,
dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1073741824 bytes (1.1 GB) copied, 2.53986 s, 423 MB/s
How much do you get with O_DSYNC? (The Ceph journal uses O_DSYNC, and some
SSDs are pretty slow
Hi,
Yeah, my problem is the performance with o_direct and o_dsync.
I guess you mixed something up in the rbd bench-write results:
elapsed: 18 ops: 262144 ops/sec: 14466.30 bytes/sec: 59253946.11
Means
elapsed: 18
ops: 262144
ops/sec: 14466.30
bytes/sec: 59253946.11
which makes ~4 KB per IO
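Spelling out the arithmetic: 59253946.11 bytes/s / 14466.30 ops/s ≈ 4096 bytes per op, so the bench really is issuing ~4 KB writes.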