Hello,
Ceph still fails to compile when I add the Kinetic support option.
Could you have a look at the log and tell me what's missing?
--
Best regards,
Julien
On 12/02/2014 09:53 AM, Julien Lutran wrote:
It's ok for KeyValueDB.cc now, but I have another problem with
src/os/KineticStore.h :
Hi
Why does the command 'rbd list' get stuck when executed on the monitor? Any
hint would be appreciated!
Backtrace:
[810bfdee] futex_wait_queue_me+0xde/0x140
[810c0969] futex_wait+0x179/0x280
[810c297e] do_futex+0xfe/0x5e0
[810c2ee0] SyS_futex+0x80/0x180
[815f2119]
Hello Manoj
My answers to your queries.
# For testing purposes you can install Ceph on virtual machines (multiple
instances of VirtualBox for multiple MONs and OSDs). It's good to practice Ceph
with multiple MONs and OSDs.
# For real data storage, please use physical servers; virtual servers are
Hi,
Since Firefly, Ceph supports cache tiering.
Cache tiering: support for creating ‘cache pools’ that store hot, recently
accessed objects with automatic demotion of colder data to a base tier.
Typically the cache pool is backed by faster storage devices like SSDs.
I'm testing cache tiering,
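For reference, the "cache pool in front of a base pool" arrangement described above is wired up with the tier commands; a minimal sketch against a Firefly-era cluster, where cold-pool and hot-pool are hypothetical pool names:

```shell
# Attach the cache pool to the base pool and put it in writeback mode,
# so writes land on the (SSD-backed) hot pool and cold objects are demoted.
ceph osd tier add cold-pool hot-pool
ceph osd tier cache-mode hot-pool writeback
# Redirect client traffic for cold-pool through the cache tier.
ceph osd tier set-overlay cold-pool hot-pool
# Enable hit-set tracking so the tiering agent can tell hot from cold.
ceph osd pool set hot-pool hit_set_type bloom
```

These commands require a running cluster, so treat this as an outline rather than something to paste verbatim.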
On 12/17/2014 11:21 AM, John Spray wrote:
On Wed, Dec 17, 2014 at 2:07 AM, Kevin Shiah agan...@gmail.com wrote:
setfattr -n ceph.dir.layout.stripe_count -v 2 dir
And return:
setfattr: dir: Operation not supported
Works for me on master. What ceph version are you using?
I just tried
On 12/17/2014 12:35 PM, John Spray wrote:
On Wed, Dec 17, 2014 at 10:25 AM, Wido den Hollander w...@42on.com wrote:
I just tried something similar on Giant (0.87) and I saw this in the logs:
parse_layout_vxattr name layout.pool value 'cephfs_svo'
invalid data pool 3
reply request -22
I
Hello Loic,
Thanks for your help. I took a look at my crush map, replaced "step
chooseleaf indep 0 type osd" with "step choose indep 0 type osd", and all PGs
were created successfully.
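For reference, the working rule change looks roughly like this in the decompiled crush map; the rule name and ruleset number are placeholders, only the choose step is the point:

```
rule my_erasure_rule {
    ruleset 1
    type erasure
    min_size 3
    max_size 10
    step take default
    # "choose ... type osd" selects OSDs directly, instead of
    # "chooseleaf ... type osd" which descends through buckets.
    step choose indep 0 type osd
    step emit
}
```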
Regards.
Italo Santos
http://italosantos.com.br/
On Tuesday, December 16, 2014 at 8:39 PM, Loic Dachary
Both the fuse client and the kernel module fail to mount.
The MONs and MDSs are on two other nodes, so they are available when this node
is booting.
They can be mounted manually after boot.
my fstab:
idmin /mnt/cephfs fuse.ceph defaults,nonempty,_netdev 0 0
Hmm, from a quick google it appears you are not the only one who has
seen this symptom with mount.ceph. Our mtab code appears to have
diverged a bit from the upstream util-linux repo, so it seems entirely
possible we have a bug in ours somewhere. I've opened
http://tracker.ceph.com/issues/10351
Can you tell us more about how they fail? Error messages on console,
anything in syslog?
In the absence of other clues, you might want to try checking that the
network is coming up before ceph tries to mount.
John
On Wed, Dec 17, 2014 at 1:34 PM, Lindsay Mathieson
lindsay.mathie...@gmail.com
On Wed, 17 Dec 2014 02:02:52 PM John Spray wrote:
Can you tell us more about how they fail? Error messages on console,
anything in syslog?
Not quite sure what to look for, but I did a quick scan for ceph through dmesg
and syslog; nothing stood out.
In the absence of other clues, you might
Cache tiering is a stable, functioning system. Those particular commands
are for testing and development purposes, not something you should run
(although they ought to be safe).
-Greg
On Wed, Dec 17, 2014 at 1:44 AM Yujian Peng pengyujian5201...@126.com
wrote:
Hi,
Since firefly, ceph can
Dear All,
We have set up ceph and used it for about one year already.
Here is a summary of the setting. We used 3 servers to run the ceph.
cs02, cs03, cs04
Here is how we set up the ceph:
1. We created several OSDs on three of these servers, using a command like:
ceph-deploy osd create
Hey there,
Is there a good work around if our SSDs are not handling D_SYNC very well? We
invested a ton of money into Samsung 840 EVOS and they are not playing well
with D_SYNC. Would really appreciate the help!
Thank you,
Bryson
Hi,all
I found content below at
http://ceph.com/docs/master/rados/operations/crush-map :
step choose firstn {num} type {bucket-type}
Description: Selects the number of buckets of the given type. The
number is usually the number of replicas in the pool (i.e., pool size).
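As an illustration of that syntax, a replicated rule using `step choose firstn` might read as follows; the rule name and bucket names are hypothetical:

```
rule replicated_per_host {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    # Pick pool-size host buckets (0 means "as many as the pool size")...
    step choose firstn 0 type host
    # ...then one OSD inside each chosen host.
    step choose firstn 1 type osd
    step emit
}
```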
On Tue, Dec 16, 2014 at 6:19 AM, Cyan Cheng cheng.1...@gmail.com wrote:
Dear All,
We have set up ceph and used it for about one year already.
Here is a summary of the setting. We used 3 servers to run the ceph.
cs02, cs03, cs04
Here is how we set up the ceph:
1. We created several OSDs
Hi,
We have some problems with ceph-deploy install node
This is the error I get when I run the installation:
[mon01][INFO ] Running command: sudo rpm --import
https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
[mon01][INFO ] Running command: sudo rpm --import
Strange, when I visit https://ceph.com, I get a certificate that
doesn't expire until 10 February 2015.
Perhaps check the clock on your node isn't in the future?
John
On Wed, Dec 17, 2014 at 4:16 PM, Emilio emilio.mor...@adam.es wrote:
Hi,
We have some problems with ceph-deploy install node
Hi,
Thanks for the update: the good news is much appreciated :-) Would you have time
to review the documentation at https://github.com/ceph/ceph/pull/3194/files ?
It was partly motivated by the problem you had.
Cheers
On 17/12/2014 14:03, Italo Santos wrote:
Hello Loic,
Thanks for your help,
Yes, sorry, this server's clock was in the past!
Thx!
On 17/12/14 17:40, John Spray wrote:
Strange, when I visit https://ceph.com, I get a certificate that
doesn't expire until 10 February 2015.
Perhaps check the clock on your node isn't in the future?
John
On Wed, Dec 17, 2014 at 4:16 PM, Emilio
Hello,
I’ve taken a look at this documentation (which helps a lot) and, if I
understand correctly, when I set a profile like:
===
ceph osd erasure-code-profile set isilon k=8 m=2 ruleset-failure-domain=host
===
And create a pool following the recommendations in the doc, I’ll need (100*16)/2 =
800 PGs,
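The rule of thumb behind that figure, (OSDs × 100) / pool size, rounded up to the next power of two, can be sketched as follows; the function name is mine, and whether the divisor should be the replica count or k+m for an erasure-coded pool is a judgment call (the calculation above uses 2):

```python
def pg_count(num_osds: int, pool_size: int) -> int:
    """Rule-of-thumb placement-group count: (OSDs * 100) / pool size,
    rounded up to the next power of two."""
    target = (num_osds * 100) // pool_size
    power = 1
    while power < target:
        power *= 2
    return power

# The (100*16)/2 = 800 case above rounds up to 1024 PGs.
print(pg_count(16, 2))  # → 1024
```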
On 17/12/2014 18:18, Italo Santos wrote:
Hello,
I’ve taken a look at this documentation (which helps a lot) and, if I
understand correctly, when I set a profile like:
===
ceph osd erasure-code-profile set isilon k=8 m=2 ruleset-failure-domain=host
===
And create a pool following the
Loic,
So, if I want to have a failure domain by host, I’ll need to set up an erasure
profile where k+m = the total number of hosts I have, right?
Regards.
Italo Santos
http://italosantos.com.br/
On Wednesday, December 17, 2014 at 3:24 PM, Loic Dachary wrote:
On 17/12/2014 18:18, Italo Santos
On 17/12/2014 19:22, Italo Santos wrote:
Loic,
So, if I want to have a failure domain by host, I’ll need to set up an erasure
profile where k+m = the total number of hosts I have, right?
Yes, k+m has to be = number of hosts.
Regards.
*Italo Santos*
http://italosantos.com.br/
On Wednesday,
Understood.
Thanks for your help, the cluster is healthy now :D
Also, using for example k=6,m=1 and a failure domain by host, I’ll be able to
lose all OSDs on the same host, but if I lose 2 disks on different hosts I can
lose data, right? So, is it possible to have a failure domain which allows me to lose
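The failure arithmetic in that question can be sketched numerically (the function name is mine): an erasure-coded pool with k data and m coding chunks needs any k of its k+m chunks to reconstruct, so it survives the loss of at most m chunks. With a per-host failure domain each host holds one chunk, so m=1 survives one whole host but not two disks in two different hosts:

```python
def data_survives(k: int, m: int, chunks_lost: int) -> bool:
    """An EC pool stores k + m chunks and can reconstruct from any k of
    them, so it tolerates the loss of at most m chunks."""
    return chunks_lost <= m

# k=6, m=1, failure domain = host: one chunk per host.
print(data_survives(6, 1, 1))  # losing every OSD on one host = one chunk lost → True
print(data_survives(6, 1, 2))  # two disks on two different hosts = two chunks lost → False
```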
I am trying to set up a small VM ceph cluster to exercise with before creating a
real cluster. Currently there are two OSDs on the same host. I wanted to create
an erasure coded pool with k=1 and m=1 (yes, I know it's stupid, but it is a test
case). On top of it there is a cache tier (writeback) and I
Hi Max,
On 17/12/2014 20:57, Max Power wrote:
I am trying to set up a small VM ceph cluster to exercise with before creating
a real
cluster. Currently there are two OSDs on the same host. I wanted to create an
erasure coded pool with k=1 and m=1 (yes, I know it's stupid, but it is a test
case).
On 17/12/2014 19:46, Italo Santos wrote: Understood.
Thanks for your help, the cluster is healthy now :D
Also, using for example k=6,m=1 and a failure domain by host, I’ll be able to
lose all OSDs on the same host, but if I lose 2 disks on different hosts I can
lose data, right? So, it is
I have a somewhat interesting scenario. I have an RBD of 17TB formatted using
XFS. I would like it accessible from two different hosts, one mapped/mounted
read-only, and one mapped/mounted as read-write. Both are shared using Samba
4.x. One Samba server gives read-only access to the world
On Wed, Dec 17, 2014 at 2:31 PM, McNamara, Bradley
bradley.mcnam...@seattle.gov wrote:
I have a somewhat interesting scenario. I have an RBD of 17TB formatted
using XFS. I would like it accessible from two different hosts, one
mapped/mounted read-only, and one mapped/mounted as read-write.
On 12/17/2014 03:49 PM, Gregory Farnum wrote:
On Wed, Dec 17, 2014 at 2:31 PM, McNamara, Bradley
bradley.mcnam...@seattle.gov wrote:
I have a somewhat interesting scenario. I have an RBD of 17TB formatted
using XFS. I would like it accessible from two different hosts, one
mapped/mounted
Hi John,
I am using 0.56.1. Could it be because data striping is not supported in
this version?
Kevin
On Wed Dec 17 2014 at 4:00:15 AM PST Wido den Hollander w...@42on.com
wrote:
On 12/17/2014 12:35 PM, John Spray wrote:
On Wed, Dec 17, 2014 at 10:25 AM, Wido den Hollander w...@42on.com
Hello,
On Tue, 16 Dec 2014 08:58:23 -0700 Bryson McCutcheon wrote:
Hey there,
Is there a good work around if our SSDs are not handling D_SYNC very
well? We invested a ton of money into Samsung 840 EVOS and they are not
playing well with D_SYNC. Would really appreciate the help!
Barring
On Wednesday, December 17, 2014, Josh Durgin josh.dur...@inktank.com
wrote:
On 12/17/2014 03:49 PM, Gregory Farnum wrote:
On Wed, Dec 17, 2014 at 2:31 PM, McNamara, Bradley
bradley.mcnam...@seattle.gov wrote:
I have a somewhat interesting scenario. I have an RBD of 17TB formatted
using
On 12/17/2014 02:58 AM, Bryson McCutcheon wrote:
Is there a good work around if our SSDs are not handling D_SYNC very
well? We invested a ton of money into Samsung 840 EVOS and they are
not playing well with D_SYNC. Would really appreciate the help!
Just in case it's linked with the recent
I've been experimenting with CephFS for running KVM images (proxmox).
cephfs fuse version - 0.87
cephfs kernel module - kernel version 3.10
Part of my testing involves running a Windows 7 VM up and running
CrystalDiskMark to check the I/O in the VM. It's surprisingly good with
both the fuse and
Hi Mikaël,
I have EVOs too; what do you mean by not playing well with D_SYNC?
Is there something I can test on my side to compare results with you,
as I have mine flashed?
http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
described how
what do you mean by not playing well with D_SYNC?
Hi, check this blog:
http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
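The test in that post boils down to a small synchronous write benchmark with dd; a minimal sketch (the blog also adds oflag=direct, which some filesystems reject, and the filename here is arbitrary):

```shell
# Write 4 KiB blocks with O_DSYNC so every write must be flushed to the
# device before the next one starts; a journal-worthy SSD should sustain
# thousands of these per second, a poor one only a few hundred.
dd if=/dev/zero of=ssd-test.bin bs=4k count=1000 oflag=dsync
```

Run it on a file that actually lives on the SSD under test, and compare the MB/s figure dd reports against the numbers in the blog post.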
- Mail original -
De: Mikaël Cluseau mclus...@isi.nc
À: Bryson McCutcheon brysonmccutch...@gmail.com,
Looking at the blog, I notice he disabled the write cache before the
tests: doing this on my m550 resulted in *improved* dsync results (from 300
IOPS to 700 IOPS). Still not great obviously, but... interesting.
So do experiment with the settings to see if you can get the 840s
working better for
The cluster state must be wrong, but how do I recover?
[root@node3 ceph-cluster]# ceph -w
cluster 1365f2dd-b86c-436c-a64f-3318a937f3c2
health HEALTH_WARN 64 pgs incomplete; 64 pgs stale; 64 pgs stuck
inactive; 64 pgs stuck stale; 64 pgs stuck unclean; 8 requests are blocked >
32 sec
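A first step for a state like this is usually to ask Ceph which PGs are stuck and which OSDs they map to; a sketch of the usual triage commands (they need to run against the live cluster):

```shell
ceph health detail          # lists the incomplete/stale PGs by id
ceph pg dump_stuck stale    # show PGs stuck stale
ceph pg dump_stuck inactive # show PGs stuck inactive
ceph osd tree               # check whether the OSDs hosting them are up/in
# Then query one of the listed PGs to see why it is blocked:
# ceph pg <pgid> query
```

Stale + incomplete usually points at OSDs that went down and never came back; `ceph osd tree` will show whether the acting OSDs for those PGs are actually running.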