Michael, indeed I have pool size = 3. I changed it to 2. After that I
recompiled the crush map to reflect the different hard drive sizes, setting the
weight to 1.0 for the 1 TB drive and 0.75 for the 750 GB one.
Now all my PGs are at status active. They should be „active+clean“, shouldn't
they?
I put an object into the
Hi Alexandre,
http://www.lataverneducroissant.fr/ turned out to be a nice place for the Ceph
meetups. It's also free of charge ... as long as people drink. You just have to
be careful to choose a non-football-event night, otherwise the video projector
is not available. And it's too noisy to discuss anything.
Hello List,
The other day when I looked at our Ceph cluster it showed:
health HEALTH_ERR 135 pgs inconsistent; 1 pgs recovering;
recovery 76/4633296 objects degraded (0.002%); 169 scrub errors; clock
skew detected on mon.mon2-nb8
I did a
ceph pg dump | grep -i incons | cut -f 1 | while
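Presumably the truncated loop fed each inconsistent PG into a repair; a sketch
of the usual completion (an assumption, not the original command):
$ ceph pg dump | grep -i incons | cut -f 1 | while read pg; do ceph pg repair $pg; done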
Hi,
I'm playing around with CephFS; everything works fine except creating snapshots:
# mkdir .snap/test
mkdir: cannot create directory `.snap/test': Operation not permitted
Client Kernel version:
3.14
Ceph Cluster version:
0.80.1
I tried it on 2 different clients, both Debian, one with
On Thu, Jun 5, 2014 at 10:38 PM, Igor Krstic igor.z.krs...@gmail.com wrote:
Hello,
dmesg:
[ 690.181780] libceph: mon1 192.168.214.102:6789 feature set mismatch, my
4a042a42 server's 504a042a42, missing 50
[ 690.181907] libceph: mon1 192.168.214.102:6789 socket error on read
[
Hi Sage,
I use rados_stat() in a FUSE module; it blocks all the time. Thank you for
helping me.
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
On 06/06/2014 10:13 AM, Leo Chen wrote:
Hi Sage,
I use rados_stat() in a FUSE module; it blocks all the time.
Is the cluster healthy? Aka HEALTH_OK?
What if you try to stat the object this way:
$ rados -p pool stat obj
Wido
Thank you for helping me.
#define FUSE_USE_VERSION 26
Thank you very much for your reply.
$ rados -p metad stat 812d45e9f48f7372d908270616b2b06bfad44958
metad/812d45e9f48f7372d908270616b2b06bfad44958 mtime 1401948103, size 43
Using the rados command there is no problem, and a simple test program calling
rados_stat() also works; it only blocks when rados_stat() is invoked from the
FUSE code:
int
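For reference, a minimal standalone sketch of such a rados_stat() test (the
pool and object names are taken from the rados command above; the config path
and the terse error handling are assumptions):

/* build with: gcc stat_test.c -lrados */
#include <rados/librados.h>
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    uint64_t size;
    time_t mtime;

    /* connect as client.admin with the default config */
    if (rados_create(&cluster, NULL) < 0) return 1;
    if (rados_conf_read_file(cluster, "/etc/ceph/ceph.conf") < 0) return 1;
    if (rados_connect(cluster) < 0) return 1;
    if (rados_ioctx_create(cluster, "metad", &io) < 0) return 1;

    /* the call under discussion: fetch object size and mtime */
    if (rados_stat(io, "812d45e9f48f7372d908270616b2b06bfad44958",
                   &size, &mtime) == 0)
        printf("size %llu mtime %ld\n", (unsigned long long)size, (long)mtime);

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}

If this version works but the same call blocks inside the FUSE module, the
hang is more likely in how the module drives librados than in the cluster.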
Hi Vadim,
Is every pool also using your custom crush_ruleset (step chooseleaf firstn
0 type osd)?
Otherwise Ceph will use the default rule to replicate data on separate
hosts, which, in your case of a single host, cannot work.
You can check it with
ceph osd dump --format=json-pretty
And in
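In that dump each pool entry carries the ruleset it uses; the fragment to
check looks roughly like this (pool name and values are assumptions):
"pool": 0, "pool_name": "rbd", ..., "crush_ruleset": 0, ...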
http://www.lataverneducroissant.fr/ turned out to be a nice place for the
Ceph meetups. It's also free of charge ... as long as people drink. You just
have to be careful to choose a non-football-event night, otherwise the video
projector is not available. And it's too noisy to discuss anything.
2014-06-06 9:18 GMT+02:00 Benedikt Fraunhofer
given.to.lists.ceph-users.ceph.com.toasta@traced.net:
Hello List,
and it logs nothing in ceph -w when I issue
ceph pg repair 2.c1
instructing pg 2.c1 on osd.51 to repair
ceph pg repair 2.68
instructing pg 2.68 on osd.69 to repair
I have only one ruleset, number 0, and all pools use it. My crushmap is very
simple:
--
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable
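The map is cut off here; for a single-host setup like the one discussed above,
the rule at the end of such a map typically looks like this (a generic sketch,
not the poster's actual map):
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type osd
        step emit
}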
Hi all,
thanks for the replies and the heads-up on the different bonding options.
I'll toy around with them over the next few days; hopefully there's some
stable setup possible that provides HA and increased bandwidth together.
Cheers,
Sven
On 05.06.2014 21:36, Cedric Lemarchand wrote:
Yes,
On Fri, 2014-06-06 at 11:51 +0400, Ilya Dryomov wrote:
On Thu, Jun 5, 2014 at 10:38 PM, Igor Krstic igor.z.krs...@gmail.com wrote:
Hello,
dmesg:
[ 690.181780] libceph: mon1 192.168.214.102:6789 feature set mismatch, my
4a042a42 server's 504a042a42, missing 50
[ 690.181907]
With 4 NICs and 2 switches I'd do:
eth0/2 goes to sw1
eth1/3 goes to sw2
at least a 2-port trunk/LACP
active/standby bonding
bond interface for public traffic with eth0/1, active NIC is eth0 (so
public traffic goes through sw1 if all is up)
bond interface for OSD traffic with eth2/3, active NIC is
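For the public bond in that layout, a Debian /etc/network/interfaces sketch
might look like this (the address and option values are assumptions):
auto bond0
iface bond0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-primary eth0
    bond-miimon 100
The OSD bond would mirror this with eth2/eth3 as slaves.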
- Message from Igor Krstic igor.z.krs...@gmail.com -
Date: Fri, 06 Jun 2014 13:23:19 +0200
From: Igor Krstic igor.z.krs...@gmail.com
Subject: Re: [ceph-users] question about feature set mismatch
To: Ilya Dryomov ilya.dryo...@inktank.com
Cc: ceph-users@lists.ceph.com
Hi all,
I configured a Ceph cluster (firefly) on Ubuntu 12.04.
I also configured a CentOS 6.5 client with ceph-0.80.1-2.el6.x86_64
and kernel 3.14.2-1.el6.elrepo.x86_64.
On CentOS I am able to use rbd remote block devices, but if I try to map
them with rbdmap no links are generated.
Last week, before
Hi folks,
Thanks to Sage Weil's advice, I fixed my TMAP2OMAP problem by just
restarting the osds, but now I'm running into the following cephfs problem.
When I try to mount the filesystem, I get errors like the following:
libceph: mon0 192.168.1.31:6789 feature set mismatch, my
On Fri, Jun 6, 2014 at 4:34 PM, Kenneth Waegeman
kenneth.waege...@ugent.be wrote:
- Message from Igor Krstic igor.z.krs...@gmail.com -
Date: Fri, 06 Jun 2014 13:23:19 +0200
From: Igor Krstic igor.z.krs...@gmail.com
Subject: Re: [ceph-users] question about feature set
On Fri, Jun 6, 2014 at 5:35 PM, Bryan Wright bk...@virginia.edu wrote:
Hi folks,
Thanks to Sage Weil's advice, I fixed my TMAP2OMAP problem by just
restarting the osds, but now I'm running into the following cephfs problem.
When I try to mount the filesystem, I get errors like the
On Fri, Jun 6, 2014 at 4:47 PM, Ignazio Cassano
ignaziocass...@gmail.com wrote:
Hi all,
I configured a Ceph cluster (firefly) on Ubuntu 12.04.
I also configured a CentOS 6.5 client with ceph-0.80.1-2.el6.x86_64
and kernel 3.14.2-1.el6.elrepo.x86_64.
On CentOS I am able to use rbd remote block
I am sorry for my mistake:
service rbdmap restart
After the restart no links are created and rbd is duplicated.
2014-06-06 16:07 GMT+02:00 Ilya Dryomov ilya.dryo...@inktank.com:
On Fri, Jun 6, 2014 at 4:47 PM, Ignazio Cassano
ignaziocass...@gmail.com wrote:
Hi all,
I configured a ceph
On Fri, Jun 6, 2014 at 6:15 PM, Ignazio Cassano
ignaziocass...@gmail.com wrote:
Hi Ilya, no file 50-rbd.rules exists on my system.
My guess would be that the upgrade went south. In order for the symlinks to be
created that file should exist on the client (i.e. the system you run 'rbd map'
on).
Many thanks
2014-06-06 16:25 GMT+02:00 Ilya Dryomov ilya.dryo...@inktank.com:
On Fri, Jun 6, 2014 at 6:15 PM, Ignazio Cassano
ignaziocass...@gmail.com wrote:
Hi Ilya, no file 50-rbd.rules exists on my system.
My guess would be that the upgrade went south. In order for the symlinks
to be
Ignazio,
You are hitting http://tracker.ceph.com/issues/8533
We do not have a 0.80.1-2 package, you are getting that from EPEL.
Make sure that when installing Ceph it is not coming from EPEL, but
from our repos.
You can install the yum priorities plugin and then add them to the
repo sections
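A sketch of that setup (the priority values are assumptions; lower numbers
win with the priorities plugin):
$ sudo yum install yum-plugin-priorities
Then add priority=1 to each section of /etc/yum.repos.d/ceph.repo and
priority=10 to epel.repo, so the Ceph.com packages take precedence.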
Ilya Dryomov ilya.dryomov@... writes:
Unless you have been playing with 'osd primary-affinity' command, the
problem is probably that you have chooseleaf_vary_r tunable set in your
crushmap. This is a new tunable, it will be supported in 3.15. If you
disable it with
ceph osd getcrushmap
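The usual edit cycle for removing a tunable looks like this (the filenames
are assumptions):
$ ceph osd getcrushmap -o crush.bin
$ crushtool -d crush.bin -o crush.txt
(delete the chooseleaf_vary_r line in crush.txt)
$ crushtool -c crush.txt -o crush.new
$ ceph osd setcrushmap -i crush.new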
Snapshots are disabled by default; there's a command you can run to
enable them if you want, but the reason they're disabled is because
they're significantly more likely to break your filesystem than
anything else is!
ceph mds set allow_new_snaps true
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
Has anyone run into this issue and would like to provide any troubleshooting
tips?
Thanks,
Jimmy
From: J L j...@yahoo-inc.com
Date: Thursday, June 5, 2014 at 4:20 PM
To: ceph-users@lists.ceph.com
Hi,
On 05.06.2014 11:27, ale...@kurnosov.spb.ru wrote:
ceph 0.72.2 on SL6.5 from the official repo.
After taking one of the OSDs down (to then mark the server out), one of the
PGs became incomplete:
$ ceph health detail
HEALTH_WARN 1 pgs incomplete; 1 pgs stuck inactive; 1 pgs stuck unclean; 2 requests are
Vadim,
I think the issue is probably this: we've made many of the defaults more
realistic for what would actually get put into production. You are working
with a 1-node cluster, while our quick start guides now reflect a 3-node
cluster. That means your osd crush chooseleaf type is set
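The usual workaround for a 1-node test cluster is to tell CRUSH to separate
replicas across OSDs instead of hosts; a sketch of the ceph.conf setting,
which takes effect when the initial crush map is created:
[global]
osd crush chooseleaf type = 0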
Assistance really appreciated. This output says it all:
ceph@ceph-admin:~$ ceph-deploy osd activate ceph-4:/dev/sdb1
ceph-4:/dev/sdc1 ceph-4:/dev/sdd1
[ceph_deploy.conf][DEBUG ] found configuration file
at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.2):
I haven't used ceph-deploy to do this much, but I think you need to
prepare before you activate and it looks like you haven't done so.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
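For completeness, a sketch of the expected order with the devices from the
original post:
$ ceph-deploy osd prepare ceph-4:/dev/sdb1 ceph-4:/dev/sdc1 ceph-4:/dev/sdd1
$ ceph-deploy osd activate ceph-4:/dev/sdb1 ceph-4:/dev/sdc1 ceph-4:/dev/sdd1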
On Fri, Jun 6, 2014 at 3:54 PM, Jonathan Gowar j...@whiteheat.org.uk wrote:
Assistance really
On Fri, 2014-06-06 at 16:04 -0700, Gregory Farnum wrote:
I haven't used ceph-deploy to do this much, but I think you need to
prepare before you activate and it looks like you haven't done so.
Thanks, Greg. I did do the prepare, and it worked without a hitch :-\
Hey All,
Simple question: does 'rbd export-diff' work with children of a snapshot, i.e.:
root:~# rbd children images/03cb46f7-64ab-4f47-bd41-e01ced45f0b4@snap
compute/2b65c0b9-51c3-4ab1-bc3c-6b734cc796b8_disk
compute/54f3b23c-facf-4a23-9eaa-9d221ddb7208_disk
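For context, a sketch of the invocation being asked about (the snapshot name
and output file are assumptions):
$ rbd export-diff compute/2b65c0b9-51c3-4ab1-bc3c-6b734cc796b8_disk@snap1 child.diff
or incrementally, relative to an earlier snapshot:
$ rbd export-diff --from-snap snap0 compute/2b65c0b9-51c3-4ab1-bc3c-6b734cc796b8_disk@snap1 child.diff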
Hi,
I'm not sure if this question makes sense, but ...
Will performing client-side rate control (limiting the number of requests
sent per second) help in avoiding an MDS crash?
I'm currently trying to get a baseline metadata performance of cephfs with
multiple *active* mds servers and directory
On Sat, 2014-06-07 at 00:45 +0100, Jonathan Gowar wrote:
On Fri, 2014-06-06 at 16:04 -0700, Gregory Farnum wrote:
I haven't used ceph-deploy to do this much, but I think you need to
prepare before you activate and it looks like you haven't done so.
Thanks, Greg. I did do the prepare,
On Sat, Jun 7, 2014 at 9:02 AM, Qing Zheng zhe...@cs.cmu.edu wrote:
Hi,
I'm not sure if this question makes sense, but ...
Will performing client-side rate control (limiting the number of requests
sent per second) help in avoiding an MDS crash?
I'm currently trying to get a baseline metadata
(sorry if this is a dupe, sendmail issues)
Hey Cephers,
If you still haven't signed up for Ceph Day Boston you should stop
procrastinating and proceed directly to the event page! :)
http://www.inktank.com/cephdays/boston/
We'd really love to have all of our closest Ceph friends join us for
Over the last two days, I set up ceph on a set of ubuntu 12.04 VMs (my
first time working with ceph), and it seems to be working fine (I have
HEALTH_OK, and can create a test document via the rados commandline tool),
but *I can't authenticate with the swift API*.
I followed the quickstart guides
Hi, I have some questions about regions and zones:
1. Why define the concepts of REGION and ZONE? What is their purpose?
2. What is the relation between region, zone, and cluster? How is the
federated architecture designed, and how is disaster recovery done?
Looking forward to your early reply.
Hello,
My environment is only one machine with only one hard disk. I cannot restart
the cluster after the machine reboots.
This is what I did before reboot:
Stop the osd and mon instances using:
$ sudo stop ceph-osd-all
ceph-osd-all stop/waiting
$ sudo stop ceph-mon-all
ceph-mon-all stop/waiting
I
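The matching upstart start commands would be (a sketch, mirroring the stop
commands above; start the monitor first so the OSDs have a quorum to join):
$ sudo start ceph-mon-all
$ sudo start ceph-osd-all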
On Wed, Jun 4, 2014 at 12:00 PM, David Curtiss
dcurtiss_c...@dcurtiss.com wrote:
Over the last two days, I set up ceph on a set of ubuntu 12.04 VMs (my first
time working with ceph), and it seems to be working fine (I have HEALTH_OK,
and can create a test document via the rados commandline