Hi all, I am planning to install a Ceph cluster and I have 3 nodes with 2
NICs each.
I read the documentation and it suggests setting a public network and a
cluster network.
First, I need to know whether the public network is the network used by clients
to mount the Ceph file system or to access RBD
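For reference, a split network setup is typically declared in the global section of ceph.conf; the subnets below are placeholders for illustration, not values from this post:

```ini
[global]
# Network that clients and monitors use to reach the cluster
public network = 192.168.0.0/24
# Private network used only for OSD replication and heartbeat traffic
cluster network = 192.168.1.0/24
```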
Hi all,
I am planning to install a Ceph cluster for testing but, in the future, I'd
like to use it in production for KVM virtualization.
I'd like to use RBD and not the Ceph file system.
Which Linux operating system do you suggest for the Ceph nodes, and which
is the most stable Ceph version for
Hi,
I updated from 0.67 to Firefly.
After the update I have a warning:
health HEALTH_WARN pool .rgw.buckets has too few pgs; crush map has
legacy tunables
What is this problem?
Many thanks
Fabio
___
ceph-users mailing list
ceph-users@lists.ceph.com
Dear experts,
Recently, a disk for one of our OSDs failed and caused the OSD to go down;
after I recovered the disk and filesystem, I noticed two problems:
1. journal corruption, which causes osd failure from starting:
-2 2014-05-28 22:21:19.592034 7f5c6ff437a0 1 journal _open
Hi ,
My Cinder backend storage is Ceph. Is there a mechanism to convert a
booted instance (volume) into an image?
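One common approach is the standard Cinder CLI's upload-to-image command; a sketch, where the volume ID and image name are placeholders (a volume attached to a running instance usually needs --force):

```shell
# Upload a Ceph-backed Cinder volume to Glance as an image
cinder upload-to-image --force --disk-format raw <volume-id> my-instance-image
```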
Cheers
Hi ,
Thanks Travis. I was following the RDO documentation on how to deploy Ceph
instead of the Ceph documentation. Once I read the Ceph documentation on it, it was clear.
Cheers
Hi ,
As we all know, we need to create a keyring in the Ceph storage cluster so
that the radosgw can use it to communicate with the cluster.
My question is: how does the radosgw get the keyring information?
I guess there will be the same /etc/ceph/ceph.conf and
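For what it's worth, radosgw conventionally reads its key from a keyring file referenced in its own client section of ceph.conf; a minimal sketch using the section name and path common in the docs, not confirmed by this thread:

```ini
[client.radosgw.gateway]
host = gateway-host
# radosgw authenticates to the cluster with the key stored in this file
keyring = /etc/ceph/ceph.client.radosgw.keyring
```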
Hi Ceph,
Welcome Eric Mourgaya, head of the Ceph User Committee from May 2014 until
the next elections in October 2014.
Founded six months ago, shortly after the Firefly Ceph Design Summit (
https://wiki.ceph.com/Planning/Blueprints/Firefly/Ceph_User_Committee ), the
Ceph User Committee is an independent, non
On 05/06/2014 01:23 AM, Dave Chinner wrote:
On Tue, May 06, 2014 at 12:59:27AM +0400, Andrey Korolyov wrote:
On Tue, May 6, 2014 at 12:36 AM, Dave Chinner da...@fromorbit.com wrote:
On Mon, May 05, 2014 at 11:49:05PM +0400, Andrey Korolyov wrote:
Hello,
We are currently exploring an issue which
Congratulations Eric !
----- Original Message -----
From: Loic Dachary l...@dachary.org
To: ceph-users ceph-users@lists.ceph.com
Sent: Thursday, 29 May 2014 12:28:56
Subject: [ceph-users] Ceph User Committee : welcome Eric Mourgaya
Hi Ceph,
Welcome Eric Mourgaya, head of the Ceph User Committee
Hi,
I think you can check this wiki:
http://ceph.com/docs/master/start/os-recommendations/
Currently, only Ubuntu 12.04 is deeply tested by Inktank (but I think it'll
be RHEL 7 soon ;).
The wiki hasn't been updated yet for Firefly.
I know that Ceph enterprise users are using Dumpling for
Hi,
I've got a 16-node, SSD-only cluster. Each node has 6x600GB drives and a 10Gbit uplink.
We're using Intel 320 series SSDs. The cluster has been running in production for half a
year now with no problems with the SSDs.
Replication is x3 (main DC) and x2 in the backup DC (a 10-node cluster there = less
space).
From what I've noticed, it's
On Thu, 29 May 2014, Ignazio Cassano wrote:
Hi all, I am planning to install a Ceph cluster and I have 3 nodes with 2
NICs each.
I read the documentation and it suggests setting a public network and a
cluster network.
First, I need to know whether the public network is the network used by
Il 29/05/14 15:21, Alexandre DERUMIER ha scritto:
crush map has legacy tunables
You need to update the tunables in the CRUSH map to benefit from the latest optimisations:
# ceph osd crush tunables optimal
http://ceph.com/docs/master/rados/operations/crush-map/
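The other half of the warning (".rgw.buckets has too few pgs") is usually addressed by raising the pool's placement group count; a sketch, where 128 is an illustrative value you would size to your OSD count, not a recommendation for this cluster:

```shell
# Raise the placement group count for the pool
ceph osd pool set .rgw.buckets pg_num 128
# Then raise pgp_num to match so data actually rebalances
ceph osd pool set .rgw.buckets pgp_num 128
```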
Many thanks
Don't use nginx. The current version buffers all the uploads to the
local disk, which causes all sorts of problems with radosgw (timeouts,
clock skew errors, etc). Use tengine instead (or apache). I sent the
mailing list some info on tengine a couple weeks ago.
On 5/29/2014 6:11 AM,
From the docs, you need this setting in ceph.conf (if you're using
nginx/tengine):
rgw print continue = false
This will fix the 100-continue issues.
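In context, the setting lives in the radosgw client section of ceph.conf; a minimal sketch (the section name is the conventional one, not taken from this thread):

```ini
[client.radosgw.gateway]
# nginx/tengine do not forward HTTP 100-continue, so disable it in radosgw
rgw print continue = false
```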
On 5/29/2014 5:56 AM, Michael Lukzak wrote:
Hi,
I also use tengine; it works fine with SSL (I
Hi,
Yes, I read that. I responded with a new question.
I tried to use tengine, but my problem was not solved.
I can't find a solution.
You don't have this problem in your case?
I'm using boto (an S3 client); at the end, when the file
is uploaded at 100%, boto hangs after the command PUT /[file] 100,
Hi,
Oops, so I didn't read the docs carefully...
I will try this solution.
Thanks!
Michael
From the docs, you need this setting in ceph.conf (if you're using
nginx/tengine):
rgw print continue = false
This will fix the 100-continue issues.
On 5/29/2014 5:56 AM, Michael Lukzak wrote:
Congrats Eric,
Feel free to drop me a line if you have any questions. Thanks!
Best Regards,
Patrick McGarry
Director, Community || Inktank
http://ceph.com || http://inktank.com
@scuttlemonkey || @ceph || @inktank
On Thu, May 29, 2014 at 6:28 AM, Loic Dachary l...@dachary.org wrote:
Hi
Hey cephers,
If you haven't already signed up for Ceph Day Boston (10 June), it's
not too late to do so.
http://www.inktank.com/cephdays/boston/
We had one speaker have to pull out at the last minute, but thankfully
Jeff Darcy was willing to come tell us about his franken-filesystem
built on
Greetings!
I know that many of you have discussed various enhancements/changes to
the wiki with me over the past months, and now is the time on
sprockets when we dance! (it's time to work on the wiki :P). I have
created a blueprint that encompasses many of the requirements that I
have been
I can confirm; we used to run all our nodes on btrfs and I cannot recommend
that anyone do that at this time. We had problems with deadlocks, very
slow performance, and even corruption over time, all the way up to kernel
3.13. I haven't tried 3.14, but there are some patches mentioning performance
On May 28, 2014, at 5:31 AM, Gregory Farnum g...@inktank.com wrote:
On Sun, May 25, 2014 at 6:24 PM, Guang Yang yguan...@yahoo.com wrote:
On May 21, 2014, at 1:33 AM, Gregory Farnum g...@inktank.com wrote:
This failure means the messenger subsystem is trying to create a
thread and is
Is it possible to have a CephFS mounted as RW on one machine and RO on
another machine? We have a use case where we would have one box writing
files to CephFS and at least one other which would need the information in
CephFS. It seems silly to us to use NFS or something like that to get the
Hi Shawn,
just use the standard read-only mount option (-o ro)
Cheers
JC
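A sketch of a read-only kernel-client mount on the consumer machine, assuming a monitor at a placeholder address and the default admin user (the address and secret file path are illustrative):

```shell
# Mount CephFS read-only; writes on this machine will be rejected
mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
    -o ro,name=admin,secretfile=/etc/ceph/admin.secret
```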
On May 29, 2014, at 20:10, Shawn Edwards lesser.e...@gmail.com wrote:
Is it possible to have a CephFS mounted as RW on one machine and RO on
another machine? We have a use case where we would have one box writing
files to