On 18.02.2014 21:41, shacky wrote:
Hi.
I have to create a new Ceph cluster with 3 nodes, each with 4 hard drives in
RAID5 (12 TB available per node).
Drop RAID5 and create one OSD per hard disk.
If you need to store small files, consider how your applications
communicate with the storage cluster.
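For reference, with ceph-deploy "one OSD per disk" means preparing each data disk
as its own OSD instead of a single RAID5 device; a rough sketch (hostname and
device names are assumptions for a 4-disk node, repeat per host):

    ceph-deploy osd create node1:sdb node1:sdc node1:sdd node1:sde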
In just the last week I've seen at least two failures as a result of
replication factor two. I would highly suggest that for any critical data
you choose an rf of at least three.
With your stated capacity, you're looking at a mere 16TB with rf3. You'll
need to look into slightly more capacity or
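For the arithmetic behind that figure: assuming 4 x 4 TB drives per node (12 TB
after RAID5), dropping RAID5 gives roughly 3 x 16 TB = 48 TB raw, which divided
by a replication factor of three leaves about 16 TB usable before overhead. The
replication factor is a per-pool setting; a minimal sketch, assuming the default
rbd pool:

    ceph osd pool set rbd size 3       # keep three copies of every object
    ceph osd pool set rbd min_size 2   # keep serving I/O while two copies remain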
Hi Kumar
Follow this
http://karan-mj.blogspot.fi/2013/12/ceph-openstack-part-2.html (parts 3, 4,
and 5 as well)
http://ceph.com/docs/master/rbd/libvirt/
—karan singh
On 19 Feb 2014, at 08:28, yalla.gnan.ku...@accenture.com wrote:
Hi,
I want to implement two scenarios:
Hi,
The slides I use are at http://www.slideshare.net/Inktank_Ceph/erasure-codeceph
What the slides do not show is the erasure code simulation we did (3 minutes altogether),
where 4 designated volunteers played bits in a replicated scenario and, using
the XOR table found on slide three, we showed how to
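The XOR trick the volunteers acted out can be reproduced at a shell prompt; a toy
example with made-up 4-bit chunks (the values are arbitrary):

    # the parity chunk is the XOR of the data chunks
    printf 'parity    = %x\n' $(( 0xA ^ 0x3 ^ 0x5 ))   # -> c
    # if the 0x3 chunk is lost, XOR-ing the parity with the survivors recovers it
    printf 'recovered = %x\n' $(( 0xC ^ 0xA ^ 0x5 ))   # -> 3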
Hi Karan,
Thank you very much for the reply.
Unfortunately I am unable to access the blogspot link. The second link deals
with KVM in general.
But I have installed Havana OpenStack. Could you please send me the list of
steps to configure
OpenStack to use the RADOS block device both as a
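For context, the rbd-openstack guide on ceph.com for this generation of OpenStack
boils down to pointing Cinder and Glance at RBD pools; a minimal sketch, not a
complete walkthrough (pool names, user names and the secret UUID are placeholders):

    # /etc/cinder/cinder.conf
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = <uuid of the libvirt secret>
    glance_api_version = 2

    # /etc/glance/glance-api.conf
    default_store = rbd
    rbd_store_user = glance
    rbd_store_pool = images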
Hi Karan,
Moreover, my OpenStack setup consists of three nodes, with three services on
each node.
Controller node : Keystone, Nova service, Horizon
Compute node: Nova-compute, Glance and Cinder.
Neutron node: MySQL, RabbitMQ, Neutron service.
Thanks
Kumar
From: Gnan Kumar, Yalla
Sent:
2014-02-19 9:31 GMT+01:00 Robert Sander r.san...@heinlein-support.de:
Drop RAID5 and create one OSD per hard disk.
I was thinking about using RAID5 to keep disk redundancy even while data is
being re-synced across the nodes.
What do you think about this?
If you need to store small files
2014-02-19 9:40 GMT+01:00 Schlacta, Christ aarc...@aarcane.org:
In just the last week I've seen at least two failures as a result of
replication factor two. I would highly suggest that for any critical data
you choose an rf of at least three.
Could you explain why the failures happened?
With
Hello Alfredo,
here's some additional information, and the log output is attached.
I had some trouble with yum (Resolving dl.fedoraproject.org... failed: Name or service not known),
but only with node3 and ceph-admin; it looked OK after retrying.
-
[ceph@ceph-admin scripts]$ uname -a
Hello
Can you try this:
1) From the monitor node: scp /etc/ceph/ceph.client.admin.keyring node1:/etc/ceph
2) From the monitor node: scp /var/lib/ceph/bootstrap-osd/ceph.keyring
node1:/var/lib/ceph/bootstrap-osd
I encountered the same issue yesterday (on CentOS) and fixed it in this
manner.
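A hedged alternative, if ceph-deploy is in use anyway, is to let it push the
keyrings instead of copying them by hand (node names are assumptions, and the
ceph user on node1 needs the usual sudo setup):

    ceph-deploy gatherkeys mon1    # collect keyrings from a monitor node
    ceph-deploy admin node1        # push ceph.conf and the admin keyring to node1
    ssh node1 sudo ceph -s         # should now reach the cluster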
Hello list,
We've recently moved our 588 TB Ceph cluster into production by moving
VMs onto it, but this morning we started receiving the following
message:
cluster etc
health HEALTH_WARN 20 requests are blocked > 32 sec
monmap e1: 3 mons at
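A couple of commands that usually help narrow such warnings down (the admin-socket
path is an assumption; substitute the OSD id that health detail points at):

    ceph health detail     # names the OSDs carrying the blocked requests
    ceph --admin-daemon /var/run/ceph/ceph-osd.12.asok dump_ops_in_flight   # inspect the slow ops on that OSD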
Hi!
I have two sorts of storage hosts: a small number of reliable hosts with a number
of big drives on each (the reliable zone of the cluster), and a much larger set of
less reliable hosts, some with big drives, some with relatively small ones
(the non-reliable zone of the cluster). Non-reliable hosts
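One way this kind of placement is commonly expressed, sketched here without knowing
the actual bucket layout (the "reliable" and "unreliable" root buckets are
assumptions), is a CRUSH rule that puts the first replica in one zone and the
remaining replicas in the other:

    rule reliable_first {
        ruleset 3
        type replicated
        min_size 2
        max_size 10
        step take reliable
        step chooseleaf firstn 1 type host
        step emit
        step take unreliable
        step chooseleaf firstn -1 type host
        step emit
    }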
Hi guys,
Quick question. I have a VM with some SCSI drives which act as the OSDs in
my test lab. I have removed the SCSI drive so it's totally gone from the
system; syslog is throwing I/O errors, but the cluster still looks healthy.
Can you tell me why? I'm trying to reproduce the problem if the
Hi Alfred,
Could you please let me know how to integrate OpenStack with Ceph and test it.
The existing document seems to be incomplete.
Thanks
Kumar
From: Karan Singh [mailto:karan.si...@csc.fi]
Sent: Wednesday, February 19, 2014 2:23 PM
To: Gnan Kumar, Yalla
Cc: alfredo.d...@inktank.com;
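The step that most often goes missing between those documents is wiring the
Cinder/Nova side to the Ceph key via a libvirt secret; a rough sketch following
the rbd/libvirt guide (the UUID and the client.cinder name are placeholders):

    # secret.xml
    <secret ephemeral='no' private='no'>
      <uuid>457eb676-...</uuid>
      <usage type='ceph'><name>client.cinder secret</name></usage>
    </secret>

    virsh secret-define --file secret.xml
    virsh secret-set-value --secret 457eb676-... --base64 $(ceph auth get-key client.cinder)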
Eventually after 1 hour it spotted that. I took the disk out at 11:06:02 so
literally 1 hour later:
6   0.9   osd.6   down  0
7   0.9   osd.7   up    1
8   0.9   osd.8   up    1
2014-02-19 12:06:02.802388 mon.0 [INF] osd.6
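If waiting an hour is not acceptable, the OSD can be failed by hand once the disk
is known to be gone; a hedged sketch (the way you stop the daemon depends on the
init system in use):

    # stop the osd.6 daemon on its host, then:
    ceph osd out 6    # triggers re-replication of its placement groups
    ceph osd tree     # confirm it now shows down/out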
Hi !
I have failover clusters (IMAP service) with 2 members, configured with
Ubuntu + DRBD + Ext4. My IMAP clusters work fine with ~50k email accounts.
See design here: http://adminlinux.com.br/my_imap_cluster_design.txt
I would like to use a distributed filesystem architecture to
Hi all,
I'm trying to install a Ceph cluster with one SSD, sda (the system runs on sda1
and sda2 is for swap). My OSD (I have only one OSD per node...) will be sdb,
which is not an SSD but a 7200 rpm disk.
I want to use sda3 for my journal (so the journal sits on the SSD).
ceph-deploy osd
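For what it's worth, ceph-deploy accepts an explicit journal device in the
HOST:DISK:JOURNAL form; a hedged sketch for the layout described above (the
hostname is an assumption):

    ceph-deploy osd create node1:sdb:/dev/sda3   # data on sdb, journal on the SSD partition sda3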
On 02/19/2014 02:22 PM, Thorvald Hallvardsson wrote:
Eventually after 1 hour it spotted that. I took the disk out at 11:06:02
so literally 1 hour later:
6   0.9   osd.6   down  0
7   0.9   osd.7   up    1
8   0.9   osd.8   up
On Wed, Feb 19, 2014 at 2:37 AM, Srinivasa Rao Ragolu
srag...@mvista.com wrote:
Hi all,
I have set up the cluster successfully and am using one node to set up the rados gateway.
The machines run Fedora 19 (all nodes).
Steps I followed
1) Installed httpd, mod_fastcgi, ceph and ceph-radosgw using link
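For comparison, the radosgw guide of that era ends up with a ceph.conf section
along these lines (the instance name, host and paths are assumptions):

    [client.radosgw.gateway]
        host = gateway-node
        keyring = /etc/ceph/ceph.client.radosgw.keyring
        rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
        log file = /var/log/ceph/client.radosgw.gateway.log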
On Wed, Feb 19, 2014 at 10:32 AM, NEVEU Stephane
stephane.ne...@thalesgroup.com wrote:
Hi all,
I'm trying to install a Ceph cluster with one SSD, sda (the system runs on sda1
and sda2 is for swap). My OSD (I have only one OSD per node...) will be sdb,
which is not an SSD but a 7200 rpm disk.
On 19.02.2014 14:55, Listas@Adminlinux wrote:
Is CephFS already stable enough to provide simultaneous access to data in a
production environment?
It may be stable, but I think the performance is nowhere near what you need
for 50K accounts.
Have you looked into using dsync between your Dovecot instances?
On Tue, Feb 18, 2014 at 7:24 AM, Guang Yang yguan...@yahoo.com wrote:
Hi ceph-users,
We are using Ceph (radosgw) to store user-generated images. As GET latency
is critical for us, I recently did some investigation into the GET path
to understand where the time is spent.
I first confirmed that
I am trying to learn about Ceph and have been looking at the documentation and
speaking to colleagues who work with it, and I had a question I could not get
an answer to. As I understand it, the CRUSH map is updated every time a disk
is added. This causes the OSDs to migrate their data in
On Wed, Feb 19, 2014 at 1:31 PM, mike smith
michaelsmithcons...@yahoo.com wrote:
I am trying to learn about Ceph and have been looking at the documentation
and speaking to colleagues who work with it, and I had a question I could
not get an answer to. As I understand it, the CRUSH map is
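Regarding the data movement itself: adding a disk does change the CRUSH map and
trigger backfill, but its impact can be throttled; a hedged sketch using the
standard OSD recovery options (the osd id and weight are placeholders):

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
    # or add the new OSD with a low CRUSH weight and raise it gradually:
    ceph osd crush reweight osd.12 0.2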
This Dumpling point release fixes a few critical issues in v0.67.6.
All v0.67.6 users are urgently encouraged to upgrade. We also
recommend that all v0.67.5 (or older) users upgrade.
The v0.67.6 point release contains a number of important fixes for the
OSD, monitor, and radosgw. Most
After I followed the radosgw install and configuration document
(http://ceph.com/docs/next/radosgw/config/), I got stuck using the
swift client. It just returns "Auth GET failed", even though I can see
that radosgw already has a swift subuser and keys. How can I solve it?
What I got from
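For anyone hitting the same wall, the usual checklist is that the swift subuser
and its swift-type key both exist, and that the -A URL points at the gateway's
auth endpoint; a rough sketch (the user name, gateway host and generated key are
placeholders):

    radosgw-admin subuser create --uid=johndoe --subuser=johndoe:swift --access=full
    radosgw-admin key create --subuser=johndoe:swift --key-type=swift --gen-secret
    swift -V 1.0 -A http://gateway-host/auth -U johndoe:swift -K <swift secret key> list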
On Wed, Feb 19, 2014 at 6:36 PM, JinHwan Hwang calanc...@gmail.com wrote:
After I followed the radosgw install and configuration
document (http://ceph.com/docs/next/radosgw/config/), I got stuck
using the swift client. It just returns "Auth GET failed", even though I can
see that radosgw
Thank you for the reply.
Below are my version and the -V 1.0 result.
ceph version 0.72.2
swift 1.0
root@cephT:~/swiftApiTest# swift -V 1.0 -A http://cephT/auth -U
johndoe:swift -K 12hcpWyed4ycYNEV2btiI6qiaDR2EfgKhrLFpdfc post test
Traceback (most recent call last):
File
Thanks Yehuda.
Try looking at the perfcounters, see if there's any other throttling
happening. Also, make sure you have enough pgs for your data pool. One
other thing to try is disabling leveldb xattrs and see if it affects
your latency.
1. There is no throttling happening.
2. According to
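For reference, the perfcounters and pg count mentioned above can be checked
roughly like this (the admin-socket path and pool name are assumptions):

    ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok perf dump
    ceph osd pool get .rgw.buckets pg_num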
On Wed, Feb 19, 2014 at 2:50 AM, Dane Elwell dane.elw...@gmail.com wrote:
Hello list,
We've recently moved our 588 TB Ceph cluster into production by moving
VMs onto it, but this morning we started receiving the following
message:
cluster etc
health HEALTH_WARN 20 requests