Hi Barry,
On Mon, May 6, 2013 at 7:06 PM, Barry O'Rourke Barry.O'rou...@ed.ac.uk wrote:
Hi,
I built a modified version of the fc17 package that I picked up from
koji [1]. That might not be ideal for you, as fc17 uses systemd rather
than init; we use an in-house configuration management system
On 05/06/2013 01:07 PM, Michael Lowe wrote:
Um, start it? You must have synchronized clocks in a fault-tolerant system
(google "Byzantine generals" and clock sync), and the way to do that is ntp; therefore ntp
is required.
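As a hedged aside (not from the original mails): the monitors report clock skew in the health output, and the tolerance is a ceph.conf option, so checking and tuning this looks roughly like the following.

    # check whether the monitors are complaining about clock skew
    ceph health detail

    # confirm ntpd is actually syncing against its peers on every node
    ntpq -p

    # ceph.conf: allowed monitor clock drift in seconds (the default is small
    # on purpose - fix NTP rather than raising this very far)
    [mon]
        mon clock drift allowed = 0.05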
On May 6, 2013, at 1:34 AM, Varun Chandramouli varun@gmail.com wrote:
Hi
Hi,
The API documentation for librados says that, instead of providing command
line options or a configuration file, the rados object can also be configured
by manually setting options with rados_conf_set() (or Rados::conf_set() for
the C++ interface). This takes both the option and the value as strings.
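For anyone who wants to see that end to end, here is a minimal sketch in C (not from the original mail; the monitor address and key are placeholders, and it connects as the default admin user):

    /* Minimal sketch: configure librados without a ceph.conf file. */
    #include <stdio.h>
    #include <rados/librados.h>

    int main(void)
    {
        rados_t cluster;
        int err = rados_create(&cluster, "admin");  /* connect as client.admin */
        if (err < 0) {
            fprintf(stderr, "rados_create failed: %d\n", err);
            return 1;
        }

        /* Both the option name and its value are plain strings. */
        rados_conf_set(cluster, "mon_host", "192.168.0.1");
        rados_conf_set(cluster, "key", "AQBplaceholderkey==");

        err = rados_connect(cluster);
        if (err < 0) {
            fprintf(stderr, "rados_connect failed: %d\n", err);
            rados_shutdown(cluster);
            return 1;
        }

        /* ... use the cluster handle ... */
        rados_shutdown(cluster);
        return 0;
    }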
On 05/07/2013 12:08 PM, Guido Winkelmann wrote:
Hi,
The API documentation for librados says that, instead of providing command
line options or a configuration file, the rados object can also be configured
by manually setting options with rados_conf_set() (or Rados::conf_set() for
the C++
I tried to do that and put it behind round-robin DNS, but unfortunately only one host can
serve requests from clients - the second host does not respond at all. I am
not very familiar with Apache, and there is nothing helpful in the standard log files.
Maybe this whole HA design is wrong? Has anybody solved HA for Rados
Hi All,
Thanks for the replies. I started the ntp daemon and the warnings as well
as the crashes seem to have gone. This is the first time I set up a cluster
(of physical machines), and was unaware of the need to synchronize the
clocks. Probably should have googled it more :). Pardon my
Hi,
I'm not using OpenStack, I've only really been playing around with Ceph
on test machines. I'm currently speccing up my production cluster and
will probably end up running it along with OpenNebula.
Barry
On 07/05/13 10:01, Dan van der Ster wrote:
Hi Barry,
On Mon, May 6, 2013 at 7:06
Hi,
I'm looking to purchase a production cluster of 3 Dell Poweredge R515's
which I intend to run in 3 x replication. I've opted for the following
configuration;
2 x 6 core processors
32GB RAM
H700 controller (1GB cache)
2 x SAS OS disks (in RAID1)
2 x 1Gb ethernet (bonded for cluster
Hi,
I'd be interested to hear from anyone running a similar configuration
I'm running a somewhat similar configuration here. I'm wondering why you
have left out SSDs for the journals?
I gather they would be quite important to achieve a level of performance
for hosting 100 virtual machines
FWIW, here is what I have for my ceph cluster:
4 x HP DL 180 G6
12GB RAM
P411 with 512MB Battery Backed Cache
10GigE
4 HP MSA 60's with 12 x 1TB 7.2k SAS and SATA drives (bought at different times
so there is a mix)
2 HP D2600 with 12 x 3TB 7.2k SAS Drives
I'm currently running 79 qemu/kvm vm's
Hi Wido,
I experienced the same problem almost half a year ago, and finally
set this value to 3 - no more wrong marks were given, except under extremely
high disk load, when an OSD really did go down for a couple of seconds.
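The option itself isn't visible in this excerpt; assuming it is the usual knob for spurious down-marks, mon osd min down reporters, the setting would look roughly like this in ceph.conf:

    [mon]
        # require reports from 3 distinct OSDs before marking a peer down
        mon osd min down reporters = 3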
On Tue, May 7, 2013 at 4:59 PM, Wido den Hollander w...@42on.com wrote:
Hi,
I was
Hi,
I'm running a somewhat similar configuration here. I'm wondering why you
have left out SSDs for the journals?
I can't go into exact prices due to our NDA, but I can say that getting
a couple of decent SSD disks from Dell will increase the cost per server
by a four figure sum, and we're
On 05/07/2013 06:50 AM, Barry O'Rourke wrote:
Hi,
I'm looking to purchase a production cluster of 3 Dell Poweredge R515's
which I intend to run in 3 x replication. I've opted for the following
configuration;
2 x 6 core processors
32GB RAM
H700 controller (1GB cache)
2 x SAS OS disks (in RAID1)
You've learned one of the three computer science facts you need to know about
distributed systems, and I'm glad I could pass something on:
1. Consistent, Available, Distributed - pick any two
2. To completely guard against k failures where you don't know which one failed
just by looking, you need
Barry, I have a similar setup and found that the 600GB 15K SAS drives work
well. The 2TB 7200 disks did not work as well due to my not using SSD. Running
the journal and the data on big slow drives will result in slow writes. All the
big boys I've encountered are running SSDs.
Currently, I'm
Hey folks,
Saw this crash the other day:
ceph version 0.56.4 (63b0f854d1cef490624de5d6cf9039735c7de5ca)
1: /usr/bin/ceph-osd() [0x788fba]
2: (()+0xfcb0) [0x7f19d1889cb0]
3: (gsignal()+0x35) [0x7f19d0248425]
4: (abort()+0x17b) [0x7f19d024bb8b]
5:
It actually depends on what your accesses look like; they have
different strengths and weaknesses. In general they perform about the
same, though.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Tue, May 7, 2013 at 10:53 AM, Gandalf Corvotempesta
Hi,
With so few disks and the inability to do 10GbE, you may want to
consider doing something like 5-6 R410s or R415s and just using the
on-board controller with a couple of SATA disks and 1 SSD for the
journal. That should give you better aggregate performance since in
your case you
Hi,
On Tue, 2013-05-07 at 21:07 +0300, Igor Laskovy wrote:
If I understand the idea correctly, when this 1 SSD fails, the whole node
with that SSD will fail. Correct?
Only OSDs that use that SSD for the journal will fail as they will lose
any writes still in the journal. If I only have 2 OSDs
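To make the layout concrete, here is a hedged ceph.conf sketch (device names and sizes are made up, not from this thread) of two OSDs sharing one SSD for their journals:

    [osd]
        osd journal size = 10240    ; MB

    ; /dev/sdc is the shared SSD, with one partition per OSD journal
    [osd.0]
        osd data    = /var/lib/ceph/osd/ceph-0
        osd journal = /dev/sdc1

    [osd.1]
        osd data    = /var/lib/ceph/osd/ceph-1
        osd journal = /dev/sdc2

If /dev/sdc dies, osd.0 and osd.1 lose their journals (and any unflushed writes), but the other OSDs in the box that journal elsewhere keep running.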
Hi,
Here's a quick performance display with various block sizes on a host
with 1 public 1Gbe link and 1 1Gbe link on the same vlan as the ceph
cluster.
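The numbers themselves are cut off in this digest. For reference, results like these are typically gathered with rados bench while varying the write size; a hedged example, assuming a pool named rbd:

    # 60-second write test, 4 MB writes, 16 concurrent ops
    rados -p rbd bench 60 write -b 4194304 -t 16

    # the same test with 4 KB writes
    rados -p rbd bench 60 write -b 4096 -t 16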
Thanks for taking the time to look into this for me, I'll compare it
with my existing set-up in the morning.
Thanks,
Barry
On 05/07/2013 03:36 PM, Barry O'Rourke wrote:
Hi,
With so few disks and the inability to do 10GbE, you may want to
consider doing something like 5-6 R410s or R415s and just using the
on-board controller with a couple of SATA disks and 1 SSD for the
journal. That should give you better
Now that .61 is out I have tried getting a second radosgw farm working, but ran into
an issue using a custom root/zone pool.
The 'radosgw-admin zone set' and 'radosgw-admin zone info' commands are
working fine, except it keeps defaulting to using .rgw.root. I've tried the two
settings, the one you
The settings are under the rgw client settings
[client.radosgw.internal.01]
rgw root zone pool = .rgw.zone2
rgw cluster root pool = .rgw.zone2
I tried 'radosgw-admin zone set --rgw-root-zone-pool=.rgw.zone2 zone2'
and 'radosgw-admin zone info
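As a hedged aside (not from the original mail): one quick way to see where the zone info actually landed is to list the objects in each candidate root pool:

    rados -p .rgw.root ls
    rados -p .rgw.zone2 ls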
With the release of cuttlefish, I decided to try out ceph-deploy and
ran into some documentation errors along the way:
http://ceph.com/docs/master/rados/deployment/preflight-checklist/
Under 'CREATE A USER' it has the following line:
To provide full privileges to the user, add the following to
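The line itself is cut off above; for context, the kind of entry that checklist is describing is a passwordless-sudo rule along these lines (with 'ceph' standing in for whatever username was created):

    echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
    sudo chmod 0440 /etc/sudoers.d/ceph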
Figured it out; in your post last month you were using 'rgw-zone-root-pool' but
today you're using 'rgw-root-zone-pool'. I didn't notice that root and zone
had switched and was using your old syntax. It's working now, though.
Thank you for your help again!
Nelson Jeppesen
On Tue, May 7, 2013 at 4:45 PM, MinhTien MinhTien
tientienminh080...@gmail.com wrote:
Dear all,
I deployed Ceph on CentOS 6.3. When I upgraded to kernel 3.9.0, I had a few
problems with the RAID card.
I want to deploy Ceph with the default 2.6.32 kernel. That is fine, isn't it?
The Ceph client will use
So just a little update... after replacing the original failed drive, things
seem to be progressing a little better; however, I noticed something else
odd. Looking at 'rados df' it looks like the system thinks that the data
pool has 32 TB of data, but this is only an 18 TB raw system.
pool name
Dear all,
I am deploying Ceph on RAID 6. I have 1 SSD partition (RAID 0), which I use
as the journal for the OSDs.
The RAID 6 array contains 60 TB, divided into 4 OSDs.
When I deploy, the OSDs frequently report slow requests.
001080 [write 0~4194304 [5@0]] 0.72bf90bf snapc 1=[]) v4 currently commit
sent
2013-05-08