Hi Derek -
If you are still having problems with ceph-deploy, please forward the ceph.log
file to me so I can start trying to figure out what's gone wrong.
Thanks,
Gary
On Jun 12, 2013, at 7:09 PM, Derek Yarnell de...@umiacs.umd.edu wrote:
Hi,
I am trying to run ceph-deploy on a very
Hi
==
root@xtream:~# service ceph start
=== mds.a ===
Starting Ceph mds.a on xtream...already running
=== osd.0 ===
Mounting xfs on xtream:/var/lib/ceph/osd/ceph-0
2013-06-18 04:26:16.373075
Hi Alex,
What versions of Qemu are recommended for this?
I would go with version 1.4.2 (I don't know what the official
recommendation is).
which implements asynchronous flushing
in Qemu. That's only in 1.4.3 and 1.5 if I use the upstream
As far as I know, it is in
On Tue, Jun 18, 2013 at 09:52:53AM +0200, Kurt Bauer wrote:
Hi,
Da Chun wrote:
Hi List,
I want to deploy a ceph cluster with the latest cuttlefish, and export it
with an iSCSI interface to my applications.
Some questions here:
1. Which Linux distro and release would you recommend? I
Thanks for sharing! Kurt.
Yes. I have read the article you mentioned. But I also read another one:
http://www.hastexo.com/resources/hints-and-kinks/turning-ceph-rbd-images-san-storage-devices.
It uses LIO, which is the current standard Linux kernel SCSI target.
There is another doc in the
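For reference, a minimal sketch of the rbd-plus-LIO approach that article describes, assuming the kernel rbd client on the target node and a pool named iscsipool (the pool and image names here are made up):

rbd create iscsipool/lun0 --size 10240   # 10 GB image in the hypothetical pool iscsipool
rbd map iscsipool/lun0                   # the kernel maps it, e.g. as /dev/rbd/iscsipool/lun0
# then hand the mapped block device to LIO, e.g. inside targetcli:
#   /backstores/block create lun0 /dev/rbd/iscsipool/lun0   (older targetcli versions call this backstore iblock)
#   /iscsi create

The kernel rbd client sits between LIO and the cluster, so the target node needs the rbd module loaded and reachable monitors.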
On Tue, Jun 18, 2013 at 11:13:15AM +0200, Leen Besselink wrote:
On Tue, Jun 18, 2013 at 09:52:53AM +0200, Kurt Bauer wrote:
Hi,
Da Chun wrote:
Hi List,
I want to deploy a ceph cluster with the latest cuttlefish, and export it
with an iSCSI interface to my applications.
Some
Hi John,
apologies for the late reply. librados seems quite interesting ...
Actually no. I'll write up an API doc for you soon.
sudo apt-get install python-ceph
import rados
I wonder if I can make python calls to interact with the object store
(say, cephfs.open(), mkdir()) directly
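For what it's worth, a minimal sketch of the librados Python binding, assuming a readable /etc/ceph/ceph.conf, a usable client keyring, and the default 'data' pool (all assumptions on my part):

import rados

# connect to the cluster using the local ceph.conf and default keyring
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# open an I/O context on an existing pool and do basic object I/O
ioctx = cluster.open_ioctx('data')
ioctx.write_full('hello_object', 'written from python-ceph')
print(ioctx.read('hello_object'))

ioctx.close()
cluster.shutdown()

The directory-style calls you mention (open(), mkdir()) live in the separate libcephfs bindings rather than in rados, which only exposes flat object operations.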
Hi,
I wondered what best practice is recommended for reducing failure domains for a
virtual server platform. If I wanted to run multiple virtual server clusters,
would it be feasible to serve storage from one large Ceph cluster?
I am concerned that, in the unlikely event the whole Ceph
On Tue, Jun 18, 2013 at 08:13:39PM +0800, Da Chun wrote:
Hi List, my ceph cluster has two OSDs on each node. One has 15g capacity, and
the other 10g.
It's interesting that, after I took the 15g osd out of the cluster, the
cluster started to rebalance, and finally the 10g osd on the same node
Da Chun wrote:
Thanks for sharing! Kurt.
Yes. I have read the article you mentioned. But I also read another
one:
http://www.hastexo.com/resources/hints-and-kinks/turning-ceph-rbd-images-san-storage-devices.
It uses LIO, which is the current standard Linux kernel SCSI target.
That
Thanks! Craig.
umount works.
About the time skew, the log said the time difference should be less than
50ms. I set up one of my nodes as the time server, and the others sync their time
with it. I don't know why the system time still changes frequently, especially
after reboot. Maybe it's
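For what it's worth, the 50ms figure matches the monitors' default clock drift allowance (mon clock drift allowed = 0.05). A minimal client-side ntp.conf for syncing only against your local time server might look like this (the hostname ceph-mon1 is made up):

# /etc/ntp.conf on the other nodes
server ceph-mon1 iburst
driftfile /var/lib/ntp/drift
# let ntpd correct a large offset after reboot instead of giving up
tinker panic 0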
On Tue, Jun 18, 2013 at 02:38:19PM +0200, Kurt Bauer wrote:
Da Chun wrote:
Thanks for sharing! Kurt.
Yes. I have read the article you mentioned. But I also read another
one:
http://www.hastexo.com/resources/hints-and-kinks/turning-ceph-rbd-images-san-storage-devices.
It uses
I built the 3.10-rc rbd module for a 3.8 kernel yesterday, and only
had one thing to add (I know I'm reviving an old thread).
There is one folder missing from the original list of files to use:
include/linux/crush/*
That would bring everything to:
include/keys/ceph-type.h
include/linux/ceph/*
Derek-
Please also try the latest ceph-deploy and cuttlefish branches, which
fixed several issues with el6 distros. 'git pull' for the latest
ceph-deploy (clone from github and ./bootstrap if you were using the
package) and install with
./ceph-deploy install --dev=cuttlefish
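Assuming you start from the github clone rather than the package, the whole sequence would be roughly (HOST is a placeholder for the target node):

git clone https://github.com/ceph/ceph-deploy.git   # or 'git pull' in an existing clone
cd ceph-deploy
./bootstrap
./ceph-deploy install --dev=cuttlefish HOST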
On Tuesday, 18 June 2013, 07:58:50, Sage Weil wrote:
On Tue, 18 Jun 2013, Guido Winkelmann wrote:
On Thursday, 13 June 2013, 01:58:08, Josh Durgin wrote:
Which filesystem are the OSDs using?
BTRFS
Which kernel version? There was a recent bug (fixed in 3.9 or 3.8) that
I think the bug Sage is talking about was fixed in 3.8.0
On Jun 18, 2013, at 11:38 AM, Guido Winkelmann guido-c...@thisisnotatest.de
wrote:
On Tuesday, 18 June 2013, 07:58:50, Sage Weil wrote:
On Tue, 18 Jun 2013, Guido Winkelmann wrote:
On Thursday, 13 June 2013, 01:58:08, Josh
On Tuesday, June 18, 2013, harri wrote:
Hi,
I wondered what best practice is recommended for reducing failure domains
for a virtual server platform. If I wanted to run multiple virtual server
clusters, would it be feasible to serve storage from one large Ceph
cluster?
On Tue, Jun 18, 2013 at 09:02:12AM -0700, Gregory Farnum wrote:
On Tuesday, June 18, 2013, harri wrote:
Hi,
I wondered what best practice is recommended for reducing failure domains
for a virtual server platform. If I wanted to run multiple virtual server
clusters
Hi Derek,
Are you sure the package is installed on the target? (Did you run 'ceph-deploy
install <hostname>'?) It is probably caused by /var/lib/ceph/mon not
existing?
sage
On Tue, 18 Jun 2013, Derek Yarnell wrote:
If you are still having problems with ceph-deploy, please forward the
Hi, I'm an admin for the School of Interactive Games and Media at RIT, and
looking into using ceph to reorganize/consolidate the storage my department is
using. I've read a lot of documentation and comments/discussion on the web,
but I'm not 100% sure what I'm looking at doing is a good use of
I would like to make a local mirror of your yum repositories. Do you support
any of the standard methods of syncing, e.g. rsync?
Thanks,
Joe
--
Joe Ryner
Center for the Application of Information Technologies (CAIT)
Production Coordinator
P: (309) 298-1804
F: (309) 298-2806
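If rsync turns out not to be offered, one common workaround (not an official recommendation) is to mirror whatever repo you already have configured in yum using reposync from yum-utils:

yum install yum-utils createrepo
reposync --repoid=ceph -p /srv/mirror     # 'ceph' is whatever repo id your .repo file defines
createrepo /srv/mirror/ceph               # rebuild repodata for the local copy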
Thanks Greg,
The concern I have is an 'all eggs in one basket' approach to storage design.
Is it feasible, however unlikely, that a single Ceph cluster could be brought
down (obviously yes)? And what if you wanted to operate different storage
networks?
It feels right to build virtual
On 6/18/13 10:29 AM, Sage Weil wrote:
Derek-
Please also try the latest ceph-deploy and cuttlefish branches, which
fixed several issues with el6 distros. 'git pull' for the latest
ceph-deploy (clone from github and ./bootstrap if you were using the
package) and install with
On Tue, Jun 18, 2013 at 10:34 AM, Edward Huyer erh...@rit.edu wrote:
Hi, I’m an admin for the School of Interactive Games and Media at RIT, and
looking into using ceph to reorganize/consolidate the storage my department
is using. I’ve read a lot of documentation and comments/discussion on the
Spell check fail, that of course should have read CRUSH map.
Sent from Samsung Mobile
Original message
From: harri ha...@madshark.co.uk
Date: 18/06/2013 19:21 (GMT+00:00)
To: Gregory Farnum g...@inktank.com
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Single
On Tue, Jun 18, 2013 at 11:21 AM, harri ha...@madshark.co.uk wrote:
Thanks Greg,
The concern I have is an 'all eggs in one basket' approach to storage
design. Is it feasible, however unlikely, that a single Ceph cluster could
be brought down (obviously yes)? And what if you wanted to operate
Hi,
So the first error below is that /var/run/ceph isn't created when
installing the ceph RPM(s). This is because line 440 in
ceph.spec.in uses the %ghost directive[1] for the file install. My
reading of the behavior is that the file or directory in this
case will be included in the
[ Please stay on the list. :) ]
On Tue, Jun 18, 2013 at 12:54 PM, Edward Huyer erh...@rit.edu wrote:
First questions: Are there obvious flaws or concerns with the
following configuration I should be aware of? Does it even make sense
to try to use ceph here? Anything else I should know,
[ Please stay on the list. :) ]
Doh. Was trying to get Outlook to quote properly, and forgot to hit Reply-all.
:)
The specifics of what data will migrate where will depend on how
you've set up your CRUSH map, when you're updating the CRUSH
locations, etc, but if you move an OSD then it
I remember seeing a few reports of problems from users with strange block
device names in /dev (sdaa*, c0d1p2* etc.) and have a bug open
(http://tracker.ceph.com/issues/5345), but looking at the code I don't
immediately see the problem, and I don't have any machines that have this
problem.
On Tue, 18 Jun 2013, Derek Yarnell wrote:
Hi,
So the first error below is that /var/run/ceph isn't created when
installing the ceph RPM(s). This is because line 440 in
ceph.spec.in uses the %ghost directive[1] for the file install. My
reading of the behavior is that the file
We solved this problem at ParaScale by letting users enter any fancy
device names in the device discovery logic, so that HP servers like the
DL185, which use older Compaq RAID controllers, would work. This is common.
Best,
Cameron
--
On Tue, Jun 18, 2013 at 1:43 PM, Sage Weil s...@inktank.com
I run /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway
and get this error:
2013-06-19 09:19:55.148536 7f120aa0d820 0 librados: client.radosgw.gateway
authentication error (95) Operation not supported
2013-06-19 09:19:55.148923 7f120aa0d820 -1 Couldn't init storage provider
(RADOS)
How
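That authentication error often means librados could not load or use the client.radosgw.gateway key. A sketch of checking it, assuming the keyring location the standard radosgw setup docs use (adjust to your own paths):

# is the key known to the cluster?
ceph auth list | grep -A2 client.radosgw.gateway
# if not, create it and store it where the [client.radosgw.gateway] section of ceph.conf expects the keyring:
ceph auth get-or-create client.radosgw.gateway osd 'allow rwx' mon 'allow rw' -o /etc/ceph/keyring.radosgw.gateway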
Dear all,
I am trying to mount cephfs at 2 different mount points (each should have its
respective pool and key). While the first mount works (after using set_layout
to get it to the right pool), the second attempt failed with mount error 12 =
Cannot allocate memory. Did I miss some steps
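For comparison, the two mounts would normally differ only in the client name and secret, roughly like this (the monitor address, paths, and client names are made up):

mount -t ceph 192.168.0.10:6789:/dir1 /mnt/one -o name=clientone,secretfile=/etc/ceph/clientone.secret
mount -t ceph 192.168.0.10:6789:/dir2 /mnt/two -o name=clienttwo,secretfile=/etc/ceph/clienttwo.secret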
On 6/18/13 5:31 PM, Sage Weil wrote:
1) Remove the %ghost directive and allow RPM to install the directory.
Potentially leaving orphaned pid/state files after the package is removed.
2) Or the directory needs to be created in the %post section. If it is
created in the %post section and the
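A sketch of the second option, creating the directory in %post instead of shipping it as %ghost (a spec-file fragment, not the actual ceph.spec.in change):

%post
# create the runtime directory at install time; %ghost only tracks it so it is cleaned up on removal
mkdir -p /var/run/ceph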
On Wed, 19 Jun 2013, Derek Yarnell wrote:
On 6/18/13 5:31 PM, Sage Weil wrote:
1) Remove the %ghost directive and allow RPM to install the directory.
Potentially leaving orphaned pid/state files after the package is removed.
2) Or the directory needs to be created in the %post section.