Hello Raghavendra
Do you have your ceph cluster running?
Regards
Karan Singh
- Original Message -
From: Raghavendra Lad raghavendra_...@rediffmail.com
To: ceph-users@lists.ceph.com
Sent: Monday, 28 October, 2013 5:05:28 AM
Subject: [ceph-users] Install Guide - CEPH WITH
Not brand-new, but I've not seen it mentioned on here so far. Seagate
Kinetic essentially enables HDDs to present themselves directly over
Ethernet as Swift object storage:
http://www.seagate.com/solutions/cloud/data-center-cloud/platforms/?cmpid=friendly-_-pr-kinetic-us
If the CPUs on these
Thanks, I already had the correct ceph-deploy version, but had the flag in the
wrong place.
Solving that got me to the next problem... I get the following error:
[ceph_deploy.cli][INFO ] Invoked (1.2.7): /usr/bin/ceph-deploy install
ldtdsr02se18 --no-adjust-repos
On Mon, Oct 28, 2013 at 7:33 AM, alistair.whit...@barclays.com wrote:
Thanks, I already had the correct ceph-deploy version, but had the flag in the
wrong place.
Solving that got me to the next problem... I get the following error:
[ceph_deploy.cli][INFO ] Invoked (1.2.7):
Hello Alistair
Can you try executing ceph-deploy install ldtdsr02se18 as the root user?
Regards
Karan Singh
- Original Message -
From: Alfredo Deza alfredo.d...@inktank.com
To: alistair whittle alistair.whit...@barclays.com
Cc: ceph-users@lists.ceph.com
Sent: Monday, 28 October, 2013
I get "Error: Nothing to do" when doing this on the node itself with sudo.
-Original Message-
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Monday, October 28, 2013 12:12 PM
To: Whittle, Alistair: Investment Bank (LDN)
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users]
On Mon, Oct 28, 2013 at 8:26 AM, alistair.whit...@barclays.com wrote:
I get "Error: Nothing to do" when doing this on the node itself with sudo.
That may mean that it is already installed. Can you check if ceph is
installed and that you can move forward
with the rest of the process?
In this
Yum tells me I have the following installed on the node:
ceph-deploy.noarch : Admin and deploy tool for Ceph
ceph-release.noarch : Ceph repository configuration
I think this means ceph is NOT already installed. Interesting that ceph-deploy
is on the node as well. I only installed it on the
I've been wondering about the same thing.
Has anyone had a chance to look at the Simulator?
https://github.com/Seagate/Kinetic-Preview
On Mon, Oct 28, 2013 at 5:56 PM, ja...@peacon.co.uk wrote:
Not brand-new, but I've not seen it mentioned on here so far. Seagate
Kinetic essentially enables
Well, as I understand it Seagate has their own home-rolled thing. I
believe there was some discussion at one point about using Ceph
together with their offering, but if I remember correctly Seagate
wanted to remove RADOS and just use Ceph clients, which didn't make a
lot of sense to us.
Best
On Mon, Oct 28, 2013 at 8:37 AM, alistair.whit...@barclays.com wrote:
Yum tells me I have the following installed on the node:
ceph-deploy.noarch : Admin and deploy tool for Ceph
ceph-release.noarch : Ceph repository configuration
I think this means ceph is NOT already installed.
Mark,
Thanks a lot for the info. Hopefully this will resolve our issues going
forward!
Shain
Shain Miley | Manager of Systems and Infrastructure, Digital Media |
smi...@npr.org | 202.513.3649
From: Mark Kirkwood [mark.kirkw...@catalyst.net.nz]
Sent:
Sadly, this is already my second attempt on a clean build.
I have made more progress. I altered my ceph repo to include the repos
documented for a manual RPM build. ceph-deploy now finds the ceph package,
but then I got a number of yum dependency errors (mostly Python related). I
sorted
On Monday, October 28, 2013, wrote:
Not brand-new, but I've not seen it mentioned on here so far. Seagate
Kinetic essentially enables HDDs to present themselves directly over
Ethernet as Swift object storage:
http://www.seagate.com/solutions/cloud/data-center-
Hi Sage,
Thank you for the reply.
I tried to implement Ceph following
http://ceph.com/docs/master/start/quick-ceph-deploy/
All my servers are VMware instances. All steps work fine until the
prepare/create OSD step. I tried
ceph-deploy osd prepare ceph-node2:/tmp/osd0 ceph-node3:/tmp/osd1
and I also tried
Hi Josh,
We did map it directly to the host, and it seems to work just fine. I
think this is a problem with how the container is accessing the rbd module.
--
Kevin Weiler
IT
IMC Financial Markets | 233 S. Wacker Drive, Suite 4300 | Chicago, IL
60606 | http://imc-chicago.com/
Phone: +1
Hi all,
We have a ceph cluster that is being used as a backing store for several VMs
(windows and linux). We notice that when we reboot a node, the cluster enters a
degraded state (which is expected), but when it begins to recover, it starts
backfilling and it kills the performance of our VMs.
Hey folks - looking around, I see plenty (OK, some) on how to modify journal
size and location for older ceph, when ceph.conf was used (I think the switch
from ceph.conf to storing osd/journal config elsewhere happened with bobtail?).
I recently deployed a cluster with ceph-deploy on 0.67 and
Any answer to this question? I'm hitting almost the same issue with radosgw,
Read performance is not fine with radosgw
Regards
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
It will be updated by the end of the day today...
On Sun, Oct 27, 2013 at 7:31 PM, maoqi1982 maoqi1...@126.com wrote:
Hi list
my Ceph version is dumpling 0.67. I want to use the RGW geo-replication and
disaster recovery function; can I refer to the doc
Hi Hadi,
Can you tell me a bit about the tests you are doing and seeing poor
performance on?
Mark
On 10/28/2013 01:32 PM, hadi golestani wrote:
Any answer to this question? I'm hitting almost the same issue with radosgw,
Read performance is not fine with radosgw
Regards
I have installed apache + fastcgi as per the documentation (note: not the
ceph customized versions). I have created a user using radosgw-admin named
radosgwadmin (please see attached file radogwadmin-user.txt)
Now using S3 API, I am able to make a new request that gets authenticated,
however not
My test is so simple,
On a cluster with 3 MON, 4 OSD, 1 RGW I can't download a big file from two
different clients concurrently,
One of them will wait till the other finishes downloading it.
Regards
On Mon, Oct 28, 2013 at 10:19 PM, Mark Nelson mark.nel...@inktank.com wrote:
Hi Hadi,
Can you
I'm encountering a problem with RBD-backed Xen. During a VM boot,
pygrub attaches the VM's root VDI to dom0. This hangs with these
messages in the debug log:
Oct 27 21:19:59 xen27 kernel:
vbd vbd-51728: 16 Device in use; refusing to close
Oct 27 21:19:59 xen27 xenopsd-xenlight:
[xenops]
Sounds like an issue with your apache config. How did you install your
apache? What distribution are you running on? Are you using it as
mpm-worker? Do you have non-default radosgw settings?
Yehuda
On Mon, Oct 28, 2013 at 11:58 AM, hadi golestani
hadi.golest...@gmail.com wrote:
My test is so
Strange! I'm not sure I've actually ever seen two concurrent downloads
fail to work properly. Is there anything unusual about the setup?
Mark
On 10/28/2013 01:58 PM, hadi golestani wrote:
My test is so simple,
On a cluster with 3 MON, 4 OSD, 1 RGW I can't download a big file from
two
I'm running Ubuntu 12 on all my nodes and I've just installed every package
with default configs, like what is mentioned in the quick installation guide of
Ceph.
Anyone else experiencing the same issue?
Regards
On Mon, Oct 28, 2013 at 11:09 PM, Mark Nelson mark.nel...@inktank.com wrote:
Strange!
I'm not really an apache expert, but you could try looking at the apache
and rgw logs and see if you can trace where the 2nd request is hanging
up. Also, just to be sure, both clients can download data
independently, just not together?
Mark
On 10/28/2013 02:54 PM, hadi golestani wrote:
I'm
On 10/28/13, 11:20 AM, Abhijeet Nakhwa wrote:
I have installed apache + fastcgi as per the documentation (note: not
the ceph customized versions). I have created a user using radosgw-admin
named radosgwadmin (please see attached file radogwadmin-user.txt)
Now using S3 API, I am able to make
Hi Rzk,
Thanks for the links! I was able to add a public_network line to the config on
the admin host and push the config to the nodes with a ceph-deploy
--overwrite-conf config push rc-ceph-node1 rc-ceph-node2 rc-ceph-node3.
The bug tracker indicates that the quickstart documentation was
You can change some OSD tunables to lower the priority of backfills:
osd recovery max chunk: 8388608
osd recovery op priority: 2
In general a lower op priority means it will take longer for your
placement groups to go from degraded to active+clean; the idea is to
balance
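For reference, the two tunables above could also be set persistently in ceph.conf; this is a minimal sketch using the values quoted in the thread (adjust to taste for your cluster):

```ini
[osd]
; smaller recovery chunks and a lower op priority make backfill
; gentler on client I/O, at the cost of a longer recovery window
osd recovery max chunk = 8388608
osd recovery op priority = 2
```

Settings like these take effect on OSD restart; they can also be injected at runtime with "ceph tell osd.* injectargs" if a restart is undesirable.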
That looks like a permissions problem. I've updated the draft
document here: http://ceph.com/docs/wip-doc-radosgw/radosgw/federated-config/
On Mon, Oct 28, 2013 at 2:25 AM, lixuehui lixue...@chinacloud.com.cn wrote:
Hi all
Today I'd like to replicate one cluster with a gateway. After master
I still need to update the graphics. The update text is here:
http://ceph.com/docs/wip-doc-radosgw/radosgw/federated-config/
On Mon, Oct 28, 2013 at 11:49 AM, John Wilkins john.wilk...@inktank.com wrote:
It will be updated by the end of the day today...
On Sun, Oct 27, 2013 at 7:31 PM,
John,
I've never installed anything on Scientific Linux. Are you sure that
QEMU has RBD support?
I have some wip-doc text, which I'm going to move around shortly. You
can see the yum install requirements here:
http://ceph.com/docs/wip-doc-install/install/yum-priorities/
Raghavendra,
You can follow the link Loic provided. If you are running on
CentOS/RHEL, make sure you install QEMU with RBD support. See
http://ceph.com/docs/master/install/qemu-rpm/
Make sure your QEMU and libvirt installs are working. Then do the
integration with OpenStack.
On Mon, Oct 28,
Hi JL,
I added the public and cluster IPs to the global section manually (ceph.conf).
Yup, I couldn't find it either. Maybe they updated it at this link:
http://ceph.com/docs/master/rados/operations/add-or-rm-mons/
but it failed when I tried to add the public address using ceph-mon -i {mon-id}
--public-addr
Hi all,
I have the same problem; just curious:
could it be caused by poor HDD performance?
The read/write speed doesn't match the network speed?
Currently I'm using desktop HDDs in my cluster.
Rgrds,
Rzk
On Tue, Oct 29, 2013 at 6:22 AM, Kyle Bader kyle.ba...@gmail.com wrote:
You can change
The bobtail release added udev/upstart capabilities that allowed you
to not have per OSD entries in ceph.conf. Under the covers the new
udev/upstart scripts look for a special label on OSD data volumes,
matching volumes are mounted and then a few files are inspected:
journal_uuid and whoami
The
Hi John,
On 10/28/2013 08:55 PM, John Wilkins wrote:
John,
I've never installed anything on Scientific Linux.
SL6 is binary compatible with other EL6 varieties. This host uses the
same repos as CentOS for Dave Scott's xenserver-core technology preview:
EPEL6, Dave's XenServer + Ceph
I have a radosgw instance (ceph 0.71-299-g5cba838 src build), running on
Ubuntu 13.10. I've been experimenting with multipart uploads (which are
working fine). However while *most* objects (from radosgw perspective)
have their storage space gc'd after a while post deletion, I'm seeing
what
Maybe nothing to do with your issue, but I was having problems using librbd
with blktap, and ended up adding:
[client]
ms rwthread stack bytes = 8388608
to my config. This is a workaround, not a fix though (IMHO) as there is nothing
to indicate that librbd is running out of stack space,
The suspicious line in /var/log/debug (see the pastebin below) and that
'blkid' was the culprit keeping the device open looked like juicy clues:
kernel: vbd vbd-51728: 16 Device in use; refusing to close
Search results:
https://www.redhat.com/archives/libguestfs/2012-February/msg00023.html