rgw_admin.cc, not rgw_main.cc or rgw_code.cc.
2014-12-04 16:14 GMT+08:00 han vincent hang...@gmail.com:
rgw_admin not rgw_main.cc or rgw_code.cc.
2014-12-04 16:02 GMT+08:00 han vincent hang...@gmail.com:
I am sorry, I have made a mistake. The source file of the code is
rgw_code.cc
instead of rgw_main.cc.
2014-12-04 11:13 GMT+08:00 han vincent hang...@gmail.com:
Hello, everyone.
Hi all!
I have a Ceph installation with radosgw, and the radosgw.log in the
/var/log/ceph directory is empty.
In ceph.conf, under the [client.radosgw.gateway] section, I have:
log file = /var/log/ceph/radosgw.log
debug ms = 1
debug rgw = 20
Any ideas?
Best,
George
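One quick check, as a sketch: if the gateway is running, its admin socket can show
which log settings it actually loaded. The socket path below assumes the default
$cluster-$name naming; adjust it to your setup.

    # ask the running radosgw which log/debug values it picked up
    ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok config show \
        | grep -E 'log_file|debug_rgw|debug_ms'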
Hello - Please help me here. Where can I locate the source package?
On Tuesday, December 2, 2014 12:41 PM, lakshmi k s lux...@yahoo.com
wrote:
Hello:
I am trying to locate the source package used for Debian Wheezy for the
radosgw-agent 1.2-1-bpo70+1 that is available from the
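(A sketch of one common way to pull a Debian source package, assuming a matching
deb-src line for the repository that ships radosgw-agent is configured in
sources.list:)

    apt-get update
    apt-get source radosgw-agent   # fetches and unpacks the .dsc plus source tarballs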
2014-12-04 11:13 GMT+08:00 han vincent hang...@gmail.com:
Hello, everyone.
When I read the source code of Ceph version 0.80.1, I found that line 1646 in
Hi all,
One of the OSDs in my cluster is in the down state, and I am not able to start
the service on it.
Is there any way I can start the OSD service on it?
ems@rack2-storage-1:~$ sudo ceph osd tree
# id    weight   type name      up/down  reweight
-1      46.9     root default
-2      23.45      host
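A minimal sketch of how the OSD service is usually started on that generation of
releases; the osd id (here 3) is a placeholder, and which form applies depends on
the init system:

    # Ubuntu with upstart
    sudo start ceph-osd id=3
    # sysvinit
    sudo service ceph start osd.3
    # if it dies again, check its log
    less /var/log/ceph/ceph-osd.3.log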
Hi all,
I would like to know which tool or CLI users are using to simulate
metadata/data corruption.
This is to test the scrub operation.
-Thanks regards,
Mallikarjun Biradar
Hi,
I am wondering about running virtual environment traffic (VM - Ceph) on the
Ceph cluster network by plugging virtual hosts into this network. Is this a
good idea?
My thoughts are no, as VM - Ceph traffic would be client traffic from
Ceph's perspective.
Just want the community's
Hi!
On CentOS 6.6 I have installed Ceph and ceph-radosgw.
When I try to (re)start the ceph-radosgw service I am getting the
following:
# service ceph-radosgw restart
Stopping radosgw instance(s)...[ OK ]
Starting radosgw instance(s)...
/usr/bin/dirname: extra
AFAIK there is no tool to do this.
You simply rm the object or dd new content into the object (fill it with zeros).
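As a rough sketch of that approach (paths and names are illustrative, filestore
layout assumed): locate a copy of the object on an OSD, overwrite part of it, then
ask the PG to deep-scrub.

    # find which PG/OSDs hold the object
    ceph osd map rbd my-object
    # on one of those OSDs, clobber part of the on-disk copy (filestore path assumed)
    dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/current/<pgid>_head/<object-file> \
       bs=4k count=1 conv=notrunc
    # make scrub notice it
    ceph pg deep-scrub <pgid>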
On 04 Dec 2014, at 13:41, Mallikarjun Biradar
mallikarjuna.bira...@gmail.com wrote:
Hi all,
I would like to know which tool or cli that all users are using to simulate
Hi,
The Ceph cluster network is only useful for OSDs.
Your VMs only need access to the public network (or the client network, if you
prefer).
My cluster is also in a virtual environment. The MONs and MDS are
virtual. The OSDs are physical, of course.
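For reference, a minimal sketch of how the two networks are declared in ceph.conf
(the subnets are illustrative):

    [global]
        public network  = 192.168.10.0/24   # clients/VMs, MONs, MDS, OSD front side
        cluster network = 192.168.20.0/24   # OSD replication and heartbeat traffic only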
--
Thomas Lemarchand
Cloud Solutions SAS - Responsable des
For metadata corruption you would have to modify the object file's extended
attributes (with xattr, for example).
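A rough sketch of that, assuming a filestore OSD; the paths and the attribute name
are illustrative and vary by backend and version:

    # list the extended attributes Ceph keeps on the object file
    getfattr -d -m '.' /var/lib/ceph/osd/ceph-0/current/<pgid>_head/<object-file>
    # overwrite one of them to simulate metadata corruption
    setfattr -n user.ceph._ -v garbage /var/lib/ceph/osd/ceph-0/current/<pgid>_head/<object-file>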
--
Tomasz Kuzemko
tomasz.kuze...@ovh.net
On Thu, Dec 04, 2014 at 02:26:56PM +0100, Sebastien Han wrote:
AFAIK there is no tool to do this.
You simply rm object or dd a new content in
I was thinking the same thing for the following implementation:
I would like to have an RBD volume mounted and accessible at the same
time by different VMs (using OCFS2).
Therefore I was also thinking that I had to put the VMs on the internal
Ceph network by adding a second NIC and plugging that
I have a small update to this:
After an even closer reading of an offending PG's query, I noticed the following:
peer: 4,
pgid: 19.6e,
last_update: 51072'48910307,
last_complete: 51072'48910307,
log_tail: 50495'48906592,
The log tail seems to have lagged behind the last_update/last_complete. I
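(For reference, output like the snippet above normally comes from querying the
placement group directly, e.g.:)

    ceph pg 19.6e query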
Hi all,
Does anyone know about a list of good and bad SSD disks for OSD journals?
I was pointed to
http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
But I was looking for something more complete.
For example, I have a Samsung 840 Pro
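The linked post boils down to measuring small synchronous writes, since that is
what the journal does. A sketch of an equivalent test (destructive to /dev/sdX,
which is a placeholder):

    fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based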
Hi Eneko,
There have been various discussions on the list previously as to the best SSD
for journal use. All of them have pretty much come to the conclusion that the
Intel S3700 models are the best suited and in fact work out the cheapest in
terms of write durability.
Nick
-Original
Thanks, will look back in the list archive.
On 04/12/14 15:47, Nick Fisk wrote:
Hi Eneko,
There has been various discussions on the list previously as to the best SSD
for Journal use. All of them have pretty much come to the conclusion that the
Intel S3700 models are the best suited and in
Eneko,
I do have a plan to push a performance initiative section to ceph.com/docs
sooner or later, so people can contribute their own results through GitHub PRs.
On 04 Dec 2014, at 16:09, Eneko Lacunza elacu...@binovo.es wrote:
Thanks, will look back in the list archive.
On 04/12/14 15:47,
Hi,
Maybe this could be interesting for you:
[Qemu-devel] [RFC PATCH 3/3] virtio-blk: introduce multiread
https://www.mail-archive.com/qemu-devel@nongnu.org/msg268718.html
Currently virtio-blk doesn't support merging requests on read (I think virtio-scsi
is already doing it).
So, that means
Perhaps I'm confused about it, but what I mean is virtual host to storage
traffic, i.e. physical virtual host machines plugged into the Ceph cluster
network.
- P
On 04/12/14 13:54, Georgios Dimitrakakis wrote:
I was thinking the same thing for the following implementation:
I would like to have
On Fri, Nov 14, 2014 at 4:38 PM, Andrei Mikhailovsky and...@arhont.com
wrote:
Any other suggestions as to why several OSDs are going down on Giant and
causing IO to stall? This was not happening on Firefly.
Thanks
I had a very similar problem to yours, which started after upgrading from
Firefly to
This is the second development release since Giant. The big items
include the first batch of scrub patches from Greg for CephFS, a rework
of the librados object listing API to properly handle namespaces, and
a pile of bug fixes for RGW. There are also several smaller issues
fixed up in the
Hi Cephers,
Have any of you decided to put Giant into production instead of Firefly?
Any gotchas?
Regards
Anthony
Hello,
This morning I decided to reboot a storage node (Debian Jessie, thus 3.16
kernel and Ceph 0.80.7, HDD OSDs with SSD journals) after applying some
changes.
It came back up one OSD short; the last log lines before the reboot are:
---
2014-12-05 09:35:27.700330 7f87e789c700 2 --
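A small sketch of the usual first steps in that situation (the osd id is a
placeholder):

    # see which OSD stayed down after the reboot
    ceph osd tree | grep down
    # then look at the end of that OSD's log for the reason it did not start
    tail -n 100 /var/log/ceph/ceph-osd.<id>.log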