Hi ceph-users,
I am trying the S3-compatible API of Ceph, but have run into the following issues:
1. x-amz-acl: public-read-write
I uploaded an object with the public-read-write ACL. Then I can get this object
directly without an access key:
curl -v -s http://radosgw_server/mybucket0/20131015_1
...
HTTP/1.1 200
...
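For reference, a minimal sketch of reverting that object to private, assuming s3cmd is configured against this gateway (bucket and object names reuse the example above); an anonymous GET should then be denied:

    # Hedged sketch; assumes s3cmd is configured for the radosgw endpoint.
    s3cmd setacl --acl-private s3://mybucket0/20131015_1
    # Anonymous GET should now return 403 instead of 200:
    curl -s -o /dev/null -w "%{http_code}\n" http://radosgw_server/mybucket0/20131015_1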
hi all!
My Ceph is 0.62, and I want to build it with Hadoop.
./configure --with-hadoop
but it returns "jni.h not found".
I found jni.h at /usr/java/jdk/include/jni.h.
How can I fix this problem?
Thanks,
pengft
Hi All,
Does anyone know if it'll be possible to use the radosgw admin API when
using keystone users? I suspect not, due to the user requiring specific
caps, but it'd be great if someone could validate (I'm still running
v0.67.4 so can't play with this much).
Thanks!
-Matt
On Tue, Oct 15,
Hello,
When I set read permission for all users on the bucket, I can read only the
listing of the bucket, but I receive "access denied" for every directory
and sub-directory inside this bucket.
Where am I wrong?
Many thanks
Fabio
Hi all,
There has been some confusion over the past couple of days at the CHEP conference
during conversations about Ceph and protection from bit flips or other subtle
data corruption. Can someone please summarise the current state of data
integrity protection in Ceph, assuming we have an XFS backend?
The --with-hadoop option has been removed. The Ceph Hadoop bindings are now
located in git://github.com/ceph/hadoop-common, branch cephfs/branch-1.0, and the
required CephFS Java bindings can be built from the Ceph Git repository
using the --enable-cephfs-java configure option.
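A minimal sketch of that build, assuming the JDK path from the original report (adjust JAVA_HOME for your system); pointing CPPFLAGS at the JNI headers is a generic autotools workaround for the "jni.h not found" error:

    # Hedged sketch; JAVA_HOME is taken from the path mentioned above.
    export JAVA_HOME=/usr/java/jdk
    ./configure --enable-cephfs-java \
        CPPFLAGS="-I$JAVA_HOME/include -I$JAVA_HOME/include/linux"
    make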
On Wed, Oct 16, 2013 at
On 10/16/13 5:15 AM, Fabio - NS3 srl wrote:
Hello,
When I set read permission for all users on the bucket, I can read only the
listing of the bucket, but I receive "access denied" for every directory
and sub-directory inside this bucket.
Where am I wrong?
Hi Fabio,
This is the default S3
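A hedged sketch of one common workaround, assuming s3cmd is configured against the gateway and using hypothetical bucket/object names: in S3 semantics a bucket ACL does not automatically propagate to the objects inside it, so each object needs its own ACL:

    # Hedged sketch; bucket and key names are placeholders.
    s3cmd setacl --acl-public --recursive s3://mybucket
    # Verify an object is now anonymously readable:
    curl -sI http://radosgw_server/mybucket/some/key | head -1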
On Tue, Oct 15, 2013 at 9:54 PM, Luke Jing Yuan jyl...@mimos.my wrote:
Hi,
I am trying to install/upgrade to 1.2.7, but Ubuntu (Precise) is complaining
about an unmet dependency, python-pushy 0.5.3, which seems to be missing.
Am I correct to assume so?
That is odd, we still
Hello Ceph and LVM communities!
I noticed very slow reads from an xfs mount that is on a Ceph
client (rbd + GPT partition + LVM PV + xfs on LE).
To find the cause I created another rbd in the same pool, formatted it
straight away with xfs, and mounted it.
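A minimal sketch of that comparison setup (pool, image name, size and device node are hypothetical):

    # Hedged sketch; names and sizes are placeholders.
    rbd create testimg --pool rbd --size 10240   # 10 GB image in the same pool
    rbd map testimg --pool rbd                   # appears as e.g. /dev/rbd1
    mkfs.xfs /dev/rbd1
    mkdir -p /mnt/rbd-plain && mount /dev/rbd1 /mnt/rbd-plain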
Write performance for both xfs mounts is similar ~12MB/s
reads
On Tue, Oct 15, 2013 at 9:19 PM, Guang yguan...@yahoo.com wrote:
-bash-4.1$ which sgdisk
/usr/sbin/sgdisk
Which path does ceph-deploy use?
That is unexpected... these are the paths that ceph-deploy uses:
'/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin'
So `/usr/sbin/` is
Hi Greg,
on http://comments.gmane.org/gmane.comp.file-systems.ceph.user/1705
I found a statement from you regarding snapshots on cephfs:
---snip---
Filesystem snapshots exist and you can experiment with them on CephFS
(there's a hidden .snaps folder; you can create or remove snapshots
by
I wouldn't go so far as to say putting a VM in a file on a networked filesystem
is wrong. It is just not the best choice if you have a Ceph cluster at hand,
in my opinion. Networked filesystems have a bunch of extra stuff to implement
POSIX semantics and live in kernel space. You just need
On 10/16/13 4:26 AM, Valery Tschopp wrote:
Hi Derek,
Thanks for your example.
I've added caps='metadata=*', but I still have an error and get:
send: 'GET /admin/metadata/user?format=json HTTP/1.1\r\nHost:
objects.bcc.switch.ch\r\nAccept-Encoding: identity\r\nDate: Wed, 16 Oct
2013
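For context, a hedged sketch of granting and checking the metadata capability with radosgw-admin (the uid is hypothetical); this is the cap the /admin/metadata requests above rely on:

    # Hedged sketch; uid is a placeholder.
    radosgw-admin caps add --uid=admin_user --caps="metadata=*"
    radosgw-admin user info --uid=admin_user   # confirm the cap shows up under "caps"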
Dieter,
Creating snapshots using cephfs is quite simple...all you need to do is create
a directory (mkdir) inside the hidden '.snap' directory.
After that you can list (ls) and remove them (rm -r) just as you would any
other directory:
smiley@server1:/mnt/cephfs$ cd .snap
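A minimal sketch of the full snapshot round-trip, continuing the example above (the snapshot name is hypothetical):

    # Hedged sketch; run inside a CephFS mount.
    cd /mnt/cephfs
    mkdir .snap/before-upgrade     # create a snapshot
    ls .snap                       # list existing snapshots
    rmdir .snap/before-upgrade     # remove the snapshot entry again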
Hi,
On Wed, 16 Oct 2013, Ugis wrote:
Hello Ceph and LVM communities!
I noticed very slow reads from an xfs mount that is on a Ceph
client (rbd + GPT partition + LVM PV + xfs on LE).
To find the cause I created another rbd in the same pool, formatted it
straight away with xfs, and mounted it.
Write performance
On Wed, Oct 16, 2013 at 8:01 AM, Kasper Dieter
dieter.kas...@ts.fujitsu.com wrote:
Hi Greg,
on http://comments.gmane.org/gmane.comp.file-systems.ceph.user/1705
I found a statement from you regarding snapshots on cephfs:
---snip---
Filesystem snapshots exist and you can experiment with them
Does Ceph log corrected (or caught) silent corruption anywhere? It would be
interesting to know how much of a problem this is in a large-scale
deployment. Something to gather in the league table mentioned at the
London Ceph Day?
Just thinking out loud (please shout me down...) - if the FS itself
At CERN, we have had cases in the past of silent corruptions. It is good to be
able to identify the devices causing them and swap them out.
It's an old presentation but the concepts are still relevant today ...
http://www.nsc.liu.se/lcsc2007/presentations/LCSC_2007-kelemen.pdf
Tim
Thank you Sage for the thorough answer.
It just occurred to me to also ask about the gateway. The docs explain that one
can supply content-md5 during an object PUT (which I assume is verified by the
RGW), but does a GET respond with the ETag md5? (Sorry, I don't have a gateway
running at the
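Not an authoritative answer, but a hedged sketch of comparing the two values by hand (the object reuses the earlier public example and is assumed readable): Content-MD5 on a PUT is the base64 of the binary MD5, while for non-multipart uploads the ETag returned on GET/HEAD is normally the hex MD5 of the object:

    # Hedged sketch; object name is a placeholder.
    openssl dgst -md5 -binary ./20131015_1 | base64        # value for the Content-MD5 header
    curl -sI http://radosgw_server/mybucket0/20131015_1 | grep -i etag
    md5sum ./20131015_1                                    # hex digest to compare with the ETag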
It was long ago and Linux was very different.
With respect to today, we found quite a few cases of bad RAID cards which had
limited ECC checking on their memory; stuck bits had serious impacts given our
data transit volumes :-(
While the root causes we found in the past may be less
On Wed, 16 Oct 2013, ja...@peacon.co.uk wrote:
Does Ceph log corrected (or caught) silent corruption anywhere? It would be
interesting to know how much of a problem this is in a large-scale deployment.
Something to gather in the league table mentioned at the London Ceph Day?
It is logged, and causes
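A hedged sketch of where such events surface (the log path assumes the default monitor cluster log location):

    # Hedged sketch; paths are the defaults.
    ceph health detail                                        # inconsistent PGs show up here
    grep -i 'scrub' /var/log/ceph/ceph.log | grep -i error    # cluster log on a monitor host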
On Wed, Oct 16, 2013 at 6:12 PM, Sage Weil s...@inktank.com wrote:
3. During deep scrub of an object with 2 replicas, suppose the checksum is
different for the two objects -- which object wins? (I.e. if you store the
checksum locally, this is trivial since the consistency of objects can be
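For completeness, a hedged sketch of exercising that path by hand (the pg id is hypothetical); which copy "wins" on repair is exactly the question above, so treat repair with care:

    # Hedged sketch; 2.1f is a placeholder pg id.
    ceph pg deep-scrub 2.1f    # force a deep scrub of one placement group
    ceph health detail         # scrub errors mark the PG inconsistent
    ceph pg repair 2.1f        # ask the OSDs to repair the inconsistent PG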
Rookie question: what's the curl command / URL / steps to get an authentication
token from the cluster without using the swift debug command first?
Using the swift_key values should work, but I haven't found the right
combination / URL.
Here's what I've done:
1: Get user info from ceph cluster:
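A hedged sketch of the Swift v1-style auth call against radosgw (user, subuser and key are placeholders, and /auth/v1.0 assumes the gateway's default swift auth entry point):

    # Hedged sketch; substitute your own subuser and swift key.
    curl -i http://radosgw_server/auth/v1.0 \
         -H "X-Auth-User: testuser:swift" \
         -H "X-Auth-Key: SECRET_SWIFT_KEY"
    # The response headers include X-Auth-Token and X-Storage-Url,
    # which are then used for the actual Swift requests.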
Hi,
I have 2 x 2TB disks in 3 servers, so a total of 6 disks... I have deployed a
total of 6 OSDs,
i.e.:
host1 = osd.0 and osd.1
host2 = osd.2 and osd.3
host3 = osd.4 and osd.5
Now, since I will have a total of 3 replicas (original + 2 replicas), I want
my replica placement to be such that I don't end
Andrija,
You can use a single pool and the proper CRUSH rule
step chooseleaf firstn 0 type host
to accomplish your goal.
http://ceph.com/docs/master/rados/operations/crush-map/
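A minimal sketch of checking that the rule is in place (filenames are arbitrary):

    # Hedged sketch: decompile, inspect, recompile the CRUSH map.
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # The replicated rule should contain:
    #   step take default
    #   step chooseleaf firstn 0 type host
    #   step emit
    # which places each of the 3 replicas on a different host.
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new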
Cheers,
Mike Dawson
On 10/16/2013 5:16 PM, Andrija Panic wrote:
Hi,
I have 2 x 2TB disks, in 3 servers, so
On 10/15/2013 08:56 PM, Blair Bethwaite wrote:
Date: Wed, 16 Oct 2013 16:06:49 +1300
From: Mark Kirkwood mark.kirkw...@catalyst.net.nz
To: Wido den Hollander w...@42on.com, ceph-users@lists.ceph.com
I configured my cluster using the default journal location for my osds. Can I
migrate the default journals to explicit separate devices without a complete
cluster teardown and reinstallation? How?
Thanks,
Tim
On Wed, 16 Oct 2013, Snider, Tim wrote:
I configured my cluster using the default journal location for my osds. Can
I migrate the default journals to explicit separate devices without a
complete cluster teardown and reinstallation? How?
- stop a ceph-osd daemon, then
- ceph-osd --flush-journal
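A hedged sketch of the usual sequence for one OSD (osd.0 and the journal device are placeholders; adjust paths and init commands for your setup):

    # Hedged sketch; device and OSD id are hypothetical.
    service ceph stop osd.0
    ceph-osd -i 0 --flush-journal                        # flush and close the old journal
    ln -sf /dev/sdX1 /var/lib/ceph/osd/ceph-0/journal    # point at the new journal device
    ceph-osd -i 0 --mkjournal                            # initialize the new journal
    service ceph start osd.0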
Well, nice one :)
*step chooseleaf firstn 0 type host* - it is part of the default crush map
(3 hosts, 2 OSDs per host).
Does it mean: write 3 replicas (in my case) to 3 hosts, and randomly select an
OSD on each host?
I have already read all the docs... and am still not sure how to proceed...
On 16
Hello Michael,
Thanks for the reply. It seems like Ceph isn't actually mounting the rbd
on the VM host, which is where I think I was getting hung up (I had
previously been attempting to mount rbds directly on multiple hosts and, as
you can imagine, having issues).
Could you possibly expound on why
Hi,
I have gotten so close to having Ceph work in my cloud, but I have reached a
roadblock. Any help would be greatly appreciated.
I receive the following error when trying to get KVM to run a VM with an RBD
volume:
Libvirtd.log:
2013-10-16 22:05:15.516+0000: 9814: error :
I stumbled across this today:
4 osds on 4 hosts (names ceph1 - ceph4). They are KVM guests (this is a
play setup).
- ceph1 and ceph2 each have a 5G volume for osd data (+ 2G vol for journal)
- ceph3 and ceph4 each have a 10G volume for osd data (+ 2G vol for journal)
I do a standard
I may be wrong, but I always thought that a weight of 0 means don't put
anything there. All weights > 0 will be looked at proportionally.
See http://ceph.com/docs/master/rados/operations/crush-map/ which recommends
higher weights anyway:
Weighting Bucket Items
Ceph expresses bucket weights
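A hedged sketch of assigning non-zero, proportional weights (the docs' convention is roughly 1.0 per TB, so these tiny test volumes get tiny weights; the osd ids come from the setup above):

    # Hedged sketch; weights scaled as ~1.0 per TB.
    ceph osd crush reweight osd.0 0.005   # 5G data volume
    ceph osd crush reweight osd.2 0.010   # 10G data volume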
On Thu, 17 Oct 2013, Mark Kirkwood wrote:
I stumbled across this today:
4 osds on 4 hosts (names ceph1 - ceph4). They are KVM guests (this is a play
setup).
- ceph1 and ceph2 each have a 5G volume for osd data (+ 2G vol for journal)
- ceph3 and ceph4 each have a 10G volume for osd data (+
Performing yum updates on Fedora 19 now breaks qemu.
There is a different set of package names and contents between the
default Fedora ceph packages and the ceph.com packages.
There is no ceph-libs package in the ceph.com repository, and qemu now
enforces its dependency on ceph-libs.
Yum update now