Hi Yehuda,
Here are my findings after creating two buckets:
my-bucket-test-in-main in US-Standard and my-bucket-test-in-eu in
the EU region.
1. AWS throws a 409 Conflict with error code BucketAlreadyOwnedByYou
when I try to recreate either of the two buckets in the EU region.
2. Throws 200 OK
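For anyone who wants to reproduce this, here is a minimal boto3 sketch
(region name and credentials setup are assumptions; the bucket name
matches the test above):

    import boto3
    from botocore.exceptions import ClientError

    # Illustrative client; assumes credentials are configured elsewhere.
    s3 = boto3.client("s3", region_name="eu-west-1")

    try:
        # Re-creating a bucket we already own in the EU region should
        # reproduce finding 1: a 409 with code BucketAlreadyOwnedByYou.
        s3.create_bucket(
            Bucket="my-bucket-test-in-eu",
            CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
        )
    except ClientError as e:
        print(e.response["Error"]["Code"])  # BucketAlreadyOwnedByYou

(US-Standard is the documented exception: re-creating a bucket you
already own there returns 200 OK, which matches finding 2.)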
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Have you configured and enabled the epel repo?
--
Robert LeBlanc
GPG Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Thu, Jun 11, 2015 at 6:26 AM, Shambhu Rajak wrote:
I am trying to install Ceph Giant on RHEL
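If EPEL is not enabled, the usual route on RHEL 7 is to install the
epel-release package from the Fedora mirror and check that the repo
shows up (adjust the URL for your RHEL major version):

    sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    yum repolist enabled | grep -i epel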
----- Original Message -----
From: Abhishek Dixit dixita...@gmail.com
To: ceph-devel ceph-devel@vger.kernel.org
Sent: Thursday, June 11, 2015 8:28:14 AM
Subject: Multipart Upload : Limit on no of parts
Hi,
I was looking into the limit imposed on the number of parts for a multipart upload.
The way
I don't see a compelling reason to change our current behaviour. The fact
that Amazon itself is inconsistent makes me think that it is just an artifact
of their architecture, rather than a carefully designed API.
Yehuda
----- Original Message -----
From: Harshal Gupta
This development release is delayed a bit due to tooling changes in the
build environment. As a result the next one (v9.0.2) will have a bit more
work than is usual.
Highlights here include lots of RGW Swift fixes, RBD feature work
surrounding the new object map feature, more CephFS snapshot
This Hammer point release fixes a few critical bugs in RGW that can
prevent objects starting with underscore from behaving properly and that
prevent garbage collection of deleted objects when using the Civetweb
standalone mode.
All v0.94.x Hammer users are strongly encouraged to upgrade, and
Hi,
The ceph-deploy suse back-end currently pulls down and installs packages
from ceph.com. As we move to shipping Ceph with our own SES / openSUSE
media, I'd like to allow users to also install from the distribution
repositories using upstream ceph-deploy.
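For what it's worth, upstream ceph-deploy already has a flag that skips
its own repo setup and installs from whatever repositories are configured
on the target node, which may be a useful starting point:

    ceph-deploy install --no-adjust-repos <node>

(Flag name taken from ceph-deploy's install options; verify against the
version you ship.)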
Use of bundled Red Hat distribution
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
One feature we would like is an rbd top command that would be like
top, but show usage of RBD volumes so that we can quickly identify
high demand RBDs.
Since I haven't done any programming for Ceph, I'm trying to think
through the best way to
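One possible shape for such a tool, sketched in Python. Note that
get_image_stats() is entirely hypothetical; it stands in for whatever
per-image counter interface Ceph would need to expose (nothing per-RBD
exists today). The point is just the top-style sampling loop:

    import time

    def get_image_stats():
        """Hypothetical helper: returns {image: (read_ops, write_ops)}
        as cumulative counters, from whatever interface Ceph exposes."""
        raise NotImplementedError

    def rbd_top(interval=5, count=10):
        prev = get_image_stats()
        while True:
            time.sleep(interval)
            cur = get_image_stats()
            # rate = delta of the cumulative counters over the window
            rates = {}
            for img, (rd, wr) in cur.items():
                if img in prev:
                    prd, pwr = prev[img]
                    rates[img] = ((rd - prd) / interval,
                                  (wr - pwr) / interval)
            top = sorted(rates.items(),
                         key=lambda kv: kv[1][0] + kv[1][1], reverse=True)
            for img, (rps, wps) in top[:count]:
                print("%-30s %8.1f rd/s %8.1f wr/s" % (img, rps, wps))
            prev = cur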
I don't see a compelling reason to change our current behaviour. The fact
that Amazon itself is inconsistent makes me think that it is just an artifact
of their architecture, rather than a carefully designed API.
Interesting.. if we look at the Amazon S3 FAQ we can understand the
Hi Mehdi,
I tried:

#cloud-config
manage_resolv_conf: true
resolv_conf:
  nameservers: ['8.8.4.4', '8.8.8.8']

but it had no effect, and according to /var/log/cloud-init.log it does not
seem to be taken into account.
It looks like this is still an open issue according to
----- Original Message -----
From: Kyle Bader kyle.ba...@gmail.com
To: Yehuda Sadeh-Weinraub yeh...@redhat.com
Cc: Harshal Gupta harshal.gupta...@gmail.com, ceph-devel
ceph-devel@vger.kernel.org
Sent: Thursday, June 11, 2015 2:22:56 PM
Subject: Re: Duplicate bucket creation Response in
Hi Alexandre,
I agree with your rationale of one iothread per disk. The CPU time
consumed in iowait is pretty high in each VM, but I am not finding a
way to set this on a Nova instance. I am using OpenStack Juno with
QEMU+KVM. As per the libvirt documentation for setting iothreads, I can edit
domain.xml
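For reference, the libvirt side (libvirt >= 1.2.8 with QEMU >= 2.1, if I
remember right) looks roughly like the fragment below; values are
illustrative. The catch with Nova is that it regenerates the guest XML,
so hand edits to domain.xml won't survive:

    <domain type='kvm'>
      <!-- pool of IO threads available to this guest's disks -->
      <iothreads>2</iothreads>
      <devices>
        <disk type='network' device='disk'>
          <!-- pin this disk's IO onto iothread 1 -->
          <driver name='qemu' type='raw' cache='writeback' iothread='1'/>
        </disk>
      </devices>
    </domain>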
Looking at resolvconf cloud-init src:
https://github.com/number5/cloud-init/blob/74e61ab27addbfcceac4eba254f739ef9964b0ed/cloudinit/config/cc_resolv_conf.py
# As Debian/Ubuntu will, by default, utilize resolvconf, and similarly
# RedHat will use sysconfig, this module is likely to be of
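In other words, the module is gated on the distro. If I recall correctly,
cloud-init logs a "skipping unverified module" style message in that case
and will run the module anyway if it is listed under unverified_modules,
so something like the sketch below may be worth trying (whether this can
come from user-data or must live in /etc/cloud/cloud.cfg is another thing
to verify):

    #cloud-config
    # Assumption: unverified_modules lets distro-gated modules run anyway.
    unverified_modules: ['resolv_conf']
    manage_resolv_conf: true
    resolv_conf:
      nameservers: ['8.8.4.4', '8.8.8.8']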
Hi Loic,
I've been playing with cloud-init a lot lately, and I could never get
the resolv_conf module working either (with the configdrive datasource).
Finally, I managed it with this configdrive:
/latest/meta_data.json

{
  "uuid": "c5240fed-76a8-48d9-b417-45b46599d999",
  "network_config": { "content_path":
Hi Greg,
Please find responses inline.
With regards,
Shishir
-----Original Message-----
From: Gregory Farnum [mailto:g...@gregs42.com]
Sent: Wednesday, June 10, 2015 12:17 PM
To: Shishir Gowda
Cc: ceph-devel@vger.kernel.org
Subject: Re: Ceph tier’ing enhancements blue print for jewel
On
Hi,
I was looking into the limit imposed on the number of parts for a multipart upload.
The way it currently works seems to be as below:
1. RGWCompleteMultipart::get_params does the following:
#define COMPLETE_MULTIPART_MAX_LEN (1024 * 1024) /* api defines max 10,000 parts */
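For context, S3 documents a ceiling of 10,000 parts per multipart upload
(part numbers 1-10000), which is what the define above sizes the
CompleteMultipartUpload XML buffer around. The client-side flow, as a
boto3 sketch (endpoint and bucket names are illustrative):

    import boto3

    # Illustrative endpoint; point this at your RGW instance.
    s3 = boto3.client("s3", endpoint_url="http://rgw.example.com")

    mpu = s3.create_multipart_upload(Bucket="test-bucket", Key="big-object")
    parts = []
    for num in range(1, 3):  # part numbers may run from 1 up to 10000
        resp = s3.upload_part(
            Bucket="test-bucket", Key="big-object",
            UploadId=mpu["UploadId"], PartNumber=num,
            Body=b"x" * (5 * 1024 * 1024),  # all but the last part >= 5 MiB
        )
        parts.append({"PartNumber": num, "ETag": resp["ETag"]})

    s3.complete_multipart_upload(
        Bucket="test-bucket", Key="big-object", UploadId=mpu["UploadId"],
        MultipartUpload={"Parts": parts},
    )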