- Original Message -
> From: "Adam C. Emerson"
> To: "Graham Allan"
> Cc: "Ceph Users"
> Sent: Thursday, July 13, 2017 1:23:27 AM
> Subject: Re: [ceph-users] Bucket policies in Luminous
>
> Graham Allan wrote:
> > I thought
On Mon, Jul 10, 2017 at 3:41 PM, Maged Mokhtar wrote:
> On 2017-07-10 20:06, Mohamad Gebai wrote:
>
>
> On 07/10/2017 01:51 PM, Jason Dillaman wrote:
>
> On Mon, Jul 10, 2017 at 1:39 PM, Maged Mokhtar wrote:
>
> These are significant differences, to
Hi,
I have a single-node Ceph cluster. After a power failure, the mds is
stuck replaying the logs and cephfs stops working.
ceph -s gives:
health HEALTH_WARN
mds cluster is degraded
noscrub,nodeep-scrub flag(s) set
monmap e4: 1 mons at
Sorry meant to include the list.
-- Forwarded message --
From: Brad Hubbard
Date: Wed, Jul 12, 2017 at 9:12 PM
Subject: Re: [ceph-users] installing specific version of ceph-common
To: Buyens Niels
On Wed, Jul 12, 2017 at 8:14 PM,
On Tue, Jul 11, 2017 at 12:22 AM, Marc Roos wrote:
>
>
> Is it possible to change the cephfs metadata pool? I would like to
> lower the PG count, and thought about just making a new pool, copying the
> pool and then renaming them. But I guess cephfs works with the pool id
>
On Wed, Jul 12, 2017 at 11:31 AM, Dan van der Ster wrote:
> On Wed, Jul 12, 2017 at 5:51 PM, Abhishek L
> wrote:
>> On Wed, Jul 12, 2017 at 9:13 PM, Xiaoxi Chen wrote:
>>> +However, it also introduced a regression that
I'm not sure if it is the blank tenant - I should have thought to try
before writing, but I added a new user which does have a tenancy, and I
get the same issue.
policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS":
Graham Allan wrote:
> I thought I'd try out the new bucket policy support in Luminous. My goal
> was simply to permit access on a bucket to another user.
[snip]
> Thanks for any ideas,
It's probably the 'blank' tenant. I'll make up a test case to exercise
this and come up with a patch for it.
Hi Sage,
The automated tool Cepheus https://github.com/cepheus-io/cepheus does this
with ceph-chef. It's based on json data for a given environment. It uses
Chef and Ansible. If someone wanted to break out the haproxy (ADC) portion
into a package then it has a good model for HAProxy they could
On Wed, 12 Jul 2017, Patrick Donnelly wrote:
> On Wed, Jul 12, 2017 at 11:29 AM, Sage Weil wrote:
> > In the meantime, we can also avoid making the problem worse by requiring
> > that all pull requests include any relevant documentation updates. This
> > means (1) helping
I thought I'd try out the new bucket policy support in Luminous. My goal
was simply to permit access on a bucket to another user.
I have 2 users, "gta" and "gta2", both of which are in the default ("")
tenant. "gta" also owns the bucket named "gta". I want to grant access
on this bucket to
On Wed, Jul 12, 2017 at 11:29 AM, Sage Weil wrote:
> In the meantime, we can also avoid making the problem worse by requiring
> that all pull requests include any relevant documentation updates. This
> means (1) helping educate contributors that doc updates are needed, (2)
>
On Wed, Jul 12, 2017 at 5:51 PM, Abhishek L
wrote:
> On Wed, Jul 12, 2017 at 9:13 PM, Xiaoxi Chen wrote:
>> +However, it also introduced a regression that could cause MDS damage.
>> +Therefore, we do *not* recommend that Jewel users upgrade
We have a fair-sized list of documentation items to update for the
luminous release. The other day when I started looking through what is
there now, though, I was also immediately struck by how out of date much
of the content is. In addition to addressing the immediate updates for
luminous,
> -Original Message-
> From: Nick Fisk [mailto:n...@fisk.me.uk]
> Sent: 12 July 2017 13:47
> To: 'Ilya Dryomov'
> Cc: 'Ceph Users'
> Subject: RE: [ceph-users] Kernel mounted RBD's hanging
>
> > -Original Message-
> > From: Nick
Hi!
I have installed Ceph using ceph-deploy.
The Ceph Storage Cluster setup includes these nodes:
ld4257 Monitor0 + Admin
ld4258 Monitor1
ld4259 Monitor2
ld4464 OSD0
ld4465 OSD1
Ceph Health status is OK.
However, I cannot mount Ceph FS.
When I enter this command on ld4257
mount -t ceph
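[A minimal sketch of the full kernel mount command, assuming ld4257 is a
monitor on the default port and the admin keyring is used; the mount point
/mnt/cephfs and the secret file path are made up for illustration:]

mount -t ceph ld4257:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret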
We use a puppet module to deploy them. We give it devices to
configure from hiera data specific to our different types of storage
nodes. The module is a fork from
https://github.com/openstack/puppet-ceph.
Ultimately the module ends up running 'ceph-disk prepare [arguments]
/dev/mapper/mpathXX
Understood, thanks Abhishek.
So 10.2.9 will not be another release cycle but just 10.2.8 plus the mds fix,
and is expected to be out soon, right?
2017-07-12 23:51 GMT+08:00 Abhishek L :
> On Wed, Jul 12, 2017 at 9:13 PM, Xiaoxi Chen wrote:
>>
Hi Ben,
Thanks for this, much appreciated.
Can I just check: Do you use ceph-deploy to create your OSDs? E.g.:
ceph-deploy disk zap ceph-sn1.example.com:/dev/mapper/disk1
ceph-deploy osd prepare ceph-sn1.example.com:/dev/mapper/disk1
Best wishes,
Bruno
-Original Message-
From:
Yup, already working on fixing the client, but it seems like a potentially nasty
issue for RGW, as a malicious client could DoS an endpoint pretty
easily this way.
Aaron
> On Jul 12, 2017, at 11:48 AM, Jens Rosenboom wrote:
>
> 2017-07-12 15:23 GMT+00:00 Aaron
On Wed, Jul 12, 2017 at 9:13 PM, Xiaoxi Chen wrote:
> +However, it also introduced a regression that could cause MDS damage.
> +Therefore, we do *not* recommend that Jewel users upgrade to this version -
> +instead, we recommend upgrading directly to v10.2.9 in which the
2017-07-12 15:23 GMT+00:00 Aaron Bassett :
> I have a situation where a client is GET'ing a large key (100GB) from RadosGW
> and just reading the first few bytes to determine if it's a gzip file or not,
> and then just moving on without closing the connection. I'm
+However, it also introduced a regression that could cause MDS damage.
+Therefore, we do *not* recommend that Jewel users upgrade to this version -
+instead, we recommend upgrading directly to v10.2.9 in which the regression is
+fixed.
It looks like this version is NOT production ready. Curious
I have a situation where a client is GET'ing a large key (100GB) from RadosGW
and just reading the first few bytes to determine if it's a gzip file or not,
and then just moving on without closing the connection. RadosGW then goes
on to read the rest of the object out of the cluster, while
Make sure to test that stuff. I've never had to modify the min_size on an
EC pool before.
On Wed, Jul 12, 2017 at 11:12 AM Jake Grimmett
wrote:
> Hi David,
>
> put that way, the docs make complete sense, thank you!
>
> i.e. to allow writing to a 5+2 EC cluster with one
Hi David,
put that way, the docs make complete sense, thank you!
i.e. to allow writing to a 5+2 EC cluster with one node down:
default is:
# ceph osd pool get ecpool min_size
min_size: 7
to tolerate one node failure, set:
# ceph osd pool set ecpool min_size 6
set pool 1 min_size to 6
to
SOLVED! S3-style subdomains work now!
In summary, to cut over from apache to civetweb without breaking other sites
on the same domain, here are the changes that worked for me:
/etc/ceph/ceph.conf:
# FASTCGI SETTINGS
#rgw socket path = ""
#rgw print continue = false
#rgw frontends = fastcgi
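[The enabled civetweb settings are cut off above; a sketch of what that section
of ceph.conf typically looks like, with the port and DNS name as assumptions:]

# CIVETWEB SETTINGS
rgw frontends = civetweb port=7480
# needed for S3-style subdomain (virtual-hosted) bucket access:
rgw dns name = s3.example.com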
The lack of communication on this makes me hesitant to upgrade to it. Are
the packages available for Ubuntu/Debian systems production ready and
intended for upgrades?
On Tue, Jul 11, 2017 at 8:33 PM Brad Hubbard wrote:
> On Wed, Jul 12, 2017 at 12:58 AM, David Turner
Hi Trilliams,
Sounds good, I bet that would be popular among OpenStack new-comers.
I'm interested in seeing some content that looks forward at the next
couple of years of server and storage hardware roadmaps alongside new
and upcoming Ceph features (e.g. BlueStore, EC overwrite support) to
Hi Greg,
On 12 July 2017 at 03:48, Gregory Farnum wrote:
> I poked at Patrick about this and it sounds like the venue is a little
> smaller than usual (and community planning is a little less
> planned-out for those ranges than usual) so things are still up in the
> air. :/
As long as you have the 7 copies online when you're using 7+2, you can
still write to and read from the EC pool. For an EC pool, size is effectively 9
and min_size is 7.
I have a 3 node cluster with 2+1 and I can restart 1 node at a time with
host failure domain.
On Wed, Jul 12, 2017, 6:34 AM Jake
Dear All,
Quick question; is it possible to write to a degraded EC pool?
i.e. is there an equivalent to this setting for a replicated pool..
osd pool default size = 3
osd pool default min size = 2
My reason for asking is that it would be nice if we could build an EC
7+2 cluster, and actively
I tried installing librados2-10.2.7 separately first (which worked). Then
trying to install ceph-common-10.2.7 again:
Error: Package: 1:ceph-common-10.2.7-0.el7.x86_64 (Ceph)
Requires: librados2 = 1:10.2.7-0.el7
Removing: 1:librados2-10.2.7-0.el7.x86_64 (@Ceph)
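[One common workaround, sketched under the assumption that the 10.2.7 packages
are still present in the repo: install ceph-common together with the matching
versions of its library dependencies in a single yum transaction (exact
dependency package names may vary), e.g.

  yum install ceph-common-10.2.7-0.el7 librados2-10.2.7-0.el7 \
      librbd1-10.2.7-0.el7 python-rados-10.2.7-0.el7
]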
On Wed, Jul 12, 2017 at 6:19 PM, Buyens Niels wrote:
> Hello,
>
>
> When trying to install a specific version of ceph-common when a newer
> version has been released, the installation fails.
>
>
> I have an environment running version 10.2.7 on CentOS 7. Recently, 10.2.8
>
On 11/07/17 20:05, Eino Tuominen wrote:
> Hi Richard,
>
> Thanks for the explanation, that makes perfect sense. I've missed the
> difference between ceph osd reweight and ceph osd crush reweight. I have to
> study that better.
>
> Is there a way to get ceph to prioritise fixing degraded
Oh, correcting myself. When HTTP proxying, Apache translates the Host header to
whatever was specified in the ProxyPass line, so your civetweb server is
receiving requests with Host headers for localhost! Presumably for the fcgi
protocol it works differently. Nonetheless ProxyPreserveHost should
Best guess: Apache is munging together everything it picks up using the aliases
and translating the host to the ServerName before passing on the request. Try
setting ProxyPreserveHost on as per
https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypreservehost ?
Rich
On 11/07/17 21:47,
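[A sketch of the suggested change in the relevant Apache vhost; the domain,
backend port and ProxyPass target are assumptions, not taken from the thread:]

<VirtualHost *:80>
    ServerName s3.example.com
    ServerAlias *.s3.example.com
    # keep the original Host header so civetweb sees the S3-style subdomain
    ProxyPreserveHost On
    ProxyPass / http://localhost:7480/
    ProxyPassReverse / http://localhost:7480/
</VirtualHost>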
Hello,
When trying to install a specific version of ceph-common after a newer version
has been released, the installation fails.
I have an environment running version 10.2.7 on CentOS 7. Recently, 10.2.8 has
been released to the repos.
Trying to install version 10.2.7 will fail because it
Is this planned to be merged into Luminous at some point?
,Ashley
From: Gregory Farnum [mailto:gfar...@redhat.com]
Sent: Tuesday, 6 June 2017 2:24 AM
To: Ashley Merrick ; ceph-us...@ceph.com
Cc: David Zafman
Subject: Re: [ceph-users] PG Stuck EC Pool