: 54278 objects, 71724 MB
usage: 121 GB used, 27820 GB / 27941 GB avail
pgs: 372 active+clean
1. http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/#replacing-an-osd
On Wed, Aug 2, 2017 at 11:08 AM Roger Brown <rogerpbr...@gmail.com> wrote:
> Hi,
>
>
; wrote:
>>>
>>>> All those pools should have been auto-marked as owned by rgw though. We
>>>> do have a ticket around that (http://tracker.ceph.com/issues/20891)
>>>> but so far it's just confusing.
>>>> -Greg
>>>>
>
pplication enable" command. For more details see
> "Associate Pool to Application" in the documentation.
>
> It is always a good idea to read the release notes before upgrading to a
> new version of Ceph.
>
> On Fri, Aug 4, 2017 at 10:29 AM Roger Brown <roger
Is this something new in Luminous 12.1.2, or did I break something? Stuff
still seems to function despite the warnings.
$ ceph health detail
POOL_APP_NOT_ENABLED application not enabled on 14 pool(s)
application not enabled on pool 'default.rgw.buckets.non-ec'
application not enabled on
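The warnings above can be cleared per pool with the "application enable" command mentioned earlier in the thread. A minimal sketch, assuming the affected pools are rgw pools (the loop only echoes each command so you can review before running):

```shell
# Echo (rather than execute) the enable command for each pool named in the
# health warning; remove "echo" to actually apply. Pool names are illustrative.
for pool in default.rgw.buckets.non-ec default.rgw.buckets.data; do
  echo ceph osd pool application enable "$pool" rgw
done
```

For CephFS or RBD pools, the application tag would be cephfs or rbd instead of rgw.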
Woops, nvm my last. My eyes deceived me.
On Fri, Aug 4, 2017 at 8:21 AM Roger Brown <rogerpbr...@gmail.com> wrote:
> Did you really mean to say "increase this value to 20 TB from 1 TB"?
>
>
> On Fri, Aug 4, 2017 at 7:28 AM Rhian Resnick <rresn...@fau.edu> wr
Did you really mean to say "increase this value to 20 TB from 1 TB"?
On Fri, Aug 4, 2017 at 7:28 AM Rhian Resnick wrote:
> Morning,
>
>
> We ran into an issue with the default max file size of a cephfs file. Is
> it possible to increase this value to 20 TB from 1 TB without
requires a
> working manager daemon. Did you set one up yet?
>
> On Thu, Aug 3, 2017 at 7:31 AM Roger Brown <rogerpbr...@gmail.com> wrote:
>
>> I'm running Luminous 12.1.2 and I seem to be in a catch-22. I've got pgs
>> that report they need to be scrubbed, however the comm
I'm running Luminous 12.1.2 and I seem to be in a catch-22. I've got pgs
that report they need to be scrubbed, however the command to scrub them
seems to have gone away. The flapping OSD is an issue for another thread.
Please advise.
Example:
roger@desktop:~$ ceph --version
ceph version 12.1.2
Hi,
My OSDs were continuously crashing in cephx_verify_authorizer() while on
Luminous v12.1.0 and v12.1.1, but the crashes stopped once I upgraded to
v12.1.2.
Now however, one of my OSDs is continuing to crash. Looking closer, the
crash reason is different and started with v12.1.1.
I've
ail on ceph-dev earlier.
>
> > On Thu, Jul 20, 2017 at 1:02 PM Roger Brown <rogerpbr...@gmail.com>
> wrote:
> ...
> >> Representative example from osd1 logs:
> >> Jul 20 13:42:18 osd1 ceph-osd[4035]: *** Caught signal (Segmentation
> >> fault) **
> &
I could be wrong, but I think you cannot achieve this objective. If you
declare a cluster network, OSDs will route heartbeat, object replication
and recovery traffic over the cluster network. We prefer that the cluster
network is NOT reachable from the public network or the Internet for added
I had the same issue on Luminous and worked around it by disabling ceph-disk.
The OSDs can start without it.
On Thu, Jul 27, 2017 at 3:36 PM Oscar Segarra
wrote:
> Hi,
>
> First of all, my version:
>
> [root@vdicnode01 ~]# ceph -v
> ceph version 12.1.1
nt $1}' | while read i; do
> ceph pg deep-scrub ${i}; done
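The loop quoted above is cut off before the awk expression; it appears to iterate over PG ids and deep-scrub each one. A hedged reconstruction of the pattern, simulated here with a fixed PG list (the real input would come from something like `ceph pg dump`, and "echo" keeps it a dry run):

```shell
# Simulated PG ids stand in for something like:
#   ceph pg dump pgs_brief 2>/dev/null | awk '/active/ {print $1}'
printf '1.0\n1.1\n2.3\n' | while read -r i; do
  echo ceph pg deep-scrub "${i}"   # drop "echo" to issue the scrub for real
done
```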
>
>
>
> --
>
> Petr Malkov
>
>
>
> -
>
> Message: 57
>
> Date: Wed, 19 Jul 2017 16:38:20 +
>
> From: Roger Brown <rogerpbr...@gmail.com>
>
> T
I hope someone else can answer your question better, but in my case I found
something like this helpful to delete objects faster than I could through
the gateway:
rados -p default.rgw.buckets.data ls | grep 'replace this with pattern
matching files you want to delete' | xargs -d '\n' -n 200 rados
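The pipeline above is truncated after the final `rados`; presumably it ends in a removal subcommand. A runnable sketch of the same grep-and-batch pattern using a simulated object listing (the trailing `rados ... rm` is an assumption about the cut-off part, and "echo" keeps it a dry run):

```shell
# Simulated listing stands in for: rados -p default.rgw.buckets.data ls
# Objects matching the pattern are removed in batches of 200.
printf 'keep-1\nvictim-a\nvictim-b\nkeep-2\n' \
  | grep '^victim-' \
  | xargs -d '\n' -n 200 echo rados -p default.rgw.buckets.data rm
```

The `-d '\n'` (GNU xargs) keeps object names with spaces intact, and `-n 200` limits each rados invocation to 200 objects.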
The method I have used is to 1) edit ceph.conf, 2) use ceph-deploy config
push, 3) restart monitors
Example:
roger@desktop:~/ceph-cluster$ vi ceph.conf   # make ceph.conf change
roger@desktop:~/ceph-cluster$ ceph-deploy --overwrite-conf config push
nuc{1..3}
[ceph_deploy.conf][DEBUG ] found
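The three steps above (edit, push, restart) can be sketched end to end. Hostnames are the ones from this thread, the mon unit names are an assumption, and commands are echoed for review rather than executed:

```shell
# Hypothetical sketch: push the edited ceph.conf to each mon host, then
# restart its monitor daemon. Remove "echo" to apply for real.
for host in nuc1 nuc2 nuc3; do
  echo ceph-deploy --overwrite-conf config push "$host"
  echo ssh "$host" sudo systemctl restart "ceph-mon@$host"
done
```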
sudo ceph auth get-or-create client.bootstrap-mgr mon 'allow profile
> bootstrap-mgr' > /var/lib/ceph/bootstrap-mgr/ceph.keyring
>
> or something similar - i.e better fix up that file to match the new key!
>
> Cheers
>
> Mark
>
>
> On 24/07/17 12:26, Roger Brown
;
wrote:
> Hmmm, not seen that here.
>
> From the error message it does not seem to like
> /var/lib/ceph/bootstrap-mgr/ceph.keyring - what does the contents of
> that look like?
>
> regards
>
> Mark
> On 24/07/17 03:09, Roger Brown wrote:
> > Mark,
> >
> &
Mark,
Thanks for that information. I can't seem to deploy ceph-mgr either. I also
have the busted mgr bootstrap key. I attempted the suggested fix, but my
issue may be different somehow. Complete output follows.
-Roger
roger@desktop:~$ ceph-deploy --version
1.5.38
roger@desktop:~$ ceph mon
I'm on Luminous 12.1.1 and noticed I have flapping OSDs. Even with `ceph
osd set nodown`, the OSDs will catch signal Aborted and sometimes
Segmentation fault 2-5 minutes after starting. I verified hosts can talk to
each other on the cluster network. I've rebooted the hosts. I'm running out
of
So I disabled ceph-disk and will chalk it up as a red herring to ignore.
On Thu, Jul 20, 2017 at 11:02 AM Roger Brown <rogerpbr...@gmail.com> wrote:
> Also I'm just noticing osd1 is my only OSD host that even has an enabled
> target for ceph-disk (ceph-disk@dev-sdb2.service).
&g
active active ceph target allowing to
start/stop all ceph-radosgw@.service instances at once
ceph.target loaded active active ceph target allowing to
start/stop all ceph*@.service instances at once
On Thu, Jul 20, 2017 at 10:23 AM Roger Brown <rogerpbr...@gmail.com> wrote:
> I thi
I think I need help with some OSD trouble. OSD daemons on two hosts started
flapping. At length, I rebooted host osd1 (osd.3), but the OSD daemon still
fails to start. Upon closer inspection, ceph-disk@dev-sdb2.service is
failing to start due to, "Error: /dev/sdb2 is not a block device"
This is
What's the trick to overcoming unsupported features error when mapping an
erasure-coded rbd? This is on Ceph Luminous 12.1.1, Ubuntu Xenial, Kernel
4.10.0-26-lowlatency.
Steps to replicate:
$ ceph osd pool create rbd_data 32 32 erasure default
pool 'rbd_data' created
$ ceph osd pool set rbd_data
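On Luminous, an erasure-coded pool can back an RBD image only as a data pool, with a replicated pool holding the image metadata, and the EC pool needs overwrites enabled first. A hedged dry-run sketch of that workflow (pool name from the thread, image name assumed; remove "echo" to apply):

```shell
# Allow partial overwrites on the EC pool, then create an image whose data
# objects live there while metadata stays in the replicated "rbd" pool.
echo ceph osd pool set rbd_data allow_ec_overwrites true
echo rbd create --size 1G --data-pool rbd_data rbd/myimage
```

Note the kernel client also has to support the data-pool feature; an older kernel may still refuse to map the image.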
I just upgraded from Luminous 12.1.0 to 12.1.1 and was greeted with this
new "pgs not deep-scrubbed for" warning. Should this resolve itself, or
should I get scrubbing?
$ ceph health detail
HEALTH_WARN 4 pgs not deep-scrubbed for 86400; 15 pgs not scrubbed for 86400
PG_NOT_DEEP_SCRUBBED 4 pgs not
ch!
Roger
On Wed, Jul 19, 2017 at 7:34 AM David Turner <drakonst...@gmail.com> wrote:
> I would go with the weight that was originally assigned to them. That way
> it is in line with what new osds will be weighted.
>
> On Wed, Jul 19, 2017, 9:17 AM Roger Brown <rogerpbr...@gmai
lly
> very healthy.
>
> On Tue, Jul 18, 2017, 11:16 PM Roger Brown <rogerpbr...@gmail.com> wrote:
>
>> Resolution confirmed!
>>
>> $ ceph -s
>> cluster:
>> id: eea7b78c-b138-40fc-9f3e-3d77afb770f0
>> health: HEALTH_OK
>>
>>
: 54243 objects, 71722 MB
usage: 129 GB used, 27812 GB / 27941 GB avail
pgs: 372 active+clean
On Tue, Jul 18, 2017 at 8:47 PM Roger Brown <rogerpbr...@gmail.com> wrote:
> Ah, that was the problem!
>
> So I edited the crushmap (
> http://docs.ceph.com/docs/master/rado
ng of "osd1".
>
>
> On Wed, Jul 19, 2017 at 11:48 AM, Roger Brown <rogerpbr...@gmail.com>
> wrote:
> > I also tried ceph pg query, but it gave no helpful recommendations for
> any
> > of the stuck pgs.
> >
> >
> > On Tue, Jul 18, 2017 at
I also tried ceph pg query, but it gave no helpful recommendations for any
of the stuck pgs.
On Tue, Jul 18, 2017 at 7:45 PM Roger Brown <rogerpbr...@gmail.com> wrote:
> Problem:
> I have some pgs with only two OSDs instead of 3 like all the other pgs
> have. This is causing ac
Problem:
I have some pgs with only two OSDs instead of 3 like all the other pgs
have. This is causing active+undersized+degraded status.
History:
1. I started with 3 hosts, each with 1 OSD process (min_size 2) for a 1TB
drive.
2. Added 3 more hosts, each with 1 OSD process for a 10TB drive.
3.
I've been trying to work through similar mgr issues for Xenial-Luminous...
roger@desktop:~/ceph-cluster$ ceph-deploy mgr create mon1 nuc2
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/roger/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.38): /usr/bin/ceph-deploy mgr
the host to the ServerName before passing on the
> request. Try setting ProxyPreserveHost on as per
> https://httpd.apache.org/docs/2.4/mod/mod_proxy.html#proxypreservehost ?
> >
> > Rich
> >
> > On 11/07/17 21:47, Roger Brown wrote:
> >> Thank you Richard, that mostly work
l 11, 2017 at 10:22 AM Richard Hesketh <
richard.hesk...@rd.bbc.co.uk> wrote:
> On 11/07/17 17:08, Roger Brown wrote:
> > What are some options for migrating from Apache/FastCGI to Civetweb for
> RadosGW object gateway *without* breaking other websites on the domain?
> &
What are some options for migrating from Apache/FastCGI to Civetweb for
RadosGW object gateway *without* breaking other websites on the domain?
I found documentation on how to migrate the object gateway to Civetweb (
I'm a n00b myself, but I'll go on record with my understanding.
On Sun, Jun 4, 2017 at 3:03 PM Benoit GEORGELIN - yulPa <
benoit.george...@yulpa.io> wrote:
> Hi ceph users,
>
> Ceph have a very good documentation about technical usage, but there is a
> lot of conceptual things missing (from my
I'm using fastcgi/apache2 instead of civetweb (CentOS 7) because I couldn't
get civetweb to work with SSL on port 443 and in a subdomain of my main
website.
So I have domain.com, www.domain.com, s3.domain.com (RGW), and *.
s3.domain.com for the RGW buckets. As long as you can do the same with
How interesting! Thank you for that.
On Sat, Apr 29, 2017 at 4:04 PM Bryan Henderson
wrote:
> A few months ago, I posted here asking why the Ceph program takes so much
> memory (virtual, real, and address space) for what seems to be a simple
> task.
> Nobody knew, but I
I don't recall. Perhaps later I can try a test and see.
On Fri, Apr 28, 2017 at 10:22 AM Ali Moeinvaziri <moein...@gmail.com> wrote:
> Thanks. So, you didn't get any error on command "ceph-deploy mon
> create-initial"?
> -AM
>
>
> On Fri, Apr 28, 2017
I use Ceph on CentOS 7. I check monitor status with commands like these:
systemctl status ceph-mon@nuc1
systemctl stop ceph-mon@nuc1
systemctl start ceph-mon@nuc1
systemctl restart ceph-mon@nuc1
for me, the hostnames are nuc1, nuc2, nuc3 so you have to modify to suit
your case.
On Fri, Apr 28,
My first thought is ceph doesn't have permissions to the rados keyring file.
eg.
[root@nuc1 ~]# ls -l /etc/ceph/ceph.client.radosgw.keyring
-rw-rw+ 1 root root 73 Feb 8 20:40
/etc/ceph/ceph.client.radosgw.keyring
You could give it read permission or be clever with setfacl, eg.
setfacl -m
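The `setfacl -m` above is cut off; presumably it grants the rgw daemon's user read access to the keyring. A hedged dry-run sketch of both options mentioned (the "ceph" user/group and the keyring path are assumptions; remove "echo" to apply):

```shell
# Option A: plain mode bits, assuming the daemon runs in the "ceph" group
echo chgrp ceph /etc/ceph/ceph.client.radosgw.keyring
echo chmod 640 /etc/ceph/ceph.client.radosgw.keyring
# Option B: a targeted ACL for the "ceph" user, leaving mode bits alone
echo setfacl -m u:ceph:r /etc/ceph/ceph.client.radosgw.keyring
```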
I had similar issues when I created all the rbd-related pools with
erasure-coding instead of replication. -Roger
On Wed, Mar 1, 2017 at 11:47 AM John Nielsen wrote:
> Hi all-
>
> We use Amazon S3 quite a bit at $WORK but are evaluating Ceph+radosgw as
> an alternative for
Replace "master" with the release codename, e.g.
http://docs.ceph.com/docs/kraken/
On Mon, Feb 27, 2017 at 12:45 PM Stéphane Klein
wrote:
> Hi,
>
> how can I read old Ceph version documentation?
>
> http://docs.ceph.com I see only "master" documentation.
>
> I look
Today I learned that you can't use an erasure-coded .rgw.buckets.index pool
with radosgw. If you do, expect HTTP 500 errors and stuff
like rgw_create_bucket returned ret=-95.
My setup:
CentOS 7.3.1611
ceph 11.2.0 (Kraken)
Apache/2.4.6
PHP 5.5.38
radosgw via FastCGI
I recreated the pool