Hi,
I updated my cluster yesterday and all went well.
But today I got an error I have never seen before.
-
# ceph health detail
HEALTH_ERR 1 pgs inconsistent; 1 scrub errors
pg 2.5 is active+clean+inconsistent, acting [9,4]
1 scrub errors
-
Any idea how to fix it?
After I did the upgrade I cr
All the information I have so far is from the FOSDEM talk:
https://fosdem.org/2016/schedule/event/virt_iaas_ceph_rados_gateway_overview/attachments/audio/1079/export/events/attachments/virt_iaas_ceph_rados_gateway_overview/audio/1079/Fosdem_RGW.pdf
Cheers,
Ansgar
2016-04-27 2:28 GMT+02:00 :
> Hello:
>
Hi,
my pools are named a bit differently from the default ones:
.dev-qa.rgw.gc
.rgw.control
.dev-qa.users.uid
.dev-qa.users.swift
.dev.rgw.root
.dev-qa.usage
.dev-qa.log
.dev-qa.rgw.buckets
.dev-qa.rgw.buckets.index
.dev-qa.rgw.root
.dev-qa.users.email
.dev-qa.intent-log
.dev-qa.rgw.buckets.extra
.d
Hi,
we are using Ceph and RadosGW to store images (~300 KB each) in S3;
when it comes to deep-scrubbing we are facing task timeouts (> 30s ...)
My question is:
with that number of objects/files, is it better to calculate the
PGs on an object basis instead of the volume size? And how should it be
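For what it's worth, the usual guideline sizes PGs by OSD count rather than by object count or volume; a minimal sketch of that rule (the target of ~100 PGs per OSD and the cluster numbers below are assumptions, not taken from our cluster):

```python
def pg_count(num_osds: int, replication: int, target_pgs_per_osd: int = 100) -> int:
    """Round (osds * target) / replicas up to the next power of two."""
    raw = num_osds * target_pgs_per_osd / replication
    power = 1
    while power < raw:
        power *= 2
    return power

# e.g. 12 OSDs with replication 2:
print(pg_count(12, 2))  # -> 1024
```

As far as I know, the object count mostly affects how long a single deep-scrub takes, not the PG formula itself; more (smaller) PGs means each individual scrub finishes faster.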
Hi,
yes, we have index sharding enabled; we have only two big buckets at
the moment, with 15M objects each, and some smaller ones.
Cheers,
Ansgar
2016-06-14 10:51 GMT+02:00 Wido den Hollander :
>
>> Op 14 juni 2016 om 10:10 schreef Ansgar Jazdzewski
>> :
>>
>>
>>
Wido den Hollander :
>> >
>> >> Op 14 juni 2016 om 10:10 schreef Ansgar Jazdzewski
>> >> :
>> >>
>> >>
>> >> Hi,
>> >>
>> >> we are using ceph and radosGW to store images (~300kb each) in S3,
>> >&
Hi,
Thanks, my warning is gone now.
2013/3/13 Jeff Anderson-Lee
> On 3/13/2013 9:31 AM, Greg Farnum wrote:
>
>> Nope, it's not because you were using the cluster. The "unclean" PGs here
>> are those which are in the "active+remapped" state. That's actually two
>> states — "active" which is good
Hi,
I have had a short look into RBD + iSCSI, and I found TGT + librbd.
https://github.com/fujita/tgt
http://stgt.sourceforge.net/
I didn't take a deeper look into it, but I'd like to test it in the next
month or so. It looks easy to me:
https://github.com/fujita/tgt/blob/master/doc/README.rbd
Cheers,
Hi,
I will try to join in and help. As far as I understood, you only have
HDDs in your cluster? You use the journal on the HDD? And you have a
replication of 3 set on your pools?
With that in mind you can do some calculations; Ceph needs to:
1. write the data and metadata into the journal
2. copy the
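To make the arithmetic behind those steps concrete, a rough back-of-the-envelope sketch (the disk bandwidth and cluster size are invented numbers; the factor 2 assumes the journal shares the HDD with the data):

```python
def max_client_write_bw(disk_bw_mb: float, num_osds: int,
                        replication: int, journal_factor: int = 2) -> float:
    """Upper bound on aggregate client write bandwidth.

    Every client byte is written `replication` times, and with the
    journal co-located on the same HDD each of those writes hits the
    disk twice (journal + data), hence the extra factor.
    """
    return disk_bw_mb * num_osds / (replication * journal_factor)

# e.g. 6 OSDs on 120 MB/s HDDs, replication 3, journal on the same disk:
print(max_client_write_bw(120, 6, 3))  # -> 120.0
```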
Hi *,
we are facing some issues with the upgrade of our OSDs.
The update process on Ubuntu 16.04 stops at:
Setting system user ceph properties..usermod: no changes
..done
Fixing /var/run/ceph ownershipdone
No more output is given to the system. My permissions are ok, so how do I go ahead?
Hi *,
just one note, because we hit it: take a look at your discard options and
make sure it does not run on all OSDs at the same time.
2017-11-20 6:56 GMT+01:00 M Ranga Swami Reddy :
> Hello,
> We plan to use the ceph cluster with all SSDs. Do we have any
> recommendations for Ceph cluster with Full SSD dis
Hi,
it is possible to configure the RGW logging to a Unix socket; with
this you are able to use a JSON stream.
In a PoC we put events into a Redis cache to do async processing.
Sadly, I can't find the needed config lines at the moment.
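From memory, the consumer side looked roughly like this (a sketch only: the socket path is a made-up placeholder, and the Redis part is swapped for a plain callback so it stays self-contained):

```python
import json
import socket

def parse_event(line: bytes) -> dict:
    """Decode one JSON object from the RGW log stream."""
    return json.loads(line)

def stream_events(sock_path: str, handle) -> None:
    """Read newline-delimited JSON events from a Unix socket and
    pass each parsed event to `handle` (e.g. a Redis RPUSH)."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        buf = b""
        while chunk := s.recv(4096):
            buf += chunk
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                if line.strip():
                    handle(parse_event(line))

# usage sketch (the path is hypothetical):
# stream_events("/var/run/ceph/rgw-ops.sock", print)
```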
hope it helps,
Ansgar
Hi Folks,
I just tried to get the Prometheus plugin up and running, but as soon as I
browse /metrics I get:
500 Internal Server Error
The server encountered an unexpected condition which prevented it from
fulfilling the request.
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-pack
Thanks, I will have a look into it.
--
Ansgar
2018-02-16 10:10 GMT+01:00 Konstantin Shalygin :
>> i just try to get the prometheus plugin up and runing
>
>
>
> Use module from master.
>
> From this commit should work with 12.2.2, just wget it and replace stock
> module.
>
> https://github.com/ceph/c
on 12.2.2 (cf0baba3b47f9427c6c97e2144b094b7e5ba)
luminous (stable)": 2
},
"rgw": {
"ceph version 12.2.2 (cf0baba3b47f9427c6c97e2144b094b7e5ba)
luminous (stable)": 3
},
"overall": {
"ceph version 12.2.2 (cf0baba3b47f9427c6c97e2144
hi,
while I added the "class" to all my OSDs, the ceph-mgr crashed :-( but
the Prometheus plugin works now
for i in {1..9}; do ceph osd crush set-device-class hdd osd.$i; done
Thanks,
Ansgar
2018-02-16 10:12 GMT+01:00 Jan Fajerski :
> On Fri, Feb 16, 2018 at 09:27:08AM +0100,
Hi,
I am currently setting up my new test cluster (Jewel) and found out the index
sharding configuration has changed?
What I did so far:
1. radosgw-admin realm create --rgw-realm=default --default
2. radosgw-admin zonegroup get --rgw-zonegroup=default > zonegroup.json
3. changed the value "bucket_index_max_shards":
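The next steps would then be (from memory; check the command names against your release):

4. radosgw-admin zonegroup set --rgw-zonegroup=default --infile=zonegroup.json
5. radosgw-admin period update --commit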
Hi,
I'd like to know if someone of you has some kind of formula to set
the right number of shards for a bucket.
We currently have a bucket with 30M objects and expect that it will go
up to 50M.
At the moment we have 64 shards configured, but I was told that
this is much too low.
Any hints /
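Not an official formula, but the rule of thumb I have seen quoted is roughly 100k objects per index shard; as a sketch (the per-shard target is an assumption):

```python
import math

def shards_for(expected_objects: int, objects_per_shard: int = 100_000) -> int:
    """Rule-of-thumb shard count: ~100k objects per shard."""
    return max(1, math.ceil(expected_objects / objects_per_shard))

print(shards_for(50_000_000))  # -> 500
```

By that rule, 64 shards for 30M objects would indeed be far too low (roughly 470k objects per shard).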
Hi,
we are planning to build a new datacenter with a layer-3 routed network
under all our servers.
So each server will have a 17.16.x.y/32 IP that is announced and shared
using OSPF with ECMP.
Now I am trying to install Ceph on these nodes, and I got stuck because the
OSD nodes can not reach the MON (ceph -
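For reference, this is the kind of ceph.conf I am trying first (the supernet and addresses below are placeholders; the point is that "public network" must cover all the per-host /32 addresses):

[global]
# one supernet that contains every per-host /32
public network = 17.16.0.0/16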
hi folks,
I just figured out that my OSDs did not start because the filesystem
is not mounted.
So I wrote a script to hack my way around it:
#
#!/usr/bin/env bash
# collect the "osd fsid" / "osd id" value pairs from ceph-volume
DATA=( $(ceph-volume lvm list | grep -e 'osd id\|osd fsid' | awk '{print $3}' | tr '\n' ' ') )
OSDS=$(( ${#DATA[@]} / 2 ))
# the loop below completes the truncated original; the index order
# (fsid before id) is a guess at the 'ceph-volume lvm list' output
# order here, adjust it if yours differs
for OSD in $(seq 0 $(( OSDS - 1 ))); do
    FSID=${DATA[$(( OSD * 2 ))]}
    ID=${DATA[$(( OSD * 2 + 1 ))]}
    ceph-volume lvm activate "${ID}" "${FSID}"
done
Hi,
I have a new strange error in my Ceph cluster:
# ceph -s
health HEALTH_WARN
'default.rgw.buckets.data.cache' at/near target max
# ceph df
default.rgw.buckets.data 10 20699G 27.86
53594G 50055267
default.rgw.buckets.data.cache 1115E
Hi,
we are testing Jewel in our QA environment (from Infernalis to Hammer); the
upgrade went fine, but the RadosGW did not start.
The error also appears with radosgw-admin:
# radosgw-admin user info --uid="images" --rgw-region=eu --rgw-zone=eu-qa
2016-04-25 12:13:33.425481 7fc757fad900 0 error in read_i
Hi all,
I got an answer that pointed me to:
https://github.com/ceph/ceph/blob/master/doc/radosgw/multisite.rst
2016-04-25 16:02 GMT+02:00 Karol Mroz :
> On Mon, Apr 25, 2016 at 02:23:28PM +0200, Ansgar Jazdzewski wrote:
>> Hi,
>>
>> we test Jewel in our QA environmen
x.
data_pool = .eu-qa.rgw.buckets
data_extra_pool = .eu-qa.rgw.buckets.extra
How can I fix it?
Thanks
Ansgar
2016-04-26 13:07 GMT+02:00 Ansgar Jazdzewski :
> Hi all,
>
> i got an answer, that pointed me to:
> https://github.com/ceph/ceph/blob/master/doc/radosgw/multisite.rst
>
>
Hi folks, we need some help with our CephFS; all MDS keep crashing.
starting mds.mds02 at -
terminate called after throwing an instance of
'ceph::buffer::bad_alloc'
what(): buffer::bad_alloc
*** Caught signal (Aborted) **
in thread 7f542d825700 thread_name:md_log_replay
ceph version 13.2.4 (b10be4
Downgrade ceph-mds to an older version?
>
>
> On Mon, Jan 28, 2019 at 9:20 PM Ansgar Jazdzewski
> wrote:
> >
> > hi folks we need some help with our cephfs, all mds keep crashing
> >
> > starting mds.mds02 at -
> > terminate called after throwing an instance o
hi folks,
we are trying to build a new NAS using the vfs_ceph module from Samba 4.9.
If I try to open the share I receive the error:
May 8 06:58:44 nas01 smbd[375700]: 2019-05-08 06:58:44.732830
7ff3d5f6e700 0 -- 10.100.219.51:0/3414601814 >> 10.100.219.11:6789/0
pipe(0x7ff3cc00c350 sd=6 :45626 s=1 pgs
Thanks,
I will try to "backport" this to Ubuntu 16.04.
Ansgar
Am Do., 9. Mai 2019 um 12:33 Uhr schrieb Paul Emmerich :
>
> We maintain vfs_ceph for samba at mirror.croit.io for Debian Stretch and
> Buster.
>
> We apply a9c5be394da4f20bcfea7f6d4f5919d5c0f90219 on Samba 4.9 for
> Buster to fix thi
hi,
I was able to compile Samba 4.10.2 using the Mimic header files, and it
works fine so far.
Now we are looking forward to doing some real load tests.
Have a nice one,
Ansgar
Am Fr., 10. Mai 2019 um 13:33 Uhr schrieb Ansgar Jazdzewski
:
>
> thanks,
>
> i will try to "backport"
hi folks,
we had to move one of our clusters, so we had to reboot all servers; now
we found an error on all OSDs with the EC pool.
Did we miss some options? Will an upgrade to 13.2.6 help?
Thanks,
Ansgar
2019-08-06 12:10:16.265 7fb337b83200 -1
/build/ceph-13.2.4/src/osd/ECUtil.h: In function
'ECUti
a bug in the cephfs or
erasure-coding part of ceph.
Ansgar
Am Di., 6. Aug. 2019 um 14:50 Uhr schrieb Ansgar Jazdzewski
:
>
> hi folks,
>
> we had to move one of our clusters so we had to boot all servers, now
> we found an Error on all OSD with the EC-Pool.
>
> do we miss
rade to Nautilus and
hope for the best.
Any help or hints are welcome.
Have a nice one,
Ansgar
Am Mi., 7. Aug. 2019 um 11:32 Uhr schrieb Ansgar Jazdzewski
:
>
> Hi,
>
> as a follow-up:
> * a full log of one OSD failing to start https://pastebin.com/T8UQ2rZ6
> * our ec-pool cration in t
} from OSD ${OSD}"
ceph-objectstore-tool --data-path ${DATAPATH} --pgid ${i} --op remove --force
done
Since we have now removed our CephFS, we still do not know if we could have
solved it without data loss by upgrading to Nautilus.
Have a nice Weekend,
Ansgar
Am Mi., 7. Aug. 2019 um 17:03 Uhr schrieb
Hi,
we are running Ceph version 13.2.4 and qemu 2.10; we figured out that
on VMs with more than three disks, IO fails with a hung task timeout
whenever we do IO on disks after the 2nd one.
- Is this issue known for a qemu / ceph version? I could not find
anything in the changelogs!?
- do you have an