[ceph-users] Ceph error after upgrade Argonaut to Bobtail to Cuttlefish

2013-10-11 Thread Ansgar Jazdzewski
Hi, I updated my cluster yesterday and everything went well, but today I got an error I have never seen before. - # ceph health detail HEALTH_ERR 1 pgs inconsistent; 1 scrub errors pg 2.5 is active+clean+inconsistent, acting [9,4] 1 scrub errors - any idea how to fix it? after I did the upgrade I cr
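A common first step for a single scrub inconsistency is to ask the primary OSD to repair the PG; a minimal sketch, using the PG id from the mail:

  # ceph health detail      # note the inconsistent PG id (here 2.5)
  # ceph pg repair 2.5      # ask the primary OSD to repair the PG
  # ceph -w                 # watch the cluster log until the repair finishes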

Re: [ceph-users] Any Docs to configure NFS to access RADOSGW buckets on Jewel

2016-04-27 Thread Ansgar Jazdzewski
All the information I have so far is from the FOSDEM talk: https://fosdem.org/2016/schedule/event/virt_iaas_ceph_rados_gateway_overview/attachments/audio/1079/export/events/attachments/virt_iaas_ceph_rados_gateway_overview/audio/1079/Fosdem_RGW.pdf Cheers, Ansgar 2016-04-27 2:28 GMT+02:00 : > Hello: >

Re: [ceph-users] Can Jewel read Hammer radosgw buckets?

2016-05-02 Thread Ansgar Jazdzewski
Hi, my pools are named a bit differently from the default ones: .dev-qa.rgw.gc .rgw.control .dev-qa.users.uid .dev-qa.users.swift .dev.rgw.root .dev-qa.usage .dev-qa.log .dev-qa.rgw.buckets .dev-qa.rgw.buckets.index .dev-qa.rgw.root .dev-qa.users.email .dev-qa.intent-log .dev-qa.rgw.buckets.extra .d

[ceph-users] 40Mil objects in S3 rados pool / how to calculate PGs

2016-06-14 Thread Ansgar Jazdzewski
Hi, we are using Ceph and radosgw to store images (~300kb each) in S3. When it comes to deep-scrubbing we are facing task timeouts (> 30s ...). My question is: with that amount of objects/files, is it better to calculate the PGs on an object basis instead of the volume size? and how it should be
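For context, PG counts are normally sized by OSD count rather than by object count or volume; a rough sketch with made-up numbers, since the cluster size is not given in the mail:

  # (number of OSDs * 100) / replica count, rounded up to a power of two, e.g.:
  #   (60 * 100) / 3 = 2000  ->  2048 PGs
  # more PGs mainly help here because each PG then holds fewer of the ~40M objects,
  # so a single deep-scrub has less work to do per PG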

Re: [ceph-users] 40Mil objects in S3 rados pool / how to calculate PGs

2016-06-14 Thread Ansgar Jazdzewski
Hi, yes we have index sharding enabled; we have only two big buckets at the moment with 15Mil objects each and some smaller ones. Cheers, Ansgar 2016-06-14 10:51 GMT+02:00 Wido den Hollander : > >> On 14 June 2016 at 10:10, Ansgar Jazdzewski wrote: >> >> >>

Re: [ceph-users] 40Mil objects in S3 rados pool / how to calculate PGs

2016-06-14 Thread Ansgar Jazdzewski
Wido den Hollander: > On 14 June 2016 at 10:10, Ansgar Jazdzewski wrote: >> Hi, >> we are using Ceph and radosGW to store images (~300kb each) in S3, >>

Re: [ceph-users] pgs stuck unclean after growing my ceph-cluster

2013-03-14 Thread Ansgar Jazdzewski
Hi, Thanks, my warning is gone now. 2013/3/13 Jeff Anderson-Lee > On 3/13/2013 9:31 AM, Greg Farnum wrote: > >> Nope, it's not because you were using the cluster. The "unclean" PGs here >> are those which are in the "active+remapped" state. That's actually two >> states — "active" which is good
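For reference, the PGs in question can be listed directly; a small sketch, not part of the original reply:

  # ceph pg dump_stuck unclean              # list PGs stuck in a non-clean state
  # ceph pg dump pgs_brief | grep remapped  # show PGs currently in active+remapped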

Re: [ceph-users] Re-exporting RBD images via iSCSI

2013-03-16 Thread Ansgar Jazdzewski
Hi, I have done a short look into RBD + iSCSI, and I found TGT + librbd. https://github.com/fujita/tgt http://stgt.sourceforge.net/ I didn't take a deeper look into it, but I'd like to test it in the next month or so; it looks easy to me: https://github.com/fujita/tgt/blob/master/doc/README.rbd che
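A rough sketch of how an RBD-backed target can be created with tgtadm, assuming tgt was built with rbd support (target name, pool and image are made up):

  # tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2013-03.example:rbd-test
  # tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 --bstype rbd --backing-store rbd/myimage
  # tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL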

Re: [ceph-users] Yet another performance tuning for CephFS

2017-07-18 Thread Ansgar Jazdzewski
Hi, I will try to join in and help. As far as I got you, you only have HDDs in your cluster? You use the journal on the HDD? And you have a replication of 3 set on your pools? With that in mind you can do some calculations; Ceph needs to: 1. write the data and metadata into the journal 2. copy the
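A back-of-the-envelope version of that calculation, assuming co-located filestore journals and a pool size of 3:

  # 1 client write -> journal write + data write  = 2 disk writes per OSD
  #                 x 3 replicas                  = 6 disk writes in total
  # usable write throughput ~= (aggregate HDD write throughput) / 6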

[ceph-users] upgrade ceph from 10.2.7 to 10.2.9

2017-07-19 Thread Ansgar Jazdzewski
Hi *, we are facing an issue with the upgrade of our OSDs; the update process on Ubuntu 16.04 stops at: Setting system user ceph properties..usermod: no changes ..done Fixing /var/run/ceph ownership..done No more output is given to the system, my permissions are ok, so how to go ahead

Re: [ceph-users] Ceph - SSD cluster

2017-11-20 Thread Ansgar Jazdzewski
Hi *, just one note because we hit it: take a look at your discard options and make sure discard does not run on all OSDs at the same time. 2017-11-20 6:56 GMT+01:00 M Ranga Swami Reddy : > Hello, > We plan to use the ceph cluster with all SSDs. Do we have any > recommendations for Ceph cluster with Full SSD dis
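One way to do that (a sketch, not necessarily what the poster used) is to skip the discard mount option and instead run fstrim on a schedule that is offset per host:

  # /etc/cron.d/fstrim-osd (hypothetical; shift the hour/weekday on each OSD host)
  30 3 * * 0  root  /sbin/fstrim -a -v >> /var/log/fstrim.log 2>&1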

Re: [ceph-users] RGW Logging pool

2017-12-17 Thread Ansgar Jazdzewski
Hi, it is possible to configure the RGW logging to a unix socket; with this you are able to use a JSON stream. In a POC we put events into a Redis cache to do async processing. Sadly I can't find the needed config lines at the moment. Hope it helps, Ansgar
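The config lines in question are, as far as I recall, the RGW ops-log settings in ceph.conf; a sketch, with the section name and socket path as examples:

  [client.rgw.gateway]
      rgw enable ops log = true
      rgw ops log socket path = /var/run/ceph/rgw-ops.sock
  # the JSON stream can then be read from the socket, e.g. with: nc -U /var/run/ceph/rgw-ops.sock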

[ceph-users] Ceph-mgr Python error with prometheus plugin

2018-02-16 Thread Ansgar Jazdzewski
Hi folks, I am just trying to get the prometheus plugin up and running, but as soon as I browse /metrics I get: 500 Internal Server Error The server encountered an unexpected condition which prevented it from fulfilling the request. Traceback (most recent call last): File "/usr/lib/python2.7/dist-pack
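For completeness, the basic sanity checks (not the actual fix discussed later in the thread) look roughly like:

  # ceph mgr module enable prometheus
  # ceph mgr services                     # should list the prometheus endpoint
  # curl http://<mgr-host>:9283/metrics   # 9283 is the default exporter port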

Re: [ceph-users] Ceph-mgr Python error with prometheus plugin

2018-02-16 Thread Ansgar Jazdzewski
Thanks, I will have a look into it -- Ansgar 2018-02-16 10:10 GMT+01:00 Konstantin Shalygin : >> i just try to get the prometheus plugin up and runing > > > > Use module from master. > > From this commit should work with 12.2.2, just wget it and replace stock > module. > > https://github.com/ceph/c

Re: [ceph-users] Ceph-mgr Python error with prometheus plugin

2018-02-16 Thread Ansgar Jazdzewski
on 12.2.2 (cf0baba3b47f9427c6c97e2144b094b7e5ba) luminous (stable)": 2 }, "rgw": { "ceph version 12.2.2 (cf0baba3b47f9427c6c97e2144b094b7e5ba) luminous (stable)": 3 }, "overall": { "ceph version 12.2.2 (cf0baba3b47f9427c6c97e2144

Re: [ceph-users] Ceph-mgr Python error with prometheus plugin

2018-02-16 Thread Ansgar Jazdzewski
Hi, while I added the "class" to all my OSDs the ceph-mgr crashed :-( but the prometheus plugin works now: for i in {1..9}; do ceph osd crush set-device-class hdd osd.$i; done Thanks, Ansgar 2018-02-16 10:12 GMT+01:00 Jan Fajerski : > On Fri, Feb 16, 2018 at 09:27:08AM +0100,

[ceph-users] RadosGW index-sharding on Jewel

2016-09-14 Thread Ansgar Jazdzewski
Hi, I am currently setting up my new test cluster (Jewel) and found out that the index sharding configuration has changed? What I did so far: 1. radosgw-admin realm create --rgw-realm=default --default 2. radosgw-admin zonegroup get --rgw-zonegroup=default > zonegroup.json 3. changed the value "bucket_index_max_shards":
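The remaining steps usually look roughly like this; a sketch, not necessarily the poster's exact commands:

  # radosgw-admin zonegroup get --rgw-zonegroup=default > zonegroup.json
  #   ... edit "bucket_index_max_shards" in zonegroup.json ...
  # radosgw-admin zonegroup set --rgw-zonegroup=default < zonegroup.json
  # radosgw-admin period update --commit
  # then restart the radosgw instances so they pick up the new period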

[ceph-users] Calc the number of shards needed for a bucket

2016-10-14 Thread Ansgar Jazdzewski
Hi, I would like to know if any of you have some kind of formula to set the right number of shards for a bucket. We currently have a bucket with 30M objects and expect that it will go up to 50M. At the moment we have 64 shards configured, but I was told that this is far too few. Any hints /
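A commonly quoted rule of thumb is roughly 100,000 objects per index shard, which for the numbers in the mail would give:

  # 50,000,000 expected objects / 100,000 per shard = 500  ->  e.g. 512 shards
  # the current 64 shards already hold ~470,000 objects per shard at 30M objects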

[ceph-users] ceph in an OSPF environment

2016-11-23 Thread Ansgar Jazdzewski
Hi, we are planning to build a new datacenter with a layer-3 routed network under all our servers. So each server will have a 17.16.x.y/32 IP which is announced and shared using OSPF with ECMP. Now I am trying to install Ceph on these nodes, and I got stuck because the OSD nodes can not reach the MON (ceph -
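For what it is worth, in such a routed setup the monitor addresses and the public network usually have to be pinned explicitly in ceph.conf; a hedged sketch with made-up addresses:

  [global]
      mon host = 17.16.0.1, 17.16.0.2, 17.16.0.3   # the /32 loopback IPs of the MONs
      public network = 17.16.0.0/16                # must cover the announced /32 addresses
      cluster network = 17.16.0.0/16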

[ceph-users] bluestore OSD did not start at system-boot

2018-04-05 Thread Ansgar Jazdzewski
Hi folks, I just figured out that my OSDs did not start because the filesystem is not mounted. So I wrote a script to hack my way around it: # #! /usr/bin/env bash DATA=( $(ceph-volume lvm list | grep -e 'osd id\|osd fsid' | awk '{print $3}' | tr '\n' ' ') ) OSDS=$(( ${#DATA[@]}/2 )) for OS
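The script is cut off in the archive; for reference, ceph-volume can usually bring everything back in one step (a sketch, not necessarily the poster's fix):

  # ceph-volume lvm activate --all                 # re-creates the tmpfs mounts and starts the OSDs
  # ceph-volume lvm activate <osd-id> <osd-fsid>   # per-OSD variant the truncated loop presumably calls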

[ceph-users] Writeback Cache-Tier shows negative numbers

2017-02-22 Thread Ansgar Jazdzewski
Hi, I have a new strange error in my ceph cluster: # ceph -s health HEALTH_WARN 'default.rgw.buckets.data.cache' at/near target max # ceph df default.rgw.buckets.data 10 20699G 27.86 53594G 50055267 default.rgw.buckets.data.cache 1115E

[ceph-users] RadosGW does not start after upgrade to Jewel

2016-04-25 Thread Ansgar Jazdzewski
Hi, we are testing Jewel in our QA environment (upgraded from Infernalis to Jewel); the upgrade went fine but the radosgw did not start. The error also appears with radosgw-admin: # radosgw-admin user info --uid="images" --rgw-region=eu --rgw-zone=eu-qa 2016-04-25 12:13:33.425481 7fc757fad900 0 error in read_i

Re: [ceph-users] RadosGW does not start after upgrade to Jewel

2016-04-26 Thread Ansgar Jazdzewski
Hi all, I got an answer that pointed me to: https://github.com/ceph/ceph/blob/master/doc/radosgw/multisite.rst 2016-04-25 16:02 GMT+02:00 Karol Mroz : > On Mon, Apr 25, 2016 at 02:23:28PM +0200, Ansgar Jazdzewski wrote: >> Hi, >> >> we test Jewel in our QA environmen

Re: [ceph-users] RadosGW does not start after upgrade to Jewel

2016-04-26 Thread Ansgar Jazdzewski
x. data_pool = .eu-qa.rgw.buckets data_extra_pool = .eu-qa.rgw.buckets.extra How can I fix it? Thanks, Ansgar 2016-04-26 13:07 GMT+02:00 Ansgar Jazdzewski : > Hi all, > > i got an answer, that pointed me to: > https://github.com/ceph/ceph/blob/master/doc/radosgw/multisite.rst > >
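Not from the thread itself, but the usual way to point a Jewel zone back at existing pools is to edit the zone and commit the period; a sketch using the zone and pool names from the mails above:

  # radosgw-admin zone get --rgw-zone=eu-qa > zone.json
  #   ... set "data_pool": ".eu-qa.rgw.buckets" and "data_extra_pool": ".eu-qa.rgw.buckets.extra" ...
  # radosgw-admin zone set --rgw-zone=eu-qa --infile zone.json
  # radosgw-admin period update --commit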

[ceph-users] ceph-fs crashed after upgrade to 13.2.4

2019-01-28 Thread Ansgar Jazdzewski
Hi folks, we need some help with our cephfs, all MDS keep crashing: starting mds.mds02 at - terminate called after throwing an instance of 'ceph::buffer::bad_alloc' what(): buffer::bad_alloc *** Caught signal (Aborted) ** in thread 7f542d825700 thread_name:md_log_replay ceph version 13.2.4 (b10be4

Re: [ceph-users] ceph-fs crashed after upgrade to 13.2.4

2019-01-29 Thread Ansgar Jazdzewski
downgrade ceph-mds to old version? > > > On Mon, Jan 28, 2019 at 9:20 PM Ansgar Jazdzewski > wrote: > > > > hi folks we need some help with our cephfs, all mds keep crashing > > > > starting mds.mds02 at - > > terminate called after throwing an instance o

[ceph-users] ceph mimic and samba vfs_ceph

2019-05-08 Thread Ansgar Jazdzewski
Hi folks, we are trying to build a new NAS using the vfs_ceph module from Samba 4.9. If I try to open the share I receive the error: May 8 06:58:44 nas01 smbd[375700]: 2019-05-08 06:58:44.732830 7ff3d5f6e700 0 -- 10.100.219.51:0/3414601814 >> 10.100.219.11:6789/0 pipe(0x7ff3cc00c350 sd=6 :45626 s=1 pgs
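For reference, a minimal smb.conf share using vfs_ceph might look like this; share name and cephx user are assumptions:

  [cephfs]
      path = /
      vfs objects = ceph
      ceph:config_file = /etc/ceph/ceph.conf
      ceph:user_id = samba          # cephx user the module connects with
      kernel share modes = no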

Re: [ceph-users] ceph mimic and samba vfs_ceph

2019-05-10 Thread Ansgar Jazdzewski
Thanks, I will try to "backport" this to Ubuntu 16.04. Ansgar On Thu, 9 May 2019 at 12:33, Paul Emmerich wrote: > > We maintain vfs_ceph for samba at mirror.croit.io for Debian Stretch and > Buster. > > We apply a9c5be394da4f20bcfea7f6d4f5919d5c0f90219 on Samba 4.9 for > Buster to fix thi

Re: [ceph-users] ceph mimic and samba vfs_ceph

2019-05-14 Thread Ansgar Jazdzewski
Hi, I was able to compile Samba 4.10.2 using the Mimic header files and it works fine so far. Now we are looking forward to doing some real load tests. Have a nice one, Ansgar On Fri, 10 May 2019 at 13:33, Ansgar Jazdzewski wrote: > > thanks, > > i will try to "backport"

[ceph-users] OSDs keep crashing after cluster reboot

2019-08-06 Thread Ansgar Jazdzewski
Hi folks, we had to move one of our clusters so we had to reboot all servers; now we see an error on all OSDs with the EC pool. Did we miss some options? Will an upgrade to 13.2.6 help? Thanks, Ansgar 2019-08-06 12:10:16.265 7fb337b83200 -1 /build/ceph-13.2.4/src/osd/ECUtil.h: In function 'ECUti

Re: [ceph-users] OSDs keep crashing after cluster reboot

2019-08-07 Thread Ansgar Jazdzewski
a bug in the cephfs or erasure-coding part of ceph. Ansgar On Tue, 6 Aug 2019 at 14:50, Ansgar Jazdzewski wrote: > > hi folks, > > we had to move one of our clusters so we had to boot all servers, now > we found an Error on all OSD with the EC-Pool. > > do we miss

Re: [ceph-users] OSDs keep crashing after cluster reboot

2019-08-07 Thread Ansgar Jazdzewski
upgrade to Nautilus and hope for the best. Any help / hints are welcome, have a nice one, Ansgar On Wed, 7 Aug 2019 at 11:32, Ansgar Jazdzewski wrote: > > Hi, > > as a follow-up: > * a full log of one OSD failing to start https://pastebin.com/T8UQ2rZ6 > * our ec-pool creation in t

Re: [ceph-users] OSDs keep crashing after cluster reboot

2019-08-08 Thread Ansgar Jazdzewski
} from OSD ${OSD}" ceph-objectstore-tool --data-path ${DATAPATH} --pgid ${i} --op remove --force done Since we have now removed our cephfs, we still do not know whether we could have solved it without data loss by upgrading to Nautilus. Have a nice weekend, Ansgar On Wed, 7 Aug 2019 at 17:03,
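The preview only shows the tail of the script; a sketch of what the full removal loop probably looked like (pool id and paths are placeholders, and this permanently destroys the PG copies on that OSD):

  #! /usr/bin/env bash
  # destructive - run only with the OSD stopped and as a last resort
  OSD=$1                                    # OSD id, e.g. 12
  POOLID=$2                                 # numeric id of the broken EC pool
  DATAPATH=/var/lib/ceph/osd/ceph-${OSD}
  for i in $(ceph-objectstore-tool --data-path ${DATAPATH} --op list-pgs | grep "^${POOLID}\."); do
      echo "removing PG ${i} from OSD ${OSD}"
      ceph-objectstore-tool --data-path ${DATAPATH} --pgid ${i} --op remove --force
  done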

[ceph-users] KVM userspace-rbd hung_task_timeout on 3rd disk

2019-09-11 Thread Ansgar Jazdzewski
Hi, we are running Ceph version 13.2.4 and QEMU 2.10. We figured out that on VMs with more than three disks, IO fails with a hung task timeout whenever we do IO on disks after the 2nd one. - Is this issue known for a QEMU / Ceph version? I could not find anything in the changelogs!? - do you have an