[ceph-users] radosgw crash - Infernalis

2016-04-26 Thread Ben Hines
Is this a known one? Ceph 9.2.1. I can provide more logs if needed. -12> 2016-04-26 22:07:59.662702 7f49aeffd700 1 ====== req done req=0x7f49c4138be0 http_status=200 ====== -11> 2016-04-26 22:07:59.662752 7f49aeffd700 1 civetweb: 0x7f49c4001280: 10.30.1.221 - - [26/Apr/2016:22:07:59 -0700] "HEAD
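If it recurs, more detail can be captured by raising the gateway's log levels. A minimal ceph.conf sketch, assuming a section name of [client.rgw.gateway] (section names vary by deployment):

    [client.rgw.gateway]
    debug rgw = 20
    debug civetweb = 10

The crash-time dump of recent events (the -12>/-11> lines above) then records each request in much more detail.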

Re: [ceph-users] how ceph mon works

2016-04-26 Thread Christian Balzer
On Tue, 26 Apr 2016 23:31:39 +0200 (CEST) Wido den Hollander wrote: > > > On 26 April 2016 at 23:24, yang sheng wrote: > > > > > > Hi > > > > according to the ceph docs, at least 3 monitors are recommended. All the > > clients will contact the monitors first to get the ceph map

[ceph-users] Any Docs to configure NFS to access RADOSGW buckets on Jewel

2016-04-26 Thread WD_Hwang
Hello: Are there any documents or examples to explain the configuration of NFS to access RADOSGW buckets on Jewel? Thanks a lot. Best Regards, WD
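Jewel adds an experimental NFS interface to RGW via nfs-ganesha's RGW FSAL. A minimal ganesha.conf sketch, assuming nfs-ganesha is built with FSAL_RGW and that the user "testuser" and its S3 keys already exist (all names and keys below are placeholders):

    EXPORT {
        Export_ID = 1;
        Path = "/";
        Pseudo = "/rgw";
        Access_Type = RW;
        FSAL {
            Name = RGW;
            User_Id = "testuser";
            Access_Key_Id = "<access-key>";
            Secret_Access_Key = "<secret-key>";
        }
    }
    RGW {
        ceph_conf = "/etc/ceph/ceph.conf";
    }

After starting ganesha and mounting the export, each bucket should appear as a top-level directory.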

[ceph-users] Any docs for replication in Jewel radosgw?

2016-04-26 Thread Richard Chan
Hi Cephers, I'm interested in the new active-active and bidirectional replication features. https://github.com/ceph/ceph/blob/master/doc/radosgw/multisite.rst has setup information but nothing on replication. Thanks. -- Richard Chan
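For what it's worth, replication in Jewel multisite is implicit: once a second zone exists in the same zonegroup and the period is committed, the gateways sync in both directions. A hedged sketch of the master-side setup, using the placeholder names from the docs (gold/us/us-east; endpoints are placeholders too):

    # radosgw-admin realm create --rgw-realm=gold --default
    # radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --master --default
    # radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east --endpoints=http://rgw1:80 --master --default
    # radosgw-admin period update --commit

The secondary zone is created the same way, without --master, after fetching the realm with radosgw-admin realm pull.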

Re: [ceph-users] Hammer broke after adding 3rd osd server

2016-04-26 Thread Alwin Antreich
Hi Andrei, are you using jumbo frames? In my experience, I had a driver issue where one NIC wouldn't accept the MTU set for the interface, and the cluster ran into behavior very similar to what you are describing. After I set the MTU for all NICs and servers to the working value of my troubling
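A quick end-to-end jumbo frame check, assuming a 9000-byte MTU (the interface name and peer IP are placeholders; 8972 bytes of payload leaves room for 20 bytes of IP header and 8 bytes of ICMP header):

    # ip link show eth2 | grep mtu
    # ping -M do -s 8972 <peer-ip>

With -M do the kernel forbids fragmentation, so the large ping fails loudly if any NIC or switch port along the path is still at a smaller MTU.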

Re: [ceph-users] how ceph mon works

2016-04-26 Thread Wido den Hollander
> On 26 April 2016 at 23:24, yang sheng wrote: > > > Hi > > according to the ceph docs, at least 3 monitors are recommended. All the clients > will contact the monitors first to get the ceph map and connect to the OSDs. Yes, indeed. > I am curious: if I have 3 monitors, are

[ceph-users] how ceph mon works

2016-04-26 Thread yang sheng
Hi, according to the ceph docs, at least 3 monitors are recommended. All clients will contact the monitors first to get the ceph map and then connect to the OSDs. I am curious: if I have 3 monitors, do these monitors run in master-master mode or master-slave mode? In other words, will clients talk to any
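Neither, exactly: the monitors maintain the cluster maps with Paxos and elect a leader among themselves, while clients may talk to any monitor that is in quorum. The state is visible on any cluster node with:

    # ceph quorum_status --format json-pretty

The output lists the quorum members and the elected leader; as long as a majority (2 of 3) remains, a failed monitor, leader or not, is simply replaced by a new election.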

Re: [ceph-users] Hammer broke after adding 3rd osd server

2016-04-26 Thread Wido den Hollander
> On 26 April 2016 at 22:31, Andrei Mikhailovsky wrote: > > > Hi Wido, > > Thanks for your reply. We have a very simple ceph network: a single 40gbit/s > infiniband switch to which the osd servers and hosts are connected. There are > no default gateways on the storage

Re: [ceph-users] CEPH All OSD got segmentation fault after CRUSH edit

2016-04-26 Thread Wido den Hollander
> On 26 April 2016 at 19:39, Samuel Just wrote: > > > I think? Probably worth reproducing on a vstart cluster to validate > the fix. Didn't we introduce something in the mon to validate new > crushmaps? Hammer maybe? I ended up injecting a fixed CRUSH map into osdmap 1432
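Not necessarily Wido's exact procedure, but one way to reproduce such a fix offline with stock tools, assuming the broken epoch is 1432 (filenames are placeholders):

    # ceph osd getmap 1432 -o osdmap.1432
    # osdmaptool osdmap.1432 --export-crush crush.bin
    # crushtool -d crush.bin -o crush.txt
    (edit crush.txt, then recompile and re-import)
    # crushtool -c crush.txt -o crush.fixed
    # osdmaptool osdmap.1432 --import-crush crush.fixed

This lets the corrected map be inspected and tested before anything touches the live cluster.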

Re: [ceph-users] Hammer broke after adding 3rd osd server

2016-04-26 Thread Andrei Mikhailovsky
Hi Wido, Thanks for your reply. We have a very simple ceph network: a single 40gbit/s infiniband switch to which the osd servers and hosts are connected. There are no default gateways on the storage network. The IB is used only for ceph; everything else goes over the ethernet. I've checked the

Re: [ceph-users] Hammer broke after adding 3rd osd server

2016-04-26 Thread Wido den Hollander
> On 26 April 2016 at 17:52, Andrei Mikhailovsky wrote: > > > Hello everyone, > > I've recently performed a hardware upgrade on our small two osd server ceph > cluster, which seems to have broken the ceph cluster. We are using ceph for > cloudstack rbd images for vms. All

Re: [ceph-users] Ceph cache tier, flushed objects does not appear to be written on disk

2016-04-26 Thread Gregory Farnum
You've probably got some issues with the exact commands you're running and how they interact with read-only caching — that's a less-common cache type. You'll need to get somebody who's experienced with those cache types or who has worked with them recently to help out, though. -Greg On Tue, Apr
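For readers following along, a read-only tier is wired up roughly like this (pool names are placeholders; a sketch only, and newer releases gate readonly mode behind --yes-i-really-mean-it because it is not well supported):

    # ceph osd tier add base-pool cache-pool
    # ceph osd tier cache-mode cache-pool readonly --yes-i-really-mean-it
    # ceph osd tier set-overlay base-pool cache-pool

In readonly mode clients write to the base tier and only reads populate the cache, so flush behavior differs fundamentally from the usual writeback tiering.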

Re: [ceph-users] CEPH All OSD got segmentation fault after CRUSH edit

2016-04-26 Thread Samuel Just
I think? Probably worth reproducing on a vstart cluster to validate the fix. Didn't we introduce something in the mon to validate new crushmaps? Hammer maybe? -Sam On Tue, Apr 26, 2016 at 8:09 AM, Wido den Hollander wrote: > >> On 26 April 2016 at 16:58, Samuel Just

Re: [ceph-users] krbd map on Jewel, sysfs write failed when rbd map

2016-04-26 Thread Ilya Dryomov
On Tue, Apr 26, 2016 at 5:45 PM, Somnath Roy wrote: > By default the image format is 2 in jewel, which is not supported by krbd... try > creating the image with --image-format 1 and it should be resolved... With the default striping pattern (no --stripe-unit or --stripe-count at
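To make Ilya's point concrete, a sketch using placeholder pool/image names: either create the format 2 image with only the layering feature, or strip the unsupported features from an existing image; both leave a format 2 image that older krbd can map.

    # rbd create block_data/data03 -s 10G --image-format 2 --image-feature layering
    # rbd feature disable block_data/data03 deep-flatten fast-diff object-map exclusive-lock
    # rbd map block_data/data03

The features listed for disabling are Jewel's defaults beyond layering; disable them in that order since some depend on exclusive-lock.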

Re: [ceph-users] RadosGW not start after upgrade to Jewel

2016-04-26 Thread Yehuda Sadeh-Weinraub
On Tue, Apr 26, 2016 at 6:50 AM, Abhishek Lekshmanan wrote: > > Ansgar Jazdzewski writes: > >> Hi, >> >> After playing with the setup I got some output that looks wrong >> >> # radosgw-admin zone get >> >> "placement_pools": [ >> { >> "key":

Re: [ceph-users] Can Jewel read Hammer radosgw buckets?

2016-04-26 Thread Richard Chan
Summary of Yehuda's script on the Hammer -> Jewel upgrade: 1. It works: users, buckets, and objects are now accessible; the zonegroup and zone have been set to "default" (previously zone = "" and region = ""). 2. s3cmd needed to be upgraded to 1.6 to work. Thanks. On Tue, Apr 26, 2016 at 8:06 AM, Yehuda
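For anyone following along, the post-upgrade state can be verified with (assuming the default names the script sets):

    # radosgw-admin zonegroup get --rgw-zonegroup=default
    # radosgw-admin zone get --rgw-zone=default

Both should now report "default" rather than the empty region/zone names Hammer used.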

[ceph-users] Hammer broke after adding 3rd osd server

2016-04-26 Thread Andrei Mikhailovsky
Hello everyone, I've recently performed a hardware upgrade on our small two osd server ceph cluster, which seems to have broken the ceph cluster. We are using ceph for cloudstack rbd images for vms. All of our servers are Ubuntu 14.04 LTS with the latest updates and kernel 4.4.6 from the ubuntu repo.

Re: [ceph-users] krbd map on Jewel, sysfs write failed when rbd map

2016-04-26 Thread Somnath Roy
By default the image format is 2 in jewel, which is not supported by krbd... try creating the image with --image-format 1 and it should be resolved. Thanks Somnath Sent from my iPhone On Apr 25, 2016, at 9:38 PM, "wd_hw...@wistron.com"

Re: [ceph-users] CEPH All OSD got segmentation fault after CRUSH edit

2016-04-26 Thread Wido den Hollander
> On 26 April 2016 at 16:58, Samuel Just wrote: > > > Can you attach the OSDMap (ceph osd getmap -o )? > -Sam > Henrik contacted me to look at this and this is what I found: 0x00b18b81 in crush_choose_firstn (map=map@entry=0x1f00200, bucket=0x0,

Re: [ceph-users] CEPH All OSD got segmentation fault after CRUSH edit

2016-04-26 Thread Samuel Just
Can you attach the OSDMap (ceph osd getmap -o )? -Sam On Tue, Apr 26, 2016 at 2:07 AM, Henrik Svensson wrote: > Hi! > > We have a three node CEPH cluster with 10 OSDs each. > > We bought 3 new machines with an additional 30 disks that should reside in > another location.

Re: [ceph-users] Can Jewel read Hammer radosgw buckets?

2016-04-26 Thread Richard Chan
My bad: the s3cmd errors were unrelated to the Jewel upgrade and Yehuda's script; they required an upgrade of s3cmd from 1.5 to 1.6 - sorry for the noise. Will try to replicate the upgrade. On Tue, Apr 26, 2016 at 9:27 PM, Richard Chan wrote: > Also s3cmd is unable to

Re: [ceph-users] RadosGW not start after upgrade to Jewel

2016-04-26 Thread Abhishek Lekshmanan
Ansgar Jazdzewski writes: > Hi, > > After playing with the setup I got some output that looks wrong > > # radosgw-admin zone get > > "placement_pools": [ > { > "key": "default-placement", > "val": { > "index_pool": ".eu-qa.rgw.buckets.inde", >

Re: [ceph-users] Can Jewel read Hammer radosgw buckets?

2016-04-26 Thread Richard Chan
Also s3cmd is unable to create new buckets: # s3cmd -c jewel.cfg mb s3://test.3 ERROR: S3 error: None On Tue, Apr 26, 2016 at 8:06 AM, Yehuda Sadeh-Weinraub wrote: > I managed to reproduce the issue, and there seem to be multiple > problems. Specifically we have an issue

Re: [ceph-users] Can Jewel read Hammer radosgw buckets?

2016-04-26 Thread Richard Chan
Result: 1. user and buckets recognised; 2. radosgw-admin bucket list --bucket test.1 shows objects; but 3. s3cmd cannot list the contents of buckets. # s3cmd -c jewel.cfg ls 2016-04-25 15:57 s3://test.1 2016-04-25 15:58 s3://test.2 # s3cmd -c jewel.cfg ls s3://test.1/ ERROR: S3 error: None s3cmd -c
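When s3cmd reports only "S3 error: None", its debug flag usually turns the failure into something actionable, e.g. (using the same config file):

    # s3cmd -c jewel.cfg -d ls s3://test.1/

The debug output shows the raw request and response; in this case the s3cmd 1.5 to 1.6 upgrade mentioned above resolved it.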

[ceph-users] RadosGW and X-Storage-Url

2016-04-26 Thread Paweł Sadowski
Hi, I'm testing RadosGW on Infernalis (9.2.1) and have two questions regarding the X-Storage-Url header. The first is that it always returns something like the following: X-Storage-Url: http://my.example.domain:0/swift/v1 While the docs say it should return "... {api version}/{account} prefix" Second
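One way to inspect the header is a raw swift v1 auth request with curl, assuming a swift subuser test:swift already exists (user, key, and port are placeholders):

    # curl -i http://my.example.domain:8080/auth/v1.0 -H "X-Auth-User: test:swift" -H "X-Auth-Key: <swift-secret>"

On the port question, the advertised URL can be pinned explicitly with 'rgw swift url = http://my.example.domain:8080' in ceph.conf, which is a plausible workaround for the ':0'; whether the missing {account} suffix is intended is a fair question for the docs.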

Re: [ceph-users] RadosGW not start after upgrade to Jewel

2016-04-26 Thread Ansgar Jazdzewski
Hi, After playing with the setup I got some output that looks wrong # radosgw-admin zone get "placement_pools": [ { "key": "default-placement", "val": { "index_pool": ".eu-qa.rgw.buckets.inde", "data_pool":
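The truncated "index_pool" name (".inde" rather than ".index") is the kind of thing that can be repaired by round-tripping the zone document; a sketch, assuming the zone is named "default":

    # radosgw-admin zone get --rgw-zone=default > zone.json
    (fix "index_pool" to ".eu-qa.rgw.buckets.index" in zone.json)
    # radosgw-admin zone set --rgw-zone=default --infile zone.json
    # radosgw-admin period update --commit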

Re: [ceph-users] RadosGW not start after upgrade to Jewel

2016-04-26 Thread Ansgar Jazdzewski
Hi all, I got an answer that pointed me to: https://github.com/ceph/ceph/blob/master/doc/radosgw/multisite.rst 2016-04-25 16:02 GMT+02:00 Karol Mroz : > On Mon, Apr 25, 2016 at 02:23:28PM +0200, Ansgar Jazdzewski wrote: >> Hi, >> >> we are testing Jewel in our QA environment (from

Re: [ceph-users] ceph OSD down+out =>health ok => remove =>PGsbackfilling... ?

2016-04-26 Thread Burkhard Linke
Hi, On 04/26/2016 12:32 PM, SCHAER Frederic wrote: Hi, One simple/quick question. In my ceph cluster, I had a disk which was in predicted failure. It was so much in predicted failure that the ceph OSD daemon crashed. After the OSD crashed, ceph moved the data correctly (or at least that’s

[ceph-users] ceph OSD down+out =>health ok => remove => PGs backfilling... ?

2016-04-26 Thread SCHAER Frederic
Hi, One simple/quick question. In my ceph cluster, I had a disk which was in predicted failure. It was so much in predicted failure that the ceph OSD daemon crashed. After the OSD crashed, ceph moved the data correctly (or at least that's what I thought), and ceph -s was reporting "HEALTH_OK".
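This is expected: marking an OSD out only reweights it to zero for data placement, while its item still exists in the CRUSH map; removing the item changes the host's CRUSH weight and therefore reshuffles placements again. A sketch of the usual removal sequence (osd.12 is a placeholder):

    # ceph osd crush remove osd.12
    # ceph auth del osd.12
    # ceph osd rm 12

Some operators run 'ceph osd crush reweight osd.12 0' first and let the backfill finish, so the eventual removal causes no further data movement.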

Re: [ceph-users] increase pgnum after adjust reweight osd

2016-04-26 Thread lin zhou
Thanks Christian. This cluster has 7 nodes with 69 osds. I know this version is very old, but it's hard to stop the service to upgrade. I will increase it slowly in steps of 100. Thanks again. 2016-04-25 15:55 GMT+08:00 Christian Balzer : > > Hello, > > On Mon, 25 Apr 2016 13:23:04
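For reference, each step touches two knobs, e.g. raising a pool from 1024 by an increment of 100 (the pool name is a placeholder):

    # ceph osd pool set rbd pg_num 1124
    # ceph osd pool set rbd pgp_num 1124

pg_num splits the placement groups; pgp_num then lets CRUSH actually rebalance them, so raise both and wait for HEALTH_OK between steps.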

[ceph-users] CEPH All OSD got segmentation fault after CRUSH edit

2016-04-26 Thread Henrik Svensson
Hi! We have a three node CEPH cluster with 10 OSDs each. We bought 3 new machines with an additional 30 disks that should reside in another location. Before adding these machines we modified the default CRUSH table. After modifying the (default) crush table with these commands the cluster went
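A safer edit cycle tests the compiled map before injecting it, along these lines (filenames are placeholders):

    # ceph osd getcrushmap -o crush.bin
    # crushtool -d crush.bin -o crush.txt
    (edit crush.txt)
    # crushtool -c crush.txt -o crush.new
    # crushtool -i crush.new --test --num-rep 3 --show-bad-mappings
    # ceph osd setcrushmap -i crush.new

crushtool --test would not necessarily have caught the segfault discussed in this thread, but it does catch many malformed maps before any OSD sees them.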

[ceph-users] How to configure NFS to access RADOSGW buckets

2016-04-26 Thread WD_Hwang
Hello: Are there any documents or examples to explain the configuration of NFS to access RADOSGW buckets? Thanks a lot. Best Regards, WD

Re: [ceph-users] krbd map on Jewel, sysfs write failed when rbd map

2016-04-26 Thread Stefan Lissmats
Hello again! I normally map with rbd map data03 -p block_data, but your format should work. Maybe it's worth a try anyway? Are there any more explanatory errors in dmesg after trying to map the image? Maybe there is some clue as to what's happening there. Also, is there any possibility to update the

Re: [ceph-users] krbd map on Jewel, sysfs write failed when rbd map

2016-04-26 Thread WD_Hwang
Hello: Sorry, I forgot to paste the results for image format 1. I still cannot map the format 1 or format 2 block on the Ubuntu 14.04 client, whose kernel is 3.13.0-85-generic #129-Ubuntu. ## # rbd create block_data/data03 -s 10G --image-format 1 rbd: image format 1 is deprecated
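Given the deprecation warning, the other route on a 3.13 kernel is a format 2 image limited to the layering feature, which krbd of that vintage should be able to map (a sketch with the same placeholder names):

    # rbd create block_data/data04 -s 10G --image-format 2 --image-feature layering
    # rbd map block_data/data04

If the map still fails, dmesg typically reports an unsupported-features mask, which narrows down the culprit.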

Re: [ceph-users] krbd map on Jewel, sysfs write failed when rbd map

2016-04-26 Thread Stefan Lissmats
Hello! It seems you're referring to an earlier message, but I can't find it. It doesn't look like you have created image format 1 images. I have created images in Jewel (10.2.0 and also some earlier releases) with the switch --image-format 1, and they seem to work perfectly even though it's a deprecated