Re: [ceph-users] FW: Ceph data locality

2015-07-07 Thread Lionel Bouton
On 07/07/15 18:20, Dmitry Meytin wrote: Exactly because of that issue I've reduced the number of Ceph replicas to 2, and the number of HDFS copies is also 2 (so we're talking about 4 copies). I want (but haven't tried yet) to change Ceph replication to 1 and change HDFS back to 3. You are

Re: [ceph-users] Health WARN, ceph errors looping

2015-07-07 Thread Abhishek L
Steve Dainard writes: Hello, Ceph 0.94.1, 2 hosts, CentOS 7. I have two hosts, one of which ran out of / disk space, which crashed all the OSD daemons. After cleaning up the OS disk storage and restarting Ceph on that node, I'm seeing multiple errors, then health OK, then back into the

[ceph-users] Client - Server Version Dependencies

2015-07-07 Thread Eino Tuominen
Hello, I tried to find documentation about version dependencies. I understand that a newer client (librados) should always be able to talk to an older server, but how about the other way round? -- Eino Tuominen

[ceph-users] Health WARN, ceph errors looping

2015-07-07 Thread Steve Dainard
Hello, Ceph 0.94.1, 2 hosts, CentOS 7. I have two hosts, one of which ran out of / disk space, which crashed all the OSD daemons. After cleaning up the OS disk storage and restarting Ceph on that node, I'm seeing multiple errors, then health OK, then back into the errors: # ceph -w

Re: [ceph-users] Health WARN, ceph errors looping

2015-07-07 Thread Steve Dainard
The error keeps coming back, with the status eventually changing to OK, then back into errors. I thought it looked like a connectivity issue as well, given the "wrongly marked me down" messages, but the firewall rules are allowing all traffic on the cluster network. Syslog is being flooded with messages like: Jul 7
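
A minimal sketch of the connectivity checks implied above; osd.3 and <peer-host> are placeholders for one of the flapping OSDs and another node on the cluster network:

    ceph health detail                                # shows which OSDs are being reported down, and by whom
    ceph daemon osd.3 config get osd_heartbeat_grace  # heartbeat window before peers report an OSD down (run on that OSD's host)
    nc -zv <peer-host> 6800                           # OSDs bind ports 6800-7300 by default; verify reachability over the cluster network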

[ceph-users] CephFS archive use case

2015-07-07 Thread Peter Tiernan
Hi, I have a use case for CephFS whereby files can be added but not modified or deleted. Is this possible? Perhaps with CephFS layouts or cephx capabilities. Thanks in advance

Re: [ceph-users] FW: Ceph data locality

2015-07-07 Thread Dmitry Meytin
Hi Lionel, Thanks for the answer. The missing info: 1) Ceph 0.80.9 Firefly 2) map-reduce makes sequential reads of blocks of 64 MB (or 128 MB) 3) HDFS, which is running on top of Ceph, replicates data 3 times between VMs which could be located on the same physical host or on different hosts 4)

[ceph-users] Ceph OSDs are down and cannot be started

2015-07-07 Thread Fredy Neeser
Hi, I had a working Ceph Hammer test setup with 3 OSDs and 1 MON (running on VMs), and RBD was working fine. The setup was not touched for two weeks (also no I/O activity), and when I looked again, the cluster was in a bad state: On the MON node (sto-vm20): $ ceph health HEALTH_WARN 72 pgs

Re: [ceph-users] Node reboot -- OSDs not logging off from cluster

2015-07-07 Thread Daniel Schneller
On 2015-07-03 01:31:35, Johannes Formann said: Hi, When rebooting one of the nodes (e.g. for a kernel upgrade) the OSDs do not seem to shut down correctly. Clients hang and ceph osd tree shows the OSDs of that node still up. Repeated runs of ceph osd tree show them going down after a

Re: [ceph-users] FW: Ceph data locality

2015-07-07 Thread Lionel Bouton
On 07/07/15 17:41, Dmitry Meytin wrote: Hi Lionel, Thanks for the answer. The missing info: 1) Ceph 0.80.9 Firefly 2) map-reduce makes sequential reads of blocks of 64 MB (or 128 MB) 3) HDFS, which is running on top of Ceph, replicates data 3 times between VMs which could be located

Re: [ceph-users] FW: Ceph data locality

2015-07-07 Thread Dmitry Meytin
Unfortunately it seems that CephFS currently doesn't support Hadoop 2.*. The next step will be to try Tachyon on top of Ceph. Has somebody already tried such a constellation? -Original Message- From: Lionel Bouton [mailto:lionel+c...@bouton.name] Sent: Tuesday, July 07, 2015 7:49 PM To:

Re: [ceph-users] Ceph OSDs are down and cannot be started

2015-07-07 Thread Somnath Roy
Run 'ceph-osd -i 0 -f' in a console and see what the output is. Thanks Regards Somnath -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Fredy Neeser Sent: Tuesday, July 07, 2015 9:15 AM To: ceph-users@lists.ceph.com Subject: [ceph-users]
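
For reference, a short sketch of what that looks like; osd.0 and the log path are the stock defaults and may differ on your setup:

    ceph-osd -i 0 -f                          # run the OSD in the foreground so startup errors print to the console
    tail -n 100 /var/log/ceph/ceph-osd.0.log  # if it exits immediately, the recent log usually names the cause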

Re: [ceph-users] Ceph Rados-Gateway Configuration issues

2015-07-07 Thread MOSTAFA Ali (INTERN)
Hi, Which OS are you using? I installed it on Ubuntu Vivid and it gave me a hard time to get working; I didn't manage to make it work on Ubuntu Trusty. For Ubuntu, some commands are missing from the documentation. Since the Hammer release and the newest ceph-deploy, you can install the RGW with a single command, but I

[ceph-users] Ceph Rados-Gateway Configuration issues

2015-07-07 Thread Teclus Dsouza -X (teclus - TECH MAHINDRA LIM at Cisco)
Hello everyone, I was trying to configure the Ceph Object Gateway and am running into connectivity issues in the final boto script. I am following the link http://docs.ceph.com/docs/master/radosgw/config/ for this. I was able to get Apache and FastCGI configured, but in the section

Re: [ceph-users] Ceph Rados-Gateway Configuration issues

2015-07-07 Thread Teclus Dsouza -X (teclus - TECH MAHINDRA LIM at Cisco)
I am trying on Ubuntu 14.04 and using the Hammer release. I seem to have everything set up, but I am not sure what the best alternative method to test it is. Regards Teclus Dsouza From: MOSTAFA Ali (INTERN) [mailto:ali.mostafa.int...@3ds.com] Sent: Tuesday, July 07, 2015 2:36 PM To: Teclus

Re: [ceph-users] bucket owner vs S3 ACL?

2015-07-07 Thread Valery Tschopp
Hi Florent, Yes, this makes sense now. Thanks a lot. V. On 01/07/15 20:19, Florent MONTHEL wrote: Hi Valery, With the old account did you try to give FULL access to the new user ID? The process should be: From the OLD account, add FULL access to the NEW account (S3 ACL with CloudBerry, for example)

[ceph-users] ceph kernel settings

2015-07-07 Thread Daniel Hoffman
Hey all. Wondering if anyone has a set of kernel settings they run on larger-density setups, 24-36 disks per node. We have run into and resolved the PID issue; just wondering if there is anything else we may be hitting that we don't know about yet. Thanks Daniel
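
For what it's worth, the "PID issue" on dense nodes is usually kernel.pid_max exhaustion; a hedged sketch of the sysctl side (values are illustrative, not recommendations):

    echo 'kernel.pid_max = 4194303' >> /etc/sysctl.d/90-ceph.conf   # raise the thread/PID ceiling for many OSD threads
    echo 'fs.file-max = 6553600'    >> /etc/sysctl.d/90-ceph.conf   # plenty of file handles for OSDs and clients
    sysctl --system                                                 # reload all sysctl configuration files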

[ceph-users] Ceph data locality

2015-07-07 Thread Dmitry Meytin
Hi, I need help configuring clients to write data to the primary OSD on the local server. I have a cluster with 20 nodes, and on each server I have a client which writes/reads data to/from the Ceph cluster (a typical OpenStack installation with Ceph on each compute node). I see a lot of networking
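
A side note on verifying locality: Ceph can report which OSDs hold a given object, which makes it easy to check whether the primary happens to be local; pool and object names below are placeholders:

    ceph osd map <pool-name> <object-name>   # prints the PG plus the up/acting OSD set, with the primary marked as pN
    ceph osd tree                            # maps those OSD ids back to hosts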

Re: [ceph-users] CephFS archive use case

2015-07-07 Thread Gregory Farnum
That's not something that CephFS supports yet; raw RADOS doesn't have any kind of immutability support either. :( -Greg On Tue, Jul 7, 2015 at 5:28 PM Peter Tiernan ptier...@tchpc.tcd.ie wrote: Hi, I have a use case for CephFS whereby files can be added but not modified or deleted. Is this

Re: [ceph-users] FW: Ceph data locality

2015-07-07 Thread Lionel Bouton
Hi Dmitry, On 07/07/15 14:42, Dmitry Meytin wrote: Hi Christian, Thanks for the thorough explanation. My case is Elastic Map Reduce on top of OpenStack with a Ceph backend for everything (block, object, images). With the default configuration, performance is 300% worse than bare metal. I made a

Re: [ceph-users] Ceph FS - MDS problem

2015-07-07 Thread Gregory Farnum
On Tue, Jul 7, 2015 at 4:02 PM, Dan van der Ster d...@vanderster.com wrote: Hi Greg, On Tue, Jul 7, 2015 at 4:25 PM, Gregory Farnum g...@gregs42.com wrote: 4. mds cache size = 500 is going to use a lot of memory! We have an MDS with just 8GB of RAM and it goes OOM after delegating around

[ceph-users] PG degraded after settings OSDs out

2015-07-07 Thread MOSTAFA Ali (INTERN)
Hello, I have a test cluster of 12 OSDs. I deleted all pools, then I set six of the OSDs out. After that I created a pool with 100 PGs, and the PGs are stuck in a creating or degraded state. Can you please advise? Does the CRUSH algorithm still take the OSDs marked as down into consideration? Even if I have data
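
A hedged sketch of how to narrow this down (the pgid is a placeholder):

    ceph osd tree                          # confirm how many OSDs are still up/in after marking six out
    ceph pg dump_stuck inactive unclean    # list the PGs stuck creating or degraded
    ceph pg <pgid> query                   # explains why a specific PG cannot be served (e.g. too few OSDs for the pool size)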

[ceph-users] He8 drives

2015-07-07 Thread Blair Bethwaite
Hi folks, Does anyone have any experience with the newish HGST He8 8TB helium-filled HDDs? StorageReview looked at them here: http://www.storagereview.com/hgst_ultrastar_helium_he8_8tb_enterprise_hard_drive_review. I'm torn over the lower read performance shown there compared to e.g. the He6 or Seagate

[ceph-users] RadosGW - Negative bucket stats

2015-07-07 Thread Italo Santos
Hello, I realized that one of the buckets in my cluster has some strange stats, and I see that an issue like that was previously resolved in issue #3127 (http://tracker.ceph.com/issues/3127), so I'd like to know how I can identify whether my case is the one described in that issue. See the stats:
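
A minimal sketch for checking whether the bucket index and the reported stats disagree; the bucket name is a placeholder, and --fix should only be run after reviewing the plain check:

    radosgw-admin bucket stats --bucket=<bucket-name>
    radosgw-admin bucket check --bucket=<bucket-name>          # report index inconsistencies without changing anything
    radosgw-admin bucket check --bucket=<bucket-name> --fix    # repair the index stats if the report confirms drift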

[ceph-users] Help with radosgw admin ops hash of header

2015-07-07 Thread Eduardo Gonzalez Gutierrez
Hi, I'm trying to use the admin ops API through curl, but I don't know where to get Authorization: AWS {access-key}:{hash-of-header-and-secret}. Can anyone help me figure out how to compute the hash of the header and secret? My test user info is: { user_id: user1, display_name: user1, email: ,

Re: [ceph-users] NVME SSD for journal

2015-07-07 Thread Christian Balzer
Hello, On Wed, 8 Jul 2015 00:33:59 +1200 Andrew Thrift wrote: We are running NVMe Intel P3700's as journals for about 8 months now. 1x P3700 per 6x OSD. So far they have been reliable. We are using S3700, S3710 and P3700 as journals and there is _currently_ no real benefit of the P3700

[ceph-users] FW: Ceph data locality

2015-07-07 Thread Dmitry Meytin
I think it's essential for huge data clusters to deal with data locality. Even a very expensive network stack (100 Gb/s) will not mitigate the problem if you need to move petabytes of data many times a day. Maybe there is some workaround to the problem? From: Van Leeuwen, Robert

Re: [ceph-users] FW: Ceph data locality

2015-07-07 Thread Christian Balzer
Hello, On Tue, 7 Jul 2015 11:45:11, Dmitry Meytin wrote: I think it's essential for huge data clusters to deal with data locality. Even a very expensive network stack (100 Gb/s) will not mitigate the problem if you need to move petabytes of data many times a day. Maybe there is some

Re: [ceph-users] NVME SSD for journal

2015-07-07 Thread Andrew Thrift
We are running NVMe Intel P3700's as journals for about 8 months now. 1x P3700 per 6x OSD. So far they have been reliable. We are using S3700, S3710 and P3700 as journals and there is _currently_ no real benefit of the P3700 over the SATA units as journals for Ceph. Regards, Andrew On

Re: [ceph-users] NVME SSD for journal

2015-07-07 Thread Van Leeuwen, Robert
I'm wondering if anyone is using NVMe SSDs for journals? The Intel 750 series 400GB NVMe SSD offers good performance and price in comparison to, let's say, the Intel S3700 400GB. http://ark.intel.com/compare/71915,86740 My concern would be MTBF / TBW, which is only 1.2M hours and 70GB per day for 5yrs

Re: [ceph-users] Ceph Rados-Gateway Configuration issues

2015-07-07 Thread Teclus Dsouza -X (teclus - TECH MAHINDRA LIM at Cisco)
Hi Ali, I have used this command and it worked fine for me. Can you be specific about what you want to see from this output? Regards Teclus Dsouza From: MOSTAFA Ali (INTERN) [mailto:ali.mostafa.int...@3ds.com] Sent: Tuesday, July 07, 2015 4:57 PM To: Teclus Dsouza -X (teclus - TECH MAHINDRA

Re: [ceph-users] Ceph Rados-Gateway Configuration issues

2015-07-07 Thread Teclus Dsouza -X (teclus - TECH MAHINDRA LIM at Cisco)
Nope, I did not make any changes; it just worked when executed. Regards Teclus Dsouza From: MOSTAFA Ali (INTERN) [mailto:ali.mostafa.int...@3ds.com] Sent: Tuesday, July 07, 2015 5:25 PM To: Teclus Dsouza -X (teclus - TECH MAHINDRA LIM at Cisco); ceph-users@lists.ceph.com Subject: RE:

Re: [ceph-users] FW: Ceph data locality

2015-07-07 Thread Dmitry Meytin
Hi Christian, Thanks for the thorough explanation. My case is Elastic Map Reduce on top of OpenStack with a Ceph backend for everything (block, object, images). With the default configuration, performance is 300% worse than bare metal. I made a few changes: 1) replication set to 2, 2) read-ahead size
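
For concreteness, a rough sketch of those two changes (pool and device names are placeholders, values are only examples):

    ceph osd pool set <pool-name> size 2                   # replica count of 2
    echo 4096 > /sys/block/<device>/queue/read_ahead_kb    # larger read-ahead helps the 64/128 MB sequential reads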

Re: [ceph-users] EC cluster design considerations

2015-07-07 Thread Adrien Gillard
Thank you Christian, That confirms what I was thinking about the MONs. I will resize them though, according to your advice and Paul's. Regards, Adrien On Tue, Jul 7, 2015 at 6:18 AM, Christian Balzer ch...@gol.com wrote: Hello, On Sun, 5 Jul 2015 16:17:20, Paul Evans wrote:

Re: [ceph-users] Ceph Rados-Gateway Configuration issues

2015-07-07 Thread MOSTAFA Ali (INTERN)
So the test succeeded. Did you make any changes, or did it work right away? Regards, ALi From: Teclus Dsouza -X (teclus - TECH MAHINDRA LIM at Cisco) [mailto:tec...@cisco.com] Sent: Tuesday, July 07, 2015 13:46 To: MOSTAFA Ali (INTERN); ceph-users@lists.ceph.com Subject: RE: [ceph-users] Ceph

Re: [ceph-users] FW: Ceph data locality

2015-07-07 Thread Wido den Hollander
On 07-07-15 14:42, Dmitry Meytin wrote: Hi Christian, Thanks for the thorough explanation. My case is Elastic Map Reduce on top of OpenStack with a Ceph backend for everything (block, object, images). With the default configuration, performance is 300% worse than bare metal. I made a few

[ceph-users] NVME SSD for journal

2015-07-07 Thread Dominik Zalewski
Hi, I'm wondering if anyone is using NVMe SSDs for journals? The Intel 750 series 400GB NVMe SSD offers good performance and price in comparison to, let's say, the Intel S3700 400GB. http://ark.intel.com/compare/71915,86740 My concern would be MTBF / TBW, which is only 1.2M hours and 70GB per day for 5yrs
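
Back-of-envelope math on that rating, assuming the quoted 70 GB/day held for the full 5-year warranty:

    echo "$((70 * 365 * 5)) GB"   # = 127750 GB, i.e. roughly 128 TB of lifetime writes
    # every byte a local OSD stores passes through its journal once, so compare this figure
    # against the node's expected daily write volume times the number of OSDs sharing the device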

Re: [ceph-users] NVME SSD for journal

2015-07-07 Thread Christian Balzer
Hello, On Tue, 7 Jul 2015 09:51:56, Van Leeuwen, Robert wrote: I'm wondering if anyone is using NVMe SSDs for journals? The Intel 750 series 400GB NVMe SSD offers good performance and price in comparison to, let's say, the Intel S3700 400GB. http://ark.intel.com/compare/71915,86740 My concern

Re: [ceph-users] [Ceph-community] Ceph containers Issue

2015-07-07 Thread Joao Eduardo Luis
CC'ing ceph-users, where you're likely to get a proper response. Ceph-community is for community-related matters. Cheers! -Joao On 07/07/2015 09:16 AM, Cristian Cristelotti wrote: Hi all, I'm facing an issue with a centralized Keystone and I can't create containers; it keeps returning an error

Re: [ceph-users] Ceph Rados-Gateway Configuration issues

2015-07-07 Thread MOSTAFA Ali (INTERN)
Hello, You have the same problem I faced. To solve it I moved my RGW to Ubuntu 15.04, installed Apache 2.4.10, and used the Unix socket. The documentation is missing some commands: after you create your rgw configuration file in Apache's conf-available folder you have to enable it
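
A sketch of the Apache 2.4 enabling steps hinted at above; the file name rgw.conf is an assumption:

    sudo a2enconf rgw             # for a file placed in /etc/apache2/conf-available/rgw.conf
    sudo a2ensite rgw             # use this instead if you created a virtual host in sites-available
    sudo service apache2 reload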

Re: [ceph-users] Ceph Rados-Gateway Configuration issues

2015-07-07 Thread MOSTAFA Ali (INTERN)
Since you are using Hammer, can you please test this method and send us your feedback: http://docs.ceph.com/docs/master/start/quick-ceph-deploy/#add-an-rgw-instance I didn't test it and I don't really have time, but I would like to see the result. Regards, Ali From: MOSTAFA Ali (INTERN) Sent: Tuesday 7
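
For the curious, that page boils down to roughly this; a sketch assuming a recent ceph-deploy and a gateway host named gw-node:

    ceph-deploy install --rgw gw-node
    ceph-deploy rgw create gw-node
    curl http://gw-node:7480/         # the civetweb instance it creates listens on port 7480 by default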

Re: [ceph-users] He8 drives

2015-07-07 Thread Christian Balzer
Re-added the list. On Wed, 8 Jul 2015 11:12:51 +1000, Nigel Williams wrote: On Wed, Jul 8, 2015 at 11:01 AM, Christian Balzer ch...@gol.com wrote: In short, SMR HDDs seem to be a bad match for Ceph or any random I/O. The He8 isn't shingled though, it is a PMR drive like the He6. Argh! That's

Re: [ceph-users] He8 drives

2015-07-07 Thread Christian Balzer
On Wed, 8 Jul 2015 10:28:17 +1000 Blair Bethwaite wrote: Hi folks, Does anyone have any experience with the newish HGST He8 8TB Helium filled HDDs? Storagereview looked at them here: http://www.storagereview.com/hgst_ultrastar_helium_he8_8tb_enterprise_hard_drive_review. I'm torn as to the

Re: [ceph-users] Ceph FS - MDS problem

2015-07-07 Thread Gregory Farnum
On Fri, Jul 3, 2015 at 10:34 AM, Dan van der Ster d...@vanderster.com wrote: Hi, We're looking at similar issues here and I was composing a mail just as you sent this. I'm just a user -- hopefully a dev will correct me where I'm wrong. 1. A CephFS cap is a way to delegate permission for a

Re: [ceph-users] NVME SSD for journal

2015-07-07 Thread David Burley
There is at least one benefit: you can go denser. In our testing of real workloads, you can get a 12:1 OSD-to-journal-drive ratio (or even higher) using the P3700. This assumes you are willing to accept the impact of losing 12 OSDs when a journal croaks. On Tue, Jul 7, 2015 at 8:33 AM, Andrew

[ceph-users] adding a extra monitor with ceph-deploy fails

2015-07-07 Thread Makkelie, R (ITCDCC) - KLM
I'm trying to add an extra monitor with ceph-deploy; the current/first monitor was installed by hand. When I do ceph-deploy mon add HOST, the new monitor seems to assimilate the old monitor, so the old/first monitor is now in the same state as the new monitor and is not aware of anything. I needed
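
A hedged sketch of the usual prerequisites before ceph-deploy mon add; host names mon1/mon2 are placeholders, and the existing monitor plus public_network need to be present in the ceph.conf that ceph-deploy pushes:

    ceph-deploy config pull mon1                  # fetch the running cluster's ceph.conf from the existing monitor
    grep -E 'mon_initial_members|mon_host|public_network' ceph.conf
    ceph-deploy mon add mon2
    ceph quorum_status --format json-pretty       # both monitors should now appear in the quorum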

Re: [ceph-users] NVME SSD for journal

2015-07-07 Thread David Burley
Further clarification, 12:1 with SATA spinners as the OSD data drives. On Tue, Jul 7, 2015 at 9:11 AM, David Burley da...@slashdotmedia.com wrote: There is at least one benefit, you can go more dense. In our testing of real workloads, you can get a 12:1 OSD to Journal drive ratio (or even

Re: [ceph-users] metadata server rejoin time

2015-07-07 Thread Gregory Farnum
On Thu, Jul 2, 2015 at 11:38 AM, Matteo Dacrema mdacr...@enter.it wrote: Hi all, I'm using CephFS on Hammer and I have 1.5 million files, 2 metadata servers in an active/standby configuration with 8 GB of RAM, 20 clients with 2 GB of RAM each, and 2 OSD nodes with 4 80 GB OSDs and 4 GB of RAM. I've

Re: [ceph-users] Help with radosgw admin ops hash of header

2015-07-07 Thread Brian Andrus
Here are some technical references: https://s3.amazonaws.com/doc/s3-developer-guide/RESTAuthentication.html http://docs.aws.amazon.com/AmazonSimpleDB/latest/DeveloperGuide/HMACAuth.html http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html You also might choose to use s3curl
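
Following those references, a hand-rolled version of the S3 v2 signature with openssl looks roughly like this; the host, resource and keys are placeholders, and the string-to-sign layout follows the spec linked above:

    access_key='<access-key>'
    secret='<secret-key>'
    resource='/admin/usage'
    date=$(date -Ru)                                # RFC 2822 date in UTC, sent in the Date header
    string_to_sign="GET\n\n\n${date}\n${resource}"  # verb, empty MD5, empty content-type, date, canonical resource
    signature=$(printf "${string_to_sign}" | openssl sha1 -hmac "${secret}" -binary | base64)
    curl -H "Date: ${date}" -H "Authorization: AWS ${access_key}:${signature}" "http://<rgw-host>${resource}?format=json"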

Re: [ceph-users] Question about change bucket quota.

2015-07-07 Thread Brian Andrus
Hi Mika, Feature request created: https://bugzilla.redhat.com/show_bug.cgi?id=1240888 On Mon, Jul 6, 2015 at 4:21 PM, Vickie ch mika.leaf...@gmail.com wrote: Dear Cephers, When a bucket is created, the default quota setting is unlimited. Is there any setting that can change this? That's admin
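
In the meantime, quotas can at least be applied per user so that every bucket that user owns gets capped; a hedged sketch (the uid and limits are placeholders):

    radosgw-admin quota set --quota-scope=bucket --uid=<uid> --max-objects=10000 --max-size-kb=1048576
    radosgw-admin quota enable --quota-scope=bucket --uid=<uid>
    radosgw-admin user info --uid=<uid>    # the quota block should now show up on the user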

Re: [ceph-users] He8 drives

2015-07-07 Thread Blair Bethwaite
Hey Christian, Thanks, I haven't caught up with my ceph-users backlog from last week yet so hadn't noticed that thread (SMR drives are something I was thinking about for a DR cluster and long term archival pool behind rgw). But note that the He8 drives are not SMR. Cheers, On 8 July 2015 at

[ceph-users] radosgw bucket index sharding tips?

2015-07-07 Thread Ben Hines
Anyone have any data on the optimal number of shards for a radosgw bucket index? We've had issues with bucket index contention with a few million+ objects in a single bucket, so I'm testing out the sharding. Perhaps at least one shard per OSD? Or fewer? More? I noticed some discussion here regarding slow
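
For reference, the Hammer-era knob for this is a gateway config option that only affects newly created buckets (existing buckets keep their single index object); the section name and the value of 16 below are just placeholders:

    [client.radosgw.gateway]
    rgw override bucket index max shards = 16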

Re: [ceph-users] Ceph FS - MDS problem

2015-07-07 Thread Dan van der Ster
Hi Greg, On Tue, Jul 7, 2015 at 4:25 PM, Gregory Farnum g...@gregs42.com wrote: 4. mds cache size = 500 is going to use a lot of memory! We have an MDS with just 8GB of RAM and it goes OOM after delegating around 1 million caps. (this is with mds cache size = 10, btw) Hmm. We do