Re: [ceph-users] How to calculate necessary disk amount

2014-08-21 Thread Irek Fasikhov
Hi. 10 TB * 2 / 0.85 ~= 24 TB with two replicas, i.e. the total raw capacity needed for the data.
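The arithmetic spelled out as a minimal sketch; the 2 replicas and the 85% target utilization are the assumptions used in the reply above, not Ceph defaults:

    # Rough raw-capacity estimate: usable * replicas / target utilization.
    def raw_capacity_tb(usable_tb, replicas=2, target_util=0.85):
        return usable_tb * replicas / target_util

    total = raw_capacity_tb(10)      # 10 TB usable -> ~23.5 TB raw
    per_node = total / 3             # spread over the 3 nodes in the question
    print("raw total: %.1f TB, per node: %.1f TB" % (total, per_node))
    # raw total: 23.5 TB, per node: 7.8 TB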

[ceph-users] How to calculate necessary disk amount

2014-08-21 Thread idzzy
Hello, Let me ask about disk calculation. For example, if 10 TB of usable space is required, how can I calculate/estimate the actual disk size needed on each node? - Consisting of 3 nodes. - OS and OSD (data, journals) on separate drives. What size of OSD disk should be used to provide 10 TB as the actual necessa

Re: [ceph-users] Question on OSD node failure recovery

2014-08-21 Thread Craig Lewis
The default rules are sane for small clusters with few failure domains. Anything larger than a single rack should customize their rules. It's a good idea to figure this out early. Changes to your CRUSH rules can result in a large percentage of data moving around, which will make your cluster unu
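As an illustration of what such a customization can look like, a rack-level replicated rule in the decompiled CRUSH map syntax; the rule name, ruleset id and the existence of rack buckets in the map are assumptions, not values from this thread:

    rule replicated_rack {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type rack
        step emit
    }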

Re: [ceph-users] Problem setting tunables for ceph firefly

2014-08-21 Thread Craig Lewis
There was a good discussion of this a month ago: https://www.mail-archive.com/ceph-users%40lists.ceph.com/msg11483.html That'll give you some things you can try, and information on how to undo it if it does cause problems. You can disable the warning by adding this to the [mon] section of ceph.c
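The ceph.conf fragment being alluded to is most likely the one below; treat it as a sketch and check the exact option name for your release:

    [mon]
        mon warn on legacy crush tunables = false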

Re: [ceph-users] MON running 'ceph -w' doesn't see OSD's booting

2014-08-21 Thread Bruce McFarland
Yes, all of the ceph-osd processes are up and running. I performed a ceph-mon restart to see if that might trigger the osdmap update, but there is no INFO msg from the osdmap or the pgmap that I expect to see when the OSDs are started. All of the OSDs and their hosts appear in the CRUSH map and in ce

Re: [ceph-users] fail to upload file from RadosGW by Python+S3

2014-08-21 Thread debian Only
Solved it by commenting out *rgw print continue = false*. The reason is related to 100-continue: when Python uses S3 it forces HTTP Expect: 100-continue, but I had disabled rgw print continue. When the Python S3 client uploads a file, the log includes *HTTP_EXPECT=100-Continue*, which means that when radosgw enables 100
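For context, a sketch of where that option lives; the section name [client.radosgw.gateway] is an assumption, and true is the documented default, so commenting the line out has the same effect:

    [client.radosgw.gateway]
        # setting this to false breaks clients that send "Expect: 100-continue",
        # e.g. boto uploads; leave it at the default (true) or omit it entirely
        rgw print continue = true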

Re: [ceph-users] Ceph Cinder Capabilities reports wrong free size

2014-08-21 Thread Gregory Farnum
On Thu, Aug 21, 2014 at 8:29 AM, Jens-Christian Fischer wrote: > I am working with Cinder Multi Backends on an Icehouse installation and have > added another backend (Quobyte) to a previously running Cinder/Ceph > installation. > > I can now create QuoByte volumes, but no longer any ceph volumes

Re: [ceph-users] MON running 'ceph -w' doesn't see OSD's booting

2014-08-21 Thread Gregory Farnum
Are the OSD processes still alive? What's the osdmap output of "ceph -w" (which was not in the output you pasted)? -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Thu, Aug 21, 2014 at 7:11 AM, Bruce McFarland wrote: > I have 3 storage servers each with 30 osds. Each osd has
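A few commands commonly used to compare what the monitors believe with what an OSD daemon reports about itself; the admin-socket path and OSD id are illustrative:

    ceph osd stat        # up/in counts according to the monitors
    ceph osd tree        # CRUSH view, including up/down state
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok status   # the OSD's own view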

Re: [ceph-users] fail to upload file from RadosGW by Python+S3

2014-08-21 Thread debian Only
My radosgw disabled 100-continue. [global] fsid = 075f1aae-48de-412e-b024-b0f014dbc8cf mon_initial_members = ceph01-vm, ceph02-vm, ceph04-vm mon_host = 192.168.123.251,192.168.123.252,192.168.123.250 auth_cluster_required = cephx auth_service_required = cephx auth_client_required = cephx filestore_

Re: [ceph-users] fail to upload file from RadosGW by Python+S3

2014-08-21 Thread debian Only
When I use DragonDisk and unselect the Expect "100-continue" header, the file uploads successfully. When I select this option, the upload hangs. Maybe the Python script cannot upload the file due to 100-continue? My radosgw/Apache2 does not use 100-continue. If my guess is true, how to disable this i

Re: [ceph-users] Hanging ceph client

2014-08-21 Thread Gregory Farnum
Yeah, that's fairly bizarre. Have you turned up the monitor logs and seen what they're doing? Have you checked that the nodes otherwise have the same configuration (firewall rules, client key permissions, installed version of Ceph...) -Greg Software Engineer #42 @ http://inktank.com | http://ceph.c
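A rough way to act on those suggestions from the affected node; the monitor address and debug levels are placeholders:

    # watch the client talk to the monitors with client-side debugging raised
    ceph -s --debug-monc 10 --debug-ms 1
    # confirm the monitor port is reachable at all from this node
    nc -zv <mon-ip> 6789
    # then compare /etc/ceph/ceph.conf, keyrings and package versions with a working node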

[ceph-users] Ceph Cinder Capabilities reports wrong free size

2014-08-21 Thread Jens-Christian Fischer
I am working with Cinder Multi Backends on an Icehouse installation and have added another backend (Quobyte) to a previously running Cinder/Ceph installation. I can now create Quobyte volumes, but no longer any Ceph volumes. The cinder-scheduler log gets an incorrect number for the free size o
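For comparison, the Ceph side of an Icehouse-era multi-backend cinder.conf usually looks roughly like this; every name and value below is an assumption, not taken from the thread:

    [DEFAULT]
    enabled_backends = rbd-ceph,quobyte

    [rbd-ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = rbd-ceph
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = <libvirt secret uuid>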

Re: [ceph-users] [radosgw] unable to perform any operation using s3 api

2014-08-21 Thread debian Only
(1) rgw_dns_name = labs: try using the hostname, not the FQDN. *compare=0 means auth passed* I am testing the S3 API too. I can create a bucket, list buckets, and list all files in a bucket, but cannot upload a file successfully with the Python API, while I can upload with s3cmd and the Windows software D
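The setting referred to in (1) sits in the radosgw section of ceph.conf; a sketch, with the section name assumed:

    [client.radosgw.gateway]
        # short hostname used to match bucket-subdomain (virtual-host style) requests;
        # the advice above is to use the plain hostname rather than the FQDN
        rgw dns name = labs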

[ceph-users] Problem setting tunables for ceph firefly

2014-08-21 Thread Gerd Jakobovitsch
Dear all, I have a ceph cluster running in 3 nodes, 240 TB space with 60% usage, used by rbd and radosgw clients. Recently I upgraded from emperor to firefly, and I got the message about legacy tunables described in http://ceph.com/docs/master/rados/operations/crush-map/#tunables. After some

Re: [ceph-users] MON running 'ceph -w' doesn't see OSD's booting

2014-08-21 Thread Bruce McFarland
I have 3 storage servers each with 30 osds. Each osd has a journal that is a partition on a virtual drive that is a raid0 of 6 ssds. I brought up a 3 osd (1 per storage server) cluster to bring up Ceph and figure out configuration etc. From: Dan Van Der Ster [mailto:daniel.vanders...@cern.ch] Se

Re: [ceph-users] active+remapped after remove osd via ceph osd out

2014-08-21 Thread Dominik Mostowiec
Hi, I have 2 PG in active+remapped state. ceph health detail HEALTH_WARN 2 pgs stuck unclean; recovery 24/348041229 degraded (0.000%) pg 3.1a07 is stuck unclean for 29239.046024, current state active+remapped, last acting [167,80,145] pg 3.154a is stuck unclean for 29239.039777, current state acti
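Commands typically used to dig further into PGs stuck in active+remapped (the PG ids are the ones from the message):

    ceph health detail
    ceph pg dump_stuck unclean
    ceph pg 3.1a07 query     # compare the "up" and "acting" sets, check recovery_state
    ceph pg 3.154a query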

[ceph-users] fail to upload file from RadosGW by Python+S3

2014-08-21 Thread debian Only
I can upload a file to RadosGW with s3cmd and the DragonDisk software. The script can list all buckets and all files in the bucket, but cannot upload from Python S3. ### #coding=utf-8 __author__ = 'Administrator' #!/usr/bin/env python import fnmatch import os, sys import boto import boto.s3.connection
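For comparison, a minimal boto 2.x (Python 2) upload against radosgw that works in many setups; the host, credentials and bucket name are placeholders, not values from this thread:

    #!/usr/bin/env python
    # boto 2.x sketch: connect with path-style addressing and upload one object.
    import boto
    import boto.s3.connection

    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
        host='rgw.example.com',
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )

    bucket = conn.create_bucket('test-bucket')
    key = bucket.new_key('hello.txt')
    key.set_contents_from_string('hello from boto')
    print [b.name for b in conn.get_all_buckets()]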

[ceph-users] Hanging ceph client

2014-08-21 Thread Damien Churchill
Hi, On a freshly created 4 node cluster I'm struggling to get the 4th node to create correctly. ceph-deploy is unable to create the OSDs on it and when logging in to the node and attempting to run `ceph -s` manually (after copying the client.admin keyring) with debug parameters it ends up hanging

Re: [ceph-users] Question on OSD node failure recovery

2014-08-21 Thread Sean Noonan
Ceph uses CRUSH (http://ceph.com/docs/master/rados/operations/crush-map/) to determine object placement. The default generated crush maps are sane, in that they will put replicas in placement groups into separate failure domains. You do not need to worry about this simple failure case, but you

[ceph-users] Question on OSD node failure recovery

2014-08-21 Thread LaBarre, James (CTR) A6IT
I understand the concept with Ceph being able to recover from the failure of an OSD (presumably with a single OSD being on a single disk), but I'm wondering what the scenario is if an OSD server node containing multiple disks should fail. Presuming you have a server containing 8-10 disks, your
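Two knobs usually relevant to the whole-node case, sketched here as a starting point to verify against the documentation for your release:

    # before planned maintenance on a node, keep its OSDs from being marked out
    ceph osd set noout
    # ... work on the node, bring it back, then:
    ceph osd unset noout

    # in ceph.conf, stop the monitors from auto-marking out an entire failed subtree
    [mon]
        mon osd down out subtree limit = host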

Re: [ceph-users] Ceph + Qemu cache=writethrough

2014-08-21 Thread Paweł Sadowski
Sorry for missing subject. On 08/21/2014 03:09 PM, Paweł Sadowski wrote: > Hi, > > I'm trying to start Qemu on top of RBD. In documentation[1] there is a > big warning: > > Important > > If you set rbd_cache=true, you must set cache=writeback or risk data > loss. Without cache=writebac

[ceph-users] (no subject; thread: Ceph + Qemu cache=writethrough)

2014-08-21 Thread Paweł Sadowski
Hi, I'm trying to start Qemu on top of RBD. In documentation[1] there is a big warning: Important If you set rbd_cache=true, you must set cache=writeback or risk data loss. Without cache=writeback, QEMU will not send flush requests to librbd. If QEMU exits uncleanly in this confi
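The combination the warning asks for looks something like this on the QEMU command line; the pool/image name is a placeholder and the rest of the command is elided:

    qemu-system-x86_64 ... \
        -drive format=raw,cache=writeback,file=rbd:rbd/vm-disk:rbd_cache=true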

Re: [ceph-users] Serious performance problems with small file writes

2014-08-21 Thread Dan Van Der Ster
Hi Hugo, On 21 Aug 2014, at 14:17, Hugo Mills wrote: > > Not sure what you mean about colocated journal/OSD. The journals > aren't on the same device as the OSDs. However, all three journals on > each machine are on the same SSD. I obviously didn’t drink enough coffee this morning. I read y

Re: [ceph-users] Serious performance problems with small file writes

2014-08-21 Thread Hugo Mills
On Thu, Aug 21, 2014 at 07:40:45AM +, Dan Van Der Ster wrote: > On 20 Aug 2014, at 17:54, Hugo Mills wrote: > >> Does your hardware provide enough IOPS for what your users need? > >> (e.g. what is the op/s from ceph -w) > > > > Not really an answer to your question, but: Before the ceph clu

Re: [ceph-users] Serious performance problems with small file writes

2014-08-21 Thread Hugo Mills
Just to fill in some of the gaps from yesterday's mail: On Wed, Aug 20, 2014 at 04:54:28PM +0100, Hugo Mills wrote: >Some questions below I can't answer immediately, but I'll spend > tomorrow morning irritating people by triggering these events (I think > I have a reproducer -- unpacking a

Re: [ceph-users] MON running 'ceph -w' doesn't see OSD's booting

2014-08-21 Thread Dan Van Der Ster
Hi, You only have one OSD? I’ve seen similar strange things in test pools having only one OSD — and I kinda explained it by assuming that OSDs need peers (other OSDs sharing the same PG) to behave correctly. Install a second OSD and see how it goes... Cheers, Dan On 21 Aug 2014, at 02:59, Bruc

Re: [ceph-users] Serious performance problems with small file writes

2014-08-21 Thread Dan Van Der Ster
Hi Hugo, On 20 Aug 2014, at 17:54, Hugo Mills wrote: >> What are you using for OSD journals? > > On each machine, the three OSD journals live on the same ext4 > filesystem on an SSD, which is also the root filesystem of the > machine. > >> Also check the CPU usage for the mons and osds... >
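Two quick checks that help tell whether the shared journal SSD (or CPU) is the bottleneck; both commands exist in firefly-era releases, but the device name is an assumption:

    ceph osd perf          # per-OSD commit (journal) and apply latencies
    iostat -x 5 /dev/sdX   # utilisation and await of the journal SSD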

Re: [ceph-users] RadosGW problems

2014-08-21 Thread Marco Garcês
I have noticed that when I make the request to HTTPS, the response comes back as http with port 443... Where is this happening, do you have any idea? On Wed, Aug 20, 2014 at 1:30 PM, Marco Garcês wrote: > swift --insecure -V 1 -A https://gateway.bcitestes.local/auth -U > testuser:swift -K MHA4vF
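One setting that is often involved when the Swift auth reply advertises the wrong scheme or port is the gateway's own idea of its external URL. This is a guess to check against the docs rather than a confirmed fix; the hostname is taken from the quoted command and the section name is assumed:

    [client.radosgw.gateway]
        rgw swift url = https://gateway.bcitestes.local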