Re: [ceph-users] Unable to download files from ceph radosgw node using openstack juno swift client.

2014-12-16 Thread Vivek Varghese Cherian
Hi, root@ppm-c240-ceph3:/var/run/ceph# ceph --admin-daemon /var/run/ceph/ceph-osd.11.asok config show | less | grep rgw_max_chunk_size rgw_max_chunk_size: 524288, root@ppm-c240-ceph3:/var/run/ceph# So the value is 524288 bytes (512 KB), i.e. below 4 MB. Regards, -- Vivek Varghese Cherian
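If the chunk size ever needs to be raised, the usual approach (an assumption added here, not something stated in the thread) is to set it for the radosgw instance in ceph.conf and restart the gateway:

    [client.radosgw.gateway]           # use the actual rgw section name from your ceph.conf
        rgw max chunk size = 4194304   # value is in bytes; 4194304 = 4 MB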

Re: [ceph-users] Dual RADOSGW Network

2014-12-16 Thread Georgios Dimitrakakis
Thanks Craig, I will try that! I thought it was more complicated than that because of the entries for public_network and rgw dns name in the config file... I will give it a try. Best, George  That shouldn't be a problem. Just have Apache bind to all interfaces instead of the
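A minimal sketch of what binding Apache to all interfaces can look like for the vhost in front of radosgw (the hostname and the FastCGI wiring are assumptions, adjust to the real setup):

    # /etc/apache2/sites-available/rgw.conf (illustrative only)
    Listen 80                           # usually lives in ports.conf; 80 with no address binds all interfaces
    <VirtualHost *:80>                  # *:80 answers on every interface, internal and external
        ServerName rgw.example.com      # should match the rgw dns name in ceph.conf
        ServerAlias *.rgw.example.com   # lets S3-style bucket subdomains reach the same vhost
        # ... FastCGI / proxy configuration for the radosgw socket goes here ...
    </VirtualHost>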

[ceph-users] radosgw timeout

2014-12-16 Thread Alejandro de Brito Fontes
I have a 3-node Ceph 0.87 cluster. After a while I see an error in radosgw and I can't find any references to it in the list archives: heartbeat_map is_healthy 'RGWProcess::m_tp thread 0x7fc4eac2d700' had timed out after 600 The only solution is to restart radosgw, and then for a while it works just fine. Any
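The 600 in that message is the RGW op thread timeout in seconds. If the worker threads are genuinely busy (slow OSDs, long-running requests) rather than hung, one workaround, offered here as an assumption and not something confirmed in the thread, is to raise the timeout for the gateway and restart it:

    [client.radosgw.gateway]           # use the actual rgw section name from ceph.conf
        rgw op thread timeout = 1200   # default is 600 seconds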

[ceph-users] OSD Crash makes whole cluster unusable ?

2014-12-16 Thread Christoph Adomeit
Hi there, today I had an OSD crash with Ceph 0.87/Giant which made my whole cluster unusable for 45 minutes. It began with a disk error: sd 0:1:2:0: [sdc] CDB: Read(10)Read(10):: 28 28 00 00 0d 15 fe d0 fd 7b e8 f8 00 00 00 00 b0 08 00 00 XFS (sdc1): xfs_imap_to_bp: xfs_trans_read_buf()

Re: [ceph-users] rbd snapshot slow restore

2014-12-16 Thread Lindsay Mathieson
On Tue, 16 Dec 2014 11:26:35 AM you wrote: Is this normal? Is Ceph just really slow at restoring rbd snapshots, or have I really borked my setup? I'm not looking for a fix or tuning suggestions, just feedback on whether this is normal -- Lindsay

Re: [ceph-users] can not add osd

2014-12-16 Thread Karan Singh
Hi, your logs do not provide much information. If you are following any other documentation for Ceph, I would recommend you follow the official Ceph docs: http://ceph.com/docs/master/start/quick-start-preflight/ Karan Singh

Re: [ceph-users] RESOLVED Re: Cluster with pgs in active (unclean) status

2014-12-16 Thread Eneko Lacunza
Hi Gregory, sorry for the delay getting back. There was no activity at all on those 3 pools. Activity on the fourth pool was under 1 Mbps of writes. I think I waited several hours, but I can't recall exactly; at least one hour for sure. Thanks Eneko On 11/12/14 19:32, Gregory Farnum

Re: [ceph-users] Number of SSD for OSD journal

2014-12-16 Thread Christian Balzer
On Tue, 16 Dec 2014 12:10:42 +0300 Mike wrote: On 16.12.2014 10:53, Daniel Schwager wrote: Hello Mike, There is also another way: * for CONF 2,3 replace the 200GB SSD with an 800GB one and add another 1-2 SSDs to each node * make a tier1 read-write cache on the SSDs * also you can add journal
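For reference, a read-write cache tier on an SSD pool is usually wired up along these lines (the pool names are placeholders, not something from this thread):

    # assumes an SSD-backed pool "ssd-cache" and an HDD-backed base pool "data"
    ceph osd tier add data ssd-cache
    ceph osd tier cache-mode ssd-cache writeback
    ceph osd tier set-overlay data ssd-cache
    # flushing/eviction targets (hit_set_type, target_max_bytes, ...) still need to be set on the cache pool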

Re: [ceph-users] rbd snapshot slow restore

2014-12-16 Thread Alexandre DERUMIER
Alexandre Derumier, systems and storage engineer. Phone: 03 20 68 90 88 Fax: 03 20 68 90 81 45 Bvd du Général Leclerc 59100 Roubaix 12 rue Marivaux 75002 Paris MonSiteEstLent.com - blog dedicated to web performance and handling traffic spikes From: Wido den Hollander

Re: [ceph-users] rbd snapshot slow restore

2014-12-16 Thread Alexandre DERUMIER
Hi, That is normal behavior. Snapshotting itself is a fast process, but restoring means merging and rolling back. Any future plan to add something similar to ZFS or NetApp, where you can instantly roll back a snapshot? (Not sure it's technically possible to implement such a snapshot with
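One commonly suggested workaround, added here as a general observation rather than something from this message, is that cloning a protected snapshot is effectively instant, so instead of rolling the original image back you can clone the snapshot and point the VM at the clone:

    # pool, image and snapshot names are placeholders
    rbd snap protect rbd/vm-disk@clean-state
    rbd clone rbd/vm-disk@clean-state rbd/vm-disk-restored
    # the clone is usable immediately; rbd flatten can later detach it from the parent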

[ceph-users] rbd read speed only 1/4 of write speed

2014-12-16 Thread VELARTIS Philipp Dürhammer
Hello, Read speed inside our VMs (most of them Windows) is only ¼ of the write speed. Write speed is about 450-500 MB/s and read is only about 100 MB/s. Our network is 10 Gbit for OSDs and 10 Gbit for MONs. We have 3 servers with 15 OSDs each
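To narrow down whether the gap comes from RADOS itself or from the guest side (read-ahead, queue depth), a quick baseline straight against the cluster often helps; a sketch, assuming a pool named rbd:

    rados bench -p rbd 60 write --no-cleanup   # write for 60s and keep the objects
    rados bench -p rbd 60 seq                  # read the same objects back sequentially
    # remember to remove the benchmark objects from the pool afterwards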

Re: [ceph-users] rbd snapshot slow restore

2014-12-16 Thread Robert LeBlanc
There are really only two ways to do snapshots that I know of and they have trade-offs: COW into the snapshot (like VMware, Ceph, etc): When a write is committed, the changes are committed to a diff file and the base file is left untouched. This only has a single write penalty, if you want to

Re: [ceph-users] Dual RADOSGW Network

2014-12-16 Thread Craig Lewis
You may need split horizon DNS. The internal machines' DNS should resolve to the internal IP, and the external machines' DNS should resolve to the external IP. There are various ways to do that. The RadosGW config has an example of setting up Dnsmasq:
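The kind of dnsmasq entry that example boils down to is roughly this (hostname and IP are illustrative placeholders):

    # /etc/dnsmasq.conf on the internal resolver
    address=/rgw.example.com/10.0.0.5   # internal clients resolve the gateway name to the internal IP
    # external clients keep using public DNS, which points at the external IP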

Re: [ceph-users] rbd read speed only 1/4 of write speed

2014-12-16 Thread David Clarke
On 17/12/14 05:26, VELARTIS Philipp Dürhammer wrote: Hello, Read speed inside our VMs (most of them Windows) is only ¼ of the write speed. Write speed is about 450-500 MB/s and read is only about 100 MB/s. Our network is

[ceph-users] Erasure coded PGs incomplete

2014-12-16 Thread Italo Santos
Hello, I'm trying to create an erasure pool following http://docs.ceph.com/docs/master/rados/operations/erasure-code/, but when I try to create a pool with a specific erasure-code-profile (myprofile) the PGs end up in an incomplete state. Can anyone help me? Below is the profile I created:

Re: [ceph-users] Erasure coded PGs incomplete

2014-12-16 Thread Loic Dachary
Hi, The 2147483647 (that is, -1: no OSD found) means that CRUSH did not find enough OSDs for a given PG. If you check the crush rule associated with the erasure coded pool, you will most probably find why. Cheers On 16/12/2014 23:32, Italo Santos wrote: Hello, I'm trying to create an erasure pool following
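A minimal way to check, keeping the profile name myprofile from the thread (the k/m remedy is an assumption about the usual cause, not a confirmed diagnosis):

    ceph osd erasure-code-profile get myprofile   # k+m must not exceed the number of failure domains (hosts by default)
    ceph osd crush rule dump                      # inspect the rule the erasure coded pool is using
    ceph health detail                            # lists the incomplete PGs and their acting sets
    # if there are fewer hosts than k+m, either lower k/m or set ruleset-failure-domain=osd in the profile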

Re: [ceph-users] Test 6

2014-12-16 Thread Lindsay Mathieson
On Tue, 16 Dec 2014 07:57:19 AM Leen de Braal wrote: If you are trying to see if your mails come through, don't check on the list. You have a gmail account, and gmail removes mails that you have sent yourself. That's not the case; I am on a dozen other mailman lists via gmail, and all of them show my posts.

Re: [ceph-users] rbd snapshot slow restore

2014-12-16 Thread Lindsay Mathieson
On 17 December 2014 at 04:50, Robert LeBlanc rob...@leblancnet.us wrote: There are really only two ways to do snapshots that I know of and they have trade-offs: COW into the snapshot (like VMware, Ceph, etc): When a write is committed, the changes are committed to a diff file and the base

Re: [ceph-users] rbd read speed only 1/4 of write speed

2014-12-16 Thread Christian Balzer
On Tue, 16 Dec 2014 16:26:17 +0000 VELARTIS Philipp Dürhammer wrote: Hello, Read speed inside our VMs (most of them Windows) is only ¼ of the write speed. Write speed is about 450-500 MB/s and read is only about 100 MB/s. Our network is 10 Gbit for OSDs and 10 Gbit for MONs. We have 3

Re: [ceph-users] rbd read speed only 1/4 of write speed

2014-12-16 Thread Mark Nelson
On 12/16/2014 07:08 PM, Christian Balzer wrote: On Tue, 16 Dec 2014 16:26:17 +0000 VELARTIS Philipp Dürhammer wrote: Hello, Read speed inside our VMs (most of them Windows) is only ¼ of the write speed. Write speed is about 450-500 MB/s and read is only about 100 MB/s. Our network is

Re: [ceph-users] can not add osd

2014-12-16 Thread yang . bin18
Following the official Ceph docs, I still get the same error: [root@node3 ceph-cluster]# ceph-deploy osd activate node2:/dev/sdb1 [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf [ceph_deploy.cli][INFO ] Invoked (1.5.21): /usr/bin/ceph-deploy osd activate node2:/dev/sdb1
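One thing worth noting, as an assumption about what may be going wrong rather than a confirmed diagnosis: activate only works on a partition that ceph-deploy has already prepared. The usual sequence from the admin node looks like this (device name taken from the message above):

    ceph-deploy disk zap node2:/dev/sdb        # wipes the disk - destroys all data on it
    ceph-deploy osd prepare node2:/dev/sdb     # partitions, formats and registers the OSD
    ceph-deploy osd activate node2:/dev/sdb1   # activation then finds the prepared partition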

Re: [ceph-users] File System striping data

2014-12-16 Thread Kevin Shiah
Hello, I am trying to set the extended attribute on a newly created directory (call it dir here) using setfattr. I run the following command: setfattr -n ceph.dir.layout.stripe_count -v 2 dir and it returns: setfattr: dir: Operation not supported I am wondering if the underlying file
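For what it's worth, "Operation not supported" on the ceph.* vxattrs usually points at the client rather than the underlying filesystem; a couple of hedged checks (suggestions added here, not taken from the thread):

    getfattr -n ceph.dir.layout dir   # does reading the layout work at all on this mount?
    uname -r                          # older kernel CephFS clients do not expose the layout vxattrs
    # retrying the same setfattr on a ceph-fuse mount is a quick way to rule the kernel client in or out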

Re: [ceph-users] rbd snapshot slow restore

2014-12-16 Thread Robert LeBlanc
On Tue, Dec 16, 2014 at 5:37 PM, Lindsay Mathieson lindsay.mathie...@gmail.com wrote: On 17 December 2014 at 04:50, Robert LeBlanc rob...@leblancnet.us wrote: There are really only two ways to do snapshots that I know of and they have trade-offs: COW into the snapshot (like VMware,

Re: [ceph-users] Test 6

2014-12-16 Thread Craig Lewis
I always wondered why my posts didn't show up until somebody replied to them. I thought it was my filters. Thanks! On Mon, Dec 15, 2014 at 10:57 PM, Leen de Braal l...@braha.nl wrote: If you are trying to see if your mails come through, don't check on the list. You have a gmail account,

Re: [ceph-users] OSD Crash makes whole cluster unusable ?

2014-12-16 Thread Craig Lewis
So the problem started once remapping+backfilling started, and lasted until the cluster was healthy again? Have you adjusted any of the recovery tunables? Are you using SSD journals? I had a similar experience the first time my OSDs started backfilling. The average RadosGW operation latency
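The tunables in question are usually the backfill and recovery throttles; a hedged example of turning them down at runtime (the values are common conservative choices, not something specified in this thread):

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'
    # put the same settings under [osd] in ceph.conf so they survive OSD restarts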

Re: [ceph-users] rbd snapshot slow restore

2014-12-16 Thread Lindsay Mathieson
On 17 December 2014 at 11:50, Robert LeBlanc rob...@leblancnet.us wrote: On Tue, Dec 16, 2014 at 5:37 PM, Lindsay Mathieson lindsay.mathie...@gmail.com wrote: On 17 December 2014 at 04:50, Robert LeBlanc rob...@leblancnet.us wrote: There are really only two ways to do snapshots that I know

Re: [ceph-users] Placing Different Pools on Different OSDS

2014-12-16 Thread Yujian Peng
I've found the problem. The command "ceph osd crush rule create-simple ssd_ruleset ssd root" should be "ceph osd crush rule create-simple ssd_ruleset ssd host".
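The last argument is the bucket type the rule spreads replicas across; a quick way to verify the rule and attach it to a pool (the pool name and ruleset id are illustrative):

    ceph osd crush rule create-simple ssd_ruleset ssd host   # choose distinct hosts under the "ssd" root
    ceph osd crush rule dump ssd_ruleset                     # confirm the chooseleaf type is "host"
    ceph osd pool set ssd-pool crush_ruleset 1               # use the ruleset id reported by the dump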

[ceph-users] Help with Integrating Ceph with various Cloud Storage

2014-12-16 Thread Manoj Singh
Hi All, I am new to Ceph. Due to a shortage of physical machines I have installed a Ceph cluster with a single OSD and MON in a single virtual machine. I have a few queries: 1. Is having the Ceph setup on a VM fine, or does it need to be on a physical server? 2. Since Amazon S3, Azure Blob
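As a side note for a single-OSD test cluster like this, the defaults expect replicas on separate hosts, so health warnings are normal unless the cluster is told otherwise; a sketch of the usual test-only settings (an assumption about the setup, and not for production):

    [global]
        osd pool default size = 1        # keep a single copy, since there is only one OSD
        osd crush chooseleaf type = 0    # place replicas on OSDs rather than separate hosts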