Re: [ceph-users] mon memory usage (again)

2014-04-11 Thread Gregory Farnum
On Fri, Apr 11, 2014 at 11:12 PM, Christian Balzer wrote: > Hello, > 3-node cluster (2 storage nodes with 2 OSDs, one dedicated mon), 3 mons total. > Debian Jessie, thus 3.13 kernel and Ceph 0.72.2. > 2 of the mons (including the leader) are using around 100 MB RSS and one > was using about 1.1 GB. >

[ceph-users] mon memory usage (again)

2014-04-11 Thread Christian Balzer
Hello, 3-node cluster (2 storage nodes with 2 OSDs, one dedicated mon), 3 mons total. Debian Jessie, thus 3.13 kernel and Ceph 0.72.2. 2 of the mons (including the leader) are using around 100 MB RSS and one was using about 1.1 GB. I did my homework and scoured the ML archives and found at least 2 threa
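
For anyone comparing numbers, a quick way to check mon memory is a sketch along these lines (the mon name is a placeholder, and the heap commands assume a tcmalloc build):

  # resident set size of the local ceph-mon process
  ps -C ceph-mon -o pid,rss,etime,cmd

  # tcmalloc heap statistics from a monitor, and a way to hand
  # freed memory back to the OS
  ceph tell mon.a heap stats
  ceph tell mon.a heap release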

Re: [ceph-users] OSD space usage 2x object size after rados put

2014-04-11 Thread Mark Kirkwood
On 12/04/14 05:42, Gregory Farnum wrote: On Wed, Apr 9, 2014 at 8:41 PM, Mark Kirkwood wrote: Looks like the actual object has twice the disk footprint. Interestingly, comparing du vs ls info for it at that point shows: $ ls -l total 2097088 -rw-r--r-- 1 root root 1073741824 Apr 10 15:33 file__head_2
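
One way to tell whether the doubled footprint is real allocation rather than an accounting artifact is to compare apparent and allocated sizes directly (a sketch; the object filename here is illustrative):

  cd /var/lib/ceph/osd/ceph-1/current/5.1a_head
  ls -l file__head_XXXXXXXX__5                  # logical (apparent) size
  du -k file__head_XXXXXXXX__5                  # blocks actually allocated
  du -k --apparent-size file__head_XXXXXXXX__5  # should match ls -l
  filefrag -v file__head_XXXXXXXX__5            # extent layout, shows preallocation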

Re: [ceph-users] Dell R515/510 with H710 PERC RAID | JBOD

2014-04-11 Thread Punit Dambiwal
Hi Christian, Thanks for your reply... now it's clear to me. Thanks for your help... On Fri, Apr 11, 2014 at 10:25 AM, Christian Balzer wrote: > > Hello, > > On Fri, 11 Apr 2014 09:48:56 +0800 Punit Dambiwal wrote: > > > Hi, > > > > What is the drawback of running the journals on the RAID1...??

Re: [ceph-users] OSD full - All RBD Volumes stopped responding

2014-04-11 Thread Wido den Hollander
On 04/11/2014 02:45 PM, Greg Poirier wrote: So... our storage problems persisted for about 45 minutes. I gave an entire hypervisor's worth of VMs time to recover (approx. 30 VMs), and none of them recovered on their own. In the end, we had to stop and start every VM (easily done, it was just alarm

[ceph-users] OpenStack Survey Deadline Looms!

2014-04-11 Thread Patrick McGarry
Hey all, Just a quick reminder that the OpenStack User Survey https://www.openstack.org/user-survey will only be open for another 4 hours or so. If you haven't already provided your view of the world with OpenStack && Ceph, please do! This user survey helps to give a snapshot of what the OpenS

Re: [ceph-users] firefly performance - SSD-only based caching pool ./. normal pool (journal on SSD)

2014-04-11 Thread Mark Nelson
On 04/11/2014 12:23 PM, Daniel Schwager wrote: Hi guys, I'm interested in the question of which setup will be faster: a) a pool with journals on separate SSDs, OSDs on normal disks OR b) a caching pool with SSDs only (osd+journal) in front of a normal pool (osd+journal on normal di

Re: [ceph-users] OSD space usage 2x object size after rados put

2014-04-11 Thread Gregory Farnum
On Wed, Apr 9, 2014 at 8:41 PM, Mark Kirkwood wrote: > Redoing (attached, 1st file is for 2x space, 2nd for normal). I'm seeing: > > $ diff osd-du.0.txt osd-du.1.txt > 924,925c924,925 > < 2048 /var/lib/ceph/osd/ceph-1/current/5.1a_head/file__head_2E6FB49A__5 > < 2048 /var/lib/ceph/osd/ceph-1/cu

[ceph-users] firefly performance - SSD-only based caching pool ./. normal pool (journal on SSD)

2014-04-11 Thread Daniel Schwager
Hi guys, I'm interested in the question of which setup will be faster: a) a pool with journals on separate SSDs, OSDs on normal disks OR b) a caching pool with SSDs only (osd+journal) in front of a normal pool (osd+journal on normal disks) A normal setup could be like this: - 5-node f
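
For option b), the firefly cache-tiering commands run roughly as below; this is a minimal sketch, and the pool names and pg counts are assumptions:

  # backing pool on spinners, cache pool on the SSD OSDs
  ceph osd pool create cold-pool 2048
  ceph osd pool create hot-pool 512

  # attach the SSD pool as a writeback cache in front of the backing pool
  ceph osd tier add cold-pool hot-pool
  ceph osd tier cache-mode hot-pool writeback
  ceph osd tier set-overlay cold-pool hot-pool

Clients then talk to cold-pool as usual and the overlay redirects hot objects to the SSD tier.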

Re: [ceph-users] OSD space usage 2x object size after rados put

2014-04-11 Thread Aronesty, Erik
Can Ceph act in a RAID-5 (or RAID-6) mode, storing objects so that the storage overhead is n/(n-1)? For some systems where the underlying OSDs are known to be very reliable, but where storage is very tight, this could be useful.
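
Firefly's erasure-coded pools target exactly this kind of overhead: k data chunks plus m coding chunks store an object at (k+m)/k of its logical size, so k=4, m=1 gives 1.25x, the same as a 5-disk RAID-5. A minimal sketch (the profile and pool names are assumptions):

  ceph osd erasure-code-profile set raid5-like k=4 m=1
  ceph osd pool create ecpool 256 256 erasure raid5-like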

Re: [ceph-users] OSD full - All RBD Volumes stopped responding

2014-04-11 Thread Greg Poirier
So, setting pgp_num to 2048 to match pg_num had a more serious impact than I expected. The cluster is rebalancing quite substantially (8.5% of objects being rebalanced)... which makes sense... Disk utilization is evening out fairly well, which is encouraging. We are a little stumped as to why a few
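
For anyone following along, the two knobs are separate on purpose: pg_num creates the new placement groups, while pgp_num makes them count for placement, which is what kicks off the rebalance (the pool name is an assumption):

  ceph osd pool set rbd pg_num 2048
  ceph osd pool set rbd pgp_num 2048   # this step triggers the data movement
  ceph -s                              # watch recovery/rebalance progress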

Re: [ceph-users] [Ceph-rgw] pool assigment

2014-04-11 Thread Yehuda Sadeh
On Fri, Apr 11, 2014 at 3:20 AM, wrote: > Hi all, > > Context: CEPH dumpling on Ubuntu 12.04 > > I would like to manage as accurately as possible the pools assigned to the Rados > gateway. > My goal is to apply specific SLAs to applications which use http-driven > storage. > I'd like to store the con

Re: [ceph-users] create multiple OSDs without changing CRUSH until one last step

2014-04-11 Thread Gregory Farnum
If you never ran "osd rm" then the monitors still believe it's an existing OSD. You can run that command after doing the crush rm stuff, but you should definitely do so. On Friday, April 11, 2014, Chad Seys wrote: > Hi Greg, > > How many monitors do you have? > > 1 . :) > > > It's also possible

Re: [ceph-users] create multiple OSDs without changing CRUSH until one last step

2014-04-11 Thread Chad Seys
Hi Greg, > How many monitors do you have? 1. :) > It's also possible that re-used numbers won't get caught in this, > depending on the process you went through to clean them up, but I > don't remember the details of the code here. Yeah, too bad. I'm following the standard removal procedure in
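
For reference, the standard removal sequence under discussion looks roughly like this (the osd id is illustrative):

  ceph osd out 12               # let data drain off the OSD first
  # stop the ceph-osd daemon, then:
  ceph osd crush remove osd.12  # drop it from the CRUSH map
  ceph auth del osd.12          # delete its cephx key
  ceph osd rm 12                # finally remove it from the osdmap

Skipping that last step is what leaves the monitors believing the OSD still exists.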

Re: [ceph-users] Ceph meetup Amsterdam: April 24th 2014

2014-04-11 Thread Wido den Hollander
Hi, Just another e-mail to promote this Meetup in Amsterdam. If you want to join the meetup, just add your name to the Wiki: https://wiki.ceph.com/Community/Meetups/The_Netherlands/Ceph_meetup_Amsterdam Wido On 03/27/2014 10:50 PM, Wido den Hollander wrote: Hi all, I think it's time to org

[ceph-users] DCB-X...

2014-04-11 Thread N. Richard Solis
Guys, I'm new to ceph in general but I'm wondering if anyone out there is using Ceph with any of the Data Center Bridging (DCB) technologies? I'm specifically thinking of DCB-X support provided by the open-lldp package. I'm wondering if there is any benefit to be gained by making use of the loss
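
For anyone wanting to experiment, open-lldp's lldptool can negotiate DCBX/PFC per interface; a rough sketch (the interface name and priority are assumptions, and the exact flag syntax varies between open-lldp versions):

  lldptool set-lldp -i eth2 adminStatus=rxtx  # enable LLDP rx/tx on the port
  lldptool -T -i eth2 -V PFC enabled=3        # advertise PFC for priority 3
  lldptool -t -i eth2 -V PFC                  # query the negotiated PFC state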

Re: [ceph-users] OSD full - All RBD Volumes stopped responding

2014-04-11 Thread Greg Poirier
So... our storage problems persisted for about 45 minutes. I gave an entire hypervisor's worth of VMs time to recover (approx. 30 VMs), and none of them recovered on their own. In the end, we had to stop and start every VM (easily done, it was just alarming). Once rebooted, the VMs of course were fi

Re: [ceph-users] Hi and help stale+active+clean

2014-04-11 Thread Matteo Favaro
On 11/04/2014 12:40, Wido den Hollander wrote: Have you tried to mark the OSD as lost? $ ceph osd lost 6 That will remove the PGs and the cluster should come back. Wow, it worked! OK, I found a new recipe for this case that I want to share with you: ## STUCK+ACT
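
One small note for anyone replaying this: ceph osd lost normally asks for explicit confirmation, so the full invocation is typically:

  ceph osd lost 6 --yes-i-really-mean-it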

Re: [ceph-users] OSD full - All RBD Volumes stopped responding

2014-04-11 Thread Wido den Hollander
On 04/11/2014 09:23 AM, Josef Johansson wrote: On 11/04/14 09:07, Wido den Hollander wrote: On 11 April 2014 at 8:50, Josef Johansson wrote: Hi, On 11/04/14 07:29, Wido den Hollander wrote: On 11 April 2014 at 7:13, Greg Poirier wrote: One thing to note: All of our kvm VMs have t

Re: [ceph-users] Errors while running any ceph command (Please help me)

2014-04-11 Thread Matteo Favaro
Hi Srinivasa, I ran into this problem at the beginning too, and I put together my own recipe to get the "initial monitor" working. I hope it can be useful to you (it is a recipe, not a manual; please pay attention to the paths and the steps I followed): 1) ssh into the node 2) verify that the directory
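
The recipe is cut off above; for comparison, the documented manual monitor bootstrap runs along these lines (the fsid, mon name, and IP are taken from the ceph.conf quoted in the original message, everything else is illustrative):

  # generate a mon. keyring and an initial monmap
  ceph-authtool --create-keyring /tmp/ceph.mon.keyring \
      --gen-key -n mon. --cap mon 'allow *'
  monmaptool --create --add mon 192.168.0.102 \
      --fsid e5a14ff4-148a-473a-8721-53bda59c74a2 /tmp/monmap

  # populate the monitor's data directory, then start the daemon
  ceph-mon --mkfs -i mon --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring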

Re: [ceph-users] Hi and help stale+active+clean

2014-04-11 Thread Wido den Hollander
On 04/11/2014 12:23 PM, Matteo Favaro wrote: Hi to all, my name is Matteo Favaro, I'm an employee of CNAF and I'm trying to get a test deployment of Ceph working. I have learnt a lot about Ceph and I know quite well how to build and modify it, but there is a question and a problem that I don't know how t

[ceph-users] Hi and help stale+active+clean

2014-04-11 Thread Matteo Favaro
Hi to all, my name is Matteo Favaro, I'm an employee of CNAF and I'm trying to get a test deployment of Ceph working. I have learnt a lot about Ceph and I know quite well how to build and modify it, but there is a question and a problem that I don't know how to resolve. My cluster is made in this way:
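
For anyone landing here with the same symptoms, the usual first diagnostics for stuck PGs are (the pg id is illustrative):

  ceph health detail        # lists which PGs are stuck and on which OSDs
  ceph pg dump_stuck stale  # show only PGs stuck in the stale state
  ceph pg 2.5 query         # ask a PG where it thinks its copies live

The resolution that worked in this thread (marking the dead OSD lost) is in Wido's reply above.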

[ceph-users] [Ceph-rgw] pool assigment

2014-04-11 Thread ghislain.chevalier
Hi all, Context: CEPH dumpling on Ubuntu 12.04 I would like to manage as accurately as possible the pools assigned to the Rados gateway. My goal is to apply specific SLAs to applications which use http-driven storage. I'd like to store the contents by associating a pool to a bucket, or a pool to a
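
In dumpling, this kind of bucket-to-pool mapping goes through the zone's placement targets: export the zone, add a target per SLA, and point each target at its own pools. A sketch of the relevant zone fragment (the target keys and pool names are assumptions):

  # radosgw-admin zone get > zone.json ... edit ... radosgw-admin zone set < zone.json
  "placement_pools": [
    { "key": "default-placement",
      "val": { "index_pool": ".rgw.buckets.index", "data_pool": ".rgw.buckets" } },
    { "key": "gold-placement",
      "val": { "index_pool": ".rgw.gold.index", "data_pool": ".rgw.gold" } }
  ]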

[ceph-users] Errors while running any ceph command (Please help me)

2014-04-11 Thread Srinivasa Rao Ragolu
Hi All, On our private distribution I have compiled Ceph and was able to install it. Now I have added /etc/ceph/ceph.conf as: [global] fsid = e5a14ff4-148a-473a-8721-53bda59c74a2 mon initial members = mon mon host = 192.168.0.102 auth cluster required = cephx auth service required = cephx a

[ceph-users] Federated gateways

2014-04-11 Thread Peter
Hello, I am testing out federated gateways. I have created one gateway with one region and one zone. The gateway appears to work. I am trying to test it with s3cmd before I continue with more regions and zones. I create a test gateway user: radosgw-admin user create --uid=test --display-name
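
For the s3cmd test, the parts that usually matter in ~/.s3cfg are the endpoint and the keys that user create prints; a sketch (the hostname and display name are assumptions):

  # radosgw-admin user create --uid=test --display-name="Test User" prints the keys
  access_key = <access_key from the command output>
  secret_key = <secret_key from the command output>
  host_base = rgw.example.com
  host_bucket = %(bucket)s.rgw.example.com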

Re: [ceph-users] openstack + ceph

2014-04-11 Thread Haomai Wang
On Mon, Mar 31, 2014 at 11:35 AM, 常乐 wrote: > hi all, > I am trying to get ceph working with openstack havana. I am following the > instructions here: > https://ceph.com/docs/master/rbd/rbd-openstack/ > > However, I found I probably need more details on openstack. The instruction > mentioned cinde

[ceph-users] openstack + ceph

2014-04-11 Thread 常乐
-- Forwarded message -- From: "常乐" Date: Mar 31, 2014 11:35 AM Subject: openstack + ceph To: Cc: hi all, I am trying to get ceph working with openstack havana. I am following the instructions here. https://ceph.com/docs/master/rbd/rbd-openstack/ However, I found I probably need
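
For Havana, the cinder side of the linked instructions boils down to an rbd backend section in cinder.conf; a minimal sketch (the pool, user, and uuid values are assumptions that must match your cluster and libvirt secret):

  # /etc/cinder/cinder.conf
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes
  rbd_user = cinder
  rbd_secret_uuid = <uuid of the libvirt secret holding cinder's cephx key>

The same docs page covers the glance and nova sides.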

Re: [ceph-users] ceph cluster health monitoring

2014-04-11 Thread Dan Van Der Ster
It’s pretty basic, but we run this hourly: https://github.com/cernceph/ceph-scripts/blob/master/ceph-health-cron/ceph-health-cron -- Dan van der Ster || Data & Storage Services || CERN IT Department -- On 11 Apr 2014 at 09:12:13, Pavel V. Kaygorodov (pa...@inasan.ru) w

Re: [ceph-users] OSD full - All RBD Volumes stopped responding

2014-04-11 Thread Josef Johansson
On 11/04/14 09:07, Wido den Hollander wrote: > >> On 11 April 2014 at 8:50, Josef Johansson wrote: >> >> >> Hi, >> >> On 11/04/14 07:29, Wido den Hollander wrote: On 11 April 2014 at 7:13, Greg Poirier wrote: One thing to note: All of our kvm VMs have to be rebooted

[ceph-users] ceph cluster health monitoring

2014-04-11 Thread Pavel V. Kaygorodov
Hi! I want to receive email notifications for any ceph errors/warnings and for osd/mon disk full/near_full states. For example, I want to know immediately if free space on any osd/mon becomes less than 10%. How to properly monitor ceph cluster health? Pavel.
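
Short of a full monitoring stack, a minimal hourly cron job covers the basics, since near-full and full OSDs surface as warnings in ceph health; a sketch (the address is an assumption):

  #!/bin/sh
  # mail the operator whenever the cluster is not HEALTH_OK
  STATUS=$(ceph health 2>&1)
  echo "$STATUS" | grep -q HEALTH_OK || \
      echo "$STATUS" | mail -s "ceph needs attention" ops@example.com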

Re: [ceph-users] OSD full - All RBD Volumes stopped responding

2014-04-11 Thread Wido den Hollander
> On 11 April 2014 at 8:50, Josef Johansson wrote: > > > Hi, > > On 11/04/14 07:29, Wido den Hollander wrote: > > > >> On 11 April 2014 at 7:13, Greg Poirier wrote: > >> > >> > >> One thing to note: > >> All of our kvm VMs have to be rebooted. This is something I wasn't > expecting.