[ceph-users] Mix of SATA and SSD

2015-12-11 Thread Mike Miller
Hi, can you please help me with a question I am currently thinking about. I am entertaining an OSD node design with a mixture of SATA-spinner-based OSD daemons and SSD-based OSD daemons. Is it possible to have incoming write traffic go to the SSDs first and then, when write traffic is becoming
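
A minimal sketch of one common way to get that behaviour, assuming a filestore setup where each node's single SSD holds the journals for its spinners (the device paths are placeholders, not from the original mail): every incoming write then lands on the SSD journal first and reaches the SATA filestore afterwards. Cache tiering would be the other option.

import subprocess

# Placeholder devices for one node: SATA spinners carry the OSD data,
# the SSD is carved into one journal partition per OSD by ceph-disk.
spinners = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]
ssd_journal_device = "/dev/sde"

for disk in spinners:
    # "ceph-disk prepare <data-dev> <journal-dev>" creates a fresh journal
    # partition on the SSD and binds the new OSD's journal to it.
    subprocess.check_call(["ceph-disk", "prepare", disk, ssd_journal_device])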

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-11 Thread Loic Dachary
def list_partitions_device(dev):
    """
    Return a list of partitions on the given device name
    """
    # (excerpt from ceph-disk; assumes the module-level "import os" and the
    # block_path() helper, which maps the device to its sysfs block directory)
    partitions = []
    basename = os.path.basename(dev)
    for name in os.listdir(block_path(dev)):
        # partition entries in sysfs are prefixed with the device's basename
        if name.startswith(basename):
            partitions.append(name)
    return partitions

[ceph-users] Ceph 2 node cluster | Data availability

2015-12-11 Thread Shetty, Pradeep
Hi, I am using a 2-node Ceph cluster with CloudStack. Do we get data availability when we shut down one of the hosts in the cluster? If yes, can you please share the fine-tuning steps for the same. Regards, Pradeep
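
A hedged sketch of the usual tuning for a 2-node replicated setup (the pool name "rbd" is just an example, and the CRUSH rule is assumed to already place replicas on distinct hosts): two copies split across the hosts, with min_size dropped to 1 so I/O continues while one host is down. The monitor quorum is a separate concern; with only two monitors, losing either one halts the cluster.

import subprocess

def ceph(*args):
    # thin wrapper around the ceph CLI; raises if the command fails
    subprocess.check_call(("ceph",) + args)

ceph("osd", "pool", "set", "rbd", "size", "2")      # one replica per host
ceph("osd", "pool", "set", "rbd", "min_size", "1")  # stay writable with one host down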

Re: [ceph-users] Blocked requests after "osd in"

2015-12-11 Thread Christian Kauhaus
On 10.12.2015 at 06:38, Robert LeBlanc wrote: > Since I'm very interested in > reducing this problem, I'm willing to try and submit a fix after I'm > done with the new OP queue I'm working on. I don't know the best > course of action at the moment, but I hope I can get some input for > when I do

Re: [ceph-users] F21 pkgs for Ceph Hammer release ?

2015-12-11 Thread Alfredo Deza
On Fri, Dec 11, 2015 at 2:46 AM, Deepak Shetty wrote: > > > On Wed, Dec 2, 2015 at 7:35 PM, Alfredo Deza wrote: >> >> On Tue, Dec 1, 2015 at 4:59 AM, Deepak Shetty wrote: >> > Hi, >> > Does anybody know how/where I can get the F21 repo for

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-11 Thread Stolte, Felix
Hi Loic, now it is working as expected. Thanks a lot for fixing it! Output is: /dev/cciss/c0d0p2 other /dev/cciss/c0d0p5 swap, swap /dev/cciss/c0d0p1 other, ext4, mounted on / /dev/cciss/c0d1 : /dev/cciss/c0d1p1 ceph data, active, cluster ceph, osd.0, journal /dev/cciss/c0d7p2

Re: [ceph-users] write speed, leave a little to be desired?

2015-12-11 Thread Zoltan Arnold Nagy
It’s very unfortunate that you guys are using the EVO drives. As we’ve discussed numerous times on the ML, they are not very suitable for this task. I think that 200-300MB/s is actually not bad (without knowing anything about the hardware setup, as you didn’t give details…) coming from those

Re: [ceph-users] write speed, leave a little to be desired?

2015-12-11 Thread Jan Schermer
The drive will actually be writing 500 MB/s in this case, if the journal is on the same drive. All writes go to the journal first and then to the filestore, so 200 MB/s is actually a sane figure. Jan > On 11 Dec 2015, at 13:55, Zoltan Arnold Nagy > wrote: > > It’s very
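
A small worked example of that arithmetic (the 500 MB/s figure is illustrative, not a measurement from this thread):

# With the journal co-located on the same drive, every client byte is written
# twice: once to the journal, once to the filestore.
raw_drive_write_mb_s = 500      # assumed sequential write limit of the drive
write_amplification = 2         # journal write + filestore write
client_visible_mb_s = raw_drive_write_mb_s / write_amplification
print("client-visible throughput: ~%.0f MB/s" % client_visible_mb_s)  # ~250 MB/s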

Re: [ceph-users] write speed, leave a little to be desired?

2015-12-11 Thread Florian Rommel
Hi, thanks for the replies, but I was under the impression that the journal is the same as the cache pool, so that there is no extra journal write? About the EVOs: as this is a test cluster, we would like to test how far we can push commodity hardware. The servers are all Dell 1U rack mounts with

[ceph-users] Snapshot creation time

2015-12-11 Thread Alex Gorbachev
Is there any way to obtain a snapshot creation time? rbd snap ls does not list it. Thanks! -- Alex Gorbachev Storcium

[ceph-users] bucket index, leveldb and journal

2015-12-11 Thread Ludovico Cavedon
Hi everybody, I understand that radosgw bucket indexes are stored as leveldb files. Do writes to leveldb go through the journal first, or directly to disk? In other words, do they get the benefits of having a fast journal? I started investigating the disks because I observe a lot of writes to ldb files

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-11 Thread Jens Rosenboom
2015-12-11 9:16 GMT+01:00 Stolte, Felix : > Hi Jens, > > output is attached (stderr + stdout) O.k., so now "ls -l /sys/dev/block /sys/dev/block/104:0 /sys/dev/block/104:112" please.

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-11 Thread Stolte, Felix
Hi Jens, output is attached (stderr + stdout) Regards -Original Message- From: Jens Rosenboom [mailto:j.rosenb...@x-ion.de] Sent: Friday, 11 December 2015 09:10 To: Stolte, Felix Cc: Loic Dachary; ceph-us...@ceph.com Subject: Re: [ceph-users] ceph-disk list crashes in

[ceph-users] write speed, leave a little to be desired?

2015-12-11 Thread Florian Rommel
Hi, we are just testing our new Ceph cluster, and to optimise our spinning disks we created an erasure-coded pool and an SSD cache pool. We modified the CRUSH map to make an SSD pool, as each server contains 1 SSD drive and 5 spinning drives. Stress testing the cluster in terms of read
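
For reference, a rough sketch of the kind of setup described above, assuming a CRUSH ruleset named "ssd" already exists that selects only the SSD OSDs; the profile, pool names, k/m values and PG counts are placeholders, not taken from the mail.

import subprocess

def ceph(*args):
    subprocess.check_call(("ceph",) + args)

# Erasure-coded pool on the spinners, replicated cache pool on the SSDs.
ceph("osd", "erasure-code-profile", "set", "ecprofile", "k=3", "m=2")
ceph("osd", "pool", "create", "ecpool", "128", "128", "erasure", "ecprofile")
ceph("osd", "pool", "create", "ssd-cache", "128", "128", "replicated", "ssd")

# Front the EC pool with the SSD pool as a writeback cache tier.
ceph("osd", "tier", "add", "ecpool", "ssd-cache")
ceph("osd", "tier", "cache-mode", "ssd-cache", "writeback")
ceph("osd", "tier", "set-overlay", "ecpool", "ssd-cache")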

Re: [ceph-users] ceph-disk list crashes in infernalis

2015-12-11 Thread Loic Dachary
Hi Felix, Could you try again? Hopefully that's the right one :-) https://raw.githubusercontent.com/dachary/ceph/741da8ec91919db189ba90432ab4cee76a20309e/src/ceph-disk is the latest from https://github.com/ceph/ceph/pull/6880 Cheers On 11/12/2015 09:16, Stolte, Felix wrote: > Hi Jens, > >

[ceph-users] Monitors - proactive questions about quantity, placement and protection

2015-12-11 Thread Alex Gorbachev
This is a proactive message to summarize best practices and options for working with monitors, especially in a larger production environment (larger for me is > 3 racks). I know MONs do not require a lot of resources, but they prefer to run on SSDs for response time. I also know that you need an odd number, as
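
As an aside, the quorum arithmetic behind the odd-number advice can be sketched like this (illustrative only):

# A monitor cluster needs a strict majority to form quorum, so an even count
# adds a failure point without adding failure tolerance.
for mons in (1, 2, 3, 4, 5):
    quorum = mons // 2 + 1
    tolerated = mons - quorum
    print("%d mon(s): quorum=%d, tolerates %d failure(s)" % (mons, quorum, tolerated))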