Re: [ceph-users] Consumer-grade SSD in Ceph

2020-01-03 Thread Eneko Lacunza
I'm sure you also know the following, but just in case: - Intel SATA D3-S4610 (I think they're out of stock right now) - Intel SATA D3-S4510 (I see stock of these right now) On 27/12/19 at 17:56, vita...@yourcmc.ru wrote: SATA: Micron 5100-5200-5300, Seagate Nytro 1351/1551 (don't forget

Re: [ceph-users] Consumer-grade SSD in Ceph

2019-12-22 Thread Eneko Lacunza
Hi Sinan, Just to reiterate: don't do this. Consumer SSDs will destroy your enterprise SSDs' performance. Our office cluster is made of consumer-grade servers: cheap gaming motherboards, memory, Ryzen processors, desktop HDDs. But the SSD drives are enterprise-grade; we had awful experiences with

Re: [ceph-users] Single threaded IOPS on SSD pool.

2019-06-05 Thread Eneko Lacunza
Hi, On 5/6/19 at 16:53, vita...@yourcmc.ru wrote: Ok, average network latency from VM to OSDs is ~0.4 ms. That's rather bad; you can improve the latency by 0.3 ms just by upgrading the network. Single-threaded performance is ~500-600 IOPS - or an average latency of 1.6 ms. Is that comparable to
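(As a rough sanity check: at queue depth 1, average latency ≈ 1 / IOPS, so 600 IOPS corresponds to about 1000 ms / 600 ≈ 1.7 ms per write - which is why shaving 0.3 ms off the network round trip is a meaningful fraction of the total.)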

Re: [ceph-users] Intel D3-S4610 performance

2019-03-13 Thread Eneko Lacunza
Hi Kai, On 12/3/19 at 9:13, Kai Wembacher wrote: Hi everyone, I have an Intel D3-S4610 SSD with 1.92 TB here for testing and get some pretty bad numbers when running the fio benchmark suggested by Sébastien Han
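For readers without the article at hand, the journal suitability test it describes is, roughly, a single-job 4k synchronous write against the raw device (this destroys data on /dev/sdX, which here stands in for the SSD under test):

  fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=1 --iodepth=1 --runtime=60 --time_based \
      --group_reporting --name=journal-test

An enterprise SSD with power-loss protection typically sustains tens of thousands of such IOPS; consumer drives often collapse to a few hundred.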

Re: [ceph-users] Blocked ops after change from filestore on HDD to bluestore on SSD

2019-02-27 Thread Eneko Lacunza
Hi Uwe, We tried to use a Samsung 840 Pro SSD as an OSD some time ago and it was a no-go; it wasn't just that performance was bad, it simply couldn't handle the kind of use an OSD gives a disk. Any HDD was better than it (the disk was healthy and had been used in a software RAID-1 for a couple of years). I suggest

Re: [ceph-users] Low traffic Ceph cluster with consumer SSD.

2018-11-26 Thread Eneko Lacunza
Hi, On 25/11/18 at 18:23, Виталий Филиппов wrote: Ok... That's better than the previous thread with the file download, where the topic starter suffered from a normal, only-metadata-journaled fs... Thanks for the link, it would be interesting to repeat similar tests. Although I suspect it shouldn't

[ceph-users] Proxmox with EMC VNXe 3200

2018-06-25 Thread Eneko Lacunza
Hi all, We're planning the migration of a VMware 5.5 cluster backed by an EMC VNXe 3200 storage appliance to Proxmox. The VNXe has about 3 years of warranty left and half of its disks unprovisioned, so the current plan is to use the same VNXe for Proxmox storage. After the warranty expires we'll

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-19 Thread Eneko Lacunza
Hi Fabian, Hope your arm is doing well :) unless such a backport is created and tested fairly well (and we will spend some more time investigating this internally despite the caveats above), our plan B will probably involve: - building Luminous for Buster to ease the upgrade from

[ceph-users] Hawk-M4E SSD disks for journal

2018-01-05 Thread Eneko Lacunza
Hi all, We're in the process of deploying a new Proxmox/Ceph cluster. We had planned to use S3710 disks for system+journals, but our provider (Dell) is telling us that they're EOL and the only alternative they offer is some "mixed use" Hawk-M4E drives in 200 GB/400 GB sizes. I really can't find reliable

Re: [ceph-users] Small cluster for VMs hosting

2017-11-07 Thread Eneko Lacunza
Hi Gandalf, On 07/11/17 at 14:16, Gandalf Corvotempesta wrote: Hi to all, I've been away from Ceph for a couple of years (CephFS was still unstable). I would like to test it again; some questions for a production cluster for VM hosting: 1. Is CephFS stable? Yes. 2. Can I spin up a 3

Re: [ceph-users] Sharing SSD journals and SSD drive choice

2017-05-02 Thread Eneko Lacunza
> b) better throughput (I'm speculating that the S3610 isn't 4 times faster than the S3520) > c) load spread across 4 SATA channels (I suppose this doesn't really matter since the drives can't saturate the SATA bus).

Re: [ceph-users] Sharing SSD journals and SSD drive choice

2017-04-26 Thread Eneko Lacunza
Adam, What David said before about SSD drives is very important. Let me say it another way: use enterprise-grade SSD drives, not consumer-grade. Also, pay attention to endurance. The only suitable drive for Ceph I see in your tests is the SSDSC2BB150G7, and probably it isn't even the most

Re: [ceph-users] Ceph Bluestore

2017-03-15 Thread Eneko Lacunza
Hi Michal, On 14/03/17 at 23:45, Michał Chybowski wrote: I'm going to set up a small cluster (5 nodes with 3 MONs, 2-4 HDDs per node) to test if Ceph at such a small scale is going to perform well enough to put it into a production environment (or does it perform well only if there are

Re: [ceph-users] CephFS PG calculation

2017-03-10 Thread Eneko Lacunza
Hi Martin, Take a look at http://ceph.com/pgcalc/ Cheers, Eneko On 10/03/17 at 09:54, Martin Wittwer wrote: Hi list, I am creating a POC cluster with CephFS as a backend for our backup infrastructure. The backups are rsyncs of whole servers. I have 4 OSD nodes with 10 x 4 TB disks and 2
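The rule of thumb behind that calculator is roughly: total PGs ≈ (number of OSDs × 100) / replica count, then rounded to a nearby power of two (commonly rounded up). For example, with 4 nodes × 10 disks = 40 OSDs and size=3, that gives 40 × 100 / 3 ≈ 1333, so on the order of 2048 PGs spread across the pools according to their expected share of the data.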

Re: [ceph-users] Recovery ceph cluster down OS corruption

2017-02-24 Thread Eneko Lacunza
Hi Iban, Is the monitor data safe? If it is, just install Jewel on other servers and plug in the OSD disks; it should work. On 24/02/17 at 14:41, Iban Cabrillo wrote: Hi, We have a serious issue. We have a mini cluster (Jewel version) with two servers (Dell RX730), with 16 bays and
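A rough sketch of that recovery, assuming ceph.conf and the keyrings are copied onto the freshly installed hosts first (device names are illustrative):

  ceph-disk activate /dev/sdb1      # repeat per OSD data partition, or: ceph-disk activate-all
  ceph osd tree                     # verify the OSDs come back up and in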

Re: [ceph-users] Release schedule and notes.

2016-11-24 Thread Eneko Lacunza
Hi, On 24/11/16 at 12:09, Stephen Harker wrote: Hi all, This morning I went looking for information on the Ceph release timelines and so on, and was directed to this page by Google: http://docs.ceph.com/docs/jewel/releases/ but this doesn't seem to have been updated for a long time.

Re: [ceph-users] KVM / Ceph performance problems

2016-11-22 Thread Eneko Lacunza
Hi Michiel, How are you configuring the VM disks on Proxmox? What type (virtio, scsi, ide) and what cache setting? On 23/11/16 at 07:53, M. Piscaer wrote: Hi, I have a little performance problem with KVM and Ceph. I'm using Proxmox 4.3-10/7230e60f, with KVM version
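For reference, a typical RBD-backed virtio disk with writeback caching looks something like this in a Proxmox VM config (the VM id and storage name here are made up for illustration):

  qm set 101 --virtio0 ceph-storage:vm-101-disk-1,cache=writeback

virtio (or virtio-scsi) plus cache=writeback is usually the first thing to check when chasing Ceph-backed VM performance.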

Re: [ceph-users] Migrating files from ceph fs from cluster a to cluster b without low downtime

2016-06-07 Thread Eneko Lacunza
On 06/06/16 at 20:53, Oliver Dzombic wrote: Hi, thank you for your suggestion. Rsync will copy the whole file anew if the size is different. Since we are talking about raw image files of virtual servers, rsync is not an option. We need something which will copy just the deltas inside a file.
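One option commonly suggested for this kind of in-file delta copy is rsync's in-place mode, which reuses the existing destination file and sends only changed blocks over the network (paths are illustrative):

  rsync -av --inplace --no-whole-file /var/lib/vz/images/ backup-host:/var/lib/vz/images/

Whether the delta algorithm saves enough I/O on multi-hundred-GB raw images is workload dependent.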

Re: [ceph-users] rbd resize option

2016-05-12 Thread Eneko Lacunza
wami-resize-test-vm e2fsck 1.42.9 (4-Feb-2014) The filesystem size (according to the superblock) is 52428800 blocks The physical size of the device is 13107200 blocks Either the superblock or the partition table is likely to be corrupt! Abort? On Thu, May 12, 2016 at 6:37 PM, Eneko Lacunza <

Re: [ceph-users] rbd resize option

2016-05-12 Thread Eneko Lacunza
Did you shrink the FS to be smaller than the target RBD size before doing "rbd resize"? On 12/05/16 at 12:33, M Ranga Swami Reddy wrote: When I used the "rbd resize" option to shrink the size, the image/volume lost its fs sectors and reported that no "fs" was found... I have used the "mkfs" option, then
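The safe order for shrinking is filesystem first, then the image - never the other way around. A minimal sketch for an ext4 filesystem on a mapped image (sizes and names are examples):

  e2fsck -f /dev/rbd0
  resize2fs /dev/rbd0 40G                              # shrink the fs below the target image size
  rbd resize --size 51200 --allow-shrink rbd/myimage   # 50 GB expressed in MB, still >= the new fs size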

Re: [ceph-users] Adding new disk/OSD to ceph cluster

2016-04-11 Thread Eneko Lacunza
Hi Mad, On 09/04/16 at 14:39, Mad Th wrote: We have a 3-node Proxmox/Ceph cluster ... each with 4 x 4 TB disks. Are you using 3-way replication? I guess you are. :) 1) If we want to add more disks, what are the things that we need to be careful about? Will the following steps
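The basic per-disk sequence with the ceph-disk tooling of that era (device name is an example) was roughly:

  ceph-disk prepare /dev/sde
  ceph-disk activate /dev/sde1
  ceph osd tree          # the new OSD should appear and backfilling should start

Adding disks one at a time, and letting recovery finish in between, keeps the rebalancing impact manageable on a small cluster.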

Re: [ceph-users] Typical architecture in RDB mode - Number of servers explained ?

2016-01-28 Thread Eneko Lacunza
Hi, On 28/01/16 at 13:53, Gaetan SLONGO wrote: Dear Ceph users, We are currently working with Ceph (RBD mode only). The technology is currently in "preview" state in our lab. We are currently diving into Ceph design... We know it requires at least 3 nodes (OSDs+monitors inside) to work

Re: [ceph-users] Upgrading Ceph

2016-01-27 Thread Eneko Lacunza
Hi, On 27/01/16 at 15:00, Vlad Blando wrote: I have a production Ceph cluster - 3 nodes - 3 mons (one on each node) - 9 OSDs @ 4 TB per node - using ceph version 0.80.5 (38b73c67d375a2552d8ed67843c8a65c2c0feba6). Now I want to upgrade it to Hammer. I saw the documentation on upgrading, it

Re: [ceph-users] SSD journals killed by VMs generating 500 IOPs (4kB) non-stop for a month, seemingly because of a syslog-ng bug

2015-11-23 Thread Eneko Lacunza
Hi Mart, On 23/11/15 at 10:29, Mart van Santen wrote: On 11/22/2015 10:01 PM, Robert LeBlanc wrote: There have been numerous reports on the mailing list of Samsung EVOs and Pros failing far before their expected wear. This is most likely due to the 'uncommon' workload of Ceph and the
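To see how far along a drive already is, the SMART wear counters are the usual place to look (attribute names vary by vendor - Samsung drives typically expose Wear_Leveling_Count and Total_LBAs_Written, Intel DC drives Media_Wearout_Indicator):

  smartctl -a /dev/sdX | egrep -i 'wear|written'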

Re: [ceph-users] IO scheduler osd_disk_thread_ioprio_class

2015-06-23 Thread Eneko Lacunza
Hi Jan, What SSD model? I've seen SSDs that usually work quite well but suddenly give totally awful performance for some time (not the 8K you see, though). I think there was some kind of firmware process involved; I had to replace the drive with a serious DC one. On 23/06/15 at 14:07,

Re: [ceph-users] Best setup for SSD

2015-06-02 Thread Eneko Lacunza
Hi, On 02/06/15 16:18, Mark Nelson wrote: On 06/02/2015 09:02 AM, Phil Schwarz wrote: On 02/06/2015 15:33, Eneko Lacunza wrote: Hi, On 02/06/15 15:26, Phil Schwarz wrote: On 02/06/15 14:51, Phil Schwarz wrote: I'm going to have to set up a 4-node Ceph (Proxmox+Ceph, in fact) cluster. - 1 node is a little HP MicroServer N54L with 1x Opteron + 2 SSDs + 3x 4 TB SATA. It'll be used as an OSD+mon server only. Are these SSDs Intel S3700 too? What amount of RAM? - 3 nodes are

Re: [ceph-users] Recommendations for a driver situation

2015-06-02 Thread Eneko Lacunza
Hi, On 02/06/15 14:18, Pontus Lindgren wrote: We have recently acquired new servers for a new Ceph cluster and we want to run Debian on those servers. Unfortunately, the drivers needed for the RAID controller are only available in newer kernels than what Debian Wheezy provides. We need to run the

Re: [ceph-users] Best setup for SSD

2015-06-02 Thread Eneko Lacunza
Hi, On 02/06/15 14:51, Phil Schwarz wrote: I'm going to have to set up a 4-node Ceph (Proxmox+Ceph, in fact) cluster. - 1 node is a little HP MicroServer N54L with 1x Opteron + 2 SSDs + 3x 4 TB SATA. It'll be used as an OSD+mon server only. Are these SSDs Intel S3700 too? What amount of RAM? - 3 nodes are

Re: [ceph-users] Best setup for SSD

2015-06-02 Thread Eneko Lacunza
Hi, On 02/06/15 15:26, Phil Schwarz wrote: On 02/06/15 14:51, Phil Schwarz wrote: I'm going to have to set up a 4-node Ceph (Proxmox+Ceph, in fact) cluster. - 1 node is a little HP MicroServer N54L with 1x Opteron + 2 SSDs + 3x 4 TB SATA. It'll be used as an OSD+mon server only. Are these SSDs Intel S3700

Re: [ceph-users] Replacing OSD disks with SSD journal - journal disk space use

2015-05-26 Thread Eneko Lacunza
an enhancement of ceph-disk for Hammer that is more aggressive in reusing previous partitions. - Robert LeBlanc On Mon, May 25, 2015 at 4:22 AM, Eneko Lacunza wrote: Hi all, We have a firefly Ceph cluster (using Proxmox VE

[ceph-users] Replacing OSD disks with SSD journal - journal disk space use

2015-05-25 Thread Eneko Lacunza
Hi all, We have a firefly Ceph cluster (using Proxmox VE, but I don't think that is relevant), and found an OSD disk had quite a high number of errors as reported by SMART, and also quite a high wait time as reported by Munin, so we decided to replace it. What I have done is down/out
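For context, the usual removal sequence for a failing OSD (N being its id) is roughly:

  ceph osd out N
  # wait for rebalancing to finish, then stop the ceph-osd daemon
  ceph osd crush remove osd.N
  ceph auth del osd.N
  ceph osd rm N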

Re: [ceph-users] Possible improvements for a slow write speed (excluding independent SSD journals)

2015-04-21 Thread Eneko Lacunza
Hi, I'm just writing to stress what others have already said, because it is very important that you take it seriously. On 20/04/15 19:17, J-P Methot wrote: On 4/20/2015 11:01 AM, Christian Balzer wrote: This is similar to another thread running right now, but since our

Re: [ceph-users] journal placement for small office?

2015-02-09 Thread Eneko Lacunza
Hi, The common recommendation is to use one good SSD (e.g. Intel S3700) for journals for every 3-4 OSDs, or otherwise to keep the journal on each OSD's own disk. Don't put more than one journal on the same spinning disk. Also, it is recommended to use 500 GB - 1 TB disks, especially if you have a 1 Gbit
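With filestore, the journal size is a per-OSD setting and a separate SSD journal device is given at OSD creation time; a minimal sketch under those assumptions (5 GB journals were a common choice back then, device names are examples):

  # ceph.conf
  [osd]
  osd journal size = 5120                # MB

  ceph-disk prepare /dev/sdb /dev/sdc    # data disk first, then the SSD journal device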

Re: [ceph-users] remote storage

2015-01-26 Thread Eneko Lacunza
Hi Robert, I don't see any reply to your email, so I'll send you my thoughts. Ceph is all about using cheap local disks to build large, performant, and resilient storage. Your use case with a SAN and Storwize doesn't seem to fit Ceph very well (I'm not saying it can't be done). Why are you

Re: [ceph-users] New firefly tiny cluster stuck unclean

2015-01-20 Thread Eneko Lacunza
with size=2. This was done before adding the OSDs of one of the nodes. Thanks, Eneko On 20/01/15 16:23, Eneko Lacunza wrote: Hi all, I've just created a new Ceph cluster for RBD with the latest firefly: - 3 monitors - 2 OSD nodes, each with 1 S3700 (journals) + 2 x 3 TB WD Red (OSDs). The network is 1 Gbit

Re: [ceph-users] Improving Performance with more OSD's?

2014-12-30 Thread Eneko Lacunza
Hi, On 29/12/14 15:12, Christian Balzer wrote: 3rd node - monitor only, for quorum - Intel NUC - 8 GB RAM - CPU: Celeron N2820. Uh oh, a bit weak for a monitor. Where does the OS live (on this and the other nodes)? The monitors' leveldb (/var/lib/ceph/..) likes fast storage, preferably SSDs.

Re: [ceph-users] Block and NAS Services for Non Linux OS

2014-12-30 Thread Eneko Lacunza
Hi Steven, Welcome to the list. On 30/12/14 11:47, Steven Sim wrote: This is my first posting and I apologize if the content or query is not appropriate. My understanding of Ceph is that the block and NAS services are provided through specialized (albeit open-source) kernel modules for Linux. What

Re: [ceph-users] Improving Performance with more OSD's?

2014-12-30 Thread Eneko Lacunza
Hi, On 30/12/14 11:55, Lindsay Mathieson wrote: On Tue, 30 Dec 2014 11:26:08 AM, Eneko Lacunza wrote: have a small setup with such a node (only 4 GB RAM, plus another 2 good nodes for OSDs and virtualization) - it works like a charm and max CPU is always under 5% in the graphs. It only peaks when

Re: [ceph-users] Ceph PG Incomplete = Cluster unusable

2014-12-30 Thread Eneko Lacunza
Hi Christian, Have you tried to migrate the disk from the old storage (pool) to the new one? I think it would show the same problem, but it'd be a much easier recovery path than the POSIX copy. How full is your storage? Maybe you can customize the CRUSH map so that some OSDs

Re: [ceph-users] Ceph PG Incomplete = Cluster unusable

2014-12-30 Thread Eneko Lacunza
On 30.12.2014 12:23, Eneko Lacunza wrote: Hi Christian, Have you tried to migrate the disk from the old storage (pool) to the new one? I think it would show the same problem, but it'd be a much easier recovery path than the POSIX copy. How full is your storage? Maybe you can customize

Re: [ceph-users] Block and NAS Services for Non Linux OS

2014-12-30 Thread Eneko Lacunza
Hi Steven, On 30/12/14 13:26, Steven Sim wrote: You mentioned that machines see a QEMU IDE/SCSI disk and don't know whether it's on Ceph, NFS, local storage, LVM, ... so it works OK for any VM guest OS. But what if I want the Ceph cluster to serve a whole range of clients in the data center,

Re: [ceph-users] RESOLVED Re: Cluster with pgs in active (unclean) status

2014-12-16 Thread Eneko Lacunza
-Greg On Wed, Dec 10, 2014 at 5:27 AM, Eneko Lacunza elacu...@binovo.es wrote: Hi all, I fixed the issue with the following commands: # ceph osd pool set data size 1 (wait a few seconds for 64 more PGs to reach active+clean) # ceph osd pool set data size 2 # ceph osd pool set metadata size 1 (wait some

[ceph-users] Cluster with pgs in active (unclean) status

2014-12-10 Thread Eneko Lacunza
Hi all, I have a small Ceph cluster with just 2 OSDs, latest firefly. The default data, metadata and rbd pools were created with size=3 and min_size=1. An additional pool rbd2 was created with size=2 and min_size=1. This would give me a warning status, saying that 64 PGs were active+clean and 192
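With only two OSDs and the default CRUSH rule, a pool with size=3 can never place all three replicas, which is exactly what leaves those PGs active but never clean. A quick way to confirm and fix it (using the data pool as an example):

  ceph pg dump_stuck unclean
  ceph osd pool get data size
  ceph osd pool set data size 2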

[ceph-users] RESOLVED Re: Cluster with pgs in active (unclean) status

2014-12-10 Thread Eneko Lacunza
? Cheers, Eneko On 10/12/14 13:14, Eneko Lacunza wrote: Hi all, I have a small Ceph cluster with just 2 OSDs, latest firefly. The default data, metadata and rbd pools were created with size=3 and min_size=1. An additional pool rbd2 was created with size=2 and min_size=1. This would give me a warning

[ceph-users] Suitable SSDs for journal

2014-12-04 Thread Eneko Lacunza
Hi all, Does anyone know of a list of good and bad SSD disks for OSD journals? I was pointed to http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/ but I was looking for something more complete. For example, I have a Samsung 840 Pro

Re: [ceph-users] Suitable SSDs for journal

2014-12-04 Thread Eneko Lacunza
and in fact work out the cheapest in terms of write durability. Nick On 04 December 2014 14:35, Eneko Lacunza wrote to the Ceph Users list (Subject: [ceph-users] Suitable SSDs for journal): Hi all, Does anyone know