Re: [ceph-users] Osd auth del

2019-12-03 Thread Willem Jan Withagen
On 3-12-2019 11:43, Wido den Hollander wrote: On 12/3/19 11:40 AM, John Hearns wrote: I had a fat fingered moment yesterday I typed ceph auth del osd.3 Where osd.3 is an otherwise healthy little osd I have not set noout or down on osd.3 yet This is a Nautilus
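
A minimal sketch of the usual way out of this, assuming the OSD is still up and its keyring is still on disk at the default path (path and caps below are the standard defaults from the manual-deployment docs, not something stated in this thread):

    # re-create the deleted auth entity from the OSD's own keyring
    ceph auth add osd.3 osd 'allow *' mon 'allow rwx' \
        -i /var/lib/ceph/osd/ceph-3/keyring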

Re: [ceph-users] Ceph Nautilus Release T-shirt Design

2019-02-15 Thread Willem Jan Withagen
On 15/02/2019 11:56, Dan van der Ster wrote: On Fri, Feb 15, 2019 at 11:40 AM Willem Jan Withagen wrote: On 15/02/2019 10:39, Ilya Dryomov wrote: On Fri, Feb 15, 2019 at 12:05 AM Mike Perez wrote: Hi Marc, You can see previous designs on the Ceph store: https://www.proforma.com

Re: [ceph-users] Ceph Nautilus Release T-shirt Design

2019-02-15 Thread Willem Jan Withagen
On 15/02/2019 10:39, Ilya Dryomov wrote: On Fri, Feb 15, 2019 at 12:05 AM Mike Perez wrote: Hi Marc, You can see previous designs on the Ceph store: https://www.proforma.com/sdscommunitystore Hi Mike, This site stopped working during DevConf and hasn't been working since. I think Greg

Re: [ceph-users] Questions about using existing HW for PoC cluster

2019-01-28 Thread Willem Jan Withagen
On 28-1-2019 02:56, Will Dennis wrote: I mean to use CephFS on this PoC; the initial use would be to back up an existing ZFS server with ~43TB data (may have to limit the backed-up data depending on how much capacity I can get out of the OSD servers) and then share out via NFS as a read-only

Re: [ceph-users] Scheduling deep-scrub operations

2018-12-14 Thread Willem Jan Withagen
On 14/12/2018 13:42, Alexandru Cucu wrote: Hi, Unfortunately there is no way of doing this from the Ceph configuration, but you could create some cron jobs to add and remove the nodeep-scrub flag. The only problem would be that your cluster status will show HEALTH_WARN, but I think you could
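
A hedged sketch of that cron approach; the hours are placeholders, not values from the thread:

    # /etc/crontab -- allow deep scrubs only during the night
    0 22 * * *  root  ceph osd unset nodeep-scrub
    0 6  * * *  root  ceph osd set nodeep-scrub

While the flag is set the cluster reports HEALTH_WARN, as noted above.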

Re: [ceph-users] Fwd: [Ceph-community] After Mimic upgrade OSD's stuck at booting.

2018-09-27 Thread Willem Jan Withagen
On 26/09/2018 12:41, Eugen Block wrote: Hi, I'm not sure how the recovery "still works" with the flag "norecover". Anyway, I think you should unset the flags norecover, nobackfill. Even if not all OSDs come back up you should allow the cluster to backfill PGs. Not sure, but unsetting
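
For reference, clearing the flags discussed here is one command per flag (standard ceph CLI):

    ceph osd unset norecover
    ceph osd unset nobackfill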

Re: [ceph-users] mgr/dashboard: backporting Ceph Dashboard v2 to Luminous

2018-08-23 Thread Willem Jan Withagen
On 23/08/2018 12:47, Ernesto Puerta wrote: @Willem, given your comments come from a technical ground, let's address those technically. As you say, dashboard_v2 is already in Mimic and will be soon in Nautilus when released, so for FreeBSD the issue will anyhow be there. Let's look for a

Re: [ceph-users] mgr/dashboard: backporting Ceph Dashboard v2 to Luminous

2018-08-23 Thread Willem Jan Withagen
On 23/08/2018 11:22, Lenz Grimmer wrote: On 08/22/2018 08:57 PM, David Turner wrote: My initial reaction to this PR/backport was questioning why such a major update would happen on a dot release of Luminous. Your reaction to keeping both dashboards viable goes to support that. Should we

Re: [ceph-users] mgr/dashboard: backporting Ceph Dashboard v2 to Luminous

2018-08-23 Thread Willem Jan Withagen
aid then done. --WjW KR, Ernesto On Wed, Aug 22, 2018 at 12:43 PM Willem Jan Withagen wrote: On 22/08/2018 12:16, Ernesto Puerta wrote: [sent both to ceph-devel and ceph-users lists, as it might be of interest for both audiences] Hi all, This e-mail is just to announce the WIP o

Re: [ceph-users] mgr/dashboard: backporting Ceph Dashboard v2 to Luminous

2018-08-22 Thread Willem Jan Withagen
On 22/08/2018 12:16, Ernesto Puerta wrote: [sent both to ceph-devel and ceph-users lists, as it might be of interest for both audiences] Hi all, This e-mail is just to announce the WIP on backporting dashboard_v2 (http://docs.ceph.com/docs/master/mgr/dashboard/) from master to Luminous

Re: [ceph-users] ceph configuration; Was: FreeBSD rc.d script: sta.rt not found

2018-08-21 Thread Willem Jan Withagen
Norman, I'm cc'ing this back to ceph-users for others to reply to, or to find in the future. On 21/08/2018 12:01, Norman Gray wrote: Willem Jan, hello. Thanks for your detailed notes on my list question. On 20 Aug 2018, at 21:32, Willem Jan Withagen wrote: # zpool create -m/var/lib/ceph

Re: [ceph-users] FreeBSD rc.d script: sta.rt not found

2018-08-16 Thread Willem Jan Withagen
On 16/08/2018 11:01, Willem Jan Withagen wrote: Hi Norman, Thanks for trying the Ceph port. As you will find out, it is still rough around the edges... But please feel free to ask questions (on the ceph-users list), which I will try to help answer as well as I can. And also feel free to send me

Re: [ceph-users] FreeBSD rc.d script: sta.rt not found

2018-08-16 Thread Willem Jan Withagen
gpart add -t freebsd-zfs -l osd1 ada1 zpool create zpool gpt/osd1 zfs create -o mountpoint=/var/lib/ceph/osd/osd.1 osd.1 --WjW On 16/08/2018 00:45, Willem Jan Withagen wrote: On 15/08/2018 19:46, Norman Gray wrote: Greetings. I'm having difficulty starting up the ceph
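
A slightly expanded sketch of the same idea, with explicit (illustrative) pool and dataset names so the zpool/zfs relationship is visible; this fills in intent and is not a correction from the thread:

    # label the disk, build a one-disk pool on the GPT label,
    # and mount a dataset where ceph-osd expects its data directory
    gpart create -s gpt ada1
    gpart add -t freebsd-zfs -l osd1 ada1
    zpool create osd1pool gpt/osd1
    zfs create -o mountpoint=/var/lib/ceph/osd/osd.1 osd1pool/osd.1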

Re: [ceph-users] FreeBSD rc.d script: sta.rt not found

2018-08-15 Thread Willem Jan Withagen
On 15/08/2018 19:46, Norman Gray wrote: Greetings. I'm having difficulty starting up the ceph monitor on FreeBSD.  The rc.d/ceph script appears to be doing something ... odd. I'm following the instructions on . I've

Re: [ceph-users] Make a ceph options persist

2018-08-13 Thread Willem Jan Withagen
On 13/08/2018 10:51, John Spray wrote: On Fri, Aug 10, 2018 at 10:40 AM Willem Jan Withagen wrote: Hi, The manual of dashboard suggests: ceph config-key set mgr/dashboard/server_addr ${MGR_IP} But that command is required after reboot. config-key settings are persistent

[ceph-users] Make a ceph options persist

2018-08-10 Thread Willem Jan Withagen
Hi, The manual of dashboard suggests: ceph config-key set mgr/dashboard/server_addr ${MGR_IP} But that command is required after reboot. I have tried all kinds of versions, but was not able to get it working... How do I put this into a permanent version in /etc/ceph/ceph.conf --WjW
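
The reply above notes that config-key settings are stored by the monitors and survive reboots, so a minimal sketch (the IP is a placeholder) is simply:

    ceph config-key set mgr/dashboard/server_addr 192.0.2.10
    ceph config-key get mgr/dashboard/server_addr
    # restart the module so it picks up the new address
    ceph mgr module disable dashboard
    ceph mgr module enable dashboard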

Re: [ceph-users] Why lvm is recommended method for bleustore

2018-07-23 Thread Willem Jan Withagen
On 22-7-2018 15:51, Satish Patel wrote: I read that post and that's why I opened this thread for a few more questions and clarification. When you say the OSD doesn't come up, what does that actually mean? After a reboot of the node, after a service restart, or after installation of a new disk? You said we are using

Re: [ceph-users] JBOD question

2018-07-21 Thread Willem Jan Withagen
On 21/07/2018 01:45, Oliver Freyermuth wrote: Hi Satish, that really completely depends on your controller. This is what I get on an older AMCC 9550 controller. Note that the disk type is set to JBOD. But the disk descriptors are hidden. And you'll never know what more is not done right.

Re: [ceph-users] RAID question for Ceph

2018-07-19 Thread Willem Jan Withagen
48 lanes ) 3 LSI HBA 9207-8i to connect the trays. --WjW Sent from my iPhone On Jul 19, 2018, at 6:33 AM, Willem Jan Withagen wrote: On 19/07/2018 10:53, Simon Ironside wrote: On 19/07/18 07:59, Dietmar Rieder wrote: We have P840ar controllers with battery backed cache in our O

Re: [ceph-users] RAID question for Ceph

2018-07-19 Thread Willem Jan Withagen
On 19/07/2018 10:53, Simon Ironside wrote: On 19/07/18 07:59, Dietmar Rieder wrote: We have P840ar controllers with battery backed cache in our OSD nodes and configured an individual RAID-0 for each OSD (ceph luminous + bluestore). We have not seen any problems with this setup so far and

Re: [ceph-users] CentOS Dojo at CERN

2018-06-22 Thread Willem Jan Withagen
On 21-6-2018 14:44, Dan van der Ster wrote: On Thu, Jun 21, 2018 at 2:41 PM Kai Wagner wrote: On 20.06.2018 17:39, Dan van der Ster wrote: And BTW, if you can't make it to this event we're in the early days of planning a dedicated Ceph + OpenStack Days at CERN around May/June 2019. More news

Re: [ceph-users] ceph-disk is getting removed from master

2018-05-23 Thread Willem Jan Withagen
On 23-5-2018 17:12, Alfredo Deza wrote: > Now that Mimic is fully branched out from master, ceph-disk is going > to be removed from master so that it is no longer available for the N > release (pull request to follow) > Willem, we don't have a way of directly supporting FreeBSD, I've > suggested

Re: [ceph-users] Difference in speed on Copper of Fiber ports on switches

2018-03-21 Thread Willem Jan Withagen
t mindset. And probably 1800ns at that, because the delay will be at both ends. Or perhaps even 3600ns because the delay is added at every ethernet connector??? But I'm inclined to believe you that the network stack could take quite some time... --WjW > Paul > > 2018-03-21 12:16 GMT+01:00 Willem Jan

[ceph-users] Difference in speed on Copper of Fiber ports on switches

2018-03-21 Thread Willem Jan Withagen
Hi, I just ran into this table for a 10G Netgear switch we use: Fiber delays: 10 Gbps fiber delay (64-byte packets): 1.827 µs; 10 Gbps fiber delay (512-byte packets): 1.919 µs; 10 Gbps fiber delay (1024-byte packets): 1.971 µs; 10 Gbps fiber delay (1518-byte packets): 1.905 µs
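
For context, a back-of-the-envelope check of how much of that is pure serialization at 10 Gbit/s (my own arithmetic, not from the switch datasheet):

    # 1518-byte frame: bits divided by bits-per-microsecond
    echo "scale=3; 1518*8/10000" | bc   # ~1.214 us, so roughly 0.7 us of the quoted 1.905 us is switch latency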

Re: [ceph-users] Proper procedure to replace DB/WAL SSD

2018-03-03 Thread Willem Jan Withagen
On 23/02/2018 14:27, Caspar Smit wrote: Hi All, What would be the proper way to preventively replace a DB/WAL SSD (when it is nearing its DWPD/TBW limit and has not failed yet)? It hosts DB partitions for 5 OSDs. Maybe something like: 1) ceph osd reweight 0 the 5 OSDs 2) let backfilling
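
One hedged sketch of the drain-and-rebuild route hinted at in those steps; the OSD IDs are placeholders, and fully draining (rather than cloning the DB partitions) is an assumption, not the thread's conclusion:

    # drain the five OSDs whose DB partitions live on the ageing SSD
    for id in 10 11 12 13 14; do ceph osd reweight $id 0; done
    # wait for backfill to finish before stopping and rebuilding them
    ceph -s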

Re: [ceph-users] ceph-disk vs. ceph-volume: both error prone

2018-02-11 Thread Willem Jan Withagen
On 09/02/2018 21:56, Alfredo Deza wrote: On Fri, Feb 9, 2018 at 10:48 AM, Nico Schottelius wrote: Dear list, for a few days we have been dissecting ceph-disk and ceph-volume to find out what is the appropriate way of creating partitions for Ceph. ceph-volume does

Re: [ceph-users] formatting bytes and object counts in ceph status ouput

2018-01-03 Thread Willem Jan Withagen
On 3-1-2018 00:44, Dan Mick wrote: > On 01/02/2018 08:54 AM, John Spray wrote: >> On Tue, Jan 2, 2018 at 10:43 AM, Jan Fajerski wrote: >>> Hi lists, >>> Currently the ceph status output formats all numbers with binary unit >>> prefixes, i.e. 1MB equals 1048576 bytes and an

Re: [ceph-users] Deterministic naming of LVM volumes (ceph-volume)

2017-12-13 Thread Willem Jan Withagen
On 13-12-2017 10:36, Stefan Kooman wrote: > Hi, > > The new style "ceph-volume" LVM way of provisioning OSDs introduces a > little challenge for us. In order to create the OSDs as logical, > consistent and easily recognizable as possible, we try to name the > Volume Groups (VG) and Logical
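
A minimal sketch of pre-creating recognisably named VGs/LVs and handing them to ceph-volume (the names are illustrative only):

    vgcreate ceph-osd-host1 /dev/sdb
    lvcreate -n osd-data-0 -l 100%FREE ceph-osd-host1
    # ceph-volume accepts a pre-made vg/lv as the data device
    ceph-volume lvm create --bluestore --data ceph-osd-host1/osd-data-0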

Re: [ceph-users] Upgrade from 12.2.1 to 12.2.2 broke my CephFs

2017-12-11 Thread Willem Jan Withagen
On 11/12/2017 15:13, Tobias Prousa wrote: Hi there, I'm running a CEPH cluster for some libvirt VMs and a CephFS providing /home to ~20 desktop machines. There are 4 hosts running 4 MONs, 4 MGRs, 3 MDSs (1 active, 2 standby) and 28 OSDs in total. This cluster has been up and running since the days

Re: [ceph-users] ceph-disk is now deprecated

2017-11-28 Thread Willem Jan Withagen
On 28-11-2017 13:32, Alfredo Deza wrote: I understand that this would involve a significant effort to fully port over and drop ceph-disk entirely, and I don't think that dropping ceph-disk in Mimic is set in stone (yet). Alfredo, When I expressed my concerns about deprecating ceph-disk, I was

Re: [ceph-users] Bluestore OSD_DATA, WAL & DB

2017-11-03 Thread Willem Jan Withagen
On 3-11-2017 00:09, Nigel Williams wrote: > On 3 November 2017 at 07:45, Martin Overgaard Hansen > wrote: >> I want to bring this subject back in the light and hope someone can provide >> insight regarding the issue, thanks. > Is it possible to make the DB partition (on the

Re: [ceph-users] Hammer to Jewel Upgrade - Extreme OSD Boot Time

2017-11-01 Thread Willem Jan Withagen
On 01/11/2017 18:04, Chris Jones wrote: Greg, Thanks so much for the reply! We are not clear on why ZFS is behaving poorly under some circumstances on getxattr system calls, but that appears to be the case. Since the last update we have discovered that back-to-back booting of the OSD

Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]

2017-10-10 Thread Willem Jan Withagen
On 10-10-2017 14:21, Alfredo Deza wrote: > On Tue, Oct 10, 2017 at 8:14 AM, Willem Jan Withagen <w...@digiware.nl> wrote: >> On 10-10-2017 13:51, Alfredo Deza wrote: >>> On Mon, Oct 9, 2017 at 8:50 PM, Christian Balzer <ch...@gol.com> wrote:

Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]

2017-10-10 Thread Willem Jan Withagen
On 10-10-2017 13:51, Alfredo Deza wrote: > On Mon, Oct 9, 2017 at 8:50 PM, Christian Balzer wrote: >> >> Hello, >> >> (pet peeve alert) >> On Mon, 9 Oct 2017 15:09:29 + (UTC) Sage Weil wrote: >> >>> To put this in context, the goal here is to kill ceph-disk in mimic. Right,

Re: [ceph-users] Power outages!!! help!

2017-08-29 Thread Willem Jan Withagen
On 29-8-2017 19:12, Steve Taylor wrote: > Hong, > > Probably your best chance at recovering any data without special, > expensive, forensic procedures is to perform a dd from /dev/sdb to > somewhere else large enough to hold a full disk image and attempt to > repair that. You'll want to use
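
The usual shape of that dd; the conv flags are my guess at where the truncated sentence was heading, and the target path is a placeholder:

    # image the failing disk, padding unreadable sectors instead of aborting
    dd if=/dev/sdb of=/mnt/backup/sdb.img bs=4M conv=noerror,sync status=progress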

Re: [ceph-users] librados for MacOS

2017-08-03 Thread Willem Jan Withagen
On 03/08/2017 09:36, Brad Hubbard wrote: > On Thu, Aug 3, 2017 at 5:21 PM, Martin Palma wrote: >> Hello, >> >> is there a way to get librados for MacOS? Has anybody tried to build >> librados for MacOS? Is this even possible? > > Yes, it is eminently possible, but would require

Re: [ceph-users] Disk activation issue on 10.2.9, too (Re: v11.2.0 Disk activation issue while booting)

2017-07-21 Thread Willem Jan Withagen
On 21-7-2017 12:45, Fulvio Galeazzi wrote: > Hallo David, all, > sorry for hijacking the thread, but I am seeing the same issue, > although on 10.2.7/10.2.9... Then this is a problem that had nothing to do with my changes to ceph-disk, since they only went into HEAD and thus end up in

Re: [ceph-users] ceph-disk activate-block: not a block device

2017-07-20 Thread Willem Jan Withagen
Hi Roger, Device detection has recently changed (because FreeBSD does not have block devices). So it could very well be that this is an actual problem where something is still wrong. Please keep an eye out, and let me know if it comes back. --WjW On 20-7-2017 at 19:29, Roger Brown wrote: So I

Re: [ceph-users] Ceph random read IOPS

2017-06-26 Thread Willem Jan Withagen
is, the test setup should be something like: 3 hosts with something like 3 disks per host, min_disk=2 and a nice workload. Then turn the GHz-knob and see what happens with client latency and throughput. --WjW > On Sun, Jun 25, 2017 at 4:53 AM, Willem Jan Withagen <w...@digiware.nl>

Re: [ceph-users] Ceph random read IOPS

2017-06-24 Thread Willem Jan Withagen
ore important than cores. > But then it is not generic enough to be used as advice! It is just a line in 3D-space. As there are so many --WjW >> On 2017-06-24 12:52, Willem Jan Withagen wrote: >> >>> On 24-6-2017 05:30, Christian Wuerdig wrote: >>> The genera

Re: [ceph-users] Ceph random read IOPS

2017-06-24 Thread Willem Jan Withagen
On 24-6-2017 05:30, Christian Wuerdig wrote: > The general advice floating around is that you want CPUs with high > clock speeds rather than more cores to reduce latency and increase IOPS > for SSD setups (see also > http://www.sys-pro.co.uk/ceph-storage-fast-cpus-ssd-performance/) So > something

Re: [ceph-users] Transitioning to Intel P4600 from P3700 Journals

2017-06-22 Thread Willem Jan Withagen
On 22-6-2017 03:59, Christian Balzer wrote: >> Agreed. On the topic of journals and double bandwidth, am I correct in >> thinking that btrfs (as insane as it may be) does not require double >> bandwidth like xfs? Furthermore with bluestore being close to stable, will >> my architecture need to

Re: [ceph-users] EXT: ceph-lvm - a tool to deploy OSDs from LVM volumes

2017-06-19 Thread Willem Jan Withagen
On 19-6-2017 at 19:57, Alfredo Deza wrote: On Mon, Jun 19, 2017 at 11:37 AM, Willem Jan Withagen <w...@digiware.nl> wrote: On 19-6-2017 16:13, Alfredo Deza wrote: On Mon, Jun 19, 2017 at 9:27 AM, John Spray <jsp...@redhat.com> wrote: On Fri, Jun 16, 2017 at 7:23 PM, Alfr

Re: [ceph-users] EXT: ceph-lvm - a tool to deploy OSDs from LVM volumes

2017-06-19 Thread Willem Jan Withagen
On 19-6-2017 16:13, Alfredo Deza wrote: > On Mon, Jun 19, 2017 at 9:27 AM, John Spray wrote: >> On Fri, Jun 16, 2017 at 7:23 PM, Alfredo Deza wrote: >>> On Fri, Jun 16, 2017 at 2:11 PM, Warren Wang - ISD >>> wrote: I would

Re: [ceph-users] EXT: ceph-lvm - a tool to deploy OSDs from LVM volumes

2017-06-16 Thread Willem Jan Withagen
On 16-6-2017 20:23, Alfredo Deza wrote: > On Fri, Jun 16, 2017 at 2:11 PM, Warren Wang - ISD > wrote: >> I would prefer that this is something more generic, to possibly support >> other backends one day, like ceph-volume. Creating one tool per backend >> seems silly. >>

Re: [ceph-users] Sharing SSD journals and SSD drive choice

2017-05-03 Thread Willem Jan Withagen
uble > for a cluster this size to require a cluster write speed of > 833.33 MBps to average 5 MBps on each journal. Also if you have less > than 2,000 OSDs, then everything shrinks fast. > > > On Tue, May 2, 2017 at 5:39 PM Willem Jan Withagen <w...@digiware.nl>

Re: [ceph-users] Sharing SSD journals and SSD drive choice

2017-05-02 Thread Willem Jan Withagen
3710 give in regards to a 3520 SSD per dollar spent. --WjW > On Tue, May 2, 2017 at 1:47 PM Willem Jan Withagen <w...@digiware.nl> wrote: > > On 02-05-17 19:16, Дробышевский, Владимир wrote: > > Willem, > > > > p

Re: [ceph-users] Sharing SSD journals and SSD drive choice

2017-05-02 Thread Willem Jan Withagen
re of 2000. --WjW > Best regards, > Vladimir > > 2017-05-02 21:05 GMT+05:00 Willem Jan Withagen <w...@digiware.nl>: > > On 27-4-2017 20:46, Alexandre DERUMIER wrote: > > Hi, > > > >>> What I'm

Re: [ceph-users] Sharing SSD journals and SSD drive choice

2017-05-02 Thread Willem Jan Withagen
On 27-4-2017 20:46, Alexandre DERUMIER wrote: > Hi, > >>> What I'm trying to get from the list is /why/ the "enterprise" drives >>> are important. Performance? Reliability? Something else? > > performance, for sure (for SYNC write, >
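
The O_DSYNC behaviour being discussed is easy to measure yourself; a commonly used fio invocation for journal-style sync writes (the device name is a placeholder -- this write test destroys data on it):

    fio --name=journal-test --filename=/dev/sdX --direct=1 --sync=1 \
        --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based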

Re: [ceph-users] Why is cls_log_add logging so much?

2017-04-29 Thread Willem Jan Withagen
On 29-04-17 00:16, Gregory Farnum wrote: > On Tue, Apr 4, 2017 at 2:49 AM, Jens Rosenboom wrote: >> On a busy cluster, I'm seeing a couple of OSDs logging millions of >> lines like this: >> >> 2017-04-04 06:35:18.240136 7f40ff873700 0 >> cls/log/cls_log.cc:129: storing

Re: [ceph-users] FreeBSD port net/ceph-devel released

2017-04-04 Thread Willem Jan Withagen
On 4-4-2017 21:05, Gregory Farnum wrote: > [ Sorry for the empty email there. :o ] > > On Tue, Apr 4, 2017 at 12:28 PM, Patrick Donnelly <pdonn...@redhat.com> wrote: >> On Sat, Apr 1, 2017 at 4:58 PM, Willem Jan Withagen <w...@digiware.nl> wrote: >>> On 1-4-

Re: [ceph-users] FreeBSD port net/ceph-devel released

2017-04-01 Thread Willem Jan Withagen
On 1-4-2017 21:59, Wido den Hollander wrote: > >> On 31 March 2017 at 19:15, Willem Jan Withagen <w...@digiware.nl> wrote: >> >> >> On 31-3-2017 17:32, Wido den Hollander wrote: >>> Hi Willem Jan, >>> >>>> On 30 March 2

Re: [ceph-users] FreeBSD port net/ceph-devel released

2017-03-31 Thread Willem Jan Withagen
On 31-3-2017 17:32, Wido den Hollander wrote: > Hi Willem Jan, > >> On 30 March 2017 at 13:56, Willem Jan Withagen >> <w...@digiware.nl> wrote: >> >> >> Hi, >> >> I'm pleased to announce that my efforts to port to FreeBSD have >

[ceph-users] FreeBSD port net/ceph-devel released

2017-03-30 Thread Willem Jan Withagen
Hi, I'm pleased to announce that my efforts to port to FreeBSD have resulted in a ceph-devel port commit in the ports tree. https://www.freshports.org/net/ceph-devel/ I'd like to thank everybody that helped me by answering my questions, fixing my mistakes, and undoing my Git mess. Especially Sage,

Re: [ceph-users] Anyone using LVM or ZFS RAID1 for boot drives?

2017-02-13 Thread Willem Jan Withagen
On 13-2-2017 04:22, Alex Gorbachev wrote: > Hello, with the preference for IT mode HBAs for OSDs and journals, > what redundancy method do you guys use for the boot drives. Some > options beyond RAID1 at hardware level we can think of: > > - LVM > > - ZFS RAID1 mode Since it is not quite Ceph,

Re: [ceph-users] Inherent insecurity of OSD daemons when using only a "public network"

2017-01-26 Thread Willem Jan Withagen
On 13-1-2017 12:45, Willem Jan Withagen wrote: > On 13-1-2017 09:07, Christian Balzer wrote: >> >> Hello, >> >> Something I came across a while ago, but the recent discussion here >> jolted my memory. >> >> If you have a cluster configured with just

Re: [ceph-users] Inherent insecurity of OSD daemons when using only a "public network"

2017-01-13 Thread Willem Jan Withagen
On 13-1-2017 09:07, Christian Balzer wrote: > > Hello, > > Something I came across a while ago, but the recent discussion here > jolted my memory. > > If you have a cluster configured with just a "public network" and that > network being in RFC space like 10.0.0.0/8, you'd think you'd be
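
For reference, the relevant ceph.conf settings (the thread describes a cluster with only the first one set; the addresses are placeholders), the point being that binding to an RFC 1918 range alone is not an access control:

    [global]
    public network  = 10.0.0.0/8      # MONs and client traffic
    cluster network = 192.168.0.0/16  # OSD replication (back-side) traffic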

Re: [ceph-users] Review of Ceph on ZFS - or how not to deploy Ceph for RBD + OpenStack

2017-01-11 Thread Willem Jan Withagen
On 11-1-2017 08:06, Adrian Saul wrote: > > I would concur having spent a lot of time on ZFS on Solaris. > > ZIL will reduce the fragmentation problem a lot (because it is not > doing intent logging into the filesystem itself which fragments the > block allocations) and write response will be a

Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

2017-01-10 Thread Willem Jan Withagen
On 10-1-2017 20:35, Lionel Bouton wrote: > Hi, I usually don't top post, but this time it is just to agree wholeheartedly with what you wrote. And you have again more arguments as to why. Using SSDs that don't work right is a certain recipe for losing data. --WjW > On 10/01/2017 at 19:32, Brian

Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

2017-01-09 Thread Willem Jan Withagen
g OSDs, it's the > O_DSYNC writes they tend to lie about. > > They may have a failure rate higher than enterprise-grade SSDs, but > are otherwise suitable for use as OSDs if journals are placed elsewhere. > > On Mon, Jan 9, 2017 at 2:39 PM, Willem Jan Withagen

Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

2017-01-09 Thread Willem Jan Withagen
On 9-1-2017 18:46, Oliver Humpage wrote: > >> Why would you still be using journals when running OSDs fully on >> SSDs? > > In our case, we use cheaper large SSDs for the data (Samsung 850 Pro > 2TB), whose performance is excellent in the cluster, but as has been > pointed out in this thread can

Re: [ceph-users] Analysing ceph performance with SSD journal, 10gbe NIC and 2 replicas -Hammer release

2017-01-08 Thread Willem Jan Withagen
On 7-1-2017 15:03, Lionel Bouton wrote: > On 07/01/2017 at 14:11, kevin parrikar wrote: >> Thanks for your valuable input. >> We were using these SSDs in our NAS box (Synology) and they were giving >> 13k IOPS for our fileserver in RAID1. We had a few spare disks which we >> added to our ceph nodes