On 1/18/16 2:47 PM, Peter Geoghegan wrote:
On Mon, Jan 18, 2016 at 12:31 PM, Robert Haas <robertmh...@gmail.com> wrote:
<rant>People keep predicting the death of spinning media, but I think
it's not happening anywhere near as fast as people think.
Yes, I'm writing this on a laptop with an SSD, and my personal laptop
also has an SSD, but their immediate predecessors did not, and these
are fairly expensive laptops.  And most customers I talk to are still
using spinning disks.  Meanwhile, main memory is getting so large that
even pretty significant databases can be entirely RAM-cached.  So I
tend to think that this is a lot less exciting than people who are not
me seem to think.</rant>

I tend to agree that the case for SSDs as a revolutionary technology
has been significantly overstated. This recent article makes some
interesting points:


I think it's much more true that main memory scaling (in particular,
main memory capacity) has had a huge impact, but that trend appears to
now be stalling.

My original article doesn't talk about SSDs; it's talking about non-volatile memory architectures (quoted extract below). Fusion-io is an example of this, and if NVDIMMs become available, we'll see even faster non-volatile performance.

To me, the most interesting point the article makes is that systems now need much better support for multiple classes of NV storage. I agree with your point that spinning rust is here to stay for a long time, simply because it's cheap as heck. So systems need to become much better at moving data between different layers of NV storage so that you're getting the biggest bang for the buck. That will remain critical as long as SCMs remain 25x more expensive than rust.
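To put rough numbers on why tiering pays off: here's a back-of-envelope sketch using the article's prices ($1.50/GB for SCM, $0.06/GB for disk). The working-set size and the 10% "hot" fraction are made-up assumptions for illustration, not figures from the article.

```python
# Back-of-envelope: all-flash vs. two-tier storage cost.
# Prices are the article's; TOTAL_GB and HOT_FRACTION are assumed.
SCM_PER_GB = 1.50         # $/GB, PCIe flash (from the article)
DISK_PER_GB = 0.06        # $/GB, spinning disk (from the article)
TOTAL_GB = 100_000        # assumed 100 TB data set
HOT_FRACTION = 0.10       # assumed share of data that needs SCM speed

all_flash = TOTAL_GB * SCM_PER_GB
tiered = (TOTAL_GB * HOT_FRACTION * SCM_PER_GB
          + TOTAL_GB * (1 - HOT_FRACTION) * DISK_PER_GB)

print(f"all-flash: ${all_flash:,.0f}")   # $150,000
print(f"tiered:    ${tiered:,.0f}")      # $20,400
print(f"savings:   {1 - tiered / all_flash:.0%}")
```

Even with a generous hot fraction, the tiered system costs a fraction of all-flash, which is why data movement between tiers matters so much.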

Quote from article:

Flash-based storage devices are not new: SAS and SATA SSDs have been available for at least the past decade, and have brought flash memory into computers in the same form factor as spinning disks. SCMs reflect a maturing of these flash devices into a new, first-class I/O device: SCMs move flash off the slow SAS and SATA buses historically used by disks, and onto the significantly faster PCIe bus used by more performance-sensitive devices such as network interfaces and GPUs. Further, emerging SCMs, such as non-volatile DIMMs (NVDIMMs), interface with the CPU as if they were DRAM and offer even higher levels of performance for non-volatile storage.

Today's PCIe-based SCMs represent an astounding three-order-of-magnitude performance change relative to spinning disks (~100K I/O operations per second versus ~100). For computer scientists, it is rare that the performance assumptions that we make about an underlying hardware component change by 1,000x or more. This change is punctuated by the fact that the performance and capacity of non-volatile memories continue to outstrip CPUs in year-on-year performance improvements, closing and potentially even inverting the I/O gap.

The performance of SCMs means that systems must no longer "hide" them via caching and data reduction in order to achieve high throughput. Unfortunately, however, this increased performance comes at a high price: SCMs cost 25x as much as traditional spinning disks ($1.50/GB versus $0.06/GB), with enterprise-class PCIe flash devices costing between three and five thousand dollars each. This means that the cost of the non-volatile storage can easily outweigh that of the CPUs, DRAM, and the rest of the server system that they are installed in. The implication of this shift is significant: non-volatile memory is in the process of replacing the CPU as the economic center of the datacenter.
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)