Stefan de Konink wrote:
> Iván Sánchez Ortega wrote:
>> On Friday, 13 March 2009, Stefan de Konink wrote:
>>> [...] Therefore your seek times will only decrease if you can search
>>> on the individual disk, not as a combined pair.
>>
>> I actually wonder what the DB performance could be with some of those
>> new shiny SSD drives...
>> (And how expensive it would be to outfit the DB server with a set of them)
Matt and I looked at SSD (mostly just for fun...). It was somewhere around
£55k to _fully_ kit out the server using stock SSD. I also looked at SSD
for just the DB indexes. But, as Stefan details below, the internal block
fragmentation is a serious issue which needs to be fixed first. I am also
still very sceptical about SSD MTBF under DB server load levels: writing
1 bit costs a full SSD block write.

> That is something that was benchmarked by some people over here a few
> weeks ago. The main problem with even the most expensive SSD disks right
> now, one that some companies very much want to hide, is the performance
> hit you get once block shuffling takes place.
>
> Block shuffling is basically a method to prevent the system from killing
> a specific piece of flash by rewriting it again and again. On top of
> this, a rewrite will cause a block far larger than your wildest dreams
> to be rewritten after one bit changes.
>
> Now the first time you run bonnie++ you will see average to great
> performance, depending on what you expect of seek times. (Some people
> get lucky with NetApp or Sun Storage series.) But if you run the same
> test on the same disk after, let's say, about one month of usage, the
> performance has significantly decreased.
>
> ...that is odd, right?
>
> I hope this issue will be fixed within a few iterations. For now, my
> advice, and the advice of some commercial users in The Netherlands, is:
> ditch SSD for now; we will see if it works later.
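
To put a number on the "1 bit = full block write" point above, here is a
minimal back-of-the-envelope sketch in Python. The 512 KiB erase-block
size is an assumed, illustrative figure, not a measurement of any
particular drive:

  # Worst-case write amplification: any change, however small, forces a
  # full erase-block rewrite (typical of early SSDs without TRIM support).
  ERASE_BLOCK_BYTES = 512 * 1024   # assumed NAND erase-block size (512 KiB)
  LOGICAL_WRITE_BYTES = 1 / 8      # the "1 bit" change from the post

  def write_amplification(logical_bytes):
      """Physical bytes rewritten per logical byte changed."""
      return ERASE_BLOCK_BYTES / logical_bytes

  print(write_amplification(LOGICAL_WRITE_BYTES))  # ~4.2 million x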
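And a toy sketch of the wear levelling ("block shuffling") Stefan
describes: the controller remaps a hot logical block onto the least-worn
physical block on every rewrite, so no single cell dies, at the cost of
leaving stale copies behind for garbage collection, which is plausibly
what bites after a month of real use. All names and sizes here are
hypothetical, purely for illustration:

  class ToyWearLeveller:
      def __init__(self, physical_blocks):
          self.erase_counts = [0] * physical_blocks
          self.mapping = {}  # logical block -> physical block

      def write(self, logical_block):
          # Remap onto the least-worn physical block; the old copy
          # becomes garbage to be collected later.
          target = min(range(len(self.erase_counts)),
                       key=lambda b: self.erase_counts[b])
          self.erase_counts[target] += 1
          self.mapping[logical_block] = target

  wl = ToyWearLeveller(physical_blocks=8)
  for _ in range(80):           # hammer one logical block repeatedly
      wl.write(logical_block=0)
  print(wl.erase_counts)        # wear spreads evenly: [10, 10, ..., 10]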

