On Fri, Mar 13, 2009 at 02:21:18AM +0100, Stefan de Konink wrote:
> Now it is nice you put 32GB (extra expensive) memory in there, but most
> likely your hot performance would be far better with more (cheap) memory
> than more disks. At the time I wrote my paper on OSM Dec2008, there was
> about 72GB of CSV data.
The EC
On Fri, Mar 13, 2009 at 8:34 AM, Florian Lohoff wrote:
> So it is sensible to make it mirroring and it might even be a benefit to
> do a 1 -> N mirroring.
>
> Rule of thumb:
>
> More concurrent readers -> More spindles
This is why we have ROMA/TRAPI/etc... they're able to satisfy the most
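A back-of-envelope sketch of the "more concurrent readers -> more spindles" rule of thumb above (my numbers, not from the thread: ~175 random IOPS per 15k SAS spindle is a common rule-of-thumb figure, and the sketch assumes an N-way mirror can serve independent reads from each copy):

```python
# Idealised aggregate random-read throughput across mirrored spindles.
# Assumes reads are spread evenly and each copy serves reads independently.

def aggregate_read_iops(spindles: int, iops_per_spindle: int = 175) -> int:
    """Best-case aggregate random-read IOPS for N spindles."""
    return spindles * iops_per_spindle

# A single disk vs. a 2-way and a 4-way mirror of the same data:
for n in (1, 2, 4):
    print(n, aggregate_read_iops(n))  # 1->175, 2->350, 4->700
```

In practice, as Stefan notes below, the controller only sees blocks, so real read-balancing falls short of this ideal.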
On Fri, Mar 13, 2009 at 03:41:16AM +0100, Stefan de Konink wrote:
> Mirroring will not increase performance because your RAID card will not
> a priori know what files you are interested in, only the blocks you are
> interested in and in the worst case will grab the same data from the
> same disk
On Thu, 12 Mar 2009 16:00:19, Grant Slater wrote:
> Dear all
>
> The API downtime scheduled for the 0.6 API transition has been postponed
> due to delays acquiring the new database server.
>
> The re-scheduled API downtime for the 0.6 API upgrade is now the weekend
> of the 17-20th April 2009.
Grant Slater wrote:
> But as detailed below by Stefan, the internal block fragmentation is a
> serious issue, which needs to be fixed first.
> I am also still very sceptical about SSD MTBF on DB server load levels.
> Write 1 bit = Full SSD block write.
Big community site in NL reported less than
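Grant's "write 1 bit = full SSD block write" point can be sketched numerically (a worst-case illustration under assumptions of mine: a 512 KiB erase block and a naive controller that rewrites every touched block in full; real drives and flash translation layers vary widely):

```python
# Worst-case SSD write amplification: any write, however small, forces a
# read-modify-write of each whole flash block it touches.

BLOCK = 512 * 1024  # assumed erase-block size in bytes (illustrative)

def physical_bytes_written(logical_bytes: int, block: int = BLOCK) -> int:
    """Physical bytes written if every touched block is rewritten in full."""
    blocks_touched = -(-logical_bytes // block)  # ceiling division
    return blocks_touched * block

print(physical_bytes_written(1))          # 1 logical byte -> 524288 physical
print(physical_bytes_written(BLOCK + 1))  # just over one block -> 1048576
```

Under random small DB writes, that amplification is exactly what drives the MTBF scepticism above.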
Stefan de Konink wrote:
> Iván Sánchez Ortega wrote:
>> On Friday, 13 March 2009, Stefan de Konink wrote:
>>> [...] Therefore your seek times will only decrease if you can search
>>> on the individual disk not as a combined pair.
>>
>> I actually wonder what the DB performance could be with some of those
>> new shiny SSD drives...
Grant Slater wrote:
> Stefan de Konink wrote:
>> Stefan de Konink wrote:
>>> Wow... (serious wow) I have never seen the database THAT expanded
>>> unless I was using an XML database.
>>
>> And now I think of it; that is probably because *I* wasn't able to
>> download the history tables. That makes sense; but does it make sense
>> to have the history tables a
Iván Sánchez Ortega wrote:
> On Friday, 13 March 2009, Stefan de Konink wrote:
>> [...] Therefore your seek times will only decrease if you can search on the
>> individual disk not as a combined pair.
>
> I actually wonder what the DB performance could be with some of those new
> shiny SSD drives...
Stefan de Konink wrote:
> Stefan de Konink wrote:
>> Wow... (serious wow) I have never seen the database THAT expanded
>> unless I was using an XML database.
>
> And now I think of it; that is probably because *I* wasn't able to
> download the history tables. That makes sense; but does it make sense
> to have the history tables a
On Friday, 13 March 2009, Stefan de Konink wrote:
> [...] Therefore your seek times will only decrease if you can search on the
> individual disk not as a combined pair.
I actually wonder what the DB performance could be with some of those new
shiny SSD drives...
(And how expensive w
Stefan de Konink wrote:
> Wow... (serious wow) I have never seen the database THAT expanded unless
> I was using an XML database.
And now I think of it; that is probably because *I* wasn't able to
download the history tables. That makes sense; but does it make sense to
have the history tables a
Matt Amos wrote:
>> At the time I wrote my paper on OSM Dec2008, there was
>> about 72GB of CSV data. Thus with let's say 128GB you will have your
>> entire database *IN MEMORY* no fast disks required.
>
> in 8Gb kits? that would be *extra* expensive (about £8,680 according
> to froogle).
Some peo
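Matt's £8,680 figure checks out as simple arithmetic (the per-kit and per-GB costs below are derived from the thread's numbers, not quoted prices):

```python
# 128 GB of RAM built from 8 GB kits, at the froogle total quoted above.

total_gb, kit_gb, total_gbp = 128, 8, 8680

kits = total_gb // kit_gb
print(kits)                  # 16 kits needed
print(total_gbp / kits)      # ~542.50 GBP per 8 GB kit
print(total_gbp / total_gb)  # ~67.81 GBP per GB
```

So "entire database in memory" costs roughly £68/GB at 2009 prices for high-density ECC kits, versus far less per GB for the smaller-density modules Stefan had in mind.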
Grant Slater wrote:
> Large imports in the pipeline.
Partitioning is a scalable solution to that, not buying new hardware.
>> Now it is nice you put 32GB (extra expensive) memory in there, but
>> most likely your hot performance would be far better with more (cheap)
>> memory than more disks. At the time I wrote my paper on OSM Dec2008,
>> there was about 72GB of CSV data.
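The partitioning idea above can be sketched minimally (the routing scheme, span, and ids below are made up for illustration; this is not OSM's actual schema): a large import then only grows and touches the partitions its ids fall into, instead of bloating one monolithic table.

```python
# Toy range partitioning: route each row to a partition by id range.

from collections import defaultdict

PARTITION_SPAN = 1_000_000  # assumed ids per partition (illustrative)

def partition_for(row_id: int) -> int:
    """Return the index of the partition holding this id."""
    return row_id // PARTITION_SPAN

partitions = defaultdict(list)
for row_id in (42, 999_999, 1_000_000, 2_500_000):
    partitions[partition_for(row_id)].append(row_id)

print(dict(partitions))  # {0: [42, 999999], 1: [1000000], 2: [2500000]}
```

The same range-routing is what a database's native table partitioning does for you, keeping hot partitions small enough to cache.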
On Fri, Mar 13, 2009 at 1:21 AM, Stefan de Konink wrote:
> Grant Slater wrote:
>> Summary:
>> 2x Intel Xeon Processor E5420 Quad Core
>> 32GB ECC (max 128GB)
>> 2x 73GB SAS 15k
>> 10x 450GB SAS 15k (expensive, but stupidly low latency)
>> IPMI + KVM
>
> Maybe a stupid question; but is your database server able to exploit
> the above configuration? Especially related to your processor choice.
Stefan de Konink wrote:
>
> Maybe a stupid question; but is your database server able to exploit
> the above configuration? Especially related to your processor choice.
Yes, the disks are _currently_ over-specced, but not for 6 months' time.
Replacing the hardware for the central database server
Grant Slater wrote:
> Summary:
> 2x Intel Xeon Processor E5420 Quad Core
> 32GB ECC (max 128GB)
> 2x 73GB SAS 15k
> 10x 450GB SAS 15k (expensive, but stupidly low latency)
> IPMI + KVM
Maybe a stupid question; but is your database server able to exploit the
above configuration? Especially related to your processor choice.
Claudomiro Nascimento Junior wrote:
> Can you bring joy to our hearts describing the winning specs?
>
Full spec here:
http://wiki.openstreetmap.org/wiki/Servers/smaug
Summary:
2x Intel Xeon Processor E5420 Quad Core
32GB ECC (max 128GB)
2x 73GB SAS 15k
10x 450GB SAS 15k (expensive, but stupidly low latency)
IPMI + KVM
Stefan de Konink wrote:
> Grant Slater wrote:
>> The API downtime scheduled for the 0.6 API transition has been
>> postponed due to delays acquiring the new database server.
> So it is impossible to buy a machine for 15k? Only one response: wow!
Took a while to get all the quotes in and then asked
Grant Slater wrote:
> The API downtime scheduled for the 0.6 API transition has been postponed
> due to delays acquiring the new database server.
So it is impossible to buy a machine for 15k? Only one response: wow!
Stefan
Dear all
The API downtime scheduled for the 0.6 API transition has been postponed
due to delays acquiring the new database server.
The re-scheduled API downtime for the 0.6 API upgrade is now the weekend
of the 17-20th April 2009.
Original announcement...
http://lists.openstreetmap.org/piperma