Re: [PERFORM] Contemplating SSD Hardware RAID

2011-06-21 Thread Anton Rommerskirchen
On Tuesday, 21 June 2011 05:54:26, Dan Harris wrote:
 I'm looking for advice from the I/O gurus who have been in the SSD game
 for a while now.

 I understand that the majority of consumer grade SSD drives lack the
 required capacitor to complete a write on a sudden power loss.  But,
 what about pairing up with a hardware controller with BBU write cache?
 Can the write cache be disabled at the drive and result in a safe setup?

 I'm exploring the combination of an Areca 1880ix-12 controller with 6x
 OCZ Vertex 3 V3LT-25SAT3 2.5 240GB SATA III drives in RAID-10.  Has
 anyone tried this combination?  What nasty surprise am I overlooking here?

 Thanks
 -Dan

Won't work.

Period.

The longer story: the loss of the writes held in the SSD's cache is substantial.

You may well lose the whole system.

I have been testing SSDs since 2006 - an Adtron 2 GB for 1200 Euro at first ...

I can only advise using an enterprise-ready SSD.

Candidates: Intel's new series, SandForce Pro discs.

I tried to submit a request to APC to construct a device similar to a 
buffered drive frame (a capacitor holds up the 5 V until the cache is written 
back), but they have not answered. So no luck in using mainstream SSDs for 
the job.

Loss of the cache - or, for mainstream SandForce drives, loss of the connection - 
will result in loss of changed frames (i.e. 16 MB of data per frame) on the SSD.

If this holds the root of your filesystem - forget the disk.
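For the BBU-controller setup Dan asks about, the drive-level cache at least has to be off, so that the controller's battery-backed cache is the only volatile stage in the write path. A minimal sketch, assuming a Linux host and that the SSD appears as /dev/sda (both placeholders; adjust to your device, and note that some RAID controllers do not pass hdparm commands through):

```shell
# Sketch: turn off the SSD's own volatile write cache so unflushed data
# lives only in the controller's battery-backed cache.
# /dev/sda is a placeholder - substitute your actual device.
DEV="${DEV:-/dev/sda}"
if command -v hdparm >/dev/null 2>&1 && [ -b "$DEV" ]; then
    hdparm -W "$DEV"      # report current write-cache state (1 = enabled)
    hdparm -W0 "$DEV" || echo "hdparm -W0 failed (needs root / passthrough)"
else
    echo "would run: hdparm -W0 $DEV"
fi
```

Whether a drive actually honors -W0 - and keeps honoring it across power cycles - is exactly the kind of thing mainstream consumer drives get wrong, which is the point above.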

BTW: over the past 2 years I have tested 16 discs, for speed only; I sell each 
disc after the test. I got 6 returns for failure within those 2 years - it really 
does happen to the mainstream discs.
 
-- 
Best regards
Anton Rommerskirchen

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] SSD + RAID

2009-11-19 Thread Anton Rommerskirchen
On Thursday, 19 November 2009 13:29:56, Craig Ringer wrote:
 On 19/11/2009 12:22 PM, Scott Carey wrote:
  3:  Have PG wait a half second (configurable) after the checkpoint
  fsync() completes before deleting/ overwriting any WAL segments.  This
  would be a trivial feature to add to a postgres release, I think.

 How does that help? It doesn't provide any guarantee that the data has
 hit main storage - it could lurk in SDD cache for hours.

  4: Yet another solution:  The drives DO adhere to write barriers
  properly. A filesystem that used these in the process of fsync() would be
  fine too. So XFS without LVM or MD (or the newer versions of those that
  don't ignore barriers) would work too.

 *if* the WAL is also on the SSD.

 If the WAL is on a separate drive, the write barriers do you no good,
 because they won't ensure that the data hits the main drive storage
 before the WAL recycling hits the WAL disk storage. The two drives
 operate independently and the write barriers don't interact.

 You'd need some kind of inter-drive write barrier.

 --
 Craig Ringer


Hello !

As I understand this:
SSD performance is great, but caching is the problem.

questions:

1. What about conventional disks with 32/64 MB of cache? How do they handle the 
plug test if their caches are on?

2. What about using a separate power supply for the disks? Is it possible to 
write back the cache after switching the SATA connection to another machine's 
controller?

3. What about making a statement about the lacking enterprise feature (i.e. 
battery/capacitor-backed SSDs) and submitting it to the producers?
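On question 1, the plug test itself needs physical power pulls, but a lying cache often betrays itself in timing first: an honest disk is capped near its flush latency, while a drive that merely acks into volatile cache sustains implausibly many synchronous writes per second. A crude probe, assuming Linux and GNU dd (O_SYNC via oflag=sync):

```shell
# Crude lying-cache probe: time a burst of synchronous 8 kB writes.
# A 7200 rpm disk that truly flushes manages on the order of ~100/s;
# thousands per second suggest the ack comes from a volatile cache.
TESTFILE="./fsync_probe.dat"
COUNT=50
START=$(date +%s)
i=0
while [ "$i" -lt "$COUNT" ]; do
    # one 8 kB write opened with O_SYNC, overwriting in place
    dd if=/dev/zero of="$TESTFILE" bs=8k count=1 oflag=sync conv=notrunc 2>/dev/null
    i=$((i + 1))
done
ELAPSED=$(( $(date +%s) - START ))
echo "$COUNT synchronous 8 kB writes in ${ELAPSED}s"
rm -f "$TESTFILE"
```

Run it on the filesystem that would hold the WAL; the timing only means something against the physical device under test, not against a RAM-backed filesystem.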

I found that one of them (OCZ) seems to act on customer suggestions (see the 
write-speed discussions on the Vertex, for example),

and another (Intel) seems to address serious problems with its disks by 
rewriting and sometimes redesigning its products - if you tell them and the 
market dictates a reaction (see the performance degradation before the 1.11 
firmware).

Perhaps it's time to act and not only to complain about the facts.

(BTW: I found curious bonnie++ results for my Intel 160 GB Postville and my 
Samsung PB22 after using the Samsung for approx. 3 months now ... my 
conclusion: NOT all SSDs are equal ...)

best regards 

anton

-- 

ATRSoft GmbH
Bivetsweg 12
D 41542 Dormagen
Deutschland
Tel .: +49(0)2182 8339951
Mobil: +49(0)172 3490817

Geschäftsführer Anton Rommerskirchen

Köln HRB 44927
STNR 122/5701 - 2030
USTID DE213791450



Re: [PERFORM] strange performance regression between 7.4 and 8.1

2007-03-02 Thread Anton Rommerskirchen
On Thursday, 1 March 2007 21:44, Alex Deucher wrote:
 Hello,

 I have noticed a strange performance regression and I'm at a loss as
 to what's happening.  We have a fairly large database (~16 GB).  The
 original postgres 7.4 was running on a sun v880 with 4 CPUs and 8 GB
 of ram running Solaris on local scsi discs.  The new server is a sun
 Opteron box with 4 cores, 8 GB of ram running postgres 8.1.4 on Linux
 (AMD64) on a 4 Gbps FC SAN volume.  When we created the new database
 it was created from scratch rather than copying over the old one,
 however the table structure is almost identical (UTF8 on the new one
 vs. C on the old). The problem is queries are ~10x slower on the new
 hardware.  I read several places that the SAN might be to blame, but
 testing with bonnie and dd indicates that the SAN is actually almost
 twice as fast as the scsi discs in the old sun server.  I've tried
 adjusting just about every option in the postgres config file, but
 performance remains the same.  Any ideas?



1. Do you use numactl to lock the DB onto one NUMA node?

2. Do you use the BIOS to interleave memory?

3. Does your cache span more than one NUMA node?
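To check where the box stands on these three points, the NUMA layout and the current policy can be read directly. A sketch, assuming Linux (the sysfs path works even without numactl installed):

```shell
# Sketch: inspect NUMA topology before deciding whether to pin the DB.
# Node count is visible in sysfs even when numactl is not installed.
NODES=$(ls -d /sys/devices/system/node/node* 2>/dev/null | wc -l)
echo "NUMA nodes visible: $NODES"
if command -v numactl >/dev/null 2>&1; then
    numactl --hardware    # per-node memory sizes and node distances
    numactl --show        # the policy this shell currently runs under
fi
```

If `numactl --hardware` reports only one node, BIOS node interleaving is probably on and questions 1 and 3 are moot.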

 Thanks,

 Alex

 ---(end of broadcast)---
 TIP 3: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faq

-- 

ATRSoft GmbH
Rosellstrasse 9
D 50354 Hürth
Deutschland
Tel .: +49(0)2233 691324

Geschäftsführer Anton Rommerskirchen

Köln HRB 44927
STNR 224/5701 - 1010



Re: [PERFORM] Tuning

2007-01-26 Thread Anton Rommerskirchen
Hello !

On Friday, 26 January 2007 12:28, John Parnefjord wrote:
 Hi!

 I'm planning to move from mysql to postgresql as I believe the latter
 performs better when it comes to complex queries. The mysql database
 that I'm running is about 150 GB in size, with 300 million rows in the
 largest table. We do quite a lot of statistical analysis on the data
 which means heavy queries that run for days. Now that I've got two new
 servers with 32GB of ram I'm eager to switch to postgresql to improve
 performance. One database is to be an analysis server and the other an
 OLTP server feeding a web site with pages.

 I'm settling for Postgresql 8.1 as it is available as a package in Debian
 Etch AMD64.

 As I'm new to postgresql I've googled to find some tips and found some
 interesting links how configure and tune the database manager. Among
 others I've found the PowerPostgresql pages with a performance checklist
 and annotated guide to postgresql.conf
 [http://www.powerpostgresql.com/]. And of course the postgresql site
 itself is a good way to start. RevSys have a short guide as well
 [http://www.revsys.com/writings/postgresql-performance.html]

 I just wonder if someone on this list has some tips from the real world
 on how to tune postgresql and what is to be avoided. AFAIK the following
 parameters seem important to adjust to start with:

 -work_mem
 -maintenance_work_mem - 50% of the largest table?
 -shared_buffers - max value 5
 -effective_cache_size - max 2/3 of available ram, ie 24GB on the

Do you use an Opteron with a NUMA architecture?

You could end up with pages bouncing between your memory nodes, which slowed 
my server down heavily (Tyan 2895, 2 x Opteron 275 CPUs, 8 GB)...

First, try to use only one NUMA node for your cache.
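A minimal sketch of the one-node advice, assuming numactl and pg_ctl are on the PATH and that node 0 is the target (node number and PGDATA path are placeholders to adapt):

```shell
# Sketch: start the postmaster with CPU and memory pinned to NUMA node 0,
# so shared buffers are allocated from local memory only.
# PGDATA path and node number are assumptions - adjust for your machine.
PGDATA="${PGDATA:-/var/lib/postgresql/data}"
CMD="numactl --cpunodebind=0 --membind=0 pg_ctl -D $PGDATA start"
if command -v numactl >/dev/null 2>&1 && command -v pg_ctl >/dev/null 2>&1; then
    echo "+ $CMD"
    $CMD
else
    echo "would run: $CMD"
fi
```

The trade-off: the server only uses one node's CPUs and RAM, but every shared-buffer access is local instead of sometimes crossing the inter-node link.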

 hardware described above
 -shmmax - how large dare I set this value on dedicated postgres servers?
 -checkpoint_segments - this is crucial as one of the server is
 transaction heavy
 -vacuum_cost_delay

 Of course some values can only be estimated after database has been feed
 data and queries have been run in a production like manner.

 Cheers
 // John

 Ps. I sent to list before but the messages where withheld as I'm not a
 member of any of the restrict_post groups. This is perhaps due to the
 fact that we have changed email address a few weeks ago and there was a
 mismatch between addresses. So I apologize if any similar messages show
 up from me, just ignore them.

 ---(end of broadcast)---
 TIP 2: Don't 'kill -9' the postmaster
