On our production server, I can insert 5000 tuples in 2100 ms.

Single Xeon 2.6 GHz
2 GB RAM
3ware RAID 5 SATA array, only 3 drives :-((
PG 8.0 - fsync off

I do think that inserting 5000 tuples in a second (i.e. 5000 insert transactions, no bulk load) can be reached with a well-configured SCSI RAID 10 array.

Anyway, there was a MISTAKE in my former description of the project (sorry for this):

        -  we need 5000 inserts per MINUTE

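With the corrected per-minute figure, the arithmetic is far less demanding than the per-second rate discussed above; a quick back-of-the-envelope check:

```python
# Corrected requirement: 5000 inserts per MINUTE, not per second.
per_second = 5000 / 60          # sustained rate actually required
budget_ms = 1000 / per_second   # wall-clock budget per insert

print(f"{per_second:.1f} inserts/s, {budget_ms:.0f} ms per insert")
# → 83.3 inserts/s, 12 ms per insert
```

At roughly 12 ms per insert, even commodity hardware has headroom, which reframes the questions below.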
My questions remain:

        Is pgcluster worth a try, and can it be trusted in a production environment?
        Will it be possible to get a sort of real-time application?


Thanks for all your comments.
Benjamin.

Rod Taylor <[EMAIL PROTECTED]>
Sent by: [EMAIL PROTECTED]

06/04/2005 18:40

        To:        "Mohan, Ross" <[EMAIL PROTECTED]>
        cc:        pgsql-performance@postgresql.org
        Subject:        Re: RE: RE: [PERFORM] Postgresql vs SQLserver for this



On Wed, 2005-04-06 at 16:12 +0000, Mohan, Ross wrote:
> I wish I had a Dell system and a run case to show you, Alex, but I don't...
> however...using Oracle's "direct path" feature, it's pretty straightforward.
>
> We've done 110,000 rows per second into index-less tables on a big system
> (IBM Power5 chips, Hitachi SAN). ( Yes, I am sure: over 100K a second. Sustained
> for almost 9 minutes. )

Just for kicks I did a local test on a desktop machine (single CPU,
single IDE drive) using COPY FROM STDIN for a set of integers, sent in
via a single transaction, no indexes.

1572864 tuples were loaded in 13715.613ms, which is approx 115k rows per
second.
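For anyone wanting to reproduce a test like this, here is a minimal sketch of the newline-delimited input COPY FROM STDIN consumes. The psycopg2 call in the comment is hypothetical (the DSN and table name are placeholders); only the stream-building part is exercised here.

```python
import io

def integer_stream(n):
    """Build an in-memory, newline-delimited stream of n integers,
    the format COPY FROM STDIN expects for a single int column."""
    buf = io.StringIO()
    for i in range(n):
        buf.write(f"{i}\n")
    buf.seek(0)
    return buf

# Against a live server this would be roughly (hypothetical DSN/table):
#   import psycopg2
#   conn = psycopg2.connect("dbname=test")
#   cur = conn.cursor()
#   cur.copy_from(integer_stream(1572864), "loadtest", columns=("i",))
#   conn.commit()

print(integer_stream(3).read())   # three integers, one per line
```

The single transaction matters: one COPY in one commit avoids per-row transaction overhead, which is what makes six-figure row rates possible.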

Okay, no checkpoints and I didn't cross an index boundary, but I also
haven't tuned the config file beyond bumping up the buffers.

Let's try again with more data this time.

31 million tuples were loaded in approx 279 seconds, or approx 111k rows
per second.
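Redoing the arithmetic on the two runs (the second works out to just over 111k rows/s):

```python
# Rows-per-second implied by the two runs quoted above.
first_run = 1_572_864 / (13715.613 / 1000)   # tuples / elapsed seconds
second_run = 31_000_000 / 279

print(f"run 1: {first_run:,.0f} rows/s")   # ≈ 114,677
print(f"run 2: {second_run:,.0f} rows/s")  # ≈ 111,111
```

Both figures are in the same range as the 110,000 rows/s Oracle number quoted earlier, on far more modest hardware.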

> I'd love to see PG get into this range... I am a big fan of PG (just a
> rank newbie) but I gotta think the underlying code to do this has
> to be not-too-complex.....

I'd say we're there.

> -----Original Message-----
> From: Alex Turner [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, April 06, 2005 11:38 AM
> To: [EMAIL PROTECTED]
> Cc: pgsql-performance@postgresql.org; Mohan, Ross
> Subject: Re: RE : RE: [PERFORM] Postgresql vs SQLserver for this application ?
>
>
> I think everyone was scared off by the 5000 inserts per second number.
>
> I've never seen even Oracle do this on a top-end Dell system with copious SCSI-attached storage.
>
> Alex Turner
> netEconomist
>
> On Apr 6, 2005 3:17 AM, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> >  
> > Unfortunately.
> >  
> > But we are in the process of choosing PostgreSQL with pgcluster. I'm
> > currently running some tests (performance, stability...). Save the
> > money on the license fees, and you can spend it on your hardware ;-)
> >  
> > I still welcome any advice or comments, and I'll let you know how the
> > project is going.
> >  
> > Benjamin.
> >  
> >  
> >  
> >  "Mohan, Ross" <[EMAIL PROTECTED]>
> >
> > 05/04/2005 20:48
> >          
> >         To:        <[EMAIL PROTECTED]>
> >         cc:        
> >         Subject:        RE: [PERFORM] Postgresql vs SQLserver for this
> > application ?
> >  
> >  
> > You never got answers on this? Apologies, I don't have one, but I'd be
> > curious to hear about any you did get....
> >  
> > thx
> >  
> > Ross
> >
> > -----Original Message-----
> >  From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] On Behalf
> > Of [EMAIL PROTECTED]
> >  Sent: Monday, April 04, 2005 4:02 AM
> >  To: pgsql-performance@postgresql.org
> >  Subject: [PERFORM] Postgresql vs SQLserver for this application ?
> >  
> >
> >  hi all.
> >  
> >  We are designing a quite big application that requires a
> > high-performance database backend.  The rates we need to obtain are at
> > least  5000 inserts per second and 15 selects per second for one
> > connection. There should only be 3 or 4 simultaneous connections.
> >  I think our main concern is to deal with the constant flow of data coming
> > from the inserts, which must be available for selection as fast as possible
> > (a kind of real-time access...).
> >  
> >  As a consequence, the database should rapidly grow to more than
> > one hundred gigs. We still have to determine how and when we
> > should back up old data to prevent a performance drop in the
> > application. We intend to develop some kind of real-time partitioning
> > on our main table to keep up with the flow.
> >  
> >  At first, we were planning to use SQL Server as it has features that
> > in my opinion could help us a lot:
> >         - replication
> >         - clustering
> >  
> >  Recently we started to study PostgreSQL as a solution for our project:
> >         - it also has replication
> >         - Postgis module can handle geographic datatypes (which would
> > facilitate our developments)
> >         - we have strong knowledge of PostgreSQL administration
> > (we use it for production processes)
> >         - it is free (!) and we could save money for hardware
> > purchase.
> >  
> >  Is SQL Server clustering a real asset? How reliable are PostgreSQL
> > replication tools? Should I trust PostgreSQL's performance for this
> > kind of need?
> >  
> >  My question is a bit fuzzy, but any advice is most welcome...
> > hardware, tuning, or design tips as well :))
> >  
> >  Thanks a lot.
> >  
> >  Benjamin.
> >  
> >  
> >
>
> ---------------------------(end of broadcast)---------------------------
> TIP 2: you can get off all lists at once with the unregister command
>     (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
>