Re: [PERFORM] Equivalents in PostgreSQL of MySQL's ENGINE=MEMORY MAX_ROWS=1000

2007-04-18 Thread Hannes Dorbath

Arnau wrote:

Hi Thor,

Thor-Michael Støre wrote:

On 2007-04-04 Arnau wrote:

Josh Berkus wrote:

Arnau,


Is there anything similar in PostgreSQL? The idea behind this
is how I can do in PostgreSQL to have tables where I can query
on them very often something like every few seconds and get
results very fast without overloading the postmaster.

If you're only querying the tables every few seconds, then you
don't really need to worry about performance.



Well, the idea behind this is to have events tables, and a
monitoring system polls that table every few seconds. I'd like to
have a kind of FIFO queue: from the producer's point of view, rows
are pushed into the table, and once it is full the oldest row is
removed to make room for the newest one. From the consumer's point
of view, the entire contents of the table are read.

So I'll not only be querying the tables, I'll also need to modify
them.


Please try to refrain from doing this. This is the Database as an
IPC antipattern (antipatterns are commonly reinvented bad solutions
to problems, i.e. you can be sure someone has tried this very thing
before and found it to be a bad solution).

http://en.wikipedia.org/wiki/Database_as_an_IPC

The best solution is (as Ansgar hinted) to use a real IPC system.

Of course, I've done it myself (though not on PostgreSQL) when
working at a large corporation where corporate politics prevented me
from introducing any new interdependency between systems (such as
having two start talking with each other when they previously
didn't); the only common ground for the systems that needed to
communicate was a database, and one of the systems could only run
simple SQL statements, not stored procedures.



  First of all, thanks for your interest, but let me explain what I
need to do.


  We have a web application where customers want to monitor how it's
performing, not in terms of speed but in terms of how many customers
are currently browsing the application, how many have paid browsing
sessions, how many payments have been made, ... More or less it is a
control panel. The difference is that the information displayed in the
web browser must be real-time, that is, a query every 1-10 seconds.



Though that has been suggested earlier: why not use pgmemcache and
push each event as a new key? As memcached is FIFO by design, that is
exactly what you ask for. Besides that, memcached is so fast that your
OS will be busier handling all those TCP connections than running
memcached.


And in case you'd like to display statistical data rather than
tailing events, let PG push that to memcached keys as well. Think of
memcached as a materialized view in that case.
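
As a concrete illustration of the pgmemcache approach, here is a
minimal sketch. The table, trigger, and key names are hypothetical,
and it assumes pgmemcache is installed and a memcached server has
already been registered with memcache_server_add():

```sql
-- Hypothetical events table; all names are illustrative.
CREATE TABLE events (
    id      serial PRIMARY KEY,
    payload text NOT NULL,
    created timestamptz NOT NULL DEFAULT now()
);

-- Push each new event into memcached as it is inserted.
CREATE OR REPLACE FUNCTION push_event_to_memcache() RETURNS trigger AS $$
BEGIN
    PERFORM memcache_set('event:' || NEW.id, NEW.payload);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_to_memcache
    AFTER INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE push_event_to_memcache();
```

The consumer then reads recent keys straight from memcached and never
touches the postmaster at all.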


As middleware I'd recommend lighttpd with mod_magnet.

You should be able to deliver that admin page well over 5000 times
per second on outdated desktop hardware. If that's not enough, read
up on things like
http://blog.lighttpd.net/articles/2006/11/27/comet-meets-mod_mailbox



--
Best regards,
Hannes Dorbath

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PERFORM] Equivalents in PostgreSQL of MySQL's ENGINE=MEMORY MAX_ROWS=1000

2007-04-12 Thread Robert Treat
On Wednesday 04 April 2007 07:51, Arnau wrote:
 Hi Ansgar ,

  On 2007-04-04 Arnau wrote:
  Josh Berkus wrote:
  Is there anything similar in PostgreSQL? The idea behind this is how
  I can do in PostgreSQL to have tables where I can query on them very
  often something like every few seconds and get results very fast
  without overloading the postmaster.
 
  If you're only querying the tables every few seconds, then you don't
  really need to worry about performance.
 
  Well, the idea behind this is to have events tables, and a monitoring
  system polls that table every few seconds. I'd like to have a kind of
  FIFO queue: from the producer's point of view, rows are pushed into
  the table, and once it is full the oldest row is removed to make room
  for the newest one. From the consumer's point of view, the entire
  contents of the table are read.
 
  So I'll not only be querying the tables, I'll also need to modify them.
 
  Ummm... this may be a dumb question, but why are you trying to implement
  something like a FIFO with an RDBMS in the first place? Wouldn't it be
  much easier to implement something like that as a separate program or
  script?

 Well, the idea is to have a table with a maximum number of rows. As the
 number of queries against this table will be very high, I'd like to
 keep it as small as possible, without indexes and the like that could
 make updates slower.

 Maybe it's time to change my question: is there any trick to get a
 table that can be modified/queried very fast and with minimal overhead?
 This table will receive several queries every second and I'd like this
 to be as fast as possible.


If you're wedded to the FIFO idea, I'd suggest reading this:
http://people.planetpostgresql.org/greg/index.php?/archives/89-Implementing-a-queue-in-SQL-Postgres-version.html
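
If you do keep the FIFO in the database, a minimal sketch of a capped
table looks like this. The names and the 1000-row cap are illustrative,
and the id-based trim assumes a gap-free sequence, so treat it as a
starting point rather than a finished design:

```sql
CREATE TABLE event_fifo (
    id      bigserial PRIMARY KEY,
    payload text NOT NULL
);

-- After each insert, drop everything older than the newest 1000 rows.
CREATE OR REPLACE FUNCTION trim_event_fifo() RETURNS trigger AS $$
BEGIN
    DELETE FROM event_fifo WHERE id <= NEW.id - 1000;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER event_fifo_trim
    AFTER INSERT ON event_fifo
    FOR EACH ROW EXECUTE PROCEDURE trim_event_fifo();
```

A table with this much churn generates dead tuples quickly, so
aggressive vacuuming matters as much as the schema itself.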

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL



Re: [PERFORM] Equivalents in PostgreSQL of MySQL's ENGINE=MEMORY MAX_ROWS=1000

2007-04-05 Thread Josh Berkus
Dimitri,

 Probably another helpful solution may be to implement:

ALTER TABLE LOGGING OFF/ON;

 just to disable/enable WAL?

Actually, a patch similar to this is currently in the queue for 8.3.  

-- 
Josh Berkus
PostgreSQL @ Sun
San Francisco
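
For readers on current versions: this idea later materialized as
unlogged tables (PostgreSQL 9.1), which skip WAL at the cost of the
table being truncated after a crash; the table name below is
illustrative:

```sql
-- PostgreSQL 9.1+: no WAL is written for this table, so writes are
-- fast; its contents are truncated after a crash or unclean shutdown.
CREATE UNLOGGED TABLE events_fast (
    id      serial PRIMARY KEY,
    payload text
);
```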



Re: [PERFORM] Equivalents in PostgreSQL of MySQL's ENGINE=MEMORY MAX_ROWS=1000

2007-04-05 Thread Dimitri

Wow, it's excellent!  :))

Probably the next step is:

  ALTER TABLE CACHE ON/OFF;

just to force any table to be kept in the cache. What do you think?...

Rgds,
-Dimitri
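
Something close to Dimitri's suggestion later arrived as the
pg_prewarm extension (PostgreSQL 9.4), which loads a relation into the
buffer cache on demand rather than pinning it there; the table name is
illustrative:

```sql
-- PostgreSQL 9.4+: load a table's blocks into shared_buffers.
CREATE EXTENSION pg_prewarm;
SELECT pg_prewarm('events');  -- returns the number of blocks read
```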

On 4/5/07, Josh Berkus josh@agliodbs.com wrote:

Dimitri,

 Probably another helpful solution may be to implement:

ALTER TABLE LOGGING OFF/ON;

 just to disable/enable WAL?

Actually, a patch similar to this is currently in the queue for 8.3.

--
Josh Berkus
PostgreSQL @ Sun
San Francisco






Re: [PERFORM] Equivalents in PostgreSQL of MySQL's ENGINE=MEMORY MAX_ROWS=1000

2007-04-04 Thread Arnau

Hi Josh,

Josh Berkus wrote:

Arnau,


Is there anything similar in PostgreSQL? The idea behind this is how I
can do in PostgreSQL to have tables where I can query on them very often
something like every few seconds and get results very fast without
overloading the postmaster.


If you're only querying the tables every few seconds, then you don't 
really need to worry about performance.


Well, the idea behind this is to have events tables, and a monitoring
system polls that table every few seconds. I'd like to have a kind of
FIFO queue: from the producer's point of view, rows are pushed into
the table, and once it is full the oldest row is removed to make room
for the newest one. From the consumer's point of view, the entire
contents of the table are read.

So I'll not only be querying the tables, I'll also need to modify them.


--
Arnau



Re: [PERFORM] Equivalents in PostgreSQL of MySQL's ENGINE=MEMORY MAX_ROWS=1000

2007-04-04 Thread Ansgar -59cobalt- Wiechers
On 2007-04-04 Arnau wrote:
 Josh Berkus wrote:
 Is there anything similar in PostgreSQL? The idea behind this is how
 I can do in PostgreSQL to have tables where I can query on them very
 often something like every few seconds and get results very fast
 without overloading the postmaster.
 
 If you're only querying the tables every few seconds, then you don't
 really need to worry about performance.
 
 Well, the idea behind this is to have events tables, and a monitoring
 system polls that table every few seconds. I'd like to have a kind of
 FIFO queue: from the producer's point of view, rows are pushed into
 the table, and once it is full the oldest row is removed to make room
 for the newest one. From the consumer's point of view, the entire
 contents of the table are read.
 
 So I'll not only be querying the tables, I'll also need to modify them.

Ummm... this may be a dumb question, but why are you trying to implement
something like a FIFO with an RDBMS in the first place? Wouldn't it be
much easier to implement something like that as a separate program or
script?

Regards
Ansgar Wiechers
-- 
The Mac OS X kernel should never panic because, when it does, it
seriously inconveniences the user.
-- http://developer.apple.com/technotes/tn2004/tn2118.html



Re: [PERFORM] Equivalents in PostgreSQL of MySQL's ENGINE=MEMORY MAX_ROWS=1000

2007-04-04 Thread Arnau

Hi Ansgar ,


On 2007-04-04 Arnau wrote:

Josh Berkus wrote:

Is there anything similar in PostgreSQL? The idea behind this is how
I can do in PostgreSQL to have tables where I can query on them very
often something like every few seconds and get results very fast
without overloading the postmaster.

If you're only querying the tables every few seconds, then you don't
really need to worry about performance.

Well, the idea behind this is to have events tables, and a monitoring
system polls that table every few seconds. I'd like to have a kind of
FIFO queue: from the producer's point of view, rows are pushed into
the table, and once it is full the oldest row is removed to make room
for the newest one. From the consumer's point of view, the entire
contents of the table are read.

So I'll not only be querying the tables, I'll also need to modify them.


Ummm... this may be a dumb question, but why are you trying to implement
something like a FIFO with an RDBMS in the first place? Wouldn't it be
much easier to implement something like that as a separate program or
script?


Well, the idea is to have a table with a maximum number of rows. As the
number of queries against this table will be very high, I'd like to
keep it as small as possible, without indexes and the like that could
make updates slower.


Maybe it's time to change my question: is there any trick to get a
table that can be modified/queried very fast and with minimal overhead?
This table will receive several queries every second and I'd like this
to be as fast as possible.


Thanks
--
Arnau



Re: [PERFORM] Equivalents in PostgreSQL of MySQL's ENGINE=MEMORY MAX_ROWS=1000

2007-04-04 Thread Arnau

Hi Thor,

Thor-Michael Støre wrote:

On 2007-04-04 Arnau wrote:

Josh Berkus wrote:

Arnau,


Is there anything similar in PostgreSQL? The idea behind this
is how I can do in PostgreSQL to have tables where I can query
on them very often something like every few seconds and get
results very fast without overloading the postmaster.

If you're only querying the tables every few seconds, then you
don't really need to worry about performance.



Well, the idea behind this is to have events tables, and a
monitoring system polls that table every few seconds. I'd like to
have a kind of FIFO queue: from the producer's point of view, rows
are pushed into the table, and once it is full the oldest row is
removed to make room for the newest one. From the consumer's point
of view, the entire contents of the table are read.

So I'll not only be querying the tables, I'll also need to modify
them.


Please try to refrain from doing this. This is the Database as an
IPC antipattern (antipatterns are commonly reinvented bad solutions
to problems, i.e. you can be sure someone has tried this very thing
before and found it to be a bad solution).

http://en.wikipedia.org/wiki/Database_as_an_IPC

The best solution is (as Ansgar hinted) to use a real IPC system.

Of course, I've done it myself (though not on PostgreSQL) when
working at a large corporation where corporate politics prevented me
from introducing any new interdependency between systems (such as
having two start talking with each other when they previously
didn't); the only common ground for the systems that needed to
communicate was a database, and one of the systems could only run
simple SQL statements, not stored procedures.



  First of all, thanks for your interest, but let me explain what I
need to do.


  We have a web application where customers want to monitor how it's
performing, not in terms of speed but in terms of how many customers
are currently browsing the application, how many have paid browsing
sessions, how many payments have been made, ... More or less it is a
control panel. The difference is that the information displayed in the
web browser must be real-time, that is, a query every 1-10 seconds.


  Then, I haven't read the article yet but I will; how would you do
what I need to do?


Thanks
--
Arnau



Re: [PERFORM] Equivalents in PostgreSQL of MySQL's ENGINE=MEMORY MAX_ROWS=1000

2007-04-03 Thread Josh Berkus

Arnau,


Is there anything similar in PostgreSQL? The idea behind this is how I
can do in PostgreSQL to have tables where I can query on them very often
something like every few seconds and get results very fast without
overloading the postmaster.


If you're only querying the tables every few seconds, then you don't 
really need to worry about performance.


--Josh Berkus




Re: [PERFORM] Equivalents in PostgreSQL of MySQL's ENGINE=MEMORY MAX_ROWS=1000

2007-04-03 Thread Tom Lane
Arnau [EMAIL PROTECTED] writes:
 MySQL manual says:
 The MEMORY storage engine creates tables with contents that are stored
 in memory. As indicated by the name, MEMORY tables are stored in memory.

 Is there anything similar in PostgreSQL?

As long as you have shared_buffers large enough (or temp_buffers if
you're dealing with temp tables), everything will stay in memory anyway.
Don't sweat it.

regards, tom lane
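
Tom's advice can be sketched as follows; the setting value and table
name are illustrative:

```sql
-- Per-session buffer space for temporary tables; must be set before
-- the first temp-table access in the session.
SET temp_buffers = '64MB';

CREATE TEMP TABLE session_events (
    id      serial,
    payload text
);
-- While the working set fits in temp_buffers (or shared_buffers for
-- ordinary tables), reads are served from memory.
```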



Re: [PERFORM] Equivalents in PostgreSQL of MySQL's ENGINE=MEMORY MAX_ROWS=1000

2007-04-03 Thread A.M.
Indeed... I looked through the official TODO list and was unable to
find an entry for global temporary tables; such a thing would be
ideal for transient data such as web sessions or materialized views.
Is there any reason why global temp tables shouldn't be implemented?
(And no, I'm not simply referring to in-memory tables; those can be
handled with a RAM disk.)


-M



Re: [PERFORM] Equivalents in PostgreSQL of MySQL's ENGINE=MEMORY MAX_ROWS=1000

2007-04-03 Thread C. Bergström

A.M. wrote:
Indeed... I looked through the official TODO list and was unable to
find an entry for global temporary tables; such a thing would be
ideal for transient data such as web sessions or materialized views.
Is there any reason why global temp tables shouldn't be implemented?
(And no, I'm not simply referring to in-memory tables; those can be
handled with a RAM disk.)
Not exactly what you're looking for, and it's a simple API, but the
performance is very nice and it has a lot of potential.


http://pgfoundry.org/projects/pgmemcache/

Implementing a cleaner, more transparent SQL wrapper would be even nicer.

http://tangent.org/index.pl?lastnode_id=478&node_id=506

Just sharing/tossing some ideas around...

C.



Re: [PERFORM] Equivalents in PostgreSQL of MySQL's ENGINE=MEMORY MAX_ROWS=1000

2007-04-03 Thread A.M.


On Apr 3, 2007, at 15:39 , C. Bergström wrote:


A.M. wrote:
Indeed... I looked through the official TODO list and was unable  
to find an entry for global temporary tables- such a thing would  
be ideal for any transient data such as web sessions or  
materialized views. Is there any reason why global temp tables  
shouldn't be implemented? (And, no, I'm not simply referring to  
in-memory tables- they can simply be handled with a ram disk.)
Not exactly what you're looking for and a simple API, but the  
performance is very nice and has a lot of potential.


http://pgfoundry.org/projects/pgmemcache/


I would like to use transactional semantics over tables that can  
disappear whenever the server fails. memcached does not offer that.


Cheers,
M


Re: [PERFORM] Equivalents in PostgreSQL of MySQL's ENGINE=MEMORY MAX_ROWS=1000

2007-04-03 Thread Alan Hodgson
On Tuesday 03 April 2007 12:47, A.M. [EMAIL PROTECTED] wrote:
 On Apr 3, 2007, at 15:39 , C. Bergström wrote:
 I would like to use transactional semantics over tables that can
 disappear whenever the server fails. memcached does not offer that.

How would temporary tables?

-- 
Ginsberg's Theorem:
 1) You can't win.
 2) You can't break even.
 3) You can't quit the game.




Re: [PERFORM] Equivalents in PostgreSQL of MySQL's ENGINE=MEMORY MAX_ROWS=1000

2007-04-03 Thread A.M.


On Apr 3, 2007, at 16:00 , Alan Hodgson wrote:

On Tuesday 03 April 2007 12:47, A.M.  
[EMAIL PROTECTED] wrote:

On Apr 3, 2007, at 15:39 , C. Bergström wrote:
I would like to use transactional semantics over tables that can
disappear whenever the server fails. memcached does not offer that.


How would temporary tables?


The only difference between temporary tables and standard tables is  
the WAL. Global temporary tables would be accessible by all sessions  
and would be truncated on postmaster start. For a further potential  
speed boost, global temp tables could be put in a ramdisk tablespace.


Well, that's at least how I envision them.
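
The ramdisk-tablespace idea would look something like this; the mount
point and names are hypothetical, and everything placed there is lost
on reboot:

```sql
-- Assumes a RAM-backed filesystem mounted at /mnt/ramdisk,
-- writable by the postgres user.
CREATE TABLESPACE ram_ts LOCATION '/mnt/ramdisk/pgdata';

CREATE TABLE scratch_events (
    id      serial,
    payload text
) TABLESPACE ram_ts;
```

With ordinary WAL-logged tables, losing the tablespace after a reboot
leaves the catalogs pointing at missing files, which is exactly why
first-class global temp tables are being asked for here.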

Cheers,
M