Re: [PERFORM] corrupted shared memory message

2008-04-29 Thread Pablo Alcaraz
Tom Lane wrote: Pablo Alcaraz <[EMAIL PROTECTED]> writes: We have a database that has been running smoothly for months. 2 days ago I got this error message. I tried a restore, then a full restore (deleting the old database and recovering all the information from backup), but we are getting this error

[PERFORM] corrupted shared memory message

2008-04-20 Thread Pablo Alcaraz
Hi all! We have a database that has been running smoothly for months. 2 days ago I got this error message. I tried a restore, then a full restore (deleting the old database and recovering all the information from backup), but we are getting this error every time. In this case I got this error when I was trying

Re: [PERFORM] TB-sized databases

2008-03-17 Thread Pablo Alcaraz
Anything that helps pgsql perform well in a TB-sized database environment is a Good Thing (r) :D Pablo Bruce Momjian wrote: I have _not_ added a TODO for this item. Let me know if one is needed.

Re: [PERFORM] TB-sized databases

2007-11-28 Thread Pablo Alcaraz
Pablo Alcaraz wrote: Simon Riggs wrote: All of those responses have cooked up quite a few topics into one. Large databases might mean text warehouses, XML message stores, relational archives and fact-based business data warehouses. The main thing is that TB-sized databases are performance

Re: [PERFORM] TB-sized databases

2007-11-28 Thread Pablo Alcaraz
Matthew wrote: On Tue, 27 Nov 2007, Pablo Alcaraz wrote: it would be nice to do something with selects so we can retrieve a rowset from huge tables using indexed criteria without falling back to a full scan. You mean: Be able to tell Postgres "Don't ever do a sequential sc
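A minimal sketch of the planner knob this exchange is pointing at, assuming a hypothetical table big_table with an index on customer_id (neither name comes from the thread):

    -- Discourage sequential scans for the current session only; this is a planner
    -- bias, not a hard guarantee, and is normally left at its default.
    SET enable_seqscan = off;

    -- With an index on big_table(customer_id), the planner should now prefer it.
    EXPLAIN SELECT * FROM big_table WHERE customer_id = 42;

    -- Restore the default behaviour.
    RESET enable_seqscan;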

Re: [PERFORM] TB-sized databases

2007-11-27 Thread Pablo Alcaraz
Simon Riggs wrote: All of those responses have cooked up quite a few topics into one. Large databases might mean text warehouses, XML message stores, relational archives and fact-based business data warehouses. The main thing is that TB-sized databases are performance critical. So it all depends

Re: [PERFORM] Base de Datos Transaccional

2007-11-26 Thread Pablo Alcaraz
If you have the necessary hardware and plan the database deployment appropriately, it can undoubtedly handle that load. Regards, Pablo. Fabio Arias wrote: Hi friends, I'm writing because I need to know whether PostgreSQL is robust enough to handle a platform t

Re: [PERFORM] TB-sized databases

2007-11-26 Thread Pablo Alcaraz
I had a client that tried to use MS SQL Server to run a 500GB+ database. The database simply collapsed. They switched to Teradata and it is running well. That database is now 1.5TB+. Currently I have clients running huge PostgreSQL databases and they are happy. In one client's database the bigge

Re: [PERFORM] doubt with pg_dump and high concurrent used databases

2007-11-25 Thread Pablo Alcaraz
Tom Lane wrote: "Peter Childs" <[EMAIL PROTECTED]> writes: On 25/11/2007, Erik Jones <[EMAIL PROTECTED]> wrote: Does the pg_dump create this kind of "consistent backups"? Or do I need to do the backups using another program? Yes, that is exactly what pg_dump does. Yes

[PERFORM] doubt with pg_dump and high concurrent used databases

2007-11-25 Thread Pablo Alcaraz
Hi all, I read that pg_dump can run while the database is being used and makes "consistent backups". I have a huge and *heavily* selected, inserted and updated database. Currently I have a cron task that disconnects the database users, makes a backup using pg_dump and puts the database online again.
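A rough sketch of the MVCC snapshot behaviour pg_dump relies on, which is why the disconnect/reconnect step should not be needed; the orders table is a placeholder and this is an illustration of the idea, not the actual pg_dump implementation:

    -- pg_dump takes its whole dump inside one transaction with a stable snapshot,
    -- conceptually similar to this:
    BEGIN ISOLATION LEVEL SERIALIZABLE;
    SELECT count(*) FROM orders;   -- sees the data as of the snapshot
    -- inserts/updates/deletes committed by other sessions after this point
    -- are not visible inside this transaction,
    SELECT count(*) FROM orders;   -- so this returns the same count as above
    COMMIT;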

Re: [PERFORM] tables with 300+ partitions

2007-10-31 Thread Pablo Alcaraz
Scott Marlowe wrote: On 10/31/07, Pablo Alcaraz <[EMAIL PROTECTED]> wrote: Steven Flatt wrote: On 10/30/07, Pablo Alcaraz <[EMAIL PROTECTED]> wrote: I did some testing. I created an empty table with 300 partitions. Then I inserted some rows into it and the performance

Re: [PERFORM] tables with 300+ partitions

2007-10-31 Thread Pablo Alcaraz
Steven Flatt wrote: On 10/30/07, *Pablo Alcaraz* <[EMAIL PROTECTED]> wrote: I did some testing. I created an empty table with 300 partitions. Then I inserted some rows into it and the performance was SLOW too. Is the problem with inserting to the
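One workaround often discussed for this kind of setup (not necessarily what the truncated reply goes on to suggest) is to bypass the parent's per-partition rewrite rules on the hot insert path and load straight into the correct child; the tt_NNNNN name and the columns below are placeholders following the naming seen elsewhere in these threads:

    -- Inserting through the parent makes the rewriter evaluate one rule per
    -- partition (300+ of them) for every row; targeting the child avoids that.
    INSERT INTO tt_00027 (id, part_no) VALUES (12345, 27);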

[PERFORM] tables with 300+ partitions

2007-10-30 Thread Pablo Alcaraz
there a workaround? Can I replace the partitioned version with another schema? Any suggestions? I would prefer something transparent to the program, because it uses EJB3, so any change to the database layer means deep changes and testing. Regards Pablo Alcaraz
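A minimal sketch of the inheritance-plus-rules partitioning style available in the 8.x releases, which keeps the parent table as the only name the application sees, so the EJB3 layer would not need changes; the tt / tt_NNNNN names follow the plans quoted later in these threads, and the columns are hypothetical:

    -- Parent table: the only name the application ever references.
    CREATE TABLE tt (
        id      bigint  NOT NULL,
        part_no integer NOT NULL
    );

    -- One child per partition, each with a CHECK constraint usable for
    -- constraint exclusion.
    CREATE TABLE tt_00001 (CHECK (part_no = 1)) INHERITS (tt);
    CREATE TABLE tt_00002 (CHECK (part_no = 2)) INHERITS (tt);
    -- ... and so on, one child per partition

    -- Route inserts on the parent to the right child, transparently to the
    -- application (one rule per partition).
    CREATE RULE tt_insert_00001 AS ON INSERT TO tt
        WHERE (NEW.part_no = 1)
        DO INSTEAD INSERT INTO tt_00001 VALUES (NEW.id, NEW.part_no);

    -- Let the planner skip children whose CHECK constraint rules them out.
    SET constraint_exclusion = on;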

Re: [PERFORM] Speed difference between select ... union select ... and select from partitioned_table

2007-10-27 Thread Pablo Alcaraz
Pablo Alcaraz wrote: These are the EXPLAIN ANALYZE results: If you raise work_mem enough to let the second query use a hash aggregate (probably a few MB would do it), I think it'll be about the same speed as the first one. The reason it's not picking that on its own is the overestimate of
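A sketch of the suggested experiment; the query is a placeholder standing in for "the second query", and the point is only to see whether the plan switches to a HashAggregate:

    -- Give the aggregation enough memory to build its hash table in RAM;
    -- a few MB may already be enough, depending on the number of groups.
    SET work_mem = '16MB';

    EXPLAIN ANALYZE
    SELECT DISTINCT id FROM tt;   -- look for "HashAggregate" instead of "Sort" + "Unique"

    RESET work_mem;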

Re: [PERFORM] Speed difference between select ... union select ... and select from partitioned_table

2007-10-26 Thread Pablo Alcaraz
) (actual time=0.002..0.002 rows=0 loops=1)" " -> Seq Scan on tt_00027 tt (cost=0.00..18.30 rows=830 width=8) (actual time=0.002..0.002 rows=0 loops=1)" " -> Seq Scan on tt_00030 tt (cost=0.00..18.30 rows=830 width=8) (actual time=0.002..0.

Re: [PERFORM] Speed difference between select ... union select ... and select from partitioned_table

2007-10-26 Thread Pablo Alcaraz
I forgot to post the times: query-union: 21:59 query-heritage: 1:31:24 Regards Pablo Pablo Alcaraz wrote: Hi List! I executed 2 equivalent queries. The first one uses a union structure. The second uses a partitioned table. The tables are the same, with 30 million rows each, and the

[PERFORM] Speed difference between select ... union select ... and select from partitioned_table

2007-10-26 Thread Pablo Alcaraz
Hi List! I executed 2 equivalent queries. The first one uses a union structure. The second uses a partitioned table. The tables are the same, with 30 million rows each, and the returned rows are the same. But the union query performs faster than the partitioned query. My question is: w
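The two query shapes being compared look roughly like this (the column name is a placeholder; the child tables follow the tt_NNNNN naming seen in the quoted plans):

    -- 1) "union" version: every child table is listed explicitly
    SELECT id FROM tt_00001
    UNION
    SELECT id FROM tt_00002
    -- ... repeated for each child table
    UNION
    SELECT id FROM tt_00030;

    -- 2) "partitioned" version: one select on the inherited parent, letting the
    --    planner expand it to all of the children
    SELECT DISTINCT id FROM tt;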