Tom Lane wrote:
Pablo Alcaraz <[EMAIL PROTECTED]> writes:
We have a database that has been running smoothly for months. Two days
ago I started getting this error message. I tried a restore, then a full
restore (deleting the old database and recovering all the information
from backup), but we are still getting this error
Hi All!
We have a database that has been running smoothly for months. Two days
ago I started getting this error message. I tried a restore, then a full
restore (deleting the old database and recovering all the information
from backup), but we are still getting this error every time.
In this case I got this error when I was trying
Anything that helps pgsql perform well in a TB-sized database
environment is a Good Thing (r) :D
Pablo
Bruce Momjian wrote:
I have _not_ added a TODO for this item. Let me know if one is needed.
Pablo Alcaraz wrote:
Simon Riggs wrote:
All of those responses have cooked up quite a few topics into one. Large
databases might mean text warehouses, XML message stores, relational
archives and fact-based business data warehouses.
The main thing is that TB-sized databases are performance
Matthew wrote:
On Tue, 27 Nov 2007, Pablo Alcaraz wrote:
it would be nice to do something with selects so we can retrieve a rowset
from huge tables using criteria with indexes, without falling back to
running a full scan.
You mean: Be able to tell Postgres "Don't ever do a sequential sc
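As a point of reference, the closest existing knob is the planner setting
enable_seqscan, which discourages (but cannot forbid) sequential scans. A
minimal sketch, with a made-up table and column:

    -- Discourage sequential scans for the current session only;
    -- the planner will still use one if no other plan exists.
    SET enable_seqscan = off;
    EXPLAIN SELECT * FROM huge_table WHERE indexed_col = 42;
    SET enable_seqscan = on;  -- restore the default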
Simon Riggs wrote:
All of those responses have cooked up quite a few topics into one. Large
databases might mean text warehouses, XML message stores, relational
archives and fact-based business data warehouses.
The main thing is that TB-sized databases are performance critical. So
it all depends
If you have the necessary hardware and you plan the database deployment
properly, it can certainly handle that load.
Regards
Pablo
Fabio Arias wrote:
Hello friends, I am writing because I need to know whether PostgreSQL is
robust enough to handle a platform t
I had a client that tried to use MS SQL Server to run a 500GB+ database.
The database simply collapsed. They switched to Teradata and it is
running well. That database is now 1.5TB+.
Currently I have clients running huge PostgreSQL databases and they are
happy. In one client's database the bigge
Tom Lane wrote:
"Peter Childs" <[EMAIL PROTECTED]> writes:
On 25/11/2007, Erik Jones <[EMAIL PROTECTED]> wrote:
Does pg_dump create this kind of "consistent backup"? Or do I
need to do the backups using another program?
Yes, that is exactly what pg_dump does.
Yes
Hi all,
I read that pg_dump can run while the database is being used and makes
"consistent backups".
I have a huge database that is *heavily* selected from, inserted into and updated.
Currently I have a cron task that disconnects the database users, makes a
backup using pg_dump, and puts the database back online again.
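If that holds, the cron task no longer needs to disconnect anyone. A
minimal sketch of a nightly crontab entry (the schedule, paths and
database name are invented):

    # Hypothetical 2 AM backup; pg_dump takes a consistent snapshot
    # while other sessions continue to read and write.
    0 2 * * *  pg_dump -Fc -f /backups/mydb-$(date +\%Y\%m\%d).dump mydb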
Scott Marlowe wrote:
On 10/31/07, Pablo Alcaraz <[EMAIL PROTECTED]> wrote:
Steven Flatt wrote:
On 10/30/07, Pablo Alcaraz <[EMAIL PROTECTED]> wrote:
I did some testing. I created an empty table with 300 partitions. Then I
inserted some rows into it and the performance
Steven Flatt wrote:
On 10/30/07, Pablo Alcaraz <[EMAIL PROTECTED]> wrote:
I did some testing. I created an empty table with 300 partitions. Then I
inserted some rows into it and the performance was SLOW too.
Is the problem with inserting to the
Is there a workaround? Can I replace the partitioned version
with another schema? Any suggestions? I would prefer something
transparent to the program, because it uses EJB3, so any change to the
database layer means deep changes and testing.
Regards
Pablo Alcaraz
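For context on why inserts into an inheritance-partitioned table can be
slow: the parent usually routes rows through a trigger, and a long
IF/ELSIF chain over 300 children is evaluated for every single row. A
minimal two-partition sketch with invented table and column names (the
actual setup is not shown in the thread):

    -- Hypothetical inheritance partitioning; the real table had 300 children.
    CREATE TABLE tt (id bigint, part int);
    CREATE TABLE tt_00001 (CHECK (part = 1)) INHERITS (tt);
    CREATE TABLE tt_00002 (CHECK (part = 2)) INHERITS (tt);

    CREATE OR REPLACE FUNCTION tt_insert_trigger() RETURNS trigger AS $$
    BEGIN
        -- Every inserted row walks this chain; with 300 branches it adds up.
        IF NEW.part = 1 THEN
            INSERT INTO tt_00001 VALUES (NEW.*);
        ELSIF NEW.part = 2 THEN
            INSERT INTO tt_00002 VALUES (NEW.*);
        END IF;
        RETURN NULL;  -- the row is not stored in the parent itself
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER tt_insert BEFORE INSERT ON tt
        FOR EACH ROW EXECUTE PROCEDURE tt_insert_trigger();

Inserting directly into the correct child table bypasses the trigger
entirely, which is the usual workaround when the application can be
taught the partition key.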
Pablo Alcaraz wrote:
These are the EXPLAIN ANALYZE results:
If you raise work_mem enough to let the second query use a hash
aggregate (probably a few MB would do it), I think it'll be about
the same speed as the first one.
The reason it's not picking that on its own is the overestimate
of
) (actual time=0.002..0.002 rows=0 loops=1)"
"  ->  Seq Scan on tt_00027 tt  (cost=0.00..18.30 rows=830 width=8) (actual time=0.002..0.002 rows=0 loops=1)"
"  ->  Seq Scan on tt_00030 tt  (cost=0.00..18.30 rows=830 width=8) (actual time=0.002..0.
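A minimal sketch of that suggestion (the 32MB figure is an arbitrary
example, not a tuned value, and the query is a stand-in):

    -- Raise work_mem for this session only, so the planner can afford
    -- a hash aggregate, then re-run the slow query and compare plans.
    SET work_mem = '32MB';
    EXPLAIN ANALYZE SELECT part, count(*) FROM tt GROUP BY part;
    RESET work_mem;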
I forgot to post the times:
query-union: 21:59
query-inheritance (the partitioned table): 1:31:24
Regards
Pablo
Pablo Alcaraz wrote:
Hi List!
I executed two equivalent queries. The first one uses a union
structure. The second uses a partitioned table. The tables are the
same, with 30 million rows each, and the
Hi List!
I executed two equivalent queries. The first one uses a union structure.
The second uses a partitioned table. The tables are the same, with 30
million rows each, and the returned rows are the same.
But the union query performs faster than the partitioned query.
My question is: w
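To make the comparison concrete, a sketch of what the two shapes
typically look like (the column name and predicate are invented; the
actual queries are not shown here):

    -- Shape 1: explicit UNION ALL over the individual tables.
    SELECT * FROM tt_00001 WHERE some_col = 42
    UNION ALL
    SELECT * FROM tt_00002 WHERE some_col = 42;

    -- Shape 2: the same predicate against the inheritance parent;
    -- the planner expands it over every child table.
    SELECT * FROM tt WHERE some_col = 42;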