You should search the archives for Luke Lonergan's posting about how I/O in
PostgreSQL is significantly bottlenecked because it's not async. A 12-disk
array is going to max out PostgreSQL's theoretical write capacity to
disk, and therefore BigRDBMS is always going to win in such a config. You
On Fri, 29 Dec 2006, Alvaro Herrera wrote:
Ron wrote:
C= What file system are you using? Unlike BigDBMS, pg does not have
its own native one, so you have to choose the one that best suits
your needs. For update heavy applications involving lots of small
updates jfs and XFS should both be ser
Ron wrote:
> C= What file system are you using? Unlike BigDBMS, pg does not have
> its own native one, so you have to choose the one that best suits
> your needs. For update heavy applications involving lots of small
> updates jfs and XFS should both be seriously considered.
Actually it has
Sebastián Baioni wrote:
Thanks for answering.
This is my configuration:
# - Memory -
shared_buffers = 1000            # min 16, at least max_connections*2, 8KB each
#work_mem = 1024                 # min 64, size in KB
#maintenance_work_mem = 16384    # min 1024, size in KB
#max_stack_depth = 2048          #
On Sat, 2006-12-23 at 13:13 -0500, Bruce Momjian wrote:
> The bottom line is that we know of no cases where a long-running
> transaction would delay recycling of the WAL files, so there is
> certainly something not understood here.
We can see from all of this that a checkpoint definitely didn't
Thanks for answering.
This is my configuration:
# - Memory -
shared_buffers = 1000            # min 16, at least max_connections*2, 8KB each
#work_mem = 1024                 # min 64, size in KB
#maintenance_work_mem = 16384    # min 1024, size in KB
#max_stack_depth = 2048          # min 100, size in KB
The PC
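For scale: with 8 KB pages, shared_buffers = 1000 is only about 8 MB, and the
commented-out lines above are the stock 8.x defaults. A quick sanity check of
what the server is actually running with (a sketch; on 8.1 and earlier SHOW
prints bare numbers, a page count and KB respectively):

  SHOW shared_buffers;   -- 1000 pages of 8 KB = ~8 MB of buffer cache
  SHOW work_mem;         -- 1024 means 1024 KB, i.e. 1 MB per sort/hash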
Rod Taylor <[EMAIL PROTECTED]> writes:
> Rebuilding the indexes or integrity confirmations are probably taking
> most of the time.
> What is your work_mem setting?
maintenance_work_mem is the thing to look at, actually. I concur that
bumping it up might help.
regards, tom lane
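As a minimal sketch of that advice (the 131072 value and the index are
illustrative assumptions, not from this thread; on 8.x the bare integer is
in KB):

  SET maintenance_work_mem = 131072;   -- 128 MB for this session only
  CREATE INDEX orders_customer_idx     -- hypothetical index build that benefits
      ON orders (customer_id);
  RESET maintenance_work_mem;          -- drop back to the server default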
Depends on what the query is. If each query takes 3 to 5 days to execute,
then one query per day on a 4-CPU machine would be at capacity (4 CPU-days
available per day divided by roughly 4 CPU-days per query).
On 23-Dec-06, at 3:12 AM, [EMAIL PROTECTED] wrote:
Hey Everyone,
I am having a bit of trouble with a web host, and was wondering what you
would class
Rebuilding the indexes or integrity confirmations are probably taking
most of the time.
What is your work_mem setting?
On 22-Dec-06, at 9:32 AM, Sebastián Baioni wrote:
Hi,
We have a database with one table of 10,000,000 tuples and 4 tables
with 5,000,000 tuples.
While in SQL Server it tak
Good day,
I have been reading about the configuration of PostgreSQL, but I have a
server that does not give me the performance it should. The tables are
indexed and vacuumed regularly; I monitor with top, ps, and
pg_stat_activity, and when I checked, it was slow without a heavy load
average.
B
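When it feels slow without load, it can help to capture what pg_stat_activity
shows at that moment. A sketch using the 8.x column names (procpid and
current_query; seeing query text assumes stats_command_string is enabled):

  SELECT procpid, query_start, current_query
  FROM pg_stat_activity
  WHERE current_query <> '<IDLE>'      -- skip idle backends
  ORDER BY query_start;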
How can I get the disk usage for each table? Can I do it via SQL?
Thanks,
Mailing-Lists
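A sketch of one way to do it with the size functions that shipped in 8.1
(assumes 8.1 or later; older releases only have pg_class.relpages to
estimate from):

  SELECT relname,
         pg_size_pretty(pg_relation_size(oid))       AS table_size,
         pg_size_pretty(pg_total_relation_size(oid)) AS incl_indexes_and_toast
  FROM pg_class
  WHERE relkind = 'r'                   -- ordinary tables (system catalogs included)
  ORDER BY pg_total_relation_size(oid) DESC;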
Hello all,
I've been running performance tests on various incantations of Postgres,
on and off, for a month or so, and I've just come across some unexpected
results.
When I start my Postgres build as such:
# (Scenario 1)
./configure --prefix=/usr --libdir=/usr/lib --bindir=/usr/bin
--includedir=/us
Hey Everyone,
I am having a bit of trouble with a web host, and was wondering what you
would class as a high level of traffic to a database (queries per second)
for an average server running Postgres in a shared hosting environment
(very modern servers).
Many Thanks in Advance,
Oliver
Hi,
We have a database with one table of 10,000,000 tuples and 4 tables with
5,000,000 tuples.
While in SQL Server it takes 3 minutes to restore this complete database, in
PostgreSQL it takes more than 2 hours.
The backup takes 6 minutes in SQL Server and 13 minutes in PostgreSQL (which is not a problem)
We ar
Hi all,
A= go through each query and see what work_mem needs to be for that
query to stay as RAM-resident as possible. If you have enough RAM, set
work_mem that large for that query. Remember that work_mem is =per
query=, so queries running in parallel consume the sum of their
work_mem settings.
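A sketch of the per-session version of this (the 65536 value and the query
are illustrative; on 8.x the bare integer is in KB):

  SET work_mem = 65536;             -- 64 MB of sort/hash memory for this session
  SELECT customer_id, count(*)      -- hypothetical query with a big sort/aggregate
  FROM orders
  GROUP BY customer_id
  ORDER BY count(*) DESC;
  RESET work_mem;                   -- don't leave it high for everything else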
At 12:46 AM 12/28/2006, Guy Rouillier wrote:
I don't want to violate any license agreement by discussing
performance, so I'll refer to a large, commercial
PostgreSQL-compatible DBMS only as BigDBMS here.
I'm trying to convince my employer to replace BigDBMS with
PostgreSQL for at least some o