Hi:
We have PostgreSQL 8.3 running on a Debian Linux server. We built an application
using the PHP programming language and a Postgres database. There are approximately
150 users using the software constantly. We had some performance degradation before,
and after some studies we figured out we will
As for the question in the title, no, if the server starts, shared
buffers should not be the reason for a subsequent crash.
In Debian, it is common that the maximum allowed shared memory setting
on your kernel will prevent a server from even starting, but I guess
that's not your problem (because
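For context on the startup failure mode mentioned above: a PostgreSQL server of this vintage allocates shared_buffers as a single System V shared memory segment, so a kernel.shmmax smaller than what shared_buffers needs stops the server at startup with a FATAL error rather than crashing it later. A minimal sketch of the two places involved (all values illustrative, not recommendations):

```ini
# /etc/sysctl.conf -- kernel limits; apply with `sysctl -p`
kernel.shmmax = 2147483648   # largest single shared memory segment, in bytes
kernel.shmall = 524288       # total shared memory allowed, in pages

# postgresql.conf
shared_buffers = 512MB       # must fit (plus overhead) under kernel.shmmax
```

If shared_buffers exceeds the kernel limit, the failure happens immediately at startup, which is why a server that came up cleanly should not crash later for this reason.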
Can the pgpoolAdmin utility handle (administer) more than one pgpool-II cluster?
We have a need of setting up 3 independent postgres clusters. One cluster
handling cadastral maps, one handling raster maps and one handling vector
maps. Each of these clusters must have a load balancer, e.g. pgpool-II.
On 04/29/2011 04:54 PM, Ben Chobot wrote:
We have a bunch of their cards, purchased when we were still on 8.1 and
were having difficulty with vacuums. (Duh.) They helped out a bunch for
that. They're fast, no question about it. Each FusionIO device (they
have cards with multiple devices) can do
TMS RAMSAN is a DRAM device. TMS built DRAM SSDs going back decades, but has
recently gotten into flash SSDs as well. The DRAM parts are an order of
magnitude more expensive than others' flash SSDs, gig for gig. Also, about as
fast as off-CPU storage gets.
regards,
Robert
Howdy. We've got a query that takes less than a second unless we add an
ORDER BY to it, after which it takes 40 seconds. Here's the query:
select page_number, ps_id, ps_page_id from ps_page where ps_page_id in
(select ps_page_id from documents_ps_page where document_id in (select
document_id
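The excerpt above is cut off mid-query, but the usual first step with this kind of ORDER BY slowdown is to compare the two plans directly. A hedged sketch using the fragment's table and column names (the innermost subquery is truncated in the original and left elided here; the sort column is an assumption, since the excerpt does not show it):

```sql
-- Run both with and without the ORDER BY and compare the plans;
-- a sort on top can push the planner into a different join strategy.
EXPLAIN ANALYZE
SELECT page_number, ps_id, ps_page_id
FROM ps_page
WHERE ps_page_id IN (
    SELECT ps_page_id
    FROM documents_ps_page
    WHERE document_id IN (SELECT document_id /* ... truncated in original ... */)
)
ORDER BY page_number;  -- assumed column; not shown in the excerpt
```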
James Mansion wrote:
Does the server know which IO it thinks is sequential, and which it
thinks is random? Could it not time the IOs (perhaps optionally) and
at least keep some sort of statistics of the actual observed times?
It makes some assumptions based on what the individual query nodes
On 04/29/2011 06:52 PM, gnuo...@rcn.com wrote:
TMS RAMSAN is a DRAM device. TMS built DRAM SSDs going back decades,
but has recently gotten into flash SSDs as well. The DRAM parts are
an order of magnitude more expensive than others' flash SSDs, gig
for gig. Also, about as fast as off-CPU
On 4/29/2011 1:55 PM, Greg Smith wrote:
James Mansion wrote:
Does the server know which IO it thinks is sequential, and which it
thinks is random? Could it not time the IOs (perhaps optionally) and
at least keep some sort of statistics of the actual observed times?
It makes some assumptions
Greg Smith wrote:
There are also some severe query plan stability issues with this idea
beyond this. The idea that your plan might vary based on execution
latency, that the system load going up can make query plans alter with
it, is terrifying for a production server.
I thought I was clear
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Also, make sure that you run ANALYZE against your temp table right
before running your query.
Yeah. It's fairly hard to credit that temp_document_ids has any stats
given the way-off estimates for it. Keep in mind that autovacuum
can't help
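The advice in this exchange is directly actionable: autovacuum never processes temporary tables, so a temp table used in a query has no statistics unless you gather them yourself. A minimal sketch (the table name comes from the reply above; the column definition and population step are assumptions for illustration):

```sql
-- autovacuum cannot see temp tables, so stats must be collected manually
CREATE TEMP TABLE temp_document_ids (document_id integer);

-- ... populate the table here ...

ANALYZE temp_document_ids;  -- gives the planner real row estimates

-- now run the main query; without the ANALYZE, the planner falls back
-- to default estimates, which is what produces the way-off numbers
```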
James Mansion wrote:
I thought I was clear that it should present some stats to the DBA,
not that it would try to auto-tune?
You were. But people are bound to make decisions about how to retune
their database based on that information. The situation when doing
manual tuning isn't that much
On Fri, Apr 29, 2011 at 11:37 PM, Greg Smith g...@2ndquadrant.com wrote:
Anyway, how to collect this data is a separate problem from what should be
done with it in the optimizer. I don't actually care about the collection
part very much; there are a bunch of approaches with various trade-offs.
On Apr 27, 2011, at 11:11 PM, Joseph Shraibman j...@selectacast.net wrote:
On 04/27/2011 04:32 PM, Robert Haas wrote:
In the first case, PostgreSQL evidently thinks that using the indexes
will be slower than just ignoring them. You could find out whether
it's right by trying it with
On Apr 29, 2011, at 10:25 AM, James Mansion ja...@mansionfamily.plus.com
wrote:
Robert Haas wrote:
The server can and does measure hit rates for the PG buffer pool, but to my
knowledge there is no clear-cut way for PG to know whether read() is
satisfied from the OS cache or a drive cache or
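The buffer-pool hit rates Robert refers to are exposed through the statistics views. A sketch of one way to read them (pg_statio_user_tables is a standard catalog view; note it measures only the PG buffer pool, not the OS or drive caches, which is exactly the blind spot being discussed):

```sql
-- Per-table buffer-pool hit ratio. A "hit" here means the block was found
-- in shared_buffers; a "read" may still have been served from the OS cache.
SELECT relname,
       heap_blks_hit,
       heap_blks_read,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC;
</imports>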
On Apr 27, 2011, at 11:01 PM, Claudio Freire klaussfre...@gmail.com wrote:
The patch may be simple, the testing not so much. I know that.
What tools do we have to do that testing? There are lots, and all
imply a lot of work. Is that work worth the trouble? Because if it
is... why not work?
Can the pgpoolAdmin utility handle (administer) more than one pgpool-II cluster?
No. pgpoolAdmin only supports one pgpool-II server.
We have a need of setting up 3 independent postgres clusters. One cluster
handling cadastral maps, one handling raster maps and one handling vector
maps. Each of
On 04/30/2011 12:24 AM, Hsien-Wen Chu wrote:
I'm a little bit confused about why it is not safe, and my question is the
following: for a database application we need to avoid double caching.
PostgreSQL's shared_buffers will cache the data, so we do not want the file
system to also cache the data, right? So the DIRECT