> > This smells like a TCP communication problem.
>
> I'm puzzled by that remark. How much does TCP get into the
> picture in a local Windows client/server environment?
Windows has no Unix Domain Sockets (no surprise there), so TCP
connections over the loopback interface are used to connect
>From: Josh Berkus
>Sent: Sep 29, 2005 12:54 PM
>Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
>
>The biggest single area where I see PostgreSQL external
>sort sucking is on index creation on large tables. For
>example, for the free version of TPC-H, it takes only 1.5 hours to
>load a 60GB
Jeff,
On 9/29/05 10:44 AM, "Jeffrey W. Baker" <[EMAIL PROTECTED]> wrote:
> On Thu, 2005-09-29 at 10:06 -0700, Luke Lonergan wrote:
> Looking through tuplesort.c, I have a couple of initial ideas. Are we
> allowed to fork here? That would open up the possibility of using the
> CPU and the I/O in
>From: Zeugswetter Andreas DAZ SD <[EMAIL PROTECTED]>
>Sent: Sep 29, 2005 9:28 AM
>Subject: RE: [HACKERS] [PERFORM] A Better External Sort?
>
>>In my original example, a sequential scan of the 1TB of 2KB
>>or 4KB records, => 250M or 500M records of data, being sorted
>>on a binary value key will
>From: Pailloncy Jean-Gerard <[EMAIL PROTECTED]>
>Sent: Sep 29, 2005 7:11 AM
>Subject: Re: [HACKERS] [PERFORM] A Better External Sort?
>>>Jeff Baker:
>>>Your main example seems to focus on a large table where a key
>>>column has constrained values. This case is interesting in
>>>proportion to th
On Thu, Sep 29, 2005 at 07:21:08PM -0400, Lane Van Ingen wrote:
> (1) Make a table memory-resident only ?
You might want to look into memcached, but it's impossible to say whether it
will fit your needs or not without more details.
/* Steinar */
--
Homepage: http://www.sesse.net/
Quoting Lane Van Ingen <[EMAIL PROTECTED]>:
> ... to do the following:
> (1) Make a table memory-resident only ?
Put it on a RAM filesystem. On Linux, shmfs; on *BSD, mfs; on Solaris, tmpfs.
> (2) Set up user variables in memory that are persistent across all
> sessions, for as long
1) AFAIK, no. Just in case you are thinking "there should be a way because I
know it will be used all the time", you should know that PostgreSQL's
philosophy is "I'm smarter than you". If a table is used all the time it
will be in memory; if not, it won't waste memory.
2) Don't know.
3) See number 1). Of cou
Autovacuum does exactly what I understood you want :-)
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Lane Van
Ingen
Sent: Thursday, September 29, 2005 20:06
To: pgsql-performance@postgresql.org
Subject: [PERFORM] How to Trigger An Automatic Vacuum o
... to do the following:
(1) Make a table memory-resident only?
(2) Set up user variables in memory that are persistent across all
sessions, for as long as the database is up and running?
(3) Assure that a disk-based table is always in memory (outside of keeping
it in memory buf
I am running version 8.0.1 on Windows 2003. I have an application that
subjects PostgreSQL to sudden bursts of activity at times which cannot be
predicted. The bursts are significant enough to cause performance
degradation, which can be fixed by a 'vacuum analyze'.
I am aware of the existence and
Arnau wrote:
Hi all,
I have been "googling" a bit, searching for info about a way to monitor
postgresql (CPU & Memory, num processes, ... ) and I haven't found
anything relevant. I'm using munin to monitor other parameters of my
servers and I'd like to include postgresql or have a similar tool
Hi,
impressive
But you forgot to include those scripts as attachments or they got lost
somehow ;-)
could you post them (again)?
thanx,
Juraj
On Thursday, 29.09.2005, at 13:02 -0700, Tony Wasson wrote:
> On 9/28/05, Matthew Nuzum <[EMAIL PROTECTED]> wrote:
> > On 9/28/05, Arnau <[EMAIL PROTEC
PFC wrote:
Even though this query isn't that optimized, it's still only 16
milliseconds.
Why does it take this long for PHP to get the results?
Can you try pg_query'ing this exact same query, FROM PHP, and timing
it with getmicrotime()?
Thanks, that's what I was looking for.
Andreas Pflug wrote:
Hm, if you only have 4 tables, why do you need 12 queries?
To reduce queries, join them in the query; no need to merge them
physically. If you have only two main tables, I'd bet you only need 1-2
queries for the whole page.
There are more than four tables and the queries
On 9/28/05, Matthew Nuzum <[EMAIL PROTECTED]> wrote:
> On 9/28/05, Arnau <[EMAIL PROTECTED]> wrote:
> > Hi all,
> >
> >I have been "googling" a bit, searching for info about a way to monitor
> > postgresql (CPU & Memory, num processes, ... ) and I haven't found
> > anything relevant. I'm using munin
Hi All,
I have a SQL function like :
CREATE OR REPLACE FUNCTION
fn_get_yetkili_inisyer_listesi(int4, int4)
RETURNS SETOF kod_adi_liste_type AS
$BODY$
SELECT Y.KOD,Y.ADI
FROM T_YER Y
WHERE EXISTS (SELECT 1
FROM T_GUZER G
WHERE (G.BIN_YER_KOD = $1 OR COALESCE($1,0)=0
Jeff,
> Josh, do you happen to know how many passes are needed in the multiphase
> merge on your 60GB table?
No, any idea how to test that?
> I think the largest speedup will be to dump the multiphase merge and
> merge all tapes in one pass, no matter how large M. Currently M is
> capped at 6,
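For intuition, the one-pass merge being proposed can be sketched outside PostgreSQL. This is a toy Python illustration only (the function name, data, and run size are invented; it is not tuplesort.c): cut the input into sorted runs, then heap-merge all runs in a single pass instead of a multiphase merge.

```python
import heapq

def external_sort(records, run_size):
    """Toy external sort: build sorted runs, then merge ALL runs
    in one pass with a heap (no multiphase merge)."""
    runs = [sorted(records[i:i + run_size])
            for i in range(0, len(records), run_size)]
    # heapq.merge keeps one cursor per run and pops the smallest
    # head each step, so M runs cost O(N log M) comparisons total.
    return list(heapq.merge(*runs))

data = [9, 1, 7, 3, 8, 2, 6, 4, 5, 0]
print(external_sort(data, run_size=3))
# → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The point of the single pass is that each record is read and written exactly once during the merge, regardless of how many runs exist, as long as one buffer per run fits in memory.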
On Thu, 2005-09-29 at 10:06 -0700, Luke Lonergan wrote:
> Josh,
>
> On 9/29/05 9:54 AM, "Josh Berkus" wrote:
>
> > Following an index creation, we see that 95% of the time required is the
> > external sort, which averages 2 MB/s. This is with separate drives for
> > the WAL, the pg_tmp, the tabl
Josh,
On 9/29/05 9:54 AM, "Josh Berkus" wrote:
> Following an index creation, we see that 95% of the time required is the
> external sort, which averages 2 MB/s. This is with separate drives for
> the WAL, the pg_tmp, the table and the index. I've confirmed that
> increasing work_mem beyond a s
Jeff, Ron,
First off, Jeff, please take it easy. We're discussing 8.2 features at
this point and there's no reason to get stressed out at Ron. You can
get plenty stressed out when 8.2 is near feature freeze. ;-)
Regarding use cases for better sorts:
The biggest single area where I see Po
Total runtime: 16.000 ms
Even though this query isn't that optimized, it's still only 16
milliseconds.
Why does it take this long for PHP to get the results?
Can you try pg_query'ing this exact same query, FROM PHP, and timing it
with getmicrotime()?
You can even do an E
Just to add a little anarchy in your nice debate...
Who really needs all the results of a sort on your terabyte table?
I guess not many people do a SELECT from such a table and want all the
results. So, this leaves:
- Really wanting all the results, to fetch using
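One concrete version of "who needs all the results": when the client only wants the first k rows, a bounded heap does strictly less work than a full sort. A small Python sketch of that idea (illustrative only; the table here is random integers, not a real relation):

```python
import heapq
import random

random.seed(42)
table = [random.randrange(10**9) for _ in range(100_000)]

# Full sort: O(N log N) work, and all N results materialize.
full_first_10 = sorted(table)[:10]

# Top-k: O(N log k) work, and only k results ever materialize.
top10 = heapq.nsmallest(10, table)

assert top10 == full_first_10
print(len(top10))
# → 10
```

This is essentially what a `LIMIT k` on an `ORDER BY` query allows the executor to exploit: keep a k-element heap while scanning, and never sort the rest.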
I just tried using pg_pconnect() and I didn't notice any significant
improvement. What bothers me most is that with Postgres I tend to see
jerky behavior on almost every page: the upper 1/2 or 2/3 of the page
is displayed first and you can see a blank bottom (or you can see a
half-fill
I think the answer is simple.
If the question is a low-end RAID card vs. software RAID, go with software
and you'll get better performance.
If this is a high-end server I wouldn't think twice: HW RAID is a must,
and not only because of the performance but because of the ease of use
(hot swap and such) and t
Arnau wrote:
> Hi all,
>
> I have been "googling" a bit, searching for info about a way to monitor
> postgresql (CPU & Memory, num processes, ... ) and I haven't found
> anything relevant. I'm using munin to monitor other parameters of my
> servers and I'd like to include postgresql or have a simila
CREATE SEQUENCE ai_id;
CREATE TABLE badusers (
id int DEFAULT nextval('ai_id') NOT NULL,
UserName varchar(30),
Date datetime DEFAULT '0000-00-00 00:00:00' NOT NULL,
Reason varchar(200),
Admin varchar(30) DEFAULT '-',
PRIMARY KEY (id),
KEY UserName (UserName),
KEY Date (Date)
);
Mark Lewis wrote:
> I imported my test dataset
> and was almost immediately able to track down the cause of my
> performance problem.
Why don't you tell us what the problem was :-) ?
Regards
Gaetano Mendola
Joe wrote:
The pages do use a number of queries to collect all the data for display
but nowhere near 50. I'd say it's probably less than a dozen.
The schema is fairly simple having two main tables: topic and entry
(sort of like account and transaction in an accounting scenario). There
Gavin Sherry wrote:
Please post the table definitions, queries and explain analyze results so
we can tell you why the performance is poor.
I did try to post that last night but apparently my reply didn't make it to the
list. Here it is again:
Matthew Nuzum wrote:
> This is the right list.
PFC wrote:
From my experience, the postgres libraries in PHP are a piece of
crap, and add a lot of overhead even for small queries.
For instance, a simple query like "SELECT * FROM table WHERE
primary_key_id=1234" can take the following time, on my laptop, with
data in the filesyste
On Thu, 29 Sep 2005, Joe wrote:
> Magnus Hagander wrote:
> > That actually depends a lot on *how* you use it. I've seen pg-on-windows
> > deployments that come within a few percent of the linux performance.
> > I've also seen those that are absolutely horrible compared.
> >
> > One sure way to kil
On Thu, Sep 29, 2005 at 08:16:11AM -0400, Joe wrote:
> I just tried using pg_pconnect() and I didn't notice any significant
> improvement.
PHP persistent connections are not really persistent -- or so I've been told.
Anyhow, what was discussed here was pg_query, not pg_connect. You really want
t
Magnus Hagander wrote:
That actually depends a lot on *how* you use it. I've seen pg-on-windows
deployments that come within a few percent of the linux performance.
I've also seen those that are absolutely horrible compared.
One sure way to kill the performance is to do a lot of small
connection
Your main example seems to focus on a large table where a key column has
constrained values. This case is interesting in proportion to the
number of possible values. If I have billions of rows, each having one
of only two values, I can think of a trivial and very fast method of
returning th
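The "trivial and very fast method" for a key with only a handful of distinct values is presumably a single-pass partition: route each row into its value's bucket, then emit the buckets in key order, with no comparison sort over the rows at all. A hypothetical Python sketch (the function and sample rows are invented for illustration):

```python
from collections import defaultdict

def bucket_sort_by_key(rows, key):
    """Order rows by a low-cardinality key in one O(N) pass:
    group rows per key value, then emit groups in key order."""
    buckets = defaultdict(list)
    for row in rows:
        buckets[key(row)].append(row)
    out = []
    # Only the distinct key values get sorted -- a tiny set,
    # e.g. 2 entries even for billions of rows.
    for value in sorted(buckets):
        out.extend(buckets[value])
    return out

rows = [("b", 1), ("a", 2), ("b", 3), ("a", 4)]
print(bucket_sort_by_key(rows, key=lambda r: r[0]))
# → [('a', 2), ('a', 4), ('b', 1), ('b', 3)]
```

Note the result is stable within each bucket, and the cost is linear in the row count plus the (negligible) sort of the distinct values.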
It appears that PostgreSQL is two to three times slower than MySQL. For
example, some pages that have some 30,000 characters (when saved as
HTML) take 1 to 1 1/2 seconds with MySQL but 3 to 4 seconds with
PostgreSQL. I had read that the former was generally faster than the
latter, parti
>From: "Jeffrey W. Baker" <[EMAIL PROTECTED]>
>Sent: Sep 29, 2005 12:33 AM
>Subject: Sequential I/O Cost (was Re: [PERFORM] A Better External Sort?)
>
>On Wed, 2005-09-28 at 12:03 -0400, Ron Peacetree wrote:
>>>From: "Jeffrey W. Baker" <[EMAIL PROTECTED]>
>>>Perhaps I believe this because you can n