"Joshua D. Drake" <[EMAIL PROTECTED]> writes:
> > while($howmany--) { push @morepgGurus, $pgGuru; }
>
> This is just wrong...
yeah, it would have been much clearer written as:
push @morepgGurus, ($pgGuru)x$howmany;
Or at least the perlish:
for (1..$howmany)
instead of C style while syntax.
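For readers outside Perl, the three variants being compared (C-style countdown `while`, a range loop, and list repetition) have direct analogues; this is an illustrative Python sketch, not code from the thread:

```python
# Three ways to build a list of ten copies of one element,
# mirroring the Perl variants discussed above.
guru = "Tom Lane"
howmany = 10

# C-style countdown loop (the version criticized as un-idiomatic)
gurus_loop = []
n = howmany
while n:
    gurus_loop.append(guru)
    n -= 1

# Range-based loop (the "for (1..$howmany)" equivalent)
gurus_range = [guru for _ in range(howmany)]

# Repetition operator (the "($pgGuru) x $howmany" equivalent)
gurus_repeat = [guru] * howmany

assert gurus_loop == gurus_range == gurus_repeat
```

All three produce the same list; the repetition form states the intent in one expression, which is the point being made above.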
my $pgGuru = "Tom Lane"; my @morepgGurus; my $howmany = 10;
while($howmany--) { push @morepgGurus, $pgGuru; }
This is just wrong...
--
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED]
Manfred Koizar <[EMAIL PROTECTED]> writes:
> My biggest concern at the moment is that the new sampling method
> violates the contract of returning each possible sample with the same
> probability: getting several tuples from the same page is more likely
> than with the old method.
Hm, are you sure
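The bias Manfred describes can be seen with a toy simulation (not the actual ANALYZE code): if you first pick a page uniformly and then a tuple on that page, tuples on sparsely populated pages are heavily over-represented. The page layout below is invented for illustration.

```python
import random

random.seed(42)

# Hypothetical layout: page 0 holds 1 tuple, page 1 holds 99 tuples.
pages = [["lonely"], ["crowded%d" % i for i in range(99)]]

counts = {"lonely": 0, "crowded": 0}
trials = 100_000
for _ in range(trials):
    page = random.choice(pages)   # uniform over pages
    tup = random.choice(page)     # uniform within the chosen page
    counts["lonely" if tup == "lonely" else "crowded"] += 1

# Under truly uniform per-tuple sampling the lonely tuple would be
# drawn ~1% of the time; page-first sampling draws it ~50% of the time.
print(counts["lonely"] / trials)
```

The lonely tuple's inclusion frequency comes out near 0.5 instead of the 0.01 that per-tuple uniformity would require, which is exactly the "several tuples from the same page" distortion at issue.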
[Just a quick note here; a more thorough discussion of my test results
will be posted to -hackers]
On Tue, 13 Apr 2004 15:18:42 -0400, Tom Lane <[EMAIL PROTECTED]> wrote:
>Well, the first problem is why is ANALYZE's estimate of the total row
>count so bad :-( ? I suspect you are running into the
Greg Stark wrote:
Bruno Wolff III <[EMAIL PROTECTED]> writes:
> I have seen exactly this happen a number of times over the last several
> years. However there is still only one Tom Lane implementing the
> improvements.
Ob: Well clearly the problem is we need more Tom Lanes.
my $pgGuru = "Tom Lane";
On Tue, 13 Apr 2004 13:55:49 -0400, Tom Lane <[EMAIL PROTECTED]> wrote:
>Possibly the
>nonuniform clumping of CID has something to do with the poor results.
It shouldn't. The sampling algorithm is designed to give each tuple the
same chance of ending up in the sample, and tuples are selected
independently.
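The equal-chance property described here is what reservoir sampling (Vitter's Algorithm R) guarantees: every item in a stream of unknown length N ends up in the k-item sample with probability exactly k/N. A minimal sketch, not the ANALYZE implementation:

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Algorithm R: each stream item lands in the final sample
    with equal probability k/N, regardless of its position."""
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)          # fill the reservoir first
        else:
            j = rng.randint(0, i)        # inclusive bounds
            if j < k:
                sample[j] = item         # replace with probability k/(i+1)
    return sample

# Empirically check the k/N = 10/100 inclusion probability for item 0.
random.seed(0)
trials = 20_000
hits = sum(0 in reservoir_sample(range(100), 10) for _ in range(trials))
print(hits / trials)
```

The measured inclusion frequency for any fixed item converges to 0.1, independent of where in the stream it appears.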
Greg Stark <[EMAIL PROTECTED]> writes:
> Ob: Well clearly the problem is we need more Tom Lanes.
ObHHGReference: "Haven't you heard? I come in six-packs!"
regards, tom lane
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if
your joining column's datatypes do not match
Bruno Wolff III <[EMAIL PROTECTED]> writes:
> I have seen exactly this happen a number of times over the last several
> years. However there is still only one Tom Lane implementing the
> improvements.
Ob: Well clearly the problem is we need more Tom Lanes.
--
greg
---
Simon,
> Is the problem "a person interested" or is there another issue there?
IMHO, it's "a person interested".
> Treating the optimizer as a black box is something I'm very used to from
> other RDBMS. My question is, how can you explicitly re-write a query now
> to "improve" it? If there's no
Folks,
> I am currently chasing what seems to be the same issue: massive context
> swapping on a dual Xeon system. I tried back-patching the above-mentioned
> patch ... it helps a little but by no means solves the problem ...
BTW, I'm currently pursuing the possibility that this has something to
Shea,Dan [CIS] wrote:
Bill, if you had a lot of updates and deletions and wanted to optimize your
table, can you just issue the cluster command?
Will the cluster command rewrite the table without the obsolete data that a
vacuum flags or do you need to issue a vacuum first?
From the reference docs:
> Bruno Wolff
> Simon Riggs <[EMAIL PROTECTED]> wrote:
> >
> > I guess what I'm saying is it's not how many people you've got working
> > on the optimizer, its how many accurate field reports of less-than
> > perfect optimization reach them. In that case, PostgreSQL is likely in a
> > better
Joe Conway <[EMAIL PROTECTED]> writes:
>> Improve spinlock code for recent x86 processors: insert a PAUSE
>> instruction in the s_lock() wait loop, and use test before test-and-set
>> in TAS() macro to avoid unnecessary bus traffic. Patch from Manfred
>> Spraul, reworked a bit by Tom.
> I thought
Bill, if you had a lot of updates and deletions and wanted to optimize your
table, can you just issue the cluster command?
Will the cluster command rewrite the table without the obsolete data that a
vacuum flags or do you need to issue a vacuum first?
Dan.
-Original Message-
From: Bill Moran
Rajesh Kumar Mallah wrote:
Bill Moran wrote:
Rajesh Kumar Mallah wrote:
Hi,
The problem was solved by reloading the Table.
the query now takes only 3 seconds. But that is
not a solution.
If dropping/recreating the table improves things, then we can reasonably
assume that the table is pretty active with updates/inserts.
On Thu, 2004-04-15 at 06:39, Gavin M. Roy wrote:
> Your IDE drive is the biggest hardware bottleneck here. RPMs and bus
> transfers are slower than SCSI or SATA.
Individual disk throughput generally has very little bearing on database
performance compared to other factors. In fact, IDE bandwi
On Apr 15, 2004, at 12:44 PM, Richard Huxton wrote:
On Thursday 15 April 2004 17:19, Rajesh Kumar Mallah wrote:
Bill Moran wrote:
BTW
is there any way to disable checks and triggers on
a table temporarily while loading data (is updating
reltriggers in pg_class safe?)
You can take a look at pg_re
On Thursday 15 April 2004 17:19, Rajesh Kumar Mallah wrote:
> Bill Moran wrote:
> > Rajesh Kumar Mallah wrote:
> >> Hi,
> >>
> >> The problem was solved by reloading the Table.
> >> the query now takes only 3 seconds. But that is
> >> not a solution.
> >
> > If dropping/recreating the table improves
Joe,
> I believe this was fixed in 7.4.2, although I can't seem to find it in
> the release notes.
Depends on the cause of the issue. If it's the same issue that I'm currently
struggling with, it's not fixed.
--
-Josh Berkus
Aglio Database Solutions
San Francisco
On Wed, Apr 14, 2004 at 21:12:18 +0100,
Simon Riggs <[EMAIL PROTECTED]> wrote:
>
> I guess what I'm saying is it's not how many people you've got working
> on the optimizer, its how many accurate field reports of less-than
> perfect optimization reach them. In that case, PostgreSQL is likely in
Dirk Lutzebäck wrote:
Joe, do you know where I should look in the 7.4.2 code to find this out?
I think I was wrong. I just looked in CVS and found the commit I was
thinking about:
http://developer.postgresql.org/cvsweb.cgi/pgsql-server/src/backend/storage/lmgr/s_lock.c.diff?r1=1.22&r2=1.23
Joe, do you know where I should look in the 7.4.2 code to find this out?
Dirk
Joe Conway wrote:
Dirk Lutzebäck wrote:
postgresql 7.4.1
a new Dual Xeon MP
too many context switches (way more than 100,000) on higher load
(meaning system load > 2).
I believe this was fixed in 7.4.2, although
Bill Moran wrote:
Rajesh Kumar Mallah wrote:
Hi,
The problem was solved by reloading the Table.
the query now takes only 3 seconds. But that is
not a solution.
If dropping/recreating the table improves things, then we can reasonably
assume that the table is pretty active with updates/inserts.
Dirk Lutzebäck wrote:
postgresql 7.4.1
a new Dual Xeon MP
too many context switches (way more than 100,000) on higher load (meaning system
load > 2).
I believe this was fixed in 7.4.2, although I can't seem to find it in
the release notes.
Joe
I am searching for the best distro to run pg (7.4.1).
This is generally based upon opinion. Honestly though, your kernel
version is more important for performance than the distro. Personally I
use gentoo, love gentoo, and would recommend very few other distros
(Slackware) for servers. RedH
Rajesh Kumar Mallah wrote:
Hi,
The problem was solved by reloading the Table.
the query now takes only 3 seconds. But that is
not a solution.
If dropping/recreating the table improves things, then we can reasonably
assume that the table is pretty active with updates/inserts. Correct?
The problem
Hi ,
I am not sure, but I remember the same problem.
It was on a 7.3.x version and I needed to reindex the table.
I think that since 7.4, vacuum also works correctly with reindexing.
But I am not sure.
regards,
ivan.
Rajesh Kumar Mallah wrote:
> Hi,
>
> The problem was solved by reloading the Table.
> t
Hi,
we have a complex modperl database application using postgresql 7.4.1 on
a new Dual Xeon MP Machine with SLES8 which seems to generate too many
context switches (way more than 100,000) on higher load (meaning system
load > 2). System response times significantly slow down then. We have
tun
Richard Huxton wrote:
On Thursday 15 April 2004 08:10, Rajesh Kumar Mallah wrote:
The problem is that i want to know if i need a Hardware upgrade
at the moment.
Eg i have another table rfis which contains ~ .6 million records.
SELECT count(*) from rfis where sender_uid > 0;
Time
Hi,
The problem was solved by reloading the Table.
the query now takes only 3 seconds. But that is
not a solution.
The problem is that such phenomenon obscures our
judgement used in optimising queries and database.
If a query runs slow we really can't tell if it's a problem
with the query itself, har
On Thursday 15 April 2004 08:10, Rajesh Kumar Mallah wrote:
> The problem is that i want to know if i need a Hardware upgrade
> at the moment.
>
> Eg i have another table rfis which contains ~ .6 million records.
> SELECT count(*) from rfis where sender_uid > 0;
> Time: 117560.635 ms
>
> Which is
The relation size for this table is 1.7 GB
tradein_clients=# SELECT public.relation_size ('general.rfis');
+---------------+
| relation_size |
+---------------+
| 1,762,639,872 |
+---------------+
(1 row)
Regds
mallah.
Rajesh Kumar Mallah wrote:
The problem is that i want to kno
Hi,
I have been using pg for 3 years and generally I do not have big problems with it.
I am searching for the best distro to run pg (7.4.1).
At the moment I am using RedHat AS 3.0, but I think it has some
performance problems (I am not sure).
My configuration:
P4 2.8 GHz
1 GB RAM
120 GB IDE 7200 disk.
Kern
The problem is that i want to know if i need a Hardware upgrade
at the moment.
Eg i have another table rfis which contains ~ .6 million records.
SELECT count(*) from rfis where sender_uid > 0;
+--------+
| count  |
+--------+
| 564870 |
+--------+
Time: 117560.635 ms
Which is approximately 4804 re
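The quoted figures can be sanity-checked with quick arithmetic; the row count and timing are taken from this message, and the byte count from the relation_size output posted earlier in the thread:

```python
rows = 564_870              # count(*) result reported above
seconds = 117.560635        # reported query time, in seconds
size_bytes = 1_762_639_872  # relation_size reported for general.rfis

rows_per_sec = rows / seconds
mb_per_sec = size_bytes / seconds / (1024 * 1024)

print(int(rows_per_sec))     # 4804, matching the figure in the message
print(round(mb_per_sec, 1))  # roughly 14.3 MB of heap read per second
```

A sequential scan of 1.7 GB at ~14 MB/s is what you would expect from a busy single IDE disk, which supports the "hardware or bloat?" question being debated in the thread.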