Colton A Smith <[EMAIL PROTECTED]> wrote ..
> Seq Scan on sensor (cost=0.00..1.25 rows=1 width=6) (actual
> time=0.055..0.068 rows=1 loops=1)
> Filter: (sensor_id = 12)
> Total runtime: 80164
Pailloncy Jean-Gerard <[EMAIL PROTECTED]> wrote ..
[snip]
THIS MAY SEEM SILLY, but vacuum is misspelled below, and presumably there was
never any ANALYZE done.
>
> postgres=# vaccum full verbose analyze;
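For the record, the misspelled command above simply errors out; the correct spelling, which both reclaims dead-tuple space and refreshes the planner's statistics, is:

```sql
VACUUM FULL VERBOSE ANALYZE;
```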
---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faq
I don't believe DROP is necessary; use TRUNCATE instead. No need to re-create
dependent objects.
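A minimal sketch with an invented table name; TRUNCATE empties the table in one shot while keeping its definition, indexes, and permissions:

```sql
TRUNCATE TABLE stats_history;
-- versus: DROP TABLE stats_history; followed by re-creating the table,
-- its indexes, grants, and any dependent views
```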
"Peter Childs" <[EMAIL PROTECTED]> wrote ..
> On 24/11/06, Arnau <[EMAIL PROTECTED]> wrote:
> > Hi all,
> >
> >I have a table with statistics with more than 15 million rows. I'd
> > like to delete
- Original Message -
From: "Jim C. Nasby" <[EMAIL PROTECTED]>
To: "Bob Dusek" <[EMAIL PROTECTED]>
Cc:
Sent: Wednesday, January 10, 2007 2:01 PM
Subject: Re: [PERFORM] performance implications of binary placement
Are you 100% certain that both builds are using all the same libraries?
valgog <[EMAIL PROTECTED]> wrote ..
> On Jul 23, 7:00 pm, [EMAIL PROTECTED] (Tom Lane) wrote:
> > valgog <[EMAIL PROTECTED]> writes:
> > > how to build an multicolumn index with one column order ASCENDING and
> > > another column order DESCENDING?
> >
> > Use 8.3 ;-)
> >
> > In existing releases yo
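For reference, the 8.3 syntax Tom is pointing at lets each column of the index carry its own sort direction (table and column names here are placeholders):

```sql
-- PostgreSQL 8.3+: per-column sort order in a multicolumn index
CREATE INDEX t_a_asc_b_desc ON t (a ASC, b DESC);
```

An ORDER BY a ASC, b DESC can then be satisfied by a forward scan of this index.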
Tilmann Singer <[EMAIL PROTECTED]> wrote ..
> * Nis Jørgensen <[EMAIL PROTECTED]> [20070727 20:31]:
> > How does the "obvious" UNION query do - ie:
> >
> > SELECT * FROM (
> > SELECT * FROM large_table lt
> > WHERE lt.user_id = 12345
> >
> > UNION
> >
> > SELECT * FROM large_table lt
> > WHERE u
As other posters have pointed out, you can overcome the ORDER BY/LIMIT
restriction on UNIONs with parentheses. I think I misbalanced the parentheses
in my original post, which would have caused an error if you just copied and
pasted.
I don't think the limitation has to do with planning--just pa
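For anyone copying this later, the balanced form looks like the following (column names are made up; each branch gets its own parenthesized ORDER BY/LIMIT, and the outer one applies to the combined result):

```sql
(SELECT * FROM large_table WHERE user_id = 12345
 ORDER BY created_at DESC LIMIT 10)
UNION
(SELECT * FROM large_table WHERE user_id = 54321
 ORDER BY created_at DESC LIMIT 10)
ORDER BY created_at DESC LIMIT 10;
```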
If you have only recently analyzed the tables in the query, close your psql
session (if that's what you were using) and then restart it. I've gotten burned
by running a query that uses the function (which I believe is when PG creates the
plan for the function) and then making significant changes to t
Here's a simplified version of the schema:
Table A has an ID field, an observation date, and other stuff. There are about
20K IDs and 3K observations per ID. Table B has a matching ID field, minimum
and maximum dates, a code, and other stuff, about 0-50 records per ID. For a
given ID, the dates
> I have a performance problem when traversing a table in index order with
> multiple columns including a date column in date reverse order. Below
> follows a simplified description of the table, the index and the
> associated query
>
> \d prcdedit
> prcdedit_prcd | character(20)
EXPLAIN
INSERT INTO public.historical_price ( security_serial_id, [7 fields of proprietary
data])
SELECT public.security_series.security_serial_id, [7 fields of data],
FROM obsolete.datadb_fix INNER JOIN (obsolete.calcdb INNER JOIN public.security_series
ON obsolete.calcdb.serial=public.security_
I have (among other things) a parent table with 200 records and a child table with
20MM or more. I set up referential integrity on the FK with ON DELETE CASCADE.
It appears that when a DELETE is done on the parent table, the child table deletion is
done with a sequential scan. I say this because
PREPARE c(int4) AS DELETE FROM childtable WHERE fk=$1;
EXPLAIN EXECUTE c(-1);
gives an index scan.
PREPARE c2(int4) AS DELETE FROM parenttable WHERE key=$1;
EXPLAIN EXECUTE c2(1);
gives a seq scan on the parent table (itself a little curious) and no explanation of
what the triggers are doing.
I FOUND IT!
A second trigger that doesn't belong..
OK, we're set now, and thanks for showing me some ways to check what the planner is up
to. Is there a way of seeing what the triggers will do?
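For what it's worth, a stray trigger like that can be spotted with psql's \d tablename, or straight from the catalog:

```sql
-- lists every trigger attached to the child table (name from the PREPARE example)
SELECT tgname, tgenabled
FROM pg_trigger
WHERE tgrelid = 'childtable'::regclass;
```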
"Merlin Moncure" <[EMAIL PROTECTED]> wrote ..
[snip]
> select * from t where
> a >= a1 and
> (a > a1 or b >= b1) and
> (a > a1 or b > b1 or c > c1)
I don't see why this is guaranteed to work without an ORDER BY clause, even if TABLE t
is clustered on the correct index. Am I missing something?
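It isn't guaranteed without one; the expanded predicate only selects the rows, it doesn't order them, so an ORDER BY a, b, c is still required. The predicate itself is exactly strict lexicographic comparison, which a few lines of Python can check (column names mirror the snippet quoted above):

```python
def expanded(a, b, c, a1, b1, c1):
    # the expanded form quoted above
    return a >= a1 and (a > a1 or b >= b1) and (a > a1 or b > b1 or c > c1)

# equivalent to strict lexicographic (a, b, c) > (a1, b1, c1)
for row in [(1, 2, 3), (1, 2, 4), (1, 1, 9), (2, 0, 0), (0, 9, 9)]:
    assert expanded(*row, 1, 2, 3) == (row > (1, 2, 3))
```

(PostgreSQL 8.2 and later also accept the row-value form (a, b, c) > (a1, b1, c1) with true lexicographic semantics.)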
I'm still having trouble with slow cascading DELETEs. What commands can I issue to see
the sequence of events that occurs after I execute
DELETE FROM x WHERE p;
so that I can see if indexes being used correctly, or I have a constraint I don't
want, etc.
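One concrete answer: EXPLAIN ANALYZE on the DELETE (inside a transaction you roll back, since it really executes the statement) prints a per-trigger timing line for each constraint trigger it fires:

```sql
BEGIN;
EXPLAIN ANALYZE DELETE FROM x WHERE p;
-- the output ends with lines of the form:
--   Trigger for constraint <name>: time=... calls=...
ROLLBACK;
```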
Is it possible (to mix two threads) that you had CLUSTERed the table on the old
database in a way that retrieved the records in this query faster?
I use "EXECUTE" inside a stored procedure for just this purpose. This is not the same
as PREPARE/EXECUTE, it lets you send an arbitrary string as SQL within the procedure.
You have to write the query text on the fly in the procedure, which can be a little
messy with quoting and escaping.
Gaetan
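A minimal sketch of the pattern (function and table names invented for illustration); quote_ident takes care of the identifier quoting mentioned above:

```sql
CREATE OR REPLACE FUNCTION row_count(tbl text) RETURNS bigint AS $$
DECLARE
    n bigint;
BEGIN
    -- EXECUTE runs a string built at call time, unlike a pre-planned query
    EXECUTE 'SELECT count(*) FROM ' || quote_ident(tbl) INTO n;
    RETURN n;
END;
$$ LANGUAGE plpgsql;
```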
Let's see if I have been paying enough attention to the SQL gurus. The planner
is making a different estimate of how many rows satisfy deprecated <> '' versus how
many satisfy broken <> ''. I would try SET STATISTICS to a larger number on the ports table,
and re-analyze.
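Concretely (1000 is just an aggressive value; the default in this era is 10):

```sql
ALTER TABLE ports ALTER COLUMN deprecated SET STATISTICS 1000;
ALTER TABLE ports ALTER COLUMN broken SET STATISTICS 1000;
ANALYZE ports;
```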
I'm also an autodidact on DB design, although it's been well more than a year now.
If you are planning to clean up the design, I strongly suggest getting a visual
tool. Google for something like "database design tool". Some are extremely
expensive (e.g. ERwin, which I think is renamed having been bou
Loïc Marteau <[EMAIL PROTECTED]> wrote ..
> Steve Crawford wrote:
> > If this
> > is correct, I'd first investigate simply loading the csv data into a
> > temporary table, creating appropriate indexes, and running a single
> > query to update your other table.
My experience is that this is MUCH faster.
I wrote
> That's what I did at first, but later I found better performance with
> a TRIGGER on the permanent table that deletes the target of an UPDATE,
> if any, before the UPDATE. [INSERT!] That's what PG does anyway, and now I
> can do
> the entire UPDATE [INSERT] in one command.
It's probably
Adam PAPAI <[EMAIL PROTECTED]> wrote ..
> Hi pgsql-performance,
>
> I've a problem with the select * on a small table.
>
I can think of two possibilities for such incredibly slow performance.
One: your table has not been VACUUMed for a long time and is full of dead
tuples. Try VACUUM FULL on it.
"James DeMichele" <[EMAIL PROTECTED]> wrote ..
> Then, I have the following index on the table:
> "this_index" (status, source_id, )
If you have many queries of this type, do
CLUSTER this_index ON tablename;
and retry the SELECT.
MySQL is using some sort of auto-clustering ISAM on the other
Miguel Arroz <[EMAIL PROTECTED]> wrote ..
>I'm testing an update on 66k rows on Postgresql, and it seems
> something is not right here.
>
>My server is a Quad-Xeon 3.2 Ghz with 2 GB RAM and a RAID 1 running
> FreeBSD 6.3 and PgSQL 8.3. My development machine is a PowerBook G4
> 1.67 G
shouldn't ever block other transactions, and this approach
will definitely run that risk.
A
--
Andrew Sullivan | [EMAIL PROTECTED]
A certain description of men are for getting out of debt, yet are
against all taxes for raising money to pay it off.
-
independently came up with the same formula you
say above we are applying. The formula is on the page that is numbered 6,
although it's the tenth page in the PDF.
q = n/N = ratio of sample size to population size
d_sub_n = d = number of distinct classes in sample
cheers
andrew
--
sure
that the result remains bounded by [d,N].
cheers
andrew
to answer :-)
cheers
andrew
t is being applied. If every value in the sample appears more than
once, then f1 in the formula is 0, and the result is then just d, the
number of distinct values in the sample.
cheers
andrew
.3, the estimates on very large tables at least are way way
out.
Or maybe we need to support more than one estimation method.
Or both ;-)
cheers
andrew
On Fri, May 06, 2005 at 02:39:11PM -0700, Mischa Sandberg wrote:
> IBM, Sun and HP have their fairly pricey Opteron systems.
We've had some quite good experiences with the HP boxes. They're not
cheap, it's true, but boy are they sweet.
A
--
Andrew Sullivan | [EMAIL PROTEC
The systems are nevertheless performing very well --
we did a load test that was pretty impressive. Also, Chris Browne
pointed me to this for the drivers:
http://sourceforge.net/projects/cciss/
A
--
Andrew Sullivan | [EMAIL PROTECTED]
esoteric like that. I would think the ability to have an index where the
columns don't all collate in the same direction would be an easy feature
to add.
--
Andrew Lazarus, Pillette Investment Management (Research and Development)
ge row is (e.g.) 2k then the OID will
only be a very small fraction of the data, and removing it will only
make a small difference.
Regards,
Andrew McMillan.
-----
Andrew @ Cataly
ID column and the other columns in
your data.
Regards,
Andrew McMillan.
-----
Andrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington
WEB: http://catalyst.net.nz/
OS is
Debian "Sarge" AMD64 with kernel 2.6.11.8 and PostgreSQL 7.4.7.
Cheers,
Andrew.
-
Andrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington
WEB: http://c
pace available by some amount, but
also meaning that we were unable to migrate our system from a previous
non-RAID card cleanly.
Regards,
Andrew.
-----
Andrew @ Catalyst .Net .NZ Lt
g "Sarge" with 2.6.x kernels.
Regards,
Andrew.
-----
Andrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington
WEB: http://catalyst.net.nz/
PHYS: Level 2, 150
n from 461 pages to 1 page.
Regards,
Andrew.
-----
Andrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington
WEB: http://catalyst.net.nz/
PHYS: Level 2, 150-154 Willis St
DDI: +64(4)803-2201 MOB: +
uld be lost if you did have to restore your data from a day old
backup, so perhaps fsync=false is OK for this particular application.
Regards,
Andrew McMillan.
-
Andrew @ Catalyst .Net .NZ
quent vacuuming would help.
--
Andrew Lazarus, Director, Pillette Investment Management
[EMAIL PROTECTED] | http://www.pillette.com | tel 800-366-0688 | fax 415-440-4093
ld be paying more attention if such a demonstration were
forthcoming, in the form of a viable patch and some benchmark results.
cheers
andrew
, but not in this case.
Cheers,
Andrew.
-----
Andrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington
WEB: http://catalyst.net.nz/
PHYS: Level 2, 150-154 Willis St
DDI: +64(4)803-2201 MOB: +64(272)DEBIAN
more work than
you need to. If your UPDATEs are chasing down a lot of dead tuples,
for instance, you'll peg your I/O even though you ought to have I/O
to burn.
A
--
Andrew Sullivan | [EMAIL PROTECTED]
The whole tendency of modern prose is away from concreteness.
--George Orwell
etail in what
> was done...
Yes, it shows the actual timings, and the actual number of rows. But
if the estimates that the planner makes are wildly different than the
actual results, then you know your statistics are wrong, and that the
planner is going about things the wrong way. ANALYSE is a bi
ris and Tom were telling me
about how to tune my database.
A
--
Andrew Sullivan | [EMAIL PROTECTED]
In the future this spectacle of the middle classes shocking the avant-
garde will probably become the textbook definition of Postmodernism.
--Brad Holland
occasional REINDEX to
solve; I forget which version you said you were using).
The painful part about tuning a production system is really that you
have to keep about 50 variables juggling in your head, just so you
can uncover the one thing that you have to put your finger on to make
it all play
? If it's very large compared to the data you have stored in
there, you may want to ask if you're "leaking" space from the free
space map (because of that table turnover, which seems pretty
severe).
A
--
Andrew Sullivan | [EMAIL PROTECTED]
On Tue, 2005-11-08 at 10:22 +, Christian Paul B. Cosinas wrote:
> I see.
>
> But How Can I put this in the Cron of my Linux Server?
> I really don't have an idea :)
> What I want to do is to loop around all the databases in my server and
> execute the vacuum of these 3 tables in each tables.
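One way to script it (paths, table names, and schedule are all hypothetical; assumes it runs as a user that can connect to every database):

```shell
#!/bin/sh
# vacuum-analyze the same three tables in every connectable database
for db in $(psql -At -c "SELECT datname FROM pg_database WHERE datallowconn"); do
    for tbl in table_one table_two table_three; do
        vacuumdb --dbname="$db" --table="$tbl" --analyze
    done
done
```

Installed from cron with a line such as 0 2 * * * /usr/local/bin/vacuum3.sh to run it nightly at 02:00.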
achine?). Is this a time, for
example, when logrotate is killing your I/O with file moves?
A
--
Andrew Sullivan | [EMAIL PROTECTED]
I remember when computers were frustrating because they *did* exactly what
you told them to. That actually seems sort of quaint now.
--J.D. B
really wedded to this design? (I have a feeling that something along
the lines of what Tom Lane said would be a better answer -- I think
you need to be more clever, because I don't think this will ever work
well, on any system.)
A
--
Andrew Sullivan | [EMAIL PROTECTED]
This work was visionary an
The Apple is, as you say, cheap (except, the Apple markup on the disks
fuzzes that a bit). It's easy to set up, and has been quite reliable for me,
but do not expect anything resembling good DB performance out of it (I gave
up running anything but backup DBs on it). From the mouth of Apple guys, it
On Sun, Nov 26, 2006 at 09:24:29AM -0500, Rod Taylor wrote:
> attempt and fail a large number of insert transactions then you will
> still need to vacuum.
And you still need to vacuum an insert-only table sometimes, because
of the system-wide vacuum requirement.
A
--
Andrew Su
For the reasons indicated (that is, MVCC), PG can not do a DISTINCT or the
equivalent
GROUP BY from index values alone.
If this table is large, perhaps you could denormalize and maintain a
summary table with date (using truncation) and count, updated with
triggers on the original table. This tabl
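A sketch of what that could look like (all names hypothetical; written in pre-8.3-era SQL, so the upsert is an UPDATE-then-INSERT, which is racy under heavy concurrency and may need retry logic):

```sql
CREATE TABLE daily_counts (
    day date PRIMARY KEY,
    n   bigint NOT NULL
);

CREATE OR REPLACE FUNCTION bump_daily() RETURNS trigger AS $$
BEGIN
    UPDATE daily_counts SET n = n + 1 WHERE day = NEW.ts::date;
    IF NOT FOUND THEN
        INSERT INTO daily_counts (day, n) VALUES (NEW.ts::date, 1);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_bump_daily
    AFTER INSERT ON events
    FOR EACH ROW EXECUTE PROCEDURE bump_daily();
```

A matching DELETE trigger would decrement; the count-by-date query then runs against daily_counts instead of the big table.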
64bit PCI bus).
Andrew
>> How about this option:
>>
>> SELECT distinct ip_info.* FROM ip_info RIGHT JOIN network_events USING
>> (ip) RIGHT JOIN host_events USING (ip) WHERE
>> (network_events.name='blah' OR host_events.name = 'blah') AND
>> ip_info.ip IS NOT NULL;
MA> Nah, that seems to be much much worse. The othe
herefore you have to
patch the sources.
The symptom to look for is: largescale filesystem deadlocks with many
processes (especially syncer) blocked in "nbufkv" state.
--
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services
rder,
thus enabling it to take advantage of prefetch and other sequential-read
optimizations (in the underlying OS and disk subsystem, rather than in
pg itself).
--
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services
with the
index in cache, is on the order of 30ms (where the data is cached in
shared_buffers) to 60ms (where the data is cached by the OS). That's on
a 2.8GHz xeon.
--
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services
using a
shared_buffers setting of 5), and increase work_mem to 16364, and
see if there are any noticable changes in behaviour.
--
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services
dred
MB will degrade performance. Have you done any benchmarks on 8.2.x that
show that you get an improvement from this, or did you just take the
"too much of a good thing is wonderful" approach?
Cheers,
erred to_ table if the old and new values match, but not on update
of the _referring_ table.
--
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services
On 2007-04-15, Tom Lane <[EMAIL PROTECTED]> wrote:
> Andrew - Supernews <[EMAIL PROTECTED]> writes:
>> Looking at current CVS code the RI check seems to be skipped on update of
>> the _referred to_ table if the old and new values match, but not on update
>> of th
t of issue?
Thanks.
Andrew Lazarus [EMAIL PROTECTED]
rs, when I killed
it, and I wasn't in retrospect sure an index that took something like
6GB by itself would be helpful on a 2GB of RAM box.
MK> I don't think that will work for the vector norm i.e:
MK> |x - y| = sqrt(sum over j ((x[j] - y[j])^2))
MK> Cheers
MK> Mark
Let me just thank the list, especially for the references. (I found
similar papers myself with Google: and to think I have a university
library alumni card and barely need it any more!)
I'll write again on the sorts of results I get.
--
Andrew Lazarus, Ph.D.
I'm sure
there are improvements that could be made, but overall they don't get in
the way, they do the right thing in the minimal case, and they give the
advanced user a lot more choices about multiple DB instances on the same
machine.
C
duling will really hurt.
This means that, to use CPU scheduling safely, you have to be really
sure that you know what the other transactions are doing.
A
--
Andrew Sullivan | [EMAIL PROTECTED]
Information security isn't a technological problem. It's an economics
problem.
he
> minimum go to a RAID1). Workload will primarily be comprised of queries
I bet that single disk is your problem. Iostat is your friend, I'd
say.
A
--
Andrew Sullivan | [EMAIL PROTECTED]
Everything that happens in the world happens at some place.
efore running this query? Have you altered any other
configuration or planner parameters?
As your "backup_location_rid" is not necessary, I would recommend
dropping that index and creating a new one on just backup_id. This
should be a net wash on space, and the new index should
On Fri, May 18, 2007 at 03:26:08PM -0700, Abu Mushayeed wrote:
> Also, this query ran today and it already finished. Today it was
> IO intensive.
Are you entirely sure that it's not a coincidence, and something
_else_ in the system is causing the CPU issues?
A
--
Andr
worth storing correctly, and so
doing things to improve the chances of correct storage is a good
idea.
A
--
Andrew Sullivan | [EMAIL PROTECTED]
lity, introduced so that _other_ transactions don't get I/O
starved. ("Make vacuum fast" isn't in most cases an interesting
goal.)
A
--
Andrew Sullivan | [EMAIL PROTECTED]
read-only data segments (maybe
partitions, maybe something else) would help, so I know for sure that
someone is working on a problem like this, but I don't think it's the
sort of thing that's going to come any time soon.
A
--
Andrew Sullivan | [EMAIL PROTECTED]
row enough hardware money at it. But
it seems a waste to re-implement something that's already apparently
working for you in favour of something more expensive that you don't
seem to need.
A
--
Andrew Sullivan | [EMAIL PROTECTED]
When my information changes, I alter my con
er system
that I've used that certainly had a similar issue, but I couldn't
show you the data to prove it. Everyone who used it knew about it,
though.
A
--
Andrew Sullivan | [EMAIL PROTECTED]
On Wed, Jun 06, 2007 at 09:20:54PM +0200, Gunther Mayer wrote:
>
> What the heck could cause such erratic behaviour? I suspect some type of
> resource problem but what and how could I dig deeper?
Is something (perhaps implicitly) locking the table? That will cause
this.
A
--
Andrew
nly get
> told about the slow query *after* it has completed and postgres has told
> me so by logging a slow query entry in my logs?
You can't :(
A
--
Andrew Sullivan | [EMAIL PROTECTED]
This work was visionary and imaginative, and goes to show that visionary
-- first thing I'd look at is to
see whether you are in fact hitting 100% of your I/O capacity and, if
so, what your options are for getting more room there.
A
--
Andrew Sullivan | [EMAIL PROTECTED]
"The year's penultimate month" is not in truth a good way
ase works
differently, by taking an exclusive lock, but the basic conceptual
problem is the same.
A
--
Andrew Sullivan | [EMAIL PROTECTED]
Unfortunately reformatting the Internet is a little more painful
than reformatting your hard drive when it gets out of whack.
--Scott Morris
arge one? In the past, that wasn't the case
for relatively small buffers; with the replacement of single-pass
LRU, that has certainly changed, but I'd be surprised if anyone
tested a buffer as large as 32G.
A
--
Andrew Sullivan | [EMAIL PROTECTED]
least limit it to one list?
A
--
Andrew Sullivan | [EMAIL PROTECTED]
, we'd be
violating it, and since we're not, we can't possibly know about it,
right ;-) But there are some materials about why to use Postgres on
the website:
http://www.postgresql.org/about/advantages
A
--
Andrew Sullivan | [EMAIL PROTECTED]
On Mon, Jun 18, 2007 at 02:38:32PM -0400, Andrew Sullivan wrote:
> I've picked -advocacy.
Actually, I _had_ picked advocacy, but had an itchy trigger finger.
Apologies, all.
A
--
Andrew Sullivan | [EMAIL PROTECTED]
at problem. Another part of the
problem was that, for high-contention workloads like the ones we
happened to be working on, an optimistic approach like Postgres-R is
probably always going to be a loser.
A
--
Andrew Sullivan | [EMAIL PROTECTED]
each table -- like maybe in a loop
-- would be better for your case.
A
--
Andrew Sullivan | [EMAIL PROTECTED]
ob. You probably need more I/O, and actually more CPU wouldn't
hurt, because then you could run three VACUUMs on three separate
tables (on three separate disks, of course) and not have to switch
them off and on the CPU.
A
--
Andrew Sullivan | [EMAIL PROTECTED]
y a working
low-level part of your design to get an undemonstrated benefit and
probably a whole lot of new bugs?
A
--
Andrew Sullivan | [EMAIL PROTECTED]
to get faster disk than 6 drives in
RAID5, even if they're 15,000 RPM. The rotation speed is the least
of your problems in many RAID implementations.
A
--
Andrew Sullivan | [EMAIL PROTECTED]
r update to the table).
A
--
Andrew Sullivan | [EMAIL PROTECTED]
ds on your SAN and its hard- and
firm-ware, as well as its ability to interact with the OS. I think
the best answer is "sometimes yes".
A
--
Andrew Sullivan | [EMAIL PROTECTED]
However important originality may be in some fields, restraint and
adherence to procedure emerge as the mor
where the battery is. Even if it's slower (and I don't
know whether it will be), I assume that having the right data more
slowly is better than maybe not having the data at all, quickly.
A
--
Andrew Sullivan | [EMAIL PROTECTED]
The plural of anecdote is not data.
r?
> ...can I use \timing??? I don't get any time when using the
> \timing option...
How so? It returns Time: N ms at the end of output for me.
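For reference, the toggle and its output look like this (timing value elided):

```
db=# \timing
Timing is on.
db=# SELECT 1;
 ?column?
----------
        1
(1 row)

Time: N ms
```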
A
--
Andrew Sullivan | [EMAIL PROTECTED]
size?
> On a dedicated postgres server with 4 Giga RAM. Is there any rule of
> thumb?
> Actually I set it to +-256M.
There has been Much Discussion of this lately on this list. I
suggest you have a look through the recent archives on that topic.
A
--
Andrew Sullivan | [EMAIL PRO
on the order of hours for the EXPLAIN
ANALYSE to return, I assumed that the problem is one of impatience and
not clock cycles. After all, the gettimeofday() additional overhead
is still not going to come in on the order of minutes without a
_bursting_ huge query plan.
A
--
Andrew Sulliva
ur WAL is near to its I/O limits, the only way you're going
to get your redundancy back is to go noticably slower :-(
> will lose a very little bit in comparison. Andrew Sullivan had a
> somewhat similar finding a few years ago on some old Solaris hardware
> that unfortunately isn
e has any
effects on your database. I don't have any PG machines running anything
prior to 8.1.X, so I can't really test these. I just saw the bigint
value as a default for an integer column and it caught my eye.
Hope this might help you avoid some problems when upgrading.
Andrew
our architecture seem, at least to me, to be insurmountable.
Andrew
On 8/19/07, Niklas Saers <[EMAIL PROTECTED]> wrote:
>
> Hi,
> the company I'm doing work for is expecting a 20 times increase in
> data and seeks a 10 times increase in performance. Having pushed our
>