If I insert a bunch of rows, then truncate, then insert a bunch more rows, do I
need to vacuum? I've been assuming that TRUNCATE TABLE is a brute-force
technique that more-or-less tosses the old table and starts fresh so that no
vacuum is necessary.
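That assumption is correct as far as I know: TRUNCATE gives the table a fresh disk file, so there are no dead rows left to reclaim. A minimal sketch (table and data made up); an ANALYZE afterward is still worthwhile, since statistics aren't carried over:

    CREATE TABLE demo (id integer, payload text);
    INSERT INTO demo SELECT g, 'x' FROM generate_series(1, 100000) g;
    TRUNCATE TABLE demo;   -- old heap file is discarded outright; nothing to vacuum
    INSERT INTO demo SELECT g, 'y' FROM generate_series(1, 100000) g;
    ANALYZE demo;          -- refresh planner statistics for the new contents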
Second question: Same scenario as above, bu
There have been dozens, perhaps hundreds, of entries in the pg-admin,
pg-general, and pg-performance lists regarding killing a session, but as far as
I can tell, there is no Postgres solution. Did I miss something?
This raises the question: Why doesn't Postgres have a "kill session" command that works?
Magnus Hagander wrote:
This raises the question: Why doesn't Postgres have a "kill
session" command that works? Oracle has it, and it's
invaluable; there is no substitute. Various writers to these
PG lists have raised the question repeatedly. Is it just a
matter that nobody has had the time
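An aside: even without a built-in "kill session" command, the offending backend can at least be located from SQL. A hedged sketch; in the 8.x-era releases the columns are procpid and current_query, while later releases rename them to pid and query:

    SELECT procpid, usename, current_query
    FROM pg_stat_activity;    -- one row per backend: PID, user, running query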
I can't find an address to complain about the mailing list itself, so apologies
but I'm posting directly to this list. Every time I post to this group, I get
returned mails about OTHER subscribers' invalid accounts, like the one below.
What's up? This seems to be a new phenomenon. Should th
I wrote:
I can't find an address to complain about the mailing list itself, so
apologies but I'm posting directly to this list. Every time I post to
this group, I get returned mails about OTHER subscribers' invalid
accounts, like the one below.
Michael Glaesemann replied:
Is this when you're
Thanks for your reply, Mark:
I'm back to my original question: How do you kill a runaway query
without bringing down the whole database? Is there really no answer
to this?
... if you actually want to disconnect a backend
process then there is nothing to let you do this remotely. I recently
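As a footnote, later PostgreSQL releases did add exactly this: pg_cancel_backend() cancels a backend's current query, and pg_terminate_backend() (new in 8.4) disconnects the session entirely. A hedged sketch with a made-up PID:

    SELECT pg_cancel_backend(12345);     -- SIGINT: cancel the running query
    SELECT pg_terminate_backend(12345);  -- SIGTERM: end the session (8.4+)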
Steinar H. Gunderson wrote:
On Wed, Jul 12, 2006 at 08:43:18AM -0700, Craig A. James wrote:
Then you killed the wrong backend...
No queries run in postmaster. They all run in postgres backends. The
postmaster does very little actual work, other than keeping track of
everybody else.
It turns out
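A small aside, not from the thread: one way to be sure which backend to signal is to have the session report its own PID before running anything expensive:

    SELECT pg_backend_pid();   -- PID of the backend serving this connection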
Tom Lane wrote:
"Craig A. James" <[EMAIL PROTECTED]> writes:
Bottom line is that I was expecting "instant death" with SIGTERM, but
instead got an agonizing, drawn out -- but safe -- death of the query.
What was the query exactly?
Our expectation is that all or at
Hassan,
1. I have a function that, given two parameters, produces an arbitrary id and
text. However arbitrary the id and text are, they are in a certain order; i.e., it
is imperative that whatever processing I do, the order is preserved.
What type of function is this? Did you write it in C? An SQ
The real issue here is caching across successive queries, an effect that
Postgres doesn't deal with very well at the moment. If you run these
queries from a standing start (freshly booted machine) you'll likely
find that the indexscan plan is indeed slower than the seqscan/hash
plan, just like th
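A hedged way to observe the effect Tom describes (table and query are hypothetical): run the identical statement twice and compare timings; the first run pays for real disk reads, the second is served largely from cache.

    EXPLAIN ANALYZE SELECT * FROM sample WHERE id BETWEEN 1000 AND 2000;  -- cold
    EXPLAIN ANALYZE SELECT * FROM sample WHERE id BETWEEN 1000 AND 2000;  -- warm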
Michael Stone wrote:
On Sat, Jul 22, 2006 at 10:26:53AM -0700, Craig A. James wrote:
This causes massive file-system activity and flushes all files that
the kernel has cached. If you run this between each Postgres test
(let it run for a couple minutes), it gives you an apples-to-apples
Suppose, hypothetically of course, someone lacked foresight, and put a tablespace somewhere with a
dumb name, like "/disk2", instead of using a symbolic link with a more descriptive name.
And then /disk2 needs to be renamed, say to "/postgres_data", and this (hypothetical)
DBA realizes he has
tin tout KB/t tps MB/s KB/t tps MB/s us ni sy in id
1 14 128.00 1 0.10 128.00 1 0.10 5 0 94 1 0
0 12 123.98 104 12.56 123.74 104 12.56 8 0 90 2 0
0 12 125.66 128 15.75 125.26 128 15.68 10 0 85 6 0
0 12 124.66 129 15.67 124.39 129 15.64 12 0
I'm experimenting with RAID, looking for an inexpensive way to boost performance.
I bought 4 Seagate 7200.9 120 GB SATA drives and two SIIG dual-port SATA cards.
(NB: I don't plan to run RAID 0 in production, probably RAID 10, so no need to
comment on the failure rate of RAID 0.)
I used this ra
Alan Hodgson wrote:
On Thursday 14 September 2006 11:05, "Craig A. James"
<[EMAIL PROTECTED]> wrote:
I'm experimenting with RAID, looking for an inexpensive way to boost
performance. I bought 4 Seagate 7200.9 120 GB SATA drives and two SIIG
dual-port SATA cards. (NB: I don
I have two tables, SAMPLE and HITLIST_ROWS, that when joined generate a monstrous
sort.
HITLIST_ROWS has about 48,000 rows
SAMPLE has about 16 million rows
The joined column is indexed in SAMPLE
HITLIST_ROWS is a scratch table which is used a few times then discarded.
HITLIST_ROWS has no index
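A hedged sketch of the usual first steps for a scratch table like this (the join column name is made up, and the work_mem value is purely illustrative):

    ANALYZE hitlist_rows;    -- a scratch table has no statistics until analyzed
    SET work_mem = '256MB';  -- give the sort or hash a chance to stay in memory
    EXPLAIN ANALYZE
    SELECT s.*
    FROM sample s
    JOIN hitlist_rows h ON h.sample_id = s.sample_id;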
... and add 100 other problems. Hints are used because the DBA thinks that
they are smarter than the optimizer; 99% of the time, they are wrong.
Just try manually optimizing a complex query, you'll see -- with three
join types, several scan types, aggregates, bitmaps, internal and external
so
Bruce Momjian wrote:
I can do 100! on my computer, but can't do it in my head.
A poor example. 100! is a simple repetitive calculation, something computers
are very good at. Optimizing an SQL query is very difficult, and a completely
different class of problem.
The fact is the PG team has
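For the record, the arithmetic really is trivial for the machine; in a modern PostgreSQL (8.4 or later) you can even compute 100! in SQL with a recursive CTE and arbitrary-precision numeric:

    WITH RECURSIVE f(n, fact) AS (
        SELECT 1, 1::numeric
        UNION ALL
        SELECT n + 1, fact * (n + 1) FROM f WHERE n < 100
    )
    SELECT fact FROM f WHERE n = 100;   -- the 158-digit result, instantly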
Mark Kirkwood wrote:
The result? I can't use my function in any WHERE clause that involves
any other conditions or joins. Only by itself. PG will occasionally
decide to use my function as a filter instead of doing the join or the
other WHERE conditions first, and I'm dead.
this is an argum
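The era's standard workaround is worth noting here; a hedged sketch with hypothetical names: wrap the cheap conditions in a subquery with OFFSET 0, which PostgreSQL treats as an optimization fence, so the expensive function only sees the surviving rows.

    SELECT *
    FROM (SELECT * FROM mols WHERE dataset_id = 42 OFFSET 0) sub  -- fence
    WHERE expensive_match(sub.structure, 'c1ccccc1');             -- runs last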
Brian Herlihy wrote:
PG does support hints, actually...
The only thing is, the hints are expressed in an obscure, ad-hoc and
implementation-dependent language.
For example, the "Don't use index X" hint (the one I used) can be accessed by
replacing your index with an index on values derived from t
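For example, a hedged illustration of the trick Brian describes: adding a no-op to the indexed column makes the index unusable for that clause, which amounts to a "don't use index X" hint.

    -- assume an index on t(x); the + 0 hides x from it:
    SELECT * FROM t WHERE x + 0 = 42;   -- planner is forced off the index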
Josh Berkus wrote:
I actually think the way to attack this issue is to discuss the kinds of
errors the planner makes, and what tweaks we could do to correct them.
Here's the ones I'm aware of:
-- Incorrect selectivity of WHERE clause
-- Incorrect selectivity of JOIN
-- Wrong estimate of rows r
So let's cut to the bone: If someone thinks a proposal is a bad idea, and
they're volunteering their time on an open-source project, why would they
implement the proposal?
In all the heat and smoke, I believe there are two basic conclusions we all
agree on.
1. Optimizer:
a) A perfect optimi
These tables are particularly egregious examples of ignorant database
design. You need to understand the relational model
This email is a *particularly* egregious example of rudeness. You owe Mr.
Staubo, and the Postgres community, an apology.
There is absolutely no reason to insult peop
Alexander Staubo wrote:
On Oct 17, 2006, at 17:10 , Craig A. James wrote:
These tables are particularly egregious examples of ignorant
database design. You need to understand the relational model
This email is a *particularly* egregious example of rudeness. You owe
Mr. Staubo, and the
Jim C. Nasby wrote:
http://stats.distributed.net used to use a perl script to do some
transformations before loading data into the database. IIRC, when we
switched to using C we saw 100x improvement in speed, so I suspect that
if you want performance, perl isn't the way to go. I think you can
comp
Jim C. Nasby wrote:
Well, given that perl is using an entire CPU, it sounds like you should
start looking either at ways to remove some of the overhead from perl,
or to split that perl into multiple processes.
I use Perl for big database copies (usually with some processing/transformation
alon
Jim C. Nasby wrote:
Wait... so you're using perl to copy data between two tables? And using
a cursor to boot? I can't think of any way that could be more
inefficient...
What's wrong with a plain old INSERT INTO ... SELECT? Or if you really
need to break it into multiple transaction blocks, at le
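For completeness, the shape I mean (table and function names hypothetical); the transformation runs entirely inside the server, with no per-row round trips to the client:

    INSERT INTO new_table (id, payload)
    SELECT id, transform(payload)
    FROM old_table;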
Spiegelberg, Greg wrote:
The data are on two different computers, and I do processing
of the data as it passes through the application. Otherwise,
the INSERT INTO ... SELECT is my first choice.
Would dblink() help in any way?
It might if perl wasn't so damned good at this. ;-)
Craig
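For anyone curious, a hedged sketch of what the dblink() route would look like; it requires the dblink contrib module, and the connection string and names here are made up:

    SELECT *
    FROM dblink('host=otherbox dbname=src user=copyuser',
                'SELECT id, payload FROM old_table')
         AS t(id integer, payload text);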
A few months ago a couple guys got "bragging rights" for having the most
separate databases. A couple guys claimed several hundred databases and one said he had
several thousand databases. The consensus was that Postgres has no problem handling many
separate databases.
I took that to heart a
For the third time today, our server has crashed, or frozen, actually something
in between. Normally there are about 30-50 connections because of mod_perl
processes that keep connections open. After the crash, there are three
processes remaining:
# ps -ef | grep postgres
postgres 23832 1
Russell Smith wrote:
For the third time today, our server has crashed...
I would guess it's the linux OOM if you are running linux. You need to
turn off killing of processes when you run out of memory. Are you
getting close to running out of memory?
Good suggestion, it was a memory leak in
By the way, in spite of my questions and concerns, I was *very* impressed by
the recovery process. I know it might seem like old hat to you guys to watch
the WAL in action, and I know on a theoretical level it's supposed to work, but
watching it recover 150 separate databases, and find and fix
Richard Troy wrote:
I did that - spent about two f-ing hours looking for what I wanted. (Guess
I entered poor choices for my searches. -frown- ) There are a LOT of
articles that TALK ABOUT OOM, but precious few actually tell you what you
can do about it.
Trying to save you some time:
On linux
Michael Stone wrote:
At one point someone complained about the ability to configure, e.g.,
IRIX to allow memory overcommit. I worked on some large IRIX
installations where full memory accounting would have required on the
order of 100s of gigabytes of swap, due to large shared memory
allocatio
You realize that it had to be turned on explicitly on IRIX, right? But
don't let facts get in the way of a good rant...
On the contrary, with IRIX 4 and earlier it was the default, but it caused so many problems that
SGI switched the default to OFF in IRIX 5. But because it had been available