Scott Marlowe wrote:
On Mon, Jan 4, 2010 at 3:13 PM, Greg Smith wrote:
Madison Kelly wrote:
I think for now, I will stick with 8.1, but I will certainly try out your
repo edit above on a test machine and see how that works out. I am always
reluctant to change something as fundamental as
Rosser Schwarz wrote:
On Mon, Jan 4, 2010 at 2:57 PM, Madison Kelly wrote:
As for autovacuum, I assumed (yes, I know) that all v8.x releases enabled it
by default. How would I confirm that it's running or not?
I believe it's not enabled by default in 8.1-land, and is as of 8.2
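For the archives: on 8.1 you can confirm whether autovacuum is actually able to run with a few SHOW commands. A sketch; note that in 8.1 autovacuum is off by default, additionally depends on the row-level stats collector, and (for what it's worth) only became on-by-default in 8.3:

```sql
-- Off by default in 8.1; on by default from 8.3 onward:
SHOW autovacuum;
-- In 8.1/8.2 autovacuum also requires both of these to be on:
SHOW stats_start_collector;
SHOW stats_row_level;
```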
Steve Crawford wrote:
Madison Kelly wrote:
I wanted to get ahead of the problem, hence my question here. :)
I've set this to run at night ('iwt' being the DB in question):
su postgres -c "psql iwt -c \"VACUUM ANALYZE VERBOSE\""
And why not the vacu
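A cron sketch of the same nightly job, for reference (the 02:00 schedule and log path are illustrative; 'iwt' is the database from the command above, and vacuumdb is the command-line wrapper bundled with PostgreSQL):

```
# postgres user's crontab -- nightly VACUUM ANALYZE at 02:00:
0 2 * * * psql iwt -c "VACUUM ANALYZE VERBOSE" >> /var/log/pg_vacuum.log 2>&1
# or, equivalently, using the bundled wrapper utility:
# 0 2 * * * vacuumdb --analyze --verbose iwt >> /var/log/pg_vacuum.log 2>&1
```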
Steve Crawford wrote:
Madison Kelly wrote:
Steve Crawford wrote:
Madison Kelly wrote:
Hi all,
I've got a fairly small DB...
It slows down over time and I can't seem to find a way to get the
performance to return without doing a dump and reload of the
database...
Yup, I
Brad Nicholson wrote:
I think you are going down the wrong route here - you should be looking
at preventative maintenance instead of fixing it after it's broken.
Ensure that autovacuum is running for the database (assuming that you
are on a relatively modern version of PG), and possibly tune it t
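A postgresql.conf sketch of what "ensure autovacuum is running and possibly tune it" can look like; the values are illustrative starting points, not recommendations for this particular database:

```
autovacuum = on
# vacuum a table once roughly 10% of its rows are dead,
# rather than waiting for the default threshold:
autovacuum_vacuum_scale_factor = 0.1
# refresh planner statistics a bit more eagerly as well:
autovacuum_analyze_scale_factor = 0.05
```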
Kevin Grittner wrote:
Madison Kelly wrote:
I've added CLUSTER -> ANALYZE -> VACUUM to my nightly
routine and dropped the VACUUM FULL call.
The CLUSTER is probably not going to make much difference once
you've eliminated bloat, unless your queries do a lot of searches in
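Spelled out, the nightly sequence being discussed looks roughly like this; table and index names are placeholders, and note that on 8.1 the CLUSTER syntax is `CLUSTER indexname ON tablename`:

```sql
-- CLUSTER rewrites the table in index order and drops dead tuples,
-- so it also does the bloat-removal work of VACUUM FULL:
CLUSTER some_index ON some_table;   -- 8.1-style syntax; placeholder names
-- Re-ANALYZE so the planner sees the new physical row ordering:
ANALYZE some_table;
-- A plain (non-FULL) VACUUM afterwards is cheap by comparison:
VACUUM some_table;
```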
Gary Doades wrote:
From your queries it definitely looks like its your stats that are the
problem. When the stats get well out of date the planner is choosing a
hash join because it thinks thousands of rows are involved whereas only
a few are actually involved. That's why, with better stats, t
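The immediate fix for stats that far out of date, plus the knob for keeping a badly-estimated column better sampled, look roughly like this (table and column names are placeholders):

```sql
-- Refresh the planner's row-count estimates right away:
ANALYZE some_table;
-- For a column whose estimates drift badly, collect a larger sample:
ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 100;
ANALYZE some_table (some_column);
```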
Steve Crawford wrote:
Madison Kelly wrote:
Hi all,
I've got a fairly small DB...
It slows down over time and I can't seem to find a way to get the
performance to return without doing a dump and reload of the database...
Some questions:
Is autovacuum running? This is the m
Hi all,
I've got a fairly small DB (~850MB when pg_dump'ed) running on PgSQL
v8.1.11 on a CentOS 5.3 x86_64 Xen-based virtual machine. The filesystem is
ext3 on LVM with 32MB extents. It's about the only real resource-hungry
VM on the server.
It slows down over time and I can't seem to find a w
Decibel! wrote:
On Aug 15, 2008, at 1:36 PM, Madison Kelly wrote:
The 'cust_id' references the customer that the given data belongs to.
The reason for this "data bucket" (does this structure have a proper
name?) is that the data I need to store on a given customer is quite
Hi all,
I've got a simple table with a lot of data in it:
CREATE TABLE customer_data (
cd_id int primary key
default(nextval('cd_seq')),
cd_cust_id int not null,
cd_variable text not null
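The definition is cut off above, but the shape is a classic entity-attribute-value ("data bucket") table; a typical lookup against it would be something like this ('cd_value' is an assumed name for the truncated value column):

```sql
-- All attributes stored for one customer (cd_value is an assumption,
-- since the original column list is truncated):
SELECT cd_variable, cd_value
  FROM customer_data
 WHERE cd_cust_id = 42;
```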
Mark Stosberg wrote:
Madison Kelly wrote:
I think the more direct question I was trying to get at is "How do you
build a 'relevance' search engine? One where results are returned/sorted
by relevance of some sort?". At this point, the best I can think of,
would be to perf
Mark Stosberg wrote:
Joshua D. Drake wrote:
Madison Kelly wrote:
Hi all,
I am asking in this list because, at the end of the day, this is a
performance question.
I am looking at writing a search engine of sorts for my database. I
have only ever written very simple search engines before
Hi all,
I am asking in this list because, at the end of the day, this is a
performance question.
I am looking at writing a search engine of sorts for my database. I
have only ever written very simple search engines before which amounted
to not much more than the query string being used w
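For relevance-ranked results, the usual answer is full-text search with a rank function. In the 8.1 era that meant the contrib tsearch2 module; with the built-in machinery of later releases, a minimal sketch looks like this (the 'documents' table and its columns are placeholders):

```sql
SELECT title,
       ts_rank(to_tsvector('english', body),
               to_tsquery('english', 'backup & linux')) AS rank
  FROM documents
 WHERE to_tsvector('english', body) @@ to_tsquery('english', 'backup & linux')
 ORDER BY rank DESC
 LIMIT 10;
```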
Tobias Brox wrote:
[Madison Kelly - Mon at 08:48:19AM -0500]
Ah, sorry, long single queries is what you meant.
No - long running single transactions :-) If it's only read-only
queries, one will probably benefit by having one transaction for every
query.
In this case, what happens i
Tobias Brox wrote:
[Madison Kelly - Mon at 08:10:12AM -0500]
to run, which puts it into your "drawback" section. The server in
question is also almost always under load of some sort, too.
A great tip and one I am sure to make use of later, thanks!
I must have been sleepy, listing up
Andreas Kostyrka wrote:
On Thursday, 2006-11-02 at 09:41 -0600, Scott Marlowe wrote:
Sometimes it's the simple solutions that work best. :) Welcome to the
world of pgsql, btw...
OTOH, there are also non-simple solutions to this, which might make
sense anyway: Install slony, and run your
Scott Marlowe wrote:
Nope, the priorities don't pass down. You connect via a client lib to
the server, which spawns a backend process that does the work for you.
The backend process inherits its priority from the postmaster that
spawns it, and they all run at the same priority.
Shoot, but fi
Tobias Brox wrote:
[Madison Kelly - Thu at 10:25:07AM -0500]
Will the priority of the script pass down to the pgsql queries it calls?
I figured (likely incorrectly) that because the queries were executed by
the PostgreSQL server, the queries ran with the server's priority.
I think you are
Scott Marlowe wrote:
On Thu, 2006-11-02 at 09:14, Madison Kelly wrote:
Hi all,
I've got a script (perl, in case it matters) that I need to run once
a month to prepare statements. This script queries and updates the
database a *lot*. I am not concerned with the performance of the SQL
Hi all,
I've got a script (perl, in case it matters) that I need to run once
a month to prepare statements. This script queries and updates the
database a *lot*. I am not concerned with the performance of the SQL
calls so much as I am about the impact it has on the server's load.
Is there a
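As the replies note, the backend inherits its priority from the postmaster, so renicing helps only the client side of the work; still, lowering the script's own CPU (and, on Linux, I/O) priority can soften its impact. A command sketch ('monthly_run.pl' is a placeholder name for the script):

```
# Lowest CPU priority, idle-class I/O (ionice is Linux-only); this
# affects only the Perl client, not the PostgreSQL backend:
nice -n 19 ionice -c 3 perl monthly_run.pl
```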
t, last I checked, MySQL doesn't have an equivalent to
PostgreSQL's 'fsync' which helps insure that data is actually written to
the disk. This costs performance but increases reliability and crash
recovery.
DBs (using v4.1 for MySQL which is also
mature at this point).
As others mentioned though, so far the most likely explanation is the
'fsync' being enabled on PostgreSQL.
Thanks for the reply!
Madison
--
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Madison Kelly (Digimer)
Stephen Frost wrote:
* Madison Kelly ([EMAIL PROTECTED]) wrote:
If the performance difference comes from the 'COPY...' command being
slower because of the automatic quoting can I somehow tell PostgreSQL
that the data is pre-quoted? Could the performance difference be
something else?
If it would help I can provide code samples. I haven't done so yet
because it's a little convoluted. ^_^;
Thanks as always!
Madison
Where the big performance concern is when
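On the quoting question: COPY's default text format does not quote at all (it only backslash-escapes delimiters, newlines, and backslashes), while CSV mode lets you declare how incoming data is quoted. A sketch (file paths and table name are placeholders):

```sql
-- Default text format: tab-delimited, backslash escapes, no quoting:
COPY some_table FROM '/tmp/data.txt';
-- CSV mode, declaring that fields are quoted with double quotes:
COPY some_table FROM '/tmp/data.csv' WITH CSV QUOTE AS '"';
```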
p = 't'::bpchar) OR ((file_backup = 'f'::bpchar)
AND (file_parent_dir ~ '^/'::text)))
Total runtime: 60.359 ms
(4 rows)
Bingo!
Hopefully someone might find this useful in the archives. :p
Madison
Madison Kelly wrote:
Hi all,
I am trying to do an update o
Hi all,
I am trying to do an update on a table but so far I can't seem to
come up with a usable index. After my last question/thread the user
'PFC' recommended I store whether a file was to be backed up as either
't'(rue), 'f'(alse) or 'i'(nherit) to speed up changing files and sub
director
PFC wrote:
Hello,
I once upon a time worked in a company doing backup software and I
remember these problems; we had exactly the same!
Pretty neat. :)
The file tree was all in memory and every time the user clicked on
something it haaad to update everything. Being C++ it wa
Hi all,
I hope I am not asking too many questions. :)
I have been trying to solve a performance problem in my program for a
while now and, after getting an index to work which didn't speed things
up enough, I am stumped. I am hoping someone here might have come across
a similar issue and
Tom Lane wrote:
Madison Kelly <[EMAIL PROTECTED]> writes:
Can anyone see why the index might not be being used?
You didn't initdb in 'C' locale. You can either re-initdb,
or create a specialized index with a non-default operator class
to support LIKE.
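Tom's second option, spelled out: in a non-C locale a plain btree index cannot serve anchored LIKE patterns, but one built with the pattern operator class can (index, table, and column names are placeholders):

```sql
-- text_pattern_ops makes this index usable for LIKE 'prefix%' (and
-- '^'-anchored regexes) even when the database locale is not C:
CREATE INDEX file_info_name_like_idx
    ON file_info (file_name text_pattern_ops);
```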
I know that 'tsearch2' would probably work but it seems like way more
than I need (because I will never be searching the middle of a string).
Thanks for any advice/help/pointers!
Madison
) || die...
} else {
# do nothing -- postgresql will figure it out
}
That is a wonderful idea and I already have the foundation in place to
easily implement this. Thanks!!
Madison
Tom Lane wrote:
Madison Kelly <[EMAIL PROTECTED]> writes:
So the index obviously provides a major performance boost! I just need
to figure out how to tell the planner how to use it...
Simple division shows that the planner's cost estimate ratio between the
seqscan and th
Bruno Wolff III wrote:
On Mon, Jun 13, 2005 at 15:05:00 -0400,
Madison Kelly <[EMAIL PROTECTED]> wrote:
Wow!
With the sequence scan off my query took less than 2sec. When I turned
it back on the time jumped back up to just under 14sec.
tle-bu=> set enable_seqscan = off; S
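For anyone repeating the experiment from the archives, the pattern is (the query itself is elided here):

```sql
SET enable_seqscan = off;        -- discourage seqscans, session-local
EXPLAIN ANALYZE SELECT ...;      -- placeholder for the query under test
SET enable_seqscan = on;         -- restore the default
```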
(16GB RAM should
allow for that plus other OS overhead).
Well, I'm relatively new, so I'll defer to others, but this is my suggestion.
Best of luck!
Madison
Tom Lane wrote:
Madison Kelly <[EMAIL PROTECTED]> writes:
Bruno Wolff III wrote:
Please actually try this before changing anything else.
If I follow correctly: I tried it, but still got the sequential scan.
Given the fairly large number of rows being selected, it seems likely
th
stead of using the index.
For what it's worth, and being somewhat of a n00b, I agree with the idea
of a smarter, more flexible planner. I guess the trade off is the added
overhead needed versus the size of the average query.
Madison
ow all three 'ORDER BY...' items match the three columns in
the index.
Again, thanks!
Madison
Bruno Wolff III wrote:
On Sun, Jun 12, 2005 at 23:42:05 -0400,
Madison Kelly <[EMAIL PROTECTED]> wrote:
As you probably saw in my last reply, I went back to the old index and
tried the query you and Tom Lane recommended. Should this not have
caught the index?
Probably, but there mi
Bruno Wolff III wrote:
On Sun, Jun 12, 2005 at 22:00:01 -0500,
Bruno Wolff III <[EMAIL PROTECTED]> wrote:
On Sun, Jun 12, 2005 at 18:52:05 -0400,
Madison Kelly <[EMAIL PROTECTED]> wrote:
After sending that email I kept plucking away and in the course of
doing so decided t
Bruno Wolff III wrote:
On Sun, Jun 12, 2005 at 18:52:05 -0400,
Madison Kelly <[EMAIL PROTECTED]> wrote:
After sending that email I kept plucking away and in the course of
doing so decided that I didn't need to return the 'file_type' column.
Other than that, it w
Tom Lane wrote:
Madison Kelly <[EMAIL PROTECTED]> writes:
Here is my full query:
tle-bu=> EXPLAIN ANALYZE SELECT file_name, file_parent_dir, file_type
FROM file_info_7 WHERE file_type='d' ORDER BY file_parent_dir ASC,
file_name ASC;
This is my index (whi
use. Do I need to do
something different because of the 'ORDER BY...'?
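The usual shape of the fix for this kind of query is an index whose leading column matches the equality filter and whose remaining columns match the ORDER BY; a sketch based on the query quoted above (the index name is made up):

```sql
-- file_type = 'd' is the filter; the ORDER BY columns follow it, so
-- the planner can return rows already in the requested order:
CREATE INDEX file_info_7_type_dir_name_idx
    ON file_info_7 (file_type, file_parent_dir, file_name);
```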
Thanks again for the replies!
Madison
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
...' it will still do the
sequential scan anyway.
So I need to either find an Index that will work with regexes or
re-write my code to update each subdirectory separately and use a simpler
UPDATE statement for each.
Thanks again!
Madison
PS - I cc'ed the list to follow up on what I
ndex but if I am reading the "EXPLAIN"
output right it isn't, but is instead doing a sequential scan. If that is
the case, how would I best built the index? Should I have just used the
'file_parent_dir' and 'fil
Richard Huxton wrote:
Madison Kelly wrote:
Hi all,
Is there a fast(er) way to get the sum of all integer values for a
certain condition over many thousands of rows? What I am currently
doing is this (which takes ~5-10sec.):
OK, I'm assuming you've configured PG to your satisfactio
Hi all,
Is there a fast(er) way to get the sum of all integer values for a
certain condition over many thousands of rows? What I am currently doing
is this (which takes ~5-10sec.):
SELECT SUM (a.file_size) FROM file_info_1 a, file_set_1 b WHERE
a.file_name=b.fs_name AND a.file_parent_dir=b.fs
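Without the full plan it is hard to say more, but the first thing to check for a join like this is that the join columns are indexed; a sketch (the second fs_ column is cut off above, so 'fs_parent_dir' is an assumption, as are the index names):

```sql
CREATE INDEX file_set_1_join_idx
    ON file_set_1 (fs_name, fs_parent_dir);   -- fs_parent_dir assumed
CREATE INDEX file_info_1_join_idx
    ON file_info_1 (file_name, file_parent_dir);
```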
Madison Kelly wrote:
Nope, Raid 10 (one zero) is a mirror is stripes, no parity. with r10
Woops, that should be "mirror of stripes".
By the way, what you are thinking of is possible; it would be 15 (one
five; a RAID 5 built on mirrors) or 51 (five one; a mirror of RAID 5 arrays).
Always be c
Mitch Pirtle wrote:
On Mon, 13 Dec 2004 09:23:13 -0800, Joshua D. Drake
<[EMAIL PROTECTED]> wrote:
RAID 10 will typically always outperform RAID 5 with the same HD config.
Isn't RAID10 just RAID5 mirrored? How does that speed up performance?
Or am I missing something?
-- Mitch
Hi Mitch,
Nope,