Problem solved by setting:
kern.ipc.semmni: 280
kern.ipc.semmns: 300
Chris.
Mark Kirkwood wrote:
Chris Hebrard wrote:
kern.ipc.shmmax and kern.ipc.shmmin will not stay to what I set them
to.
What am I doing wrong or not doing at all?
These need to go in /etc/sysctl.conf. You might need to set
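For reference, a minimal /etc/sysctl.conf sketch on FreeBSD covering both the shared-memory and semaphore knobs mentioned in this thread (the values are illustrative only, not recommendations):

```
# /etc/sysctl.conf (FreeBSD) -- illustrative values, tune for your workload
kern.ipc.shmmax=134217728   # max shared memory segment size, in bytes
kern.ipc.shmall=32768       # max total shared memory, in pages
kern.ipc.semmni=280         # max number of semaphore identifiers
kern.ipc.semmns=300         # max number of semaphores system-wide
```

Settings placed here are re-applied at boot, which is why values set only with the sysctl command appear not to "stay".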
[EMAIL PROTECTED] (Christopher Petrilli) writes:
On 5/2/05, Tim Terlegård [EMAIL PROTECTED] wrote:
Howdy!
I'm converting an application to be using postgresql instead of
oracle. There seems to be only one issue left, batch inserts in
postgresql seem significant slower than in oracle. I
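A common way to close that gap is to wrap the batch in a single transaction (so fsync is paid once per batch, not once per row) or, better, to use COPY for bulk loads. A hedged sketch; the table name `items` is hypothetical:

```sql
-- Slow under fsync=true: each INSERT is its own transaction.

-- Faster: one transaction around the whole batch.
BEGIN;
INSERT INTO items VALUES (1, 'a');
INSERT INTO items VALUES (2, 'b');
COMMIT;

-- Fastest for bulk loads: COPY streams rows in one command.
COPY items FROM STDIN;
1	a
2	b
\.
```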
josh@agliodbs.com (Josh Berkus) writes:
Bill,
What about if an out-of-the-ordinary number of rows
were deleted (say 75% of rows in the table, as opposed
to normal 5%) followed by a 'VACUUM ANALYZE'? Could
things get out of whack because of that situation?
Yes. You'd want to run REINDEX
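A sketch of the recovery sequence after such an out-of-the-ordinary delete (table name `bigtable` is hypothetical):

```sql
DELETE FROM bigtable WHERE ...;  -- the unusual 75% delete
VACUUM ANALYZE bigtable;         -- reclaim space, refresh planner statistics
REINDEX TABLE bigtable;          -- rebuild indexes bloated by the mass delete
```

Plain VACUUM does not shrink index bloat in this era of PostgreSQL, which is why REINDEX is the extra step here.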
[EMAIL PROTECTED] (Joshua D. Drake) writes:
So, my question is this: My server currently works great,
performance wise. I need to add fail-over capability, but I'm
afraid that introducing a stressful task such as replication will
hurt my server's performance. Is there any foundation to my fears?
[EMAIL PROTECTED] (Dave Held) writes:
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 22, 2005 3:48 PM
To: Greg Stark
Cc: Christopher Browne; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] What about utility to calculate planner cost
parameters
One thing that stands out is how terribly
bad Windows performed with many small single
transactions and fsync=true.
Apparently fsync on Windows is a very costly
operation.
Another (good) thing is that PG beats FB on all
other tests :-)
Bye, Chris
On Thursday 10 February 2005 01:58 pm, Tom Lane wrote:
Chris Kratz [EMAIL PROTECTED] writes:
Does anyone have any idea why there be over a 4s difference between
running the statement directly and using explain analyze?
Aggregate (cost=9848.12..9848.12 rows=1 width=0) (actual
time
that. Any other ideas?
-Chris
On this particular development server, we have:
Athlon XP,3000
1.5G Mem
4x Sata drives in Raid 0
Postgresql 7.4.5 installed via RPM running on Linux kernel 2.6.8.1
Items changed in the postgresql.conf:
tcpip_socket = true
max_connections = 32
port = 5432
the explain again goes
back to almost 5s. Now I wonder why that would be different.
Changing random cpu cost back to 2 nets little difference (4991.940ms for
explain and 496ms) But we will leave it at that for now.
--
Chris Kratz
Systems Analyst/Programmer
VistaShare LLC
www.vistashare.com
On Wednesday 09 February 2005 03:59 pm, Greg Stark wrote:
Chris Kratz [EMAIL PROTECTED] writes:
We continue to tune our individual queries where we can, but it seems we
still are waiting on the db a lot in our app. When we run most queries,
top shows the postmaster running at 90
On Wednesday 09 February 2005 05:08 pm, Merlin Moncure wrote:
Hello All,
In contrast to what we hear from most others on this list, we find our
database servers are mostly CPU bound. We are wondering if this is
because
we have postgres configured incorrectly in some way, or if we
shared mem buffers yet to be done)
If you call this select statement directly from psql instead of through
the PHP thing, does timing change?
(just to make sure, time is actually spent in the query and not
somewhere else)
PS: use \timing in psql to see timing information
Bye, Chris
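The check described above looks like this in a psql session (table name is a placeholder):

```sql
-- In psql, toggle per-statement timing first:
-- \timing
SELECT count(*) FROM some_table;
-- psql then prints a line of the form:  Time: 496.000 ms
-- If this is far below what PHP reports, the time is being
-- spent outside the query (driver, network, rendering).
```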
[EMAIL PROTECTED] (Dawid Kuroczko) writes:
ALTER TABLE foo ALTER COLUMN bar SET STATISTICS n;
I wonder what are the implications of using this statement,
I know by using, say n=100, ANALYZE will take more time,
pg_statistics will be bigger, planner will take longer time,
on the other
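Concretely, with that trade-off in mind (names from the example above; n=100 versus the era's default of 10):

```sql
ALTER TABLE foo ALTER COLUMN bar SET STATISTICS 100;
ANALYZE foo;  -- required: the new target only takes effect
              -- when statistics are next gathered
```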
[EMAIL PROTECTED] (Pierre-Frédéric Caillaud) writes:
posix_fadvise(2) may be a candidate. Read/write barriers are another one, as
well as syncing a bunch of data in different files with a single call
(so that the OS can determine the best write order). I can also imagine
some interaction with the
[EMAIL PROTECTED] (Matt Clark) writes:
As for vendor support for Opteron, that sure looks like a
trainwreck... If you're going through IBM, then they won't want to
respond to any issues if you're not running a bog-standard RHAS/RHES
release from Red Hat. And that, on Opteron, is preposterous,
[EMAIL PROTECTED] (Simon Riggs) writes:
Well, its fairly straightforward to auto-generate the UNION ALL view, and
important as well, since it needs to be re-specified each time a new
partition is loaded or an old one is cleared down. The main point is that
the constant placed in front of each
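A minimal sketch of such an auto-generated view, with hypothetical monthly partition tables (assumed to share the same column layout); the constant in front of each arm is what lets the planner skip non-matching branches:

```sql
CREATE VIEW measurements AS
  SELECT 200501 AS period, * FROM measurements_200501
  UNION ALL
  SELECT 200502 AS period, * FROM measurements_200502;

-- Loading or clearing a partition means re-issuing the CREATE VIEW
-- with the corresponding arm added or removed -- hence the point
-- above about re-specifying it each time.
```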
have to create an rtree type for my timestamp integer column?
Existing rtree columns are below.
Pls help.
Thanks,
Chris
server= select am.amname as acc_method, opc.opcname as ops_name from
pg_am am, pg_opclass opc where opc.opcamid = am.oid order by
acc_method, ops_name;
acc_method
Thanks, Chris and Tom.
I had read *incorrectly* that rtrees are better for <= and >= comparisons.
Chris
On Tue, 13 Jul 2004 14:33:48 +0800, Christopher Kings-Lynne
[EMAIL PROTECTED] wrote:
I'm storing some timestamps as integers (UTC) in a table and I want to
query by >= and <= for times
improved performance
even more.
Thanks,
Chris
On Tue, 29 Jun 2004 09:03:24 -0700, Gavin M. Roy [EMAIL PROTECTED] wrote:
Is the from field nullable? If not, try create index calllogs_from on
calllogs ( from ); and then do an explain analyze of your query.
Gavin
Chris Cheston wrote
on one column, and another
query that selects on two other columns - then create one index on the
first column and another index over the second two columns.
Chris
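The advice above, sketched with hypothetical table and column names:

```sql
-- Query 1 filters on col_a; query 2 filters on col_b AND col_c.
CREATE INDEX t_col_a_idx   ON t (col_a);
CREATE INDEX t_col_b_c_idx ON t (col_b, col_c);
```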
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
running SMP is the cause.
Thanks,
Chris
live=# explain analyze SELECT id FROM calllogs WHERE from = 'you';
QUERY PLAN
--
Seq Scan on calllogs (cost
ok i just vacuumed it and it's taking slightly longer now to execute
(only about 8 ms longer, to around 701 ms).
Not using indexes for calllogs(from)... should I? The values for
calllogs(from) are not unique (sorry if I'm misunderstanding your
point).
Thanks,
Chris
On Tue, 29 Jun 2004 16:21
. Is this the right way to go?
Any other suggestions for me to figure out why Postmaster is using so much CPU?
Thanks in advance,
Chris
numnet=# select * from pg_stat_activity;
datid | datname | procpid | usesysid | usename
i686 i686
i386 GNU/Linux
on a single processor P4 1.4GHz, 512 MB RAM. Does the SMP kernel do
something with the single processor CPU? or should this not affect
psql?
Thanks in advance!!!
Chris
figured out they get better performance
this way.
HTH,
Chris
[EMAIL PROTECTED] (Neil Conway) writes:
Christopher Browne wrote:
One of our sysadmins did all the configuring OS stuff part; I don't
recall offhand if there was a need to twiddle something in order to
get it to have great gobs of shared memory.
FWIW, the section on configuring kernel
[EMAIL PROTECTED] (Dan Harris) writes:
Christopher Browne wrote:
We have a couple of these at work; they're nice and fast, although the
process of compiling things, well, makes me feel a little unclean.
Thanks very much for your detailed reply, Christopher. Would you mind
elaborating on the
[EMAIL PROTECTED] (Richard Huxton) writes:
If you could pin data in the cache it would run quicker, but at the
cost of everything else running slower.
Suggested steps:
1. Read the configuration/tuning guide at:
http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php
2. Post a sample
[EMAIL PROTECTED] (Anderson Boechat Lopes) writes:
I'm new here and I'm not sure if this is the right email to
solve my problem.
This should be OK...
Well, I have a very large database, with many tables and many records.
Every day, a great many operations are performed on that DB,
with
[EMAIL PROTECTED] (James Thornton) writes:
Back in 2001, there was a lengthy thread on the PG Hackers list about
PG and journaling file systems
(http://archives.postgresql.org/pgsql-hackers/2001-05/msg00017.php),
but there was no decisive conclusion regarding what FS to use. At the
time the
.
Thanks Chris
INFO: --Relation public.clmhdr--
INFO: Pages 32191: Changed 0, reaped 5357, Empty 0, New 0; Tup 339351: Vac
48358, Keep/VTL 0/0, UnUsed 129, MinLen 560, MaxLen 696; Re-using:
Free/Avail. Space 42011004/32546120; EndEmpty/Avail. Pages 0/5310.
CPU 0.53s/0.09u sec elapsed
,
Chris
db's so that I can minimize thrashing between the postgres
memory pools and the hard drive. I am thinking that this may be a big issue
here?
Thanks for any help,
Chris
On Friday 23 April 2004 12:42, Josh Berkus wrote:
Chris,
I need some help. I have 5 db servers running our database
the script: ${PATHNAME}psql $PSQLOPT $ECHOOPT -c "SET vacuum_mem=524288; SET
autocommit TO 'on'; VACUUM $full $verbose $analyze $table" -d $db ), and I
reset it to 8192 at the end.
Anyway, thank you for the ideas so far, and any additional will be greatly
appreciated.
Chris
On Friday 23 April 2004 13:44
On Friday 23 April 2004 14:57, Ron St-Pierre wrote:
Does this apply to 7.3.4 also?
Actually, since he's running 7.4, there's an even better way. Do a
VACUUM VERBOSE (full-database vacuum --- doesn't matter whether you
ANALYZE or not). At the end of the very voluminous output, you'll see
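The command itself, for reference (the trailing lines of its output report total free-space-map demand, to be compared against the max_fsm_pages and max_fsm_relations settings):

```sql
VACUUM VERBOSE;  -- whole-database vacuum; watch the final summary lines
```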
for this table,
right? If not, please help me understand what these numbers mean.
Thanks,
Chris
I think I have figured my problem out.
I was taking heap_blks_hit / heap_blks_read for my hit pct.
It should be heap_blks_hit/(heap_blks_read+heap_blks_hit), correct?
Thanks
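That corrected ratio can be computed directly from the statistics views, e.g.:

```sql
SELECT relname,
       heap_blks_hit::float
         / NULLIF(heap_blks_hit + heap_blks_read, 0) AS hit_pct
FROM pg_statio_user_tables
ORDER BY hit_pct;
```

The NULLIF guards against division by zero for tables that have never been read.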
On Wednesday 21 April 2004 11:34, Chris Hoover wrote:
I just want to make sure that I am interpreting this data
this index (or another) and not
sequentially scan the table?
I'm running 7.3.4 on RedHat EL 2.1.
Thanks,
Chris
, or are there other things we should be
looking at hardware wise. Thank you for your time.
--
Chris Kratz
Systems Analyst/Programmer
VistaShare LLC
www.vistashare.com
that. Might be interesting to do something like
that in a few key places where we have problems.
--
Mike Nolan
--
Chris Kratz
Systems Analyst/Programmer
VistaShare LLC
www.vistashare.com
-
throughput option. It guarantees internal file system
integrity, however it can allow old data to appear in
files after a crash and journal recovery.
How does this relate to fflush()? Does fflush still guarantee
all data has been written?
Bye, Chris.
identify partitions?
Seems suspicious to me...
Does it work? When you give just mount at the command line what output
do you get?
Bye, Chris.
in
the triggers. Time spent in triggers is not shown in the pg 7.3.4 version of
explain (nor would I necessarily expect it to).
Thanks for your time, expertise and responses.
-Chris
On Tuesday 09 March 2004 7:18 pm, Stephan Szabo wrote:
On Wed, 3 Mar 2004, Chris Kratz wrote:
Which certainly points
. The table structure has worked
quite well up till now and we are hoping to not have to drop our foreign keys
and inheritance if possible. Any ideas?
Thanks for your time,
-Chris
--
Chris Kratz
Systems Analyst/Programmer
VistaShare LLC
Eek.
Casting both to varchar makes it super quick so I'll
fix up the tables.
Added to the list of things to check for next
time...
On a
side note - I tried it with 7.4.1 on another box and it handled it
ok.
Thanks again :)
Chris.
-----Original Message-----
Filter: ((permission = 1) AND ("access" = '1'::bpchar) AND (userid =
'0'::character varying))
-> Seq Scan on sq_asset a (cost=0.00..1825.67 rows=16467 width=4)
(actual time=1.40..29.09 rows=16467 loops=12873)
Total runtime: 759331.85 msec
(6 rows)
It's a straight
join so I can't see why it would be this slow.. The tables are pretty small
too.
Thanks for any
suggestions :)
Chris.
[EMAIL PROTECTED] wrote:
my database is very slow. The machine is a Xeon 2GB with
256 MB of DDR RAM. My configuration file is this:
This is a joke, right?
chris
[EMAIL PROTECTED] writes:
my database is very slow. The machine is a Xeon 2GB with
256 MB of DDR RAM. My configuration file is this:
sort_mem = 131072 # min 64, size in KB
#vacuum_mem = 8192 # min 1024, size in KB
Change it back to 8192, or perhaps
. Then again, I also got a similar boost out of 7.4. The
two together tickled my bank account. ;)
One question though... It sounds like your 7.3 binaries are 64-bit and
your 7.4 binaries are 32-bit. Have you tried grabbing the SRPM for 7.4
and recompiling it for X86_64?
chris
or conditional rules.
Best Wishes,
Chris Travers
this right? Is it only using half of the fully-qualified
pk index? How do I diagnose this? Has anyone seen this before?
postgresql 7.3.1
linux 2.6.0
quad xeon 450
chris
Actually, it would appear that I was born yesterday. I had no idea.
Added the cast and it fell right in. Thanks!
chris -- feeling pretty dumb right now
On Sat, 2004-01-03 at 00:57, Tom Lane wrote:
Chris Trawick [EMAIL PROTECTED] writes:
contactid | bigint | not null
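For the archives, the pre-8.0 gotcha being fixed here looks roughly like this (table name and value are hypothetical):

```sql
-- The 7.x planner treats a bare numeric literal as int4, and will not
-- use the index on a bigint column for this:
SELECT * FROM contacts WHERE contactid = 12345;

-- Casting (or quoting) the literal makes the types match, so the
-- index can be used:
SELECT * FROM contacts WHERE contactid = 12345::bigint;
```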
running). Is there any performance to be gained, and if so is
it worth the large cost? Any thoughts/experience are much
appreciated...
--
Chris Field
[EMAIL PROTECTED]
Affinity Solutions Inc.
386 Park Avenue South
Suite 1209
New York, NY 10016
(212) 685-8748 ext. 32
Sent: Tuesday, November 11, 2003 8:24 PM
Subject: Re: [PERFORM] Value of Quad vs. Dual Processor machine
On Tue, 2003-11-11 at 17:32, Chris Field wrote:
We are getting ready to spec out a new machine and are wondering about
the wisdom of buying a quad versus a dual processor machine
'::bpchar) OR (TILE_REF =
'TQ38SW'::bpchar))
I am seeing this message in my logs.
bt_fixroot: not valid old root page
Maybe this is relevant to my performance problems.
I know this has been a long message but I would really appreciate any
performance tips.
Thanks
Chris
: (n.TILE_REF = outer.TILE_REF)
Filter: ((TILE_REF = 'TQ27NE'::bpchar) OR (TILE_REF =
'TQ28SE'::bpchar) OR (TILE_REF = 'TQ37NW'::bpchar) OR (TILE_REF =
'TQ38SW'::bpchar))
Total runtime: 12325.00 msec
(9 rows)
Thanks
Chris
of the 7000 datasets once or twice
a day and then read-process the entire data set as many times as I can
in a 12 hour period - nearly every day of the year. Currently there is
only single table but I had planned to add several others.
Thanks,
- Chris
that are
hyper-threaded and a system that has very high disk I/O causes the
system to be sluggish and slow. But after disabling the hyper-threading
itself, our system flew..
--
Chris Bowlby [EMAIL PROTECTED]
Hub.Org Networking Services
to take a close look at many of the posts on
the Performance list (searching the archives) and paying attention to
things such as effective_cache_size and shared_buffers. If these don't
answer your questions, ask this list again.
Best Wishes,
Chris Travers
xin fu wrote:
Dear master:
I have
At 11:31 PM 7/13/03 -0300, Chris Bowlby wrote:
Woops, this might not go through via the address I used : (not
subscribed with that address)..
At 01:46 PM 7/13/03 -0700, Steve Wampler wrote:
The following left join should work if I've done my select right, you
might want to play with a left
| -0.24305
So the column is very distinct (assuming that's what a negative number
means). What I'm looking for is any form of explanation that might be
causing those spikes, but at the same time a possible solution that
might help me bring it down some...
--
Chris Bowlby [EMAIL PROTECTED]
Hub.Org
. This would be a big win for
the project. Unfortunately I am not knowledgable on this topic to
really do this subject justice.
Best Wishes,
Chris Travers