On Fri, Jun 09, 2017 at 03:22:35PM +, Frits Jalvingh wrote:
> Hi Babu,
>
> That was all already done, as it is common practice for JDBC. Your
> parameter was added to the code that already did all that - and worked
> brilliantly there ;)
>
Hi Frits,
What was the parameter? I did not see an
On Fri, Jun 09, 2017 at 02:39:37PM +, Frits Jalvingh wrote:
> Hi all,
>
> Thanks a lot for the many responses!
>
> About preparing statements: this is done properly in Java, and pgsql does
> it by itself. So that cannot be done better ;)
>
> I tried the copy command, and that indeed works
On Fri, Jun 09, 2017 at 01:56:58PM +, Frits Jalvingh wrote:
> Hi Kenneth, Andreas,
>
> Thanks for your tips!
>
> I increased shared_buffers to 8GB but it has no measurable effect at all. I
> think that is logical: shared buffers are important for querying but not
> for inserting; for that
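The thread's observation that COPY beats row-by-row INSERT can be sketched as follows; the table and file names are hypothetical, not from the original thread:

```sql
-- Hypothetical target table
CREATE TABLE measurements (id bigint, ts timestamptz, value double precision);

-- One INSERT per row pays parse/plan/commit overhead for every row:
INSERT INTO measurements VALUES (1, now(), 0.5);

-- COPY streams all rows in a single command, which is why it wins here:
COPY measurements FROM '/tmp/measurements.csv' WITH (FORMAT csv);
```

Batching many rows per INSERT (multi-row VALUES) typically lands between the two in speed.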
On Fri, Jun 09, 2017 at 03:24:15PM +0200, Andreas Kretschmer wrote:
>
>
> Am 09.06.2017 um 15:04 schrieb Frits Jalvingh:
> >Hi all,
> >
> >I am trying to improve the runtime of a big data warehouse
> >application. One significant bottleneck found was insert
> >performance, so I am investigating
On Sat, Apr 30, 2011 at 05:26:36PM +0800, Phoenix Kiula wrote:
On Sat, Apr 30, 2011 at 4:07 PM, Greg Smith g...@2ndquadrant.com wrote:
On 04/23/2011 03:44 PM, Robert Haas wrote:
On Apr 17, 2011, at 11:30 AM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
Postgres is 8.2.9.
An
On Tue, Apr 26, 2011 at 09:58:49AM -0500, Kevin Grittner wrote:
J Sisson sisso...@gmail.com wrote:
Rob Wultsch wult...@gmail.com wrote:
Tip from someone that manages thousands of MySQL servers: Use
InnoDB when using MySQL.
Granted, my knowledge of PostgreSQL (and even MSSQL) far
On Fri, Apr 15, 2011 at 10:17:32AM -0700, Mark Williams wrote:
We are experiencing a problem with our query plans when using a range query
in Postgresql 8.3. The query we are executing attempts to select the
minimum primary key id after a certain date. Our date columns are bigint's
holding
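For the 8.3 problem described above (minimum primary key after a date), the usual remedy is an index on the date column so the planner does not walk the primary-key index filtering row by row; the schema and value below are hypothetical:

```sql
-- Hypothetical schema: events(id bigint primary key, created bigint)
CREATE INDEX events_created_idx ON events (created);

-- With the index, the date filter is applied first and min() stays cheap:
SELECT min(id) FROM events WHERE created >= 1302850000;
```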
On Thu, Mar 17, 2011 at 09:49:45AM -0500, Kevin Grittner wrote:
Derrick Rice derrick.r...@gmail.com wrote:
Kevin Grittner kevin.gritt...@wicourts.gov wrote:
there is a feature to probe the end of an index's range in
situations where data skew was often causing less than optimal
plans
On Wed, Mar 16, 2011 at 02:43:38PM +0530, Adarsh Sharma wrote:
Dear all,
I am facing a problem while creating the index to make the below query run
faster. My table size is near about 1065 MB and 428467 rows.
explain analyze select count(*) from page_content where publishing_date
like
On Fri, Mar 11, 2011 at 06:54:39PM +0100, hans wulf wrote:
Thanks for the answer.
so there's no way around this problem? A nice index bitmap merge thing would
be super fast. Big table ANTI JOIN queries with only a few results expected,
are totally broken, if this is true.
This way the
On Mon, Mar 07, 2011 at 03:17:05PM +0100, Andreas Forø Tollefsen wrote:
Thanks, Ken.
It seems like the tip to turn off synchronous_commit did the trick:
/usr/lib/postgresql/8.4/bin/pgbench -T 60 test1
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode:
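The synchronous_commit tip above need not be server-wide; it can be scoped to the loading session, at the cost of possibly losing the last few commits after a crash (the database itself stays consistent):

```sql
SET synchronous_commit = off;  -- session-local
-- ... run the batch of inserts/updates ...
RESET synchronous_commit;
```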
On Thu, Feb 03, 2011 at 04:39:12PM -0800, da...@lang.hm wrote:
On Thu, 3 Feb 2011, Robert Haas wrote:
On Thu, Feb 3, 2011 at 3:54 PM, da...@lang.hm wrote:
with the current code, this is a completely separate process that knows
nothing about the load, so if you kick it off when you start the
On Thu, Feb 03, 2011 at 09:05:52PM -0500, Robert Haas wrote:
On Thu, Feb 3, 2011 at 8:37 PM, da...@lang.hm wrote:
On Thu, 3 Feb 2011, Robert Haas wrote:
On Thu, Feb 3, 2011 at 7:39 PM, da...@lang.hm wrote:
Yeah, but you'll be passing the entire table through this separate
process
On Fri, Feb 04, 2011 at 05:20:27PM +0100, felix wrote:
reply was meant for the list
-- Forwarded message --
From: felix crucialfe...@gmail.com
Date: Fri, Feb 4, 2011 at 4:39 PM
Subject: Re: [PERFORM] Really really slow select count(*)
To: Greg Smith g...@2ndquadrant.com
On Thu, Feb 03, 2011 at 02:11:58AM -0800, da...@lang.hm wrote:
On Thu, 3 Feb 2011, Vitalii Tymchyshyn wrote:
02.02.11 20:32, Robert Haas wrote:
Yeah. Any kind of bulk load into an empty table can be a problem,
even if it's not temporary. When you load a bunch of data and then
On Wed, Feb 02, 2011 at 03:15:22PM -0300, Cesar Arrieta wrote:
Hi, I have a Server with Fedora Core 11, Tomcat and Postgresql 8.3.
With Hardware:
* 8GB RAM
* 8 processors Intel Xeon E5520 @2.27GHz
* 250GB SATA DISK
Actually, it serves at most 250 connections.
The problem happens when it
On Wed, Feb 02, 2011 at 03:54:26PM -0500, Mladen Gogala wrote:
Greg Smith wrote:
Given that even Oracle kicked out the RBO a long time ago, I'm not so sure
longing for those good old days will go very far. I regularly see queries
that were tweaked to always use an index run at 1/10 or less
On Fri, Jan 28, 2011 at 09:30:19AM -0800, yazan suleiman wrote:
I am evaluating postgres 9 to migrate away from Oracle. The following query
runs too slow, also please find the explain plan:
explain analyze select DISTINCT
On Thu, Jan 27, 2011 at 10:41:08AM -0500, Mladen Gogala wrote:
I have a table EMP, with 14 rows and a description like this:
scott= \d+ emp
Table public.emp
Column | Type | Modifiers | Storage | Description
On Wed, Jan 12, 2011 at 03:21:45PM +0100, Laszlo Nagy wrote:
On 2011-01-12 14:42, Florian Weimer wrote:
* Laszlo Nagy:
This query:
select hid from product_price_history where id=35547581
Returns 759 rows in 8837 msec! How can this be that slow???
If most records are on different heap
On Thu, Dec 23, 2010 at 09:20:59PM +0700, tuanhoanganh wrote:
Could you show me what parameter of pgbouncer.ini can do that. I read
pgbouncer and can not make pgbouncer open and keep 200 connect to postgres
(Sorry for my English)
Thanks you very much.
Tuan Hoang ANh
You need to use
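The reply is cut off above; one pgbouncer parameter that fits the question (an assumption on my part, not necessarily what was recommended) is min_pool_size, which keeps a floor of server connections open:

```ini
; pgbouncer.ini fragment; values are illustrative
[pgbouncer]
pool_mode = session
default_pool_size = 200   ; upper bound of server connections per pool
min_pool_size = 200       ; keep this many connections open even when idle
```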
On Mon, Dec 20, 2010 at 10:33:26AM -0500, James Cloos wrote:
MG == Mladen Gogala mladen.gog...@vmsinfo.com writes:
MG Good time accounting is the most compelling reason for having a wait
MG event interface, like Oracle. Without the wait event interface, one
MG cannot really tell where the
On Tue, Dec 07, 2010 at 11:56:51AM -0800, Richard Broersma wrote:
On Tue, Dec 7, 2010 at 11:43 AM, Andy Colson a...@squeakycode.net wrote:
In PG the first statement you fire off (like an insert into for example)
will start a transaction. If you don't commit before you disconnect, that
On Mon, Dec 06, 2010 at 03:24:31PM -0500, Josh Kupershmidt wrote:
On Mon, Dec 6, 2010 at 2:48 PM, Jon Nelson jnelson+pg...@jamponi.net wrote:
On Mon, Dec 6, 2010 at 1:46 PM, bricklen brick...@gmail.com wrote:
Not sure if anyone replied about killing your query, but you can do it
like so:
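The quoted reply is truncated; the standard way to stop a running query (a sketch, with a hypothetical backend id) is:

```sql
-- Find the backend (the column is procpid before 9.2, pid from 9.2 on):
SELECT procpid, current_query FROM pg_stat_activity;

-- Cancel just the running query, leaving the connection alive:
SELECT pg_cancel_backend(12345);
```

pg_terminate_backend() goes one step further and drops the whole connection.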
On Wed, Dec 01, 2010 at 12:15:19PM -0500, Mladen Gogala wrote:
Mario Splivalo wrote:
I'll try what Pierre suggested, on whole new filesystem. This one did get
quite filled with thousands of files that I deleted while the database was
working.
Mario
Yes, that is a good idea.
On Fri, Nov 12, 2010 at 03:47:30PM +0200, Kyriacos Kyriacou wrote:
This is my first post in this mailing list and I would like to raise an
issue that in my opinion is causing performance issues of PostgreSQL
especially in a transaction processing environment. In my company we are
using
On Fri, Nov 12, 2010 at 07:34:36AM -0800, bricklen wrote:
On Fri, Nov 12, 2010 at 5:52 AM, Kenneth Marshall k...@rice.edu wrote:
I cannot speak to your suggestion, but it sounds like you are not
vacuuming enough and a lot of the bloat/randomization would be helped
by making use of HOT
Ah, this is a very old version. If you can take advantage of
a version with HOT support, you should be much, much happier.
Cheers,
Ken
On Fri, Nov 12, 2010 at 06:14:00PM +0200, Kyriacos Kyriacou wrote:
We are still using PostgreSQL 8.2.4. We are running a 24x7 system and
database size is
On Wed, Nov 10, 2010 at 10:47:21PM -0500, Robert Haas wrote:
On Wed, Nov 10, 2010 at 6:07 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Robert Haas robertmh...@gmail.com wrote:
Unfortunately, to know how much data we're going to grovel
through,
On Thu, Nov 11, 2010 at 02:41:12PM +0100, Marc Mamin wrote:
Hello,
in the last years, we have successfully manage to cope with our data
growth
using partitioning and splitting large aggregation tasks on multiple
threads.
The partitioning is done logically by our application server, thus
On Thu, Nov 11, 2010 at 09:15:58AM -0500, Mladen Gogala wrote:
Kenneth Marshall wrote:
I agree with the goal of avoiding the need for a GUC. This needs to
be as automatic as possible. One idea I had had was computing a value
for the amount of cache data in the system by keeping a sum
On Thu, Nov 11, 2010 at 03:56:25PM -0500, gnuo...@rcn.com wrote:
On a thread some time ago, on a similar subject, I opined that I missed the
ability to assign tables to tablespaces and buffers to tablespaces, thus
having the ability to isolate needed tables (perhaps a One True Lookup Table,
Mladen,
You would need to check the mailing lists. The release notes
have it as being a clock sweep algorithm starting in version
8. Then additional changes were added to eliminate the cache
blowout caused by a sequential scan and by vacuum/autovacuum.
I do not believe that there are any
On Wed, Oct 27, 2010 at 05:49:42PM -0400, Tom Lane wrote:
Kenneth Marshall k...@rice.edu writes:
Just keeping the hope alive for faster compression.
Is there any evidence that that's something we should worry about?
I can't recall ever having seen a code profile that shows
On Wed, Oct 27, 2010 at 09:52:49PM +0200, Pierre C wrote:
Even if somebody had a
great idea that would make things smaller without any other penalty,
which I'm not sure I believe either.
I'd say that the only things likely to bring an improvement significant
enough to warrant the (quite
Hi,
There are a lot of details missing about your system:
http://wiki.postgresql.org/wiki/SlowQueryQuestions
Cheers,
Ken
On Thu, Oct 21, 2010 at 02:25:44PM +0200, Gael Le Mignot wrote:
Hello,
We are using PostgreSQL for storing data and full-text search indexes
for the website of a
Hi,
Interesting data points. The amount of rows that you managed to
insert into PostgreSQL before Oracle gave up the ghost is 95%
of the rows in the Oracle version of the database. To count 5%
fewer rows, it took PostgreSQL 24 seconds longer. Or adjusting
for the missing rows, 52 seconds longer
Your DB is more than likely cached. You should adjust your
page costs to better reflect reality and then the planner
can make more accurate estimates and then choose the proper
plan.
Cheers,
Ken
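The page-cost advice can be sketched like this; the values are illustrative starting points for a cached database, not recommendations:

```sql
-- The defaults model spinning disks (random_page_cost = 4).
-- If the working set fits in RAM, random and sequential reads
-- cost about the same:
SET random_page_cost = 1.1;
SET seq_page_cost = 1.0;
```

Once the values are validated with EXPLAIN ANALYZE, set them in postgresql.conf.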
On Tue, Sep 21, 2010 at 12:32:01PM -0500, Ogden wrote:
Hello,
I have received some help from the
On Mon, Jul 12, 2010 at 06:11:31AM -0700, Rob Wultsch wrote:
On Mon, Jul 12, 2010 at 4:15 AM, A. Kretschmer
andreas.kretsch...@schollglas.com wrote:
In response to atul.g...@globaldatapoint.com :
Hi,
I need to log the start and end time of the procedures in a table. But the
start
Hi Eliot,
Would you mind posting your code for reference. It is nice to
have working examples when trying to figure out how it all fits
together.
Regards,
Ken
On Wed, Jul 07, 2010 at 03:23:12PM -0400, Eliot Gable wrote:
Thanks again for all the input and suggestions from people. I have this
On Thu, Jun 24, 2010 at 02:43:33PM +0200, Janning wrote:
Hi,
at the moment we encounter some performance problems with our database server.
We have a 12 GB RAM machine with intel i7-975 and using
3 disks Seagate Barracuda 7200.11, ST31500341AS (1.5 TB)
One disk for the system and WAL
On Fri, Jun 18, 2010 at 12:46:11AM +0100, Tom Wilcox wrote:
On 17/06/2010 22:41, Greg Smith wrote:
Tom Wilcox wrote:
Any suggestions for good monitoring software for linux?
By monitoring, do you mean for alerting purposes or for graphing purposes?
Nagios is the only reasonable choice for
Hi Anj,
That is an indication that your system was less correctly
modeled with a random_page_cost=2 which means that the system
will assume that random I/O is cheaper than it is and will
choose plans based on that model. If this is not the case,
the plan chosen will almost certainly be slower for
On Thu, Jun 10, 2010 at 12:34:07PM -0700, Anne Rosset wrote:
Jochen Erwied wrote:
Thursday, June 10, 2010, 8:36:08 PM you wrote:
psrdb=# (SELECT
psrdb(#   MAX(item_rank.rank) AS maxRank
psrdb(# FROM
psrdb(#   item_rank item_rank
psrdb(# WHERE
psrdb(#
On Fri, May 07, 2010 at 09:37:42AM -0400, Mark Stosberg wrote:
Hello,
We've been a satified user of PostgreSQL for several years, and use it
to power a national pet adoption website: http://www.adoptapet.com/
Recently we've had a regularly-timed middle-of-the-night problem where
Check out the manual:
http://www.postgresql.org/docs/8.4/static/routine-vacuuming.html#AUTOVACUUM
Cheers,
Ken
On Wed, Apr 28, 2010 at 10:37:35AM -0400, akp geek wrote:
Hi -
don't want to side track the discussion. We have 8.4, which of
AUTOVACUUM PARAMETERS can be set to handle
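For reference, the 8.4 autovacuum knobs covered by that manual page include the following; the values are illustrative, not recommendations:

```ini
# postgresql.conf fragment
autovacuum = on
autovacuum_naptime = 1min
autovacuum_vacuum_scale_factor = 0.1    # vacuum when ~10% of a table has churned
autovacuum_analyze_scale_factor = 0.05
autovacuum_vacuum_cost_delay = 10ms     # throttle autovacuum I/O
```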
If you expect this DB to be memory resident, you should update
the cpu/disk cost parameters in postgresql.conf. There was a
post earlier today with some more reasonable starting values.
Certainly your test DB will be memory resident.
Ken
On Thu, Mar 18, 2010 at 03:31:18PM +0100, Corin wrote:
Hi
EXISTS matches NULLs too and since they are not indexed a
sequential scan is needed to check for them. Try using
IN instead.
Cheers,
Ken
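The suggested rewrite can be sketched with hypothetical tables:

```sql
-- EXISTS form:
SELECT o.* FROM orders o
WHERE EXISTS (SELECT 1 FROM customers c WHERE c.id = o.customer_id);

-- IN form, which the planner may treat differently:
SELECT o.* FROM orders o
WHERE o.customer_id IN (SELECT id FROM customers);
```

Note the two are only interchangeable in the positive case; NOT IN and NOT EXISTS behave differently when the subquery can yield NULLs.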
On Wed, Mar 10, 2010 at 02:26:20PM +0100, Benoit Delbosc wrote:
Hi all,
I am trying to understand why inside an EXISTS clause the query planner
does not
On Mon, Mar 08, 2010 at 10:24:56AM -0700, Kevin Kempter wrote:
Hi all;
we've found that partition elimination is not happening for a prepared
statement, however running the same statement in psql manually does give us
partition elimination.
Is this a known issue?
Yes, see the recent
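The underlying issue is that a prepared statement is planned once with an unknown parameter, so plan-time constraint exclusion cannot prune partitions; a sketch with hypothetical names:

```sql
-- A literal allows plan-time pruning:
EXPLAIN SELECT * FROM measurements WHERE logdate = DATE '2010-03-01';

-- A parameterized prepared statement is planned for any possible date,
-- so every partition may appear in the plan:
PREPARE q(date) AS SELECT * FROM measurements WHERE logdate = $1;
EXPLAIN EXECUTE q(DATE '2010-03-01');
```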
On Sun, Feb 28, 2010 at 12:29:14PM -0800, Josh Berkus wrote:
However the same query against the base table when specifying the check
constraint key in the where clause produces sequential scans:
Does the master table have the same indexes as the slave partitions?
--Josh Berkus
Does
On Thu, Feb 18, 2010 at 01:18:10PM +0100, A. Kretschmer wrote:
In response to Thom Brown :
On 18 February 2010 11:55, AI Rumman rumman...@gmail.com wrote:
Not like operation does not use index.
select * from vtiger_contactscf where lower(cf_1253) not like
lower('Former%')
I
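An anchored LIKE on lower() can use an expression index (text_pattern_ops matters when the locale is not C); NOT LIKE still has to visit most rows regardless. A sketch using the names from the thread:

```sql
CREATE INDEX idx_contactscf_cf_1253_lower
    ON vtiger_contactscf (lower(cf_1253) text_pattern_ops);

-- Index-assisted (anchored prefix):
SELECT * FROM vtiger_contactscf
WHERE lower(cf_1253) LIKE lower('Former%');

-- NOT LIKE 'Former%' matches nearly everything,
-- so a sequential scan is the expected plan.
```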
On Mon, Jan 18, 2010 at 12:13:24PM -0500, Tom Lane wrote:
Kenneth Marshall k...@rice.edu writes:
We have just upgraded our monitoring server software and
now the following query for graphing the data performs
abysmally with the default settings. Here is the results
of the EXPLAIN ANALYZE
Dear performance group:
We have just upgraded our monitoring server software and
now the following query for graphing the data performs
abysmally with the default settings. Here is the results
of the EXPLAIN ANALYZE run with nestloops enabled:
SET enable_nestloop = 'on';
EXPLAIN SELECT g.graphid
On Fri, Jan 15, 2010 at 04:58:57PM -0600, Kevin Grittner wrote:
Kenneth Marshall k...@rice.edu wrote:
with the default settings
Do you mean you haven't changed any settings in your postgresql.conf
file from their defaults?
-Kevin
Sorry, here are the differences from the default
On Sat, Jan 09, 2010 at 03:42:08PM +0300, Nickolay wrote:
I do not see any way to normalize this table anymore. it's size is 4Gig for
~4M rows, i.e. 1Kb per row, i think it's ok.
Also there are 2 indexes: by date_time and by a couple of service fields
(total index size is 250Mb now).
I
On Fri, Jan 08, 2010 at 12:38:46PM -0500, Carlo Stonebanks wrote:
I thought that post mentioned that the plan
was one statement in an iteration, and that the cache would have
been primed by a previous query checking whether there were any rows
to update. If that was the case, it might be
Hint bit I/O?
Ken
On Wed, Dec 09, 2009 at 01:29:00PM -0500, Joseph S wrote:
I just installed a shiny new database server with pg 8.4.1 running on
CentOS 5.4. After using slony to replicate over my database I decided to
do some basic performance tests to see how spiffy my shiny new server
On Mon, Nov 16, 2009 at 12:45:46PM -0800, Eddy Escardo-Raffo wrote:
Yeah this kind of thing would probably work. Doing this in java with
separate queries would be easy to code but require multiple round trips.
Doing it as a stored procedure would be nicer but I'd have to think a little
more
On Sat, Nov 14, 2009 at 12:25:05PM +0100, Wojciech Knapik wrote:
Hello
I just finished implementing a search engine for my site and found
ts_headline extremely slow when used with a Polish tsearch configuration,
while fast with English. All of it boils down to a simple testcase, but
On Tue, Nov 10, 2009 at 12:55:42PM +0100, Laurent Laborde wrote:
Hi !
We recently had a problem with wal archiving badly impacting the
performance of our postgresql master.
And i discovered cstream, that can limite the bandwidth of pipe stream.
Here is our new archive command, FYI, that
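The command itself is cut off above; a plausible shape, with a hypothetical path and rate (cstream's -t option caps throughput in bytes per second, -i/-o name the input and output files):

```ini
# postgresql.conf fragment; path and limit are illustrative
archive_command = 'cstream -i %p -o /mnt/archive/%f -t 500000'
```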
On Tue, Oct 13, 2009 at 01:03:10AM -0600, Scott Marlowe wrote:
On Mon, Oct 12, 2009 at 1:06 PM, Tory M Blue tmb...@gmail.com wrote:
Any issues, has it baked long enough, is it time for us 8.3 folks to deal
with the pain and upgrade?
I am running 8.4.1 for my stats and search databases, and
On Fri, Oct 09, 2009 at 08:31:54PM +0800, Xia Qingran wrote:
On Mon, Oct 5, 2009 at 9:58 AM, Omar Kilani omar.kil...@gmail.com wrote:
Hi Xia,
Try this patch:
http://treehou.se/~omar/postgresql-8.4.1-array_sel_hack.patch
It's a hack, but it works for us. I think you're probably
The planner does not yet work as efficiently as it could
with child tables. Check the recent mail archives for a
long discussion of the same.
Regards,
Ken
On Wed, Sep 02, 2009 at 08:52:30AM -0600, Kevin Kempter wrote:
Hi all;
I cant figure out why we're scanning all of our partitions.
We
On Thu, Aug 06, 2009 at 01:42:06PM -0500, Campbell, Lance wrote:
PostgreSQL 8.3
Linux RedHat 4.X
24G of memory
When loading a file generated from pg_dumpall is there a key setting in
the configuration file that would allow the load to work faster.
Thanks,
Lance Campbell
Project
Clustering reorganizes the layout of a table according to
the ordering of a SINGLE index. This will place items that
are adjacent in the index adjacent in the heap. So you need
to cluster on the index that will help the locality of
reference for the queries which will benefit you the most.
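A sketch with hypothetical names:

```sql
-- Rewrite the table in the order of one chosen index:
CLUSTER orders USING orders_customer_date_idx;

-- Rows added later are not kept in that order; re-run periodically
-- ("CLUSTER orders" alone reuses the last index) and refresh statistics:
ANALYZE orders;
```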
Hi,
Looking at the XLogInsert() from 8.3 and 8.4, the 8.4
version includes a call to RecoveryInProgress() at
the top as well as a call to TRACE_POSTGRESQL_XLOG_INSERT().
Could either of those have caused a context switch or
cache flush resulting in worse performance.
Cheers,
Ken
--
Sent via
On Thu, Jun 18, 2009 at 08:27:02PM +0200, Peter Alban wrote:
Hi All,
We are having a reasonably powerful machine for supporting about 20
databases but in total they're not more then 4GB in size.
The machine is 2 processor 8 core and 8 Gig of RAM so I would expect that PG
should cache the
in an individual query. You can set it on a per query basis to
help manage it use, i.e. up it for only the query that needs it.
With our systems, which run smaller number of queries we do use
256MB. I hope that this helps.
Regards,
Ken
On Thu, Jun 18, 2009 at 8:30 PM, Kenneth Marshall k...@rice.edu
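The per-query work_mem tip looks like this in practice; the query and value are hypothetical:

```sql
SET work_mem = '256MB';            -- session-local, not server-wide
SELECT account_id, sum(amount)     -- the one big sort/hash query
FROM ledger GROUP BY account_id;
RESET work_mem;
```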
On Thu, May 28, 2009 at 04:55:34PM +0200, Ivan Voras wrote:
2009/5/28 Heikki Linnakangas heikki.linnakan...@enterprisedb.com:
Ivan Voras wrote:
I need to store data about sensor readings. There is a known (but
configurable) number of sensors which can send update data at any time.
The
On Thu, May 28, 2009 at 05:24:33PM +0200, Ivan Voras wrote:
2009/5/28 Kenneth Marshall k...@rice.edu:
One big benefit of partitioning is that you can prune old data with
minimal impact to the running system. Doing a large bulk delete would
be extremely I/O impacting without partion
On Wed, May 06, 2009 at 04:01:03PM +0800, Craig Ringer wrote:
Dimitri wrote:
Hi,
any idea if there is a more optimal execution plan possible for this
query:
select S.REF as stref, S.NAME as stnm, H.HORDER as hord, H.BEGIN_DATE as
hbeg,
H.END_DATE as hend, H.NOTE as hnote
On Wed, May 06, 2009 at 02:49:23PM +0200, Dimitri wrote:
The story is simple: for the launching of MySQL 5.4 I've done a
testing comparing available on that time variations of InnoDB engines,
and at the end by curiosity started the same test with PostgreSQL
8.3.7 to see if MySQL performance
No.
Ken
On Wed, May 06, 2009 at 10:04:33PM +0800, Craig Ringer wrote:
Dimitri wrote:
Hi Chris,
the only problem I see here is it's 2 times slower vs InnoDB, so
before I'll say myself it's ok I want to be sure there is nothing else
to do.. :-)
Can the genetic query optimizer come
On Wed, May 06, 2009 at 09:38:59AM +0200, Dimitri wrote:
Hi,
any idea if there is a more optimal execution plan possible for this query:
select S.REF as stref, S.NAME as stnm, H.HORDER as hord, H.BEGIN_DATE as hbeg,
H.END_DATE as hend, H.NOTE as hnote
from HISTORY H, STAT
? or do I
need external tools?..
Rgds,
-Dimitri
I only suggested it because it might have the effect of changing
the sequential scan on the stat table to an indexed scan.
Cheers,
Ken
On 5/6/09, Kenneth Marshall k...@rice.edu wrote:
On Wed, May 06, 2009 at 09:38:59AM +0200, Dimitri wrote
On Tue, Apr 28, 2009 at 11:56:25AM -0600, Scott Marlowe wrote:
On Tue, Apr 28, 2009 at 11:48 AM, Whit Armstrong
armstrong.w...@gmail.com wrote:
Thanks, Scott.
Just to clarify you said:
postgres. ?So, my pg_xlog and all OS and logging stuff goes on the
RAID-10 and the main store for
Hi,
I just finished reading this thread. We are currently working on
setting up a central log system using rsyslog and PostgreSQL. It
works well once we patched the memory leak. We also looked at what
could be done to improve the efficiency of the DB interface. On the
rsyslog side, moving to
2009, Kenneth Marshall wrote:
Date: Tue, 21 Apr 2009 08:33:30 -0500
From: Kenneth Marshall k...@rice.edu
To: Richard Huxton d...@archonet.com
Cc: da...@lang.hm, Stephen Frost sfr...@snowman.net,
Greg Smith gsm...@gregsmith.com, pgsql-performance@postgresql.org
Subject: Re: [PERFORM
On Tue, Apr 21, 2009 at 11:09:18AM -0700, da...@lang.hm wrote:
On Tue, 21 Apr 2009, Greg Smith wrote:
On Mon, 20 Apr 2009, da...@lang.hm wrote:
while I fully understand the 'benchmark your situation' need, this isn't
that simple. in this case we are trying to decide what API/interface to
On Fri, Feb 20, 2009 at 04:34:23PM -0500, Battle Mage wrote:
I have a server box that has 4GB of RAM, Quad core CPU AMD Opteron 200.152
Mhz (1024 KB cache size each) with plenty of hard drive space.
I installed both postgresql 8.2.6 and 8.3.3 on it. I've created a basic
test db and used
On Fri, Feb 13, 2009 at 04:12:53PM +0300, Teodor Sigaev wrote:
The short-term workaround for Rusty is probably to create his GIN
index using the intarray-provided gin__int_ops opclass. But it
Right
seems to me that we ought to get rid of intarray's @> and <@ operators
and have the module
On Mon, Jan 26, 2009 at 10:10:13AM +0100, Thomas Finneid wrote:
Scott Marlowe wrote:
I'm guessing that you just had more data in the table or something by
the time you tested that, or some cron job was running in the
background, or some other issue, not the index.
It starts from scratch and
On Mon, Jan 26, 2009 at 03:49:00PM +0100, Thomas Finneid wrote:
Kenneth Marshall wrote:
It may be that the smaller index has update contention for the same
blocks that the larger index does not.
Is that an assumption based on both indexes existing? if so I might agree,
but if you
On Thu, Jan 22, 2009 at 02:52:12PM -0500, Harold A. Giménez Ch. wrote:
Hi list,
Clustering my indexes dramatically improves the query performance of many of
my queries. Also, the actual clustering takes a very long time for big
databases, roughly 20 hours. I have two questions about how to
On Thu, Jan 22, 2009 at 10:58:25PM +0100, Thomas Finneid wrote:
Thomas Markus wrote:
try to reorganize your data with CLUSTER and create appropriate indixes
(dont forget to check statistics).
One question. Assume I have clustered and new data has been added after
that, according to the
Josh,
Since a number of these performance patches use our hash function, would
it make sense to apply the last patch to upgrade the hash function mix()
to the two function mix()/final()? Since the additional changes increase
the performance of the hash function by another 50% or so. My two
Are you using the same locales for both?
Ken
On Wed, Jun 11, 2008 at 09:40:20PM -0400, Josh Rovero wrote:
We run GCC-compiled postgresql on a number
of HP-UX and Linux boxes.
Our measurements to date show 8.3.1
performance to be about 30% *worse*
than 8.2 on HP-UX for the same drink the
On Fri, Feb 15, 2008 at 01:35:29PM +0100, Peter Schuller wrote:
Hello,
my impression has been that in the past, there has been a general
semi-consensus that upping shared_buffers to use the majority of RAM
has not generally been recommended, with reliance on the buffer cache
instead being
On Wed, Feb 13, 2008 at 10:56:54AM -0600, Peter Koczan wrote:
Hi all,
We're considering setting up a SAN where I work. Is there anyone using
a SAN, for postgres or other purposes? If so I have a few questions
for you.
- Are there any vendors to avoid or ones that are particularly good?
On Thu, Feb 07, 2008 at 12:06:42PM -0500, Greg Smith wrote:
On Thu, 7 Feb 2008, Dimitri Fontaine wrote:
I was thinking of not even reading the file content from the controller
thread, just decide splitting points in bytes (0..ST_SIZE/4 -
ST_SIZE/4+1..2*ST_SIZE/4 etc) and let the reading
On Wed, Aug 22, 2007 at 07:33:35PM +0400, Dmitry Potapov wrote:
Hello!
We run a large (~66Gb) web-backend database on Postgresql 8.2.4 on
Linux. The hardware is Dual Xeon 5130 with 16Gb ram, LSI Megaraid U320-2x
scsi controller w/512Mb writeback cache and a BBU. Storage
Mark,
You are not alone in the fact that when you post your system
specifications, CPU and memory are always listed while the
disk I/O subsystem invariably is not. This is a very disk
intensive operation and I suspect that your disk system is
maxed-out. If you want it faster, you will need more
On Sat, Jun 02, 2007 at 09:13:32AM -0400, Douglas J Hunley wrote:
Hello great gurus of performance:
Our 'esteemed' Engr group recently informed a customer that in their testing,
upgrading to 8.2.x improved the performance of our J2EE
application approximately 20%, so of course, the customer
On Thu, Mar 16, 2006 at 11:45:12AM +0100, Guillaume Smet wrote:
Hello,
We are experiencing performances problem with a quad Xeon MP and
PostgreSQL 7.4 for a year now. Our context switch rate is not so high
but the load of the server is blocked to 4 even on very high load and
we have 60% cpu
On Thu, Jan 12, 2006 at 09:48:41AM +, Simon Riggs wrote:
On Wed, 2006-01-11 at 22:23 -0500, Tom Lane wrote:
Jean-Philippe Côté [EMAIL PROTECTED] writes:
Thanks a lot for this info, I was indeed exceeding the genetic
optimizer's threshold. Now that it is turned
On Thu, Jan 12, 2006 at 03:23:14PM -0500, Jean-Philippe Cote wrote:
Can I actually know whether a given plan is executed with GEQO on?
In other words, if I launch 'explain query', I'll get a given plan, but if
I re-launch
the query (without the 'explain' keyword), could I get a different
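To the question above: under GEQO the join-order search is randomized, so re-planning the same query may indeed yield a different plan. Checking and avoiding it:

```sql
SHOW geqo_threshold;   -- GEQO engages at this many FROM items (default 12)

-- Force the deterministic exhaustive planner for a repeatable plan:
SET geqo = off;
-- or raise the threshold past the query's join count:
SET geqo_threshold = 16;
```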
Nicolas,
These sizes would not be considered large. I would leave them
as single tables.
Ken
On Wed, Jul 13, 2005 at 12:08:54PM +0200, Nicolas Beaume wrote:
Hello
I have a large database with 4 large tables (each containing at least
200 000 rows, perhaps even 1 or 2 million) and I ask
On Thu, Apr 21, 2005 at 08:24:15AM -0400, Jeff wrote:
On Apr 21, 2005, at 7:49 AM, Shoaib Burq (VPAC) wrote:
Now I have not touch the $PGDATA/postgresql.conf (As I know very little
about memory tuning) Have run VACUUM ANALYZE.
You should really, really bump up shared_buffers and given
On Tue, Mar 22, 2005 at 08:09:40AM -0500, Christopher Browne wrote:
Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Greg Stark)
wrote:
I don't think it would be very hard at all actually.
It's just a linear algebra problem with a bunch of independent
variables and a system
On Wed, Apr 21, 2004 at 02:51:31PM -0400, Tom Lane wrote:
The context swap storm is happening because of contention at the next
level up (LWLocks rather than spinlocks). It could be an independent
issue that just happens to be triggered by the same sort of access
pattern. I put forward a