This is a weird problem. A "limit 5" query runs quickly as expected, but a
"limit 1" query never finishes -- it just blasts along at 100% CPU until I
give up. And this is a join between two small tables (262K rows and 109K
rows). Both tables were recently analyzed.
This is Postgres 9.3.5 (yes, we'l
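A minimal sketch of the shape of the problem (table and column names here are
hypothetical, not the original query); the only difference between the fast
case and the pathological one is the LIMIT:

  -- Runs quickly, as expected:
  EXPLAIN ANALYZE
  SELECT a.id, b.value
    FROM table_a a
    JOIN table_b b ON b.a_id = a.id
   WHERE b.value = 'rare-value'
   ORDER BY a.id
   LIMIT 5;

  -- Never finishes in the reported case; with LIMIT 1 the planner may pick
  -- a plan that expects to find a matching row early and ends up scanning
  -- one input indefinitely when it doesn't.
  EXPLAIN ANALYZE
  SELECT a.id, b.value
    FROM table_a a
    JOIN table_b b ON b.a_id = a.id
   WHERE b.value = 'rare-value'
   ORDER BY a.id
   LIMIT 1;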
On Sun, Sep 25, 2016 at 7:50 PM, Greg Spiegelberg
wrote:
> Hey all,
>
> Obviously everyone who's been in PostgreSQL or almost any RDBMS for a time
> has said not to have millions of tables. I too have long believed it until
> recently.
>
> AWS d2.8xlarge instance with 9.5 is my test rig using XF
On Fri, Aug 26, 2016 at 9:11 PM, Jim Nasby wrote:
> On 8/26/16 3:26 PM, Mike Sofen wrote:
>
>> Is there a way to keep query time constant as the database size grows?
>>
>
> No. More data == more time. Unless you find a way to break the laws of
> physics.
>
Straight hash-table indexes (which Postgr
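For equality-only lookups, a hash index gives roughly constant-time access no
matter how large the table grows. A minimal sketch (table and column names are
made up for illustration):

  -- Hash indexes support only '=' lookups, but their lookup cost does not
  -- grow with table size the way a b-tree's depth does.
  CREATE INDEX orders_customer_hash ON orders USING hash (customer_id);
  SELECT * FROM orders WHERE customer_id = 12345;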
On Fri, Aug 26, 2016 at 6:17 AM, Tommi K wrote:
> Hello,
> thanks for the response. I did not get the response to my email even
> though I am subscribed to the pgsql-performance mail list. Let's hope that
> I get the next one :)
>
Please include the email you are replying to when you respond. It
On Thu, Jun 23, 2016 at 8:47 AM, Jeff Janes wrote:
> On Wed, Jun 22, 2016 at 9:36 PM, Craig James
> wrote:
> > On Wed, Jun 22, 2016 at 11:36 AM, Jeff Janes
> wrote:
> ...
> But, JChem's cartridge is apparently not using a GiST index, which is
> what my first gues
On Wed, Jun 22, 2016 at 11:36 AM, Jeff Janes wrote:
> On Wed, Jun 22, 2016 at 9:03 AM, Craig James
> wrote:
> > I'm working with a third-party plugin that does chemistry.
>
>
> Out of personal/professional curiosity, which one are you using, if
> that can be
I'm working with a third-party plugin that does chemistry. It's very fast.
However, I'm trying to do a sampling query, such as the first 1% of the
database, and I just can't get the planner to create a good plan. Here is
the full query (the |>| operator does a subgraph match of a molecular
substru
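A rough sketch of the kind of sampling query involved (the |>| operator comes
from the third-party chemistry cartridge; the table, column, and SMILES value
here are hypothetical):

  -- Restrict the substructure search to roughly the first 1% of the table
  -- by id, then apply the |>| subgraph match to that sample.
  SELECT id, smiles
    FROM molecules
   WHERE id <= (SELECT max(id) / 100 FROM molecules)
     AND smiles |>| 'c1ccccc1';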
At some point in the next year we're going to reconsider our hosting
environment, currently consisting of several medium-sized servers (2x4
CPUs, 48GB RAM, 12-disk RAID system with 8 in RAID 10 and 2 in RAID 1 for
WAL). We use barman to keep a hot standby and an archive.
The last time we dug into
On Thu, Nov 12, 2015 at 7:12 AM, Merlin Moncure wrote:
> On Wed, Nov 11, 2015 at 1:33 PM, Tom Lane wrote:
> > Massalin Yerzhan writes:
> >> I'm having an issue. The query never ends:
> >> delete from bb_gamelist_league;
> >
> > 9 times out of 10, the answer to this type of problem is that you h
On Sat, Jul 25, 2015 at 8:04 AM, Tom Lane wrote:
> Craig James writes:
> > ... This would result in a thousand
> > or so Postgres connections on a machine with 32 CPUs.
>
> > So the question is: do idle connections impact performance?
>
> Yes. Those connec
The canonical advice here is to avoid more connections than you have CPUs,
and to use something like pgpool or pgbouncer to achieve that under heavy load.
We are considering using the Apache mod_perl "fast-CGI" system and perl's
Apache::DBI module, which caches persistent connections in order to improve
per
On Wed, Jul 8, 2015 at 1:27 PM, Andres Freund wrote:
> On 2015-07-08 13:46:53 -0500, Merlin Moncure wrote:
> > On Wed, Jul 8, 2015 at 12:48 PM, Craig James
> wrote:
> > > On Tue, Jul 7, 2015 at 10:31 PM, Joshua D. Drake >
> > >> Using Apache Fast-CGI, you
On Wed, Jul 8, 2015 at 10:52 AM, Joshua D. Drake
wrote:
>
> On 07/08/2015 10:48 AM, Craig James wrote:
>
> I admit that I haven't read this whole thread but:
>>
>> Using Apache Fast-CGI, you are going to fork a process for each
>> instance of the
On Tue, Jul 7, 2015 at 10:31 PM, Joshua D. Drake
wrote:
>
> On 07/07/2015 08:05 PM, Craig James wrote:
>
>>
>>
>> No ideas, but I ran into the same thing. I have a set of C/C++ functions
>> that put some chemistry calculations into Postgres as extensions (things
On Thu, Jul 2, 2015 at 9:15 AM, Graeme B. Bell wrote:
> Hi everyone,
>
> I've written a new open source tool for easily parallelising SQL scripts
> in postgres. [obligatory plug: https://github.com/gbb/par_psql ]
>
> Using it, I'm seeing a problem I've seen in other postgres projects
> invo
On Wed, Jul 1, 2015 at 4:57 PM, Scott Marlowe
wrote:
> On Wed, Jul 1, 2015 at 5:06 PM, Craig James wrote:
> > We're buying a new server in the near future to replace an aging system.
> I'd
> > appreciate advice on the best SSD devices and RAID controll
On Wed, Jul 1, 2015 at 4:56 PM, Andreas Joseph Krogh
wrote:
> On Thursday, July 2, 2015 at 01:06:57, Craig James <
> cja...@emolecules.com> wrote:
>
> We're buying a new server in the near future to replace an aging system.
> I'd appreciate advice on the best
.@postgresql.org] *On Behalf Of *Andreas Joseph
> Krogh
> *Sent:* Wednesday, July 01, 2015 6:56 PM
> *To:* pgsql-performance@postgresql.org
> *Subject:* Re: [PERFORM] New server: SSD/RAID recommendations?
>
>
>
> On Thursday, July 2, 2015 at 01:06:57, Craig James wrote <
We're buying a new server in the near future to replace an aging system.
I'd appreciate advice on the best SSD devices and RAID controller cards
available today.
The database is about 750 GB. This is a "warehouse" server. We load
supplier catalogs throughout a typical work week, then on the weeken
On Fri, Oct 10, 2014 at 9:53 AM, Tomas Vondra wrote:
>
> On 10.10.2014 16:21, Craig James wrote:
> > Our index is for chemical structures. Chemicals are indexed on
> > chemical fragments
> > <http://emolecules.com/info/molecular-informatics>. A search
> >
On Fri, Oct 10, 2014 at 5:10 AM, Tomas Vondra wrote:
> > I've gone looking for papers on this topic but from what I read this
> > isn't so. To get any noticeable improvement you need to read 10-50% of
> > the table and that's effectively the same as reading the entire table
> > -- and it still ha
Hi Jiri,
I’m really interested in those [clustering] algorithms and study them. But
> I would need somebody to point me directly at a specific algorithm to look
> at. The main problem with choosing the right one (which even my university
> teacher couldn’t get past) is that you don’t know the numb
Jiri,
If you haven't looked at clustering algorithms yet, you might want to do
so. Your problem is a special case of clustering, where you have a large
number of small clusters. A good place to start is the overview on
Wikipedia: http://en.wikipedia.org/wiki/Cluster_analysis
A lot of people have
Day and night, the postgres stats collector process writes output at about 20
MB/sec. vmstat shows this:
$ vmstat 2
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0  55864 135740 123804
On Thu, May 1, 2014 at 4:26 AM, Andreas Joseph Krogh wrote:
> I have a schema where I have lots of messages and some users who might
> have read some of them. When a message is read by a user I create an entry
> in a table message_property holding the property (is_read) for that user.
>
> The schem
Shaun Thomas wrote:
>
>> these issues tend to get solved through optimization fences.
>>> Reorganize a query into a CTE, or use the (gross) OFFSET 0 trick.
>>> How are these nothing other than unofficial hints?
>>>
>> Yeah, the cognitive dissonance levels get pretty high around this
>> issue. S
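For reference, the two "unofficial hints" mentioned above look roughly like
this (a sketch with hypothetical table and column names):

  -- A CTE as an optimization fence (the pre-12 behavior: the CTE is
  -- materialized and the outer quals are not pushed down into it):
  WITH candidates AS (
      SELECT * FROM orders WHERE status = 'open'
  )
  SELECT * FROM candidates WHERE customer_id = 42;

  -- The OFFSET 0 trick: the planner will not flatten a subquery that
  -- contains OFFSET, so it acts as a fence too:
  SELECT *
    FROM (SELECT * FROM orders WHERE status = 'open' OFFSET 0) AS c
   WHERE customer_id = 42;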
There have been many discussions about adding hints to Postgres over the
years. All have been firmly rejected by the Postgres developers, with
well-argued reasons. Search the archives to learn more about this topic.
On the other hand, Postgres does have hints. They're just called settings.
You c
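A sketch of what "hints via settings" looks like in practice (the setting and
the query are illustrative only):

  BEGIN;
  -- Discourage a plan type for just this transaction; SET LOCAL reverts
  -- automatically at COMMIT or ROLLBACK.
  SET LOCAL enable_nestloop = off;
  SELECT o.id, c.name
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
   WHERE o.created > now() - interval '1 day';
  COMMIT;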
On Sun, Dec 15, 2013 at 8:18 AM, Yuri Levinsky wrote:
> Dear ALL,
>
> I am running a PL/pgsql procedure with sql statements that take a long
> time. I am able to see them in the log just after their completion. How can
> I see the currently running SQL statement? I am able to see in pg_stat_activity
>
On Wed, Dec 11, 2013 at 2:29 PM, Janek Sendrowski wrote:
> Hi,
>
> How can I use the ORDER-BY-using-an-index feature presented in this
> implementation?
> It doesn't seem to be in use when I look at my query plan.
> It still does a cost-intensive Bitmap Heap Scan and a Bitmap Index Scan.
>
On Tue, Nov 26, 2013 at 10:40 AM, Ben Chobot wrote:
> On Nov 26, 2013, at 9:24 AM, Craig James wrote:
>
> So far I'm impressed by what I've read about Amazon's Postgres instances.
> Maybe the reality will be disappointing, but (for example) the idea of
> setting up
On Tue, Nov 26, 2013 at 8:29 AM, David Boreham wrote:
> On 11/26/2013 7:26 AM, Craig James wrote:
>
>>
>> For those of us with small (a few to a dozen servers), we'd like to get
>> out of server maintenance completely. Can anyone with experience on a cloud
>> VM
>
> On 25.11.2013 22:01, Lee Nguyen wrote:
>>
>>>
>>> Why shouldn't we run Postgres in a VM? What are the downsides? Does
>>> anyone
>>> have any metrics or benchmarks with the latest Postgres?
>>>
>>
For those of us with small (a few to a dozen servers), we'd like to get out
of server maintenance
2013/11/18 Rogerio Pereira
> What could this error mean:
>
> -- could not send data to client: Broken pipe
> -- FATAL: connection to client lost
>
It means the client program disconnected from the Postgres server (or was
killed) before the server finished a query, and the server had no place to
se
On Sun, Sep 29, 2013 at 2:24 PM, Ken Tanzer wrote:
> p.s., This script runs fine on my computer (Ubuntu 13.04), but on a
>>> Fedora 11 machine it dies with
>>>
>>> pg_analyze_info.sh: line 18: unexpected EOF while looking for matching
>>> `)'
>>> pg_analyze_info.sh: line 57: syntax error: unexp
On Sun, Sep 29, 2013 at 2:09 AM, Ken Tanzer wrote:
> I just sent off to this list for query help, and found the process of
> gathering all the requested info somewhat tedious. So I created a little
> BASH script to try to pull together as much of this information as
> possible.
>
> The script re
On Sat, Sep 14, 2013 at 11:36 AM, bricklen wrote:
> On Sat, Sep 14, 2013 at 11:28 AM, Craig James wrote:
>
>> I'm trying to do a pg_dump of a database, and it more-or-less just sits
>> there doing nothing.
>>
>
> What is running in the db? Perhaps there is som
I'm trying to do a pg_dump of a database, and it more-or-less just sits
there doing nothing. "vmstat 2" looked like this during pg_dump:
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
On Sat, Aug 31, 2013 at 10:06 AM, Łukasz Walkowski <
lukasz.walkow...@homplex.pl> wrote:
> > I think the main "pro" of this approach is that it doesn't use any
> > nonstandard SQL features, so you preserve your options to move to some
> > other database in the future. The main "con" is that you'd
On Wed, May 15, 2013 at 1:15 PM, Gavin Flower wrote:
> On 16/05/13 04:23, Craig James wrote:
>
> On Wed, May 15, 2013 at 8:31 AM, Shaun Thomas wrote:
>
>> [Inefficient plans for correlated columns] has been a pain point for
>> quite a while. While we've had severa
On Wed, May 15, 2013 at 8:31 AM, Shaun Thomas wrote:
> [Inefficient plans for correlated columns] has been a pain point for quite
> a while. While we've had several discussions in the area, it always seems
> to just kinda trail off and eventually vanish every time it comes up.
>
> ...
> Since ther
On Wed, May 1, 2013 at 5:18 PM, Tom Lane wrote:
> Craig James writes:
> > I have two tables that are nearly identical, yet the same query runs 100x
> > slower on the newer one. ...
>
> > db=> explain analyze select id, 1 from str_conntab
> > where (id >= 1200
I have two tables that are nearly identical, yet the same query runs 100x
slower on the newer one. The two tables have the same number of rows (+/-
about 1%), and are roughly the same size:
db=> SELECT relname AS table_name,
db-> pg_size_pretty(pg_relation_size(oid)) AS table_size,
db-> pg_size_p
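The size query above is cut off; a complete query of that sort (a sketch, not
necessarily the original; the second table name is hypothetical) would be:

  SELECT relname AS table_name,
         pg_size_pretty(pg_relation_size(oid))       AS table_size,
         pg_size_pretty(pg_total_relation_size(oid)) AS total_size
    FROM pg_class
   WHERE relname IN ('str_conntab', 'str_conntab_new')
   ORDER BY relname;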
On Fri, Mar 1, 2013 at 1:52 AM, Steven Crandell
wrote:
> Recently I moved my ~600G / ~15K TPS database from a
> 48 core@2.0GHz server with 512GB RAM on 15K RPM disk
> to a newer server with
> 64 core@2.2Ghz server with 1T of RAM on 15K RPM disks
>
> The move was from v9.1.4 to v9.1.8 (eventually a
about. Dell basically doesn't understand Postgres.
They boast excellent on-site service, but for the price of their computers
and their service contract, you can buy two servers from a white-box
vendor. Our white-box servers have been just as reliable as the Dell
servers -- no failures.
If I drop and then recreate a trigger inside of a single transaction, how
does it affect other processes trying to use the same table? Can they just
merrily go along their way using the table, or will they be blocked by an
exclusive lock?
We have a trigger that detects illegal drugs and dangerous
On Wed, Nov 21, 2012 at 9:25 AM, Joe Conway wrote:
> On 11/21/2012 08:05 AM, Heikki Linnakangas wrote:
> > Rather than telling the planner what to do or not to do, I'd much rather
> > have hints that give the planner more information about the tables and
> > quals involved in the query. A typical
vice. The correct hint
syntax would be suggested only when all other avenues failed.
Craig James
>
> -Kevin
>
rly an important need, given the nature of the dialog above (and
many others that have passed through this mailing list).
Why not make an explicit hint syntax and document it? I still don't
understand why "hint" is a dirty word in Postgres. There are a half-dozen
or so ways i
... did you
do an "analyze" on the small table? I've been hit by this before ... it's
natural to think that Postgres would always check a very small table first
no matter what the statistics are. But it's not true. If you analyze the
small table, even if it only has one or two rows in it, it will often
radically change the plan that Postgres chooses.
Craig James
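A minimal illustration of the point (table names are hypothetical):

  -- Without statistics, the planner can badly misjudge even a tiny table.
  ANALYZE small_lookup;    -- takes a fraction of a second on a small table
  EXPLAIN ANALYZE
  SELECT b.*
    FROM big_table b
    JOIN small_lookup s ON s.id = b.lookup_id
   WHERE s.flag;
  -- Running the EXPLAIN again after the ANALYZE often shows a radically
  -- different, and much better, plan.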
ms of sequential input whether or not cache is enabled
>>> on
>>> the
>>> RAID1 (SAS 15K, sdb).
>
>
> Maybe there's a misunderstanding here.. :) Craig (James) is the one
> who had started this thread. I joined later, suggesting a way to
> disable HT
On Mon, Oct 15, 2012 at 1:27 AM, Andrea Suisani wrote:
> On 10/11/2012 04:40 PM, Andrea Suisani wrote:
>>
>> On 10/11/2012 04:19 PM, Claudio Freire wrote:
>>>
>>> On Thu, Oct 11, 2012 at 11:14 AM, Andrea Suisani
>>> wrote:
sorry to come late to the party, but being in a similar conditio
Sent this to Claudio rather than the whole list ... here it is.
On Wed, Oct 10, 2012 at 7:44 AM, Claudio Freire wrote:
> On Wed, Oct 10, 2012 at 9:52 AM, Shaun Thomas
> wrote:
> > On 10/09/2012 06:30 PM, Craig James wrote:
> >
> >>ra:8192 walb:1M ra
distance prevents me from going to the co-location facility
and trying this with hyperthreading turned back on.
Craig
On Tue, Oct 9, 2012 at 1:12 PM, Craig James wrote:
> I've confirmed that hyperthreading causes a huge drop in performance on a
> 2x4-core Intel Xeon E5620 2.40
I've confirmed that hyperthreading causes a huge drop in performance on a
2x4-core Intel Xeon E5620 2.40GHz system. The bottom line is:
~3200 TPS max with hyperthreading
~9000 TPS max without hyperthreading
Here are my results.
"Hyprethreads" (Run1) is "out of the box" with hyperthreads ena
On Tue, Oct 9, 2012 at 9:02 AM, Shaun Thomas wrote:
> On 10/08/2012 06:40 PM, Craig James wrote:
>
> Nobody has commented on the hyperthreading question yet ... does it
>> really matter? The old (fast) server has hyperthreading disabled, and
>> the new (slower) server has
On Tue, Oct 9, 2012 at 9:14 AM, David Thomas wrote:
> On Mon, Oct 08, 2012 at 04:40:31PM -0700, Craig James wrote:
> >Nobody has commented on the hyperthreading question yet ... does it
> >really matter? The old (fast) server has hyperthreading disabled, and
> &g
On Mon, Oct 8, 2012 at 3:29 PM, Craig James wrote:
> One mistake in my descriptions...
>
> On Mon, Oct 8, 2012 at 2:45 PM, Craig James wrote:
>
>> This is driving me crazy. A new server, virtually identical to an old
>> one, has 50% of the performance with pgbench. I
On Mon, Oct 8, 2012 at 3:50 PM, Claudio Freire wrote:
> On Mon, Oct 8, 2012 at 7:48 PM, Craig James wrote:
> >> > # blockdev --getra /dev/sdb1
> >> > 256
> >>
> >>
> >> It's probably this. 256 is way too low to saturate your I/O system.
On Mon, Oct 8, 2012 at 3:44 PM, Claudio Freire wrote:
> On Mon, Oct 8, 2012 at 7:25 PM, Craig James wrote:
> >> > But why? What have I overlooked?
> >>
> >> Do you have readahead properly set up on the new one?
> >
> >
> > # blockdev --getra
On Mon, Oct 8, 2012 at 3:33 PM, Evgeny Shishkin wrote:
>
> On Oct 9, 2012, at 1:45 AM, Craig James wrote:
>
> One dramatic difference I noted via vmstat. On the old server, the I/O
> load during the bonnie++ run was steady, like this:
>
> procs ---memory---
One mistake in my descriptions...
On Mon, Oct 8, 2012 at 2:45 PM, Craig James wrote:
> This is driving me crazy. A new server, virtually identical to an old
> one, has 50% of the performance with pgbench. I've checked everything I
> can think of.
>
> The setups (call t
On Mon, Oct 8, 2012 at 3:09 PM, Claudio Freire wrote:
> On Mon, Oct 8, 2012 at 7:06 PM, Craig James wrote:
> >> Sequential Input on the new one is 279MB/s, on the old 400MB/s.
> >>
> >
> > But why? What have I overlooked?
>
> Do you have readahead properl
On Mon, Oct 8, 2012 at 2:57 PM, Evgeny Shishkin wrote:
>
> On Oct 9, 2012, at 1:45 AM, Craig James wrote:
>
> I tested both the RAID10 data disk and the RAID1 xlog disk with bonnie++.
> The xlog disks were almost identical in performance. The RAID10 pg-data
> disks looked
This is driving me crazy. A new server, virtually identical to an old one,
has 50% of the performance with pgbench. I've checked everything I can
think of.
The setups (call the servers "old" and "new"):
old: 2 x 4-core Intel Xeon E5620
new: 4 x 4-core Intel Xeon E5606
both:
memory: 12 GB DD
On 9/27/2012 1:56 PM, M. D. wrote:
>>
>> I'm in Belize, so what I'm considering is from ebay, where it's unlikely
>> that I'll get the warranty. Should I consider some other brand rather? To
>> build my own or buy custom might be an option too, but I would not get any
>> warranty.
Your best warra
On Thu, Sep 27, 2012 at 12:11 PM, M. D. wrote:
> Hi everyone,
>
> I want to buy a new server, and am contemplating a Dell R710 or the newer
> R720. The R710 has the x5600 series CPU, while the R720 has the newer
> E5-2600 series CPU.
>
> At this point I'm dealing with a fairly small database of 8
On Fri, Sep 14, 2012 at 12:40 AM, Nikolay Ulyanitsky wrote:
> Hi
> I compiled the 3.6-rc5 kernel with the same config from 3.5.3 and got
> the 15-20% performance drop of PostgreSQL 9.2 on AMD chipsets (880G,
> 990X).
>
Did you compile the AMD code on the AMD system?
We use a different open-sourc
On Tue, Aug 21, 2012 at 1:03 PM, Scott Marlowe wrote:
> That seems unnecessarily complex. how about this:
>
> create sequence s;
> select array_agg (a.b) from (select nextval('s') as b from
> generate_series(1,1000)) as a;
>
> Then you just iterate that array for the ids you need.
For brevity I
On Mon, Aug 20, 2012 at 5:10 PM, Tom Lane wrote:
> Craig James writes:
>> I want to do this:
>
>> select setval('object_id_seq', nextval('object_id_seq') + 1000, false);
>
>> Now suppose two processes do this simultaneously. Maybe they'
On Mon, Aug 20, 2012 at 6:06 PM, Scott Marlowe wrote:
> On Mon, Aug 20, 2012 at 6:59 PM, Scott Marlowe
> wrote:
>> On Mon, Aug 20, 2012 at 6:10 PM, Tom Lane wrote:
>>> Craig James writes:
>>>> I want to do this:
>>>
>>>> select s
Is seq.setval() "non transactional" in the same sense as seq.nextval()
is? More specifically, suppose I sometimes want to get IDs one-by-one
using nextval(), but sometimes I want a block of a thousand IDs. To
get the latter, I want to do this:
select setval('object_id_seq', nextval('object_id_seq') + 1000, false);
On Tue, Aug 7, 2012 at 3:22 PM, Jeff Janes wrote:
> On Tue, Aug 7, 2012 at 2:39 PM, Craig James wrote:
>> On Tue, Aug 7, 2012 at 1:45 PM, Jeff Janes wrote:
>>> On Tue, Aug 7, 2012 at 1:15 PM, Merlin Moncure wrote:
>>>>
>>>> IF current_user = 'b
On Tue, Aug 7, 2012 at 1:45 PM, Jeff Janes wrote:
> On Tue, Aug 7, 2012 at 1:15 PM, Merlin Moncure wrote:
>> On Tue, Aug 7, 2012 at 1:48 PM, Craig James wrote:
>>> I found this discussion from 2005 that says you can drop and restore a
>>> trigger inside a transacti
I found this discussion from 2005 that says you can drop and restore a
trigger inside a transaction, but that doing so locks the whole table:
http://archives.postgresql.org/pgsql-general/2005-01/msg01347.php
> From: Jeff Davis
>
> It got me curious enough that I tested it, and apparently dropping a
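The pattern under discussion, sketched out (trigger, function, and table names
are hypothetical):

  BEGIN;
  -- DROP TRIGGER takes an exclusive lock on the table, so other sessions
  -- that touch the table will block until this transaction commits.
  DROP TRIGGER check_compounds ON compounds;
  -- ... bulk work that must bypass the trigger ...
  COPY compounds FROM '/tmp/compounds.copy';
  CREATE TRIGGER check_compounds
      BEFORE INSERT OR UPDATE ON compounds
      FOR EACH ROW EXECUTE PROCEDURE check_compound();
  COMMIT;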
"building of interest" property.
The inner query would reduce your sample set to a much smaller set of
buildings, and presumably the outer query could handle that pretty quickly.
Craig James
>
> My questions:
>
> 1. Any comments about the nature of this problem?
>
On Tue, Jul 31, 2012 at 1:50 AM, Laszlo Nagy wrote:
>
> Which Intel RAID controller is that? All of the ones on the motherboard
>> are pretty much useless if that's what you have. Those are slower than
>> software RAID and it's going to add driver issues you could otherwise
>> avoid. Better to
On Tue, Jul 24, 2012 at 11:27 AM, Laszlo Nagy wrote:
>
> > I wonder if UFS has better performance or not. Or can you suggest
>> > another fs? Just of the PGDATA directory.
>>
>
> Relying on physically moving a disk isn't a good backup/recovery
> strategy. Disks are the least reliable single co
>
> On 24/07/2012 14:51, Laszlo Nagy wrote:
> >
> > Hello,
> >
> > Under FreeBSD 9, what filesystem should I use for PostgreSQL? (Dell
> > PowerEdge 2900, 24G mem, 10x2T SATA2 disk, Intel RAID controller.)
> >
> > * ZFS is journaled, and it is more independent of the hardware. So if
> >the c
On Tue, Jul 24, 2012 at 6:22 AM, Ioannis Anagnostopoulos wrote:
> Hello,
> The Postgres 9.0 database we use gets about 20K inserts per minute. As long
> as you don't query at the same time, the copying goes fine. However,
> long-running queries seem to delay the db so much that the applicati
On Wed, Jul 11, 2012 at 2:32 PM, Mark Thornton wrote:
> On 11/07/12 21:18, Craig James wrote:
>
>>
>> It strikes me as a contrived case rather than a use case. What sort of
>> app repeatedly fills and truncates a small table thousands of times ...
>> other than a t
On Wed, Jul 11, 2012 at 7:05 AM, Tom Lane wrote:
> Daniel Farina writes:
> > TRUNCATE should simply be very nearly the fastest way to remove data
> > from a table while retaining its type information, and if that means
> > doing DELETE without triggers when the table is small, then it should.
>
wraparound if you have a fairly busy database. I can't
> think of a single situation where either client caching or LIMIT/OFFSET
> can't supplant it with better risk levels and costs.
>
A good solution to this general problem is "hitlists." I wrote about this
con
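A sketch of the hitlist idea (table and column names are illustrative):
materialize the matching ids for an expensive search once, then page through
that list cheaply instead of re-running the search with LIMIT/OFFSET.

  -- Build the hitlist once per search:
  CREATE TABLE hitlist_1234 AS
  SELECT row_number() OVER (ORDER BY score DESC) AS rank, id
    FROM documents
   WHERE body @@ to_tsquery('expensive & search');

  -- Each page is then a trivial range scan:
  SELECT d.*
    FROM hitlist_1234 h
    JOIN documents d ON d.id = h.id
   WHERE h.rank BETWEEN 101 AND 150
   ORDER BY h.rank;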
Any thoughts about this? It seems to be a new database system designed
from scratch to take advantage of the growth in RAM size (data sets that
fit in memory) and the availability of SSD drives. It claims to be "the
world's fastest database."
http://www.i-programmer.info/news/84-database/4397-me
t to ensure we always use prepared statements and variable
> bindings to help protect from SQL injection vulnerabilities.
>
JDBC has some features that are supposed to be convenient (automatic
preparing based on a number-of-executions threshold) that strike me as
misguided. It's one thin
On Thu, May 31, 2012 at 4:01 PM, Trevor Campbell wrote:
> On 01/06/12 08:55, Craig James wrote:
>
>
>
> On Thu, May 31, 2012 at 3:29 PM, Trevor Campbell
> wrote:
>
>> We are having trouble with a particular query being slow in a strange
>> manner.
>>
On Thu, May 31, 2012 at 3:29 PM, Trevor Campbell wrote:
> We are having trouble with a particular query being slow in a strange
> manner.
>
> The query is a join over two large tables that are suitably indexed.
>
> select CG.ID, CG.ISSUEID, CG.AUTHOR, CG.CREATED, CI.ID, CI.FIELDTYPE,
> CI.FIELD,
On Fri, May 25, 2012 at 4:58 AM, Greg Spiegelberg wrote:
> On Sun, May 13, 2012 at 10:01 AM, Craig James wrote:
>
>>
>> On Sun, May 13, 2012 at 1:12 AM, Віталій Тимчишин wrote:
>>
>>>
>>> The sequences AFAIK are accounted as relations. Large list of r
On Thu, May 24, 2012 at 12:06 AM, Hugo wrote:
> Hi everyone,
>
> We have a production database (postgresql 9.0) with more than 20,000
> schemas
> and 40Gb size. In the past we had all that information in just one schema
> and pg_dump used to work just fine (2-3 hours to dump everything). Then we
On Sun, May 13, 2012 at 1:12 AM, Віталій Тимчишин wrote:
>
>
> 2012/5/11 Robert Klemme
>
>> On Fri, May 11, 2012 at 12:50 PM, Vidhya Bondre
>> wrote:
>> > Is there any max limit set on sequences that can be created on the
>> database
>> > ? Also would like to know if we create millions of seque
On Thu, May 3, 2012 at 6:42 AM, Jan Nielsen wrote:
> Hi John,
>
> On Thu, May 3, 2012 at 12:54 AM, John Lister
> wrote:
>>
>> On 03/05/2012 03:10, Jan Nielsen wrote:
>>
>>
>> 300GB RAID10 2x15k drive for OS on local storage
>> */dev/sda1 RA* 4096
>> */de
An update to our system means I'm going to be rewriting every row of
some large tables (20 million rows by 15 columns). In a situation
like this, can auto-vacuum take care of it, or should I plan on
vacuum-full/reindex to clean up?
This is 8.4.4.
Thanks,
Craig
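For completeness, the kind of cleanup being weighed (a sketch; the table name
is hypothetical, and on 8.4 the old-style VACUUM FULL can bloat indexes, which
is why a REINDEX usually follows it):

  VACUUM FULL big_table;     -- compacts the table, takes an exclusive lock
  REINDEX TABLE big_table;   -- rebuild the indexes afterwards
  -- The alternative is to let autovacuum reclaim the dead row versions over
  -- time, at the cost of the table staying roughly twice its minimum size
  -- until the space is reused.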
On Fri, Mar 30, 2012 at 7:53 AM, k...@rice.edu wrote:
> On Fri, Mar 30, 2012 at 02:45:36PM +, Campbell, Lance wrote:
>> PostgreSQL 9.0.x
>> When PostgreSQL storage is using a relatively large raid 5 or 6 array is
>> there any value in having your tables distributed across multiple
>> table
On Wed, Mar 7, 2012 at 12:18 PM, Merlin Moncure wrote:
> On Mon, Mar 5, 2012 at 10:56 AM, Craig James
> wrote:
> > On Sun, Mar 4, 2012 at 10:36 AM, Rory Campbell-Lange
> > wrote:
> >>
> >> We do have complex transactions, but I haven't benchmarked th
On Sun, Mar 4, 2012 at 10:36 AM, Rory Campbell-Lange <
r...@campbell-lange.net> wrote:
> We do have complex transactions, but I haven't benchmarked the
> performance so I can't describe it. Few of the databases are at the many
> million row size at the moment, and we are moving to an aggressive sch
On Thu, Mar 1, 2012 at 10:27 AM, Alessandro Gagliardi
wrote:
> Hi folks,
>
> I have a system that racks up about 40M log lines per day. I'm able to COPY
> the log files into a PostgreSQL table that looks like this:
Since you're using a COPY command and the table has a simple column
with exactly t
On Thu, Mar 1, 2012 at 9:50 AM, Tom Lane wrote:
> "Kevin Grittner" writes:
>> Marcin Mirosław wrote:
>>> SELECT count(*)
>>> from (select * from users_profile order by id) u_p;
>
>>> "order by id" can be ignored by planner.
>
>> This has been discussed before. Certainly not all ORDER BY clauses
On 11/17/11 9:10 AM, Jon Nelson wrote:
I have one query which does not run very often. Sometimes it may be
months between runs.
However, when it does get executed, it scans approximately 100
identically-structured tables (a form of partitioning), extracts and
groups on a subset of the columns, an
On 11/2/11 10:22 AM, Claudio Freire wrote:
On Wed, Nov 2, 2011 at 12:13 PM, Robert Haas wrote:
I wonder if we need to rethink, though. We've gotten a number of
reports of problems that were caused by single-use CTEs not being
equivalent - in terms of performance - to a non-CTE formulation of t
On 10/19/11 2:46 AM, d.davo...@mastertraining.it wrote:
Hi everybody,
I googled around a bit and also checked the mailing lists, but I still can't
form a clear picture. We plan to use postgres 9 and the Cluster Database Replica.
My colleagues are asking how many Cluster Databases (initdb) can I create an
On 9/26/11 10:07 AM, Tom Lane wrote:
Cranking up the statistics target for the hts_code_id column (and
re-ANALYZEing) ought to fix it. If all your tables are this large you might
want to just increase default_statistics_target across the board.
regards, tom lane
This is common advice in this for