On Tue, Jul 16, 2013 at 9:51 AM, Brian Fehrle bri...@consistentstate.com wrote:
Are there any known differences between the database systems in terms of
query planners or general operations (sorting, unions), notable enough
that they would make postgres slow down when
On Thu, Nov 8, 2012 at 1:36 AM, Denis soc...@gmail.com wrote:
P.S.
Not to start a holywar, but FYI: in a similar project where we used MySQL
now we have about 6000 DBs and everything works like a charm.
You seem to have answered your own question here. If my recollection of a
previous
This was answered on the list last time you asked it. You are exceeding a
maximum buffer size. There was an implication that it was related to
converting a string from one encoding to another that could maybe be
alleviated by using the same encoding in both client and server, but a more
reliable
On Tue, Oct 16, 2012 at 4:45 PM, Chris Ruprecht ch...@cdrbill.com wrote:
Hi guys,
PG = 9.1.5
OS = winDOS 2008R8
I have a table that currently has 207 million rows.
there is a timestamp field that contains data.
more data gets copied from another database into this database.
How do I make
On Mon, Oct 8, 2012 at 1:25 PM, Navaneethan R n...@gridlex.com wrote:
After creating the index for the WHERE clause (WHERE dealer_id = 270001),
it is performing better. I have more dealer_ids; should I do it for each
dealer_id?
All you've really done is confuse the issue. Please read the wiki
On Wed, Sep 26, 2012 at 1:21 PM, hubert depesz lubaczewski
dep...@depesz.com wrote:
On Wed, Sep 26, 2012 at 02:38:09PM -0400, Robert Sosinski wrote:
The first query shows a cost of 190,169.55 and runs in 199,806.951 ms.
When I disable nested loop, I get a cost of 2,535,992.34 which runs in
On Tue, Aug 7, 2012 at 2:39 PM, Craig James cja...@emolecules.com wrote:
Obviously this is a very expensive trigger, but one that we can drop
in a very specific circumstance. But we NEVER want to drop it for
everyone. It seems like a very reasonable use-case to me.
Sounds like you should
On Mon, Jul 16, 2012 at 7:06 AM, Mark Thornton mthorn...@optrak.com wrote:
Every insert updates four indexes, so at least 3 of those will be in
random order. The indexes don't fit in memory, so all those updates will
involve reading most of the relevant b-tree pages from disk (or at least
On Thu, Jul 5, 2012 at 12:13 PM, Reza Taheri rtah...@vmware.com wrote:
Hi Robert,
Yes, the same concept. Oracle's IOT feature is used often with TPC
benchmarks.
Reza, it would be very helpful if you were to provide the list with a lot
more information about your current software and hardware
On Thu, Jul 5, 2012 at 1:37 PM, Reza Taheri rtah...@vmware.com wrote:
I provided more config details in an earlier email.
I hate to disagree, but unless I didn't get a message sent to the list, you
haven't provided any details about your postgresql config or otherwise
adhered to
On Mon, Jun 18, 2012 at 9:39 AM, Anish Kejariwal anish...@gmail.com wrote:
So, it looks like clustering the index appropriately fixes things! Also,
I'll recreate the index switching the order to (dataset_id, stat_id,data_id)
Just keep in mind that clustering is a one-time operation.
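For readers following along, the two operations being discussed look roughly like this (table and index names are hypothetical):

```sql
-- Rebuild the index with the new column order (hypothetical names):
DROP INDEX IF EXISTS stats_idx;
CREATE INDEX stats_idx ON stats (dataset_id, stat_id, data_id);

-- CLUSTER rewrites the table in index order once; new rows are not kept
-- in that order, so the command must be re-run as the table churns:
CLUSTER stats USING stats_idx;
```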
On Wed, Apr 25, 2012 at 11:52 AM, Venki Ramachandran
venki_ramachand...@yahoo.com wrote:
Hi all:
Can someone please guide me as to how to solve this problem? If this is
the wrong forum, please let me know which one to post this one in. I am new
to Postgres (about 3 months into it)
I have
On Sun, Apr 1, 2012 at 6:11 PM, Andrew Dunstan and...@dunslane.net wrote:
On 04/01/2012 08:29 PM, Claudio Freire wrote:
On Sun, Apr 1, 2012 at 8:54 PM, Andrew Dunstan and...@dunslane.net
wrote:
You could try using Unix domain socket and see if the performance
improves. A relevant link:
On Mon, Feb 27, 2012 at 6:59 AM, Reuven M. Lerner reu...@lerner.co.il wrote:
So for now, we'll just try to DELETE faster than we INSERT, and combined
with autovacuum, I'm hoping that this crisis will be averted. That said,
the current state of affairs with these machines is pretty fragile,
On Thu, Feb 23, 2012 at 10:39 PM, Reuven M. Lerner reu...@lerner.co.il wrote:
Hi, everyone.
So it turns out that we're not using 25 GB of virtual memory. (That's
what I had been shown yesterday, and it was a bit surprising, to say the
least...)
A few statistics that I managed to get from
On Tue, Feb 21, 2012 at 9:59 AM, Alessandro Gagliardi
alessan...@path.com wrote:
I was thinking about that (as per your presentation last week) but my
problem is that when I'm building up a series of inserts, if one of them
fails (very likely in this case due to a unique_violation) I have to
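For reference, the usual ways around a batch aborting on a unique_violation are either to filter out duplicates in the INSERT itself or, on PostgreSQL 9.5 and later (not yet available at the time of this thread), to use ON CONFLICT. A sketch with a hypothetical table:

```sql
-- PostgreSQL 9.5+: skip conflicting rows instead of aborting the batch
INSERT INTO events (event_id, payload)
VALUES (1, 'a'), (2, 'b'), (3, 'c')
ON CONFLICT (event_id) DO NOTHING;

-- Portable alternative: filter out duplicates before inserting
INSERT INTO events (event_id, payload)
SELECT v.event_id, v.payload
FROM (VALUES (1, 'a'), (2, 'b'), (3, 'c')) AS v(event_id, payload)
WHERE NOT EXISTS (SELECT 1 FROM events e WHERE e.event_id = v.event_id);
```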
On Fri, Jan 6, 2012 at 6:35 AM, anto...@inaps.org wrote:
Hello,
I've a table with approximately 50 million rows with a schema like this:
id bigint NOT NULL DEFAULT nextval('stats_5mn'::regclass),
t_value integer NOT NULL DEFAULT 0,
t_record integer NOT NULL DEFAULT 0,
On Fri, Jan 6, 2012 at 12:22 PM, Marc Eberhard eberhar...@googlemail.com wrote:
Hi Samuel!
On 6 January 2012 20:02, Samuel Gendler sgend...@ideasculptor.com wrote:
Have you considered doing the insert by doing a bulk insert into a temp
table and then pulling rows that don't exist across
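A sketch of the temp-table approach being suggested (schema and file path are hypothetical):

```sql
CREATE TEMP TABLE staging (LIKE target INCLUDING DEFAULTS);
COPY staging FROM '/tmp/batch.csv' WITH (FORMAT csv);

-- Pull across only the rows not already present in the target table:
INSERT INTO target
SELECT s.*
FROM staging s
WHERE NOT EXISTS (SELECT 1 FROM target t WHERE t.id = s.id);
```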
On Mon, Nov 7, 2011 at 10:33 PM, Bhakti Ghatkar bghat...@zedo.com wrote:
Tom,
Currently we are using version 9.0.1.
Which version shall we update to? 9.0.5 or 9.1?
9.0.5 should be compatible with your installed db and contain any bug fixes
that have been released. Which isn't to say that
On Fri, Oct 28, 2011 at 2:20 PM, Merlin Moncure mmonc...@gmail.com wrote:
hrm -- it doesn't look like you are i/o bound -- postgres is
definitely the bottleneck. taking a dump off of production is
throwing something else out of whack which is affecting your other
processes.
band aid
I've got a large mixed-used database, with the data warehouse side of things
consisting of several tables at hundreds of millions of rows, plus a number
of tables with tens of millions. There is partitioning, but as the volume
of data has risen, individual partitions have gotten quite large.
On Thu, Oct 27, 2011 at 1:45 PM, Nicholson, Brad (Toronto, ON, CA)
bnichol...@hp.com wrote:
From: pgsql-performance-ow...@postgresql.org [mailto:
pgsql-performance-ow...@postgresql.org] On Behalf Of Samuel Gendler
Sent: Thursday, October 27, 2011 12:47 PM
To: pgsql-performance@postgresql.org
On Thu, Oct 27, 2011 at 2:15 PM, Samuel Gendler
sgend...@ideasculptor.com wrote:
There are definitely no bloated tables. The large tables are all
insert-only, and old data is aggregated up and then removed by dropping
whole partitions. There should be no bloat whatsoever. The OLTP side
On Mon, Oct 10, 2011 at 1:52 PM, Kevin Grittner kevin.gritt...@wicourts.gov
wrote:
alexandre - aldeia digital adald...@gmail.com wrote:
I came to the list to see if anyone else has experienced the same
problem
A high load average or low idle CPU isn't a problem, it's a
potentially
On Mon, Sep 26, 2011 at 2:11 PM, Timothy Garnett tgarn...@panjiva.com wrote:
Though maybe in a lot of common use situations people only supply values
that are known present so maybe this would make things worse more often than
better (maybe limit 1 or better EXISTS would be a hint the value is
I'm just beginning the process of benchmarking and tuning a new server.
Something I really haven't done before. I'm using Greg's book as a guide.
I started with bonnie++ (1.96) and immediately got anomalous results (I
think).
Hardware is as follows:
2x quad core xeon 5504 2.0Ghz, 2x4MB cache
On Tue, Sep 13, 2011 at 12:13 PM, Samuel Gendler
sgend...@ideasculptor.com wrote:
I'm just beginning the process of benchmarking and tuning a new server.
Something I really haven't done before. I'm using Greg's book as a guide.
I started with bonnie++ (1.96) and immediately got anomalous
On Wed, Aug 3, 2011 at 9:56 PM, Adarsh Sharma adarsh.sha...@orkash.com wrote:
Dear all,
Over the last few days, I have researched Postgresql Performance Tuning a lot
due to the slow speed of my server.
My application selects about 10 rows of data from a mysql database, processes
them, and inserts into
I'm just top posting this because this whole thread needs a reset before it
goes any farther.
Start with a real description of these hosts - Number and types of disks,
filesystem configs, processors, memory, OS, etc. If your db is small enough
to fit into RAM, please show us the db config you
On Sun, Jul 10, 2011 at 4:41 AM, Radhya sahal rad_cs_2...@yahoo.com wrote:
Dear all,
Could anyone help me?
When I use pgAdmin to execute a query, it shows the total time for the query,
such as:
(select * from table_name) query total time is, for example, 100 ms.
I want to know the
On Sun, Jul 10, 2011 at 2:16 PM, lars lhofha...@yahoo.com wrote:
I know this has been discussed various times...
We are maintaining a large multi tenant database where *all* tables have a
tenant-id and all indexes and PKs lead with the tenant-id.
Statistics and counts for all the other
On Tue, Jul 5, 2011 at 1:50 PM, Matthias Howell
matthias.how...@voxco.com wrote:
I've just copied a database from one linux machine to another.
Fast machine is CentOS 5.5, running postgres 9.0.0 64 bit
Slow machine is Red Hat 5.5 running postgres 9.0.2 64 bit.
On Wed, Jul 6, 2011 at 5:50 AM, Matthias Howell
matthias.how...@voxco.com wrote:
However, in the end, it was user brain damage.
It does use the doc id index for the subquery, but for some reason, the
primary key on sentences - the sentenceid - was not set. So in fact, there
is
On Fri, Jul 1, 2011 at 3:46 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Samuel Gendler sgend...@ideasculptor.com writes:
I've got 2 nearly identical queries that perform incredibly differently.
The reason the slow query sucks is that the planner is estimating at
most one s row will match
Here's the setup:
I'm cross joining two dimensions before left outer joining to a fact table
so that I can throw a default value into the resultset wherever a value is
missing from the fact table. I have a time dimension and another dimension.
I want the cross join to only cross a subset of
On Thu, Jun 30, 2011 at 1:53 AM, Samuel Gendler
sgend...@ideasculptor.com wrote:
If I could figure out either a query structure or an index structure which
will force the fast query plan, I'd be much happier. So that is what I am
looking for - an explanation of how I might convince the planner
On Sun, Jun 19, 2011 at 11:36 PM, Julius Tuskenis jul...@nsoft.lt wrote:
Hello,
I'm sorry to write again, but as I received no answer I wonder if there is
a better mailing list to address concerning this question? Or is there
nothing to be done about the speed of xmlagg? Please let me as
On Wed, Jun 8, 2011 at 10:57 PM, Greg Smith g...@2ndquadrant.com wrote:
Samuel Gendler wrote:
Sure, but if it is a query that is slow enough for a time estimate to be
useful, odds are good that stats that are that far out of whack would
actually be interesting to whoever is looking
On Wed, Jun 8, 2011 at 12:03 PM, Tony Capobianco tcapobia...@prospectiv.com
wrote:
My current setting is 22G. According to some documentation, I want to
set effective_cache_size to my OS disk cache + shared_buffers. In this
case, I have 4 quad-core processors with 512K cache (8G) and my
On Tue, May 31, 2011 at 8:35 AM, Robert Haas robertmh...@gmail.com wrote:
So if you're running a RHEL5.4 or RHEL5.5 system, are you basically
stuck with ext3? I'm not sure if I'm remembering correctly, but ISTM
that you've been uncomfortable with BOTH ext4 and XFS prior to RHEL6;
but OK
On Thu, May 26, 2011 at 4:10 PM, Greg Smith g...@2ndquadrant.com wrote:
As for figuring out how this impacts more complicated cases, I hear
somebody wrote a book or something that went into pages and pages of detail
about all this. You might want to check it out.
I was just going to
I've got a stored proc that constructs some aggregation queries as strings
and then executes them. I'd like to be able to increase work_mem before
running those queries. If I set a new value for work_mem within the stored
proc prior to executing my query string, will that actually have an impact
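One way to do what the question asks, assuming PL/pgSQL (the function body and the aggregation query are hypothetical): a SET LOCAL inside the function applies to the rest of the enclosing transaction and is reverted automatically at commit or rollback.

```sql
CREATE OR REPLACE FUNCTION run_aggregations() RETURNS void AS $$
BEGIN
    -- Affects statements for the remainder of this transaction only:
    SET LOCAL work_mem = '256MB';
    EXECUTE 'INSERT INTO agg_results SELECT ...';  -- the constructed query string
END;
$$ LANGUAGE plpgsql;
```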
On Sat, Apr 30, 2011 at 5:12 PM, Jeff Janes jeff.ja...@gmail.com wrote:
gist indices are designed to make this type of thing fast, by using
techniques to rule out most of those comparisons without actually
performing them. I don't know enough about the
guts of either your distance function
On Wed, Apr 27, 2011 at 2:11 PM, Joseph Shraibman j...@selectacast.net wrote:
On 04/27/2011 04:32 PM, Robert Haas wrote:
In the first case, PostgreSQL evidently thinks that using the indexes
will be slower than just ignoring them. You could find out whether
it's right by trying it with
On Mon, Apr 18, 2011 at 8:11 AM, Nikolas Everett nik9...@gmail.com wrote:
This probably isn't the right place to ask that question but you may as
well try `pg_dump -t PATTERN`. Man pg_dump for more information on how to
form that pattern.
On Mon, Apr 18, 2011 at 11:05 AM, Sethu Prasad
I've got some functionality that necessarily must scan a relatively large
table. Even worse, the total workload is actually 3 similar, but different
queries, each of which requires a table scan. They all have a resultset
that has the same structure, and all get inserted into a temp table. Is
On Tue, Mar 29, 2011 at 5:05 PM, Marti Raudsepp ma...@juffo.org wrote:
On Wed, Mar 30, 2011 at 01:16, Samuel Gendler sgend...@ideasculptor.com
wrote:
You can trick Postgres (8.3.x and newer) into doing it in parallel
anyway: open 3 separate database connections and issue each
On Tue, Mar 22, 2011 at 11:28 PM, Adarsh Sharma adarsh.sha...@orkash.com wrote:
I perform a join query on it as:
explain analyze select distinct(p.crawled_page_id) from page_content p,
clause2 c where p.crawled_page_id != c.source_id ;
It takes more than 1 hour to complete. As
On Wed, Mar 2, 2011 at 7:29 AM, Robert Haas robertmh...@gmail.com wrote:
On Mon, Feb 28, 2011 at 2:09 PM, Josh Berkus j...@agliodbs.com wrote:
Does anyone have the hardware to test FlashCache with PostgreSQL?
http://perspectives.mvdirona.com/2010/04/29/FacebookFlashcache.aspx
I'd be
Neat. That was my 'you learn something every day' moment. Thanks.
On Thu, Feb 3, 2011 at 9:06 PM, David Wilson david.t.wil...@gmail.com wrote:
On Thu, Feb 3, 2011 at 6:05 PM, Robert Haas robertmh...@gmail.com wrote:
If you want to randomly pick 10,000 rows out of all the rows that are
On Tue, Feb 1, 2011 at 3:44 PM, Mladen Gogala mladen.gog...@vmsinfo.com wrote:
On 2/1/2011 6:03 PM, Andrew Dunstan wrote:
Whether or not it's bad application design, it's ubiquitous, and we
should make it work as best we can, IMNSHO. This often generates
complaints about Postgres, and if we
On Tue, Feb 1, 2011 at 7:40 PM, Mladen Gogala mladen.gog...@vmsinfo.com wrote:
Samuel Gendler wrote:
Don't listen to him. He's got an oracle bias.
And bad sinuses, too.
Slashdot already announced that NoSQL is actually going to dominate the
world, so postgres has already lost
On Thu, Jan 6, 2011 at 8:37 PM, Greg Smith g...@2ndquadrant.com wrote:
Josh Berkus wrote:
We talked about bumping it to 512kB or 1MB for 9.1. Did that get in?
Do I need to write that patch?
If it defaulted to 3% of shared_buffers, min 64K max 16MB for the auto
setting, it would for the
On Fri, Jan 7, 2011 at 7:07 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Samuel Gendler sgend...@ideasculptor.com writes:
Does it not seem that this insistence on shipping a default config that
works out of the box on every system incurs a dramatic penalty when it
comes
to getting a useful
On Sun, Nov 21, 2010 at 10:21 PM, Humair Mohammed huma...@hotmail.com wrote:
Correct, the optimizer did not take the settings with the pg_ctl reload
command. I did a pg_ctl restart and work_mem now displays the updated value.
I had to bump up all the way to 2047 MB to get the response below
On Fri, Nov 5, 2010 at 12:23 PM, Samuel Gendler
sgend...@ideasculptor.com wrote:
On Thu, Nov 4, 2010 at 8:07 AM, Vitalii Tymchyshyn tiv...@gmail.com wrote:
On 04.11.10 16:31, Nick Matheson wrote:
Heikki-
Try COPY, ie. COPY bulk_performance.counts TO STDOUT BINARY.
Thanks
On Thu, Nov 4, 2010 at 8:07 AM, Vitalii Tymchyshyn tiv...@gmail.com wrote:
On 04.11.10 16:31, Nick Matheson wrote:
Heikki-
Try COPY, ie. COPY bulk_performance.counts TO STDOUT BINARY.
Thanks for the suggestion. A preliminary test shows an improvement
closer to our expected 35 MB/s.
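For reference, the binary COPY being suggested (run from psql; the table name comes from the thread):

```sql
-- 9.0+ option syntax; older servers use the form: COPY ... TO STDOUT BINARY
COPY bulk_performance.counts TO STDOUT (FORMAT binary);
```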
Please provide non-default config options on this host plus the same from a
host which is using an index scan. Also, postgresql version, OS,
and all of the other stuff that is asked for in this document:
http://wiki.postgresql.org/wiki/SlowQueryQuestions. It is impossible to say
why the
On Sat, Oct 16, 2010 at 8:29 AM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
Excerpts from Samuel Gendler's message of Sat Oct 16 02:35:46 -0300 2010:
An issue with automatically analyzing the entire hierarchy is 'abstract'
table definitions. I've got a set of tables for storing the
On Thu, Oct 14, 2010 at 8:59 PM, Mladen Gogala mladen.gog...@vmsinfo.com wrote:
If working with partitioning, be very aware that PostgreSQL optimizer has
certain problems with partitions, especially with group functions. If you
want speed, everything must be prefixed with partitioning column:
On Fri, Oct 15, 2010 at 10:22 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Alvaro Herrera alvhe...@commandprompt.com writes:
If we analyze the parent, do we also update the children stats, or is it
just that we keep two stats for the parent, one with children and one
without, both being updated
On Tue, Oct 12, 2010 at 9:02 AM, Scott Carey sc...@richrelevance.com wrote:
However, for large reporting queries and sequential scans, XFS will win in
the long run if you use the online defragmenter. Otherwise, your sequential
scans won't be all that sequential on any file system over time if
I've got a few tables that periodically get entirely refreshed via a COPY. I
don't really have a good mechanism for detecting only rows which have
changed so even though the differences are small, a full copy is easiest.
However, the data includes a primary key column, so I can't simply load
into
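A common pattern for this situation (a full refresh via COPY into a table with a primary key) is to load a staging table and swap it in atomically; a sketch with hypothetical names:

```sql
BEGIN;
CREATE TABLE lookup_new (LIKE lookup INCLUDING ALL);
COPY lookup_new FROM '/tmp/lookup.csv' WITH (FORMAT csv);
-- Swap under one lock; readers see either the old or the new contents.
-- (DROP fails if other objects depend on the old table.)
DROP TABLE lookup;
ALTER TABLE lookup_new RENAME TO lookup;
COMMIT;
```

Where nothing depends on the table, TRUNCATE followed by COPY inside one transaction is a simpler variant of the same idea.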
On Mon, Oct 11, 2010 at 12:54 PM, Neil Whelchel neil.whelc...@gmail.com wrote:
2. You need a slice of the data which requires another scan to the table to
get, and using the same WHERE clause as above. This seems like a total
waste,
because we just did that with the exception of actually
On Mon, Oct 11, 2010 at 7:19 PM, Greg Smith g...@2ndquadrant.com wrote:
This is a problem for the operating system to solve, and such solutions out
there are already good enough that PostgreSQL has little reason to try and
innovate in this area. I routinely see seq scan throughput double on
On Mon, Oct 11, 2010 at 9:06 PM, Scott Carey sc...@richrelevance.com wrote:
I can't speak to documentation, but it is something that helps as your I/O
subsystem gets more powerful, and how much it helps depends more on your
hardware, which may have adaptive read ahead on its own, and your file
On Sat, Oct 9, 2010 at 7:44 PM, Mladen Gogala mladen.gog...@vmsinfo.com wrote:
The architects of Postgres database would be well advised to operate under
the assumption that every production database has a competent DBA keeping an
eye on the database.
I'd actually go so far as to say that
2010/10/1 Fabrício dos Anjos Silva fabricio.si...@linkcom.com.br
Craig,
I agree with you. Not completely, but I do.
I'm just stuck in a situation where I can't figure out what values to
use for the parameters. I can't even think of a way on how to test and
discover that.
I
2010/9/29 Fabrício dos Anjos Silva fabricio.si...@linkcom.com.br
When setting seq_page_cost and random_page_cost, do I have to consider
the probability that data will be in memory? Or does seq_page_cost mean
sequential access on disk and random_page_cost mean random access on
disk?
The
Without knowing more about your queries and table structure, it is hard to
say if there is a better solution. But one thing you should probably
consider doing is just finding the queries where disabling nested loops is
verifiably effective and then just disabling nested loops on that connection
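Planner method settings are per-session, so the suggestion above amounts to:

```sql
SET enable_nestloop = off;  -- affects this connection only
-- ... run the query that verifiably benefits ...
RESET enable_nestloop;      -- restore the configured default
```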
Please forgive the barrage of questions. I'm just learning how to tune
things in postgres and I've still got a bit of learning curve to get over,
apparently. I have done a lot of reading, though, I swear.
I've got two identical queries except for a change of one condition which
cuts the number
On Wed, Aug 18, 2010 at 11:45 PM, Samuel Gendler
sgend...@ideasculptor.com wrote:
Answered my own question. Cranking work_mem up to 350MB revealed that
the in-memory sort requires more memory than the disk sort.
Note that unless you run VERY few client connections, it's usually
better to leave
wizard 2010-08-17
cpu_tuple_cost = 0.0030 # sam
cpu_index_tuple_cost = 0.0010 # sam
cpu_operator_cost = 0.0005 # sam
random_page_cost = 2.0 # sam
On Wed, Aug 18, 2010 at 11:50 PM, Samuel Gendler
sgend...@ideasculptor.com wrote:
On Wed, Aug 18
cache, which means going to
disk for everybody's data, and all the queries are slow instead of
one. Keep an eye on how high work_mem affects your kernel cache.
On Thu, Aug 19, 2010 at 12:38 AM, Samuel Gendler
sgend...@ideasculptor.com wrote:
Yeah, although with 48GB of available memory
On Thu, Aug 19, 2010 at 12:06 AM, Samuel Gendler
sgend...@ideasculptor.com wrote:
Incidentally, if I set values on the connection before querying, is there
an easy way to get things back to default values or will my code need to
know the prior value and explicitly set it back? Something like
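RESET is the usual answer to the question in the fragment above: it returns a parameter to its default without the code having to remember the prior value, and SET LOCAL scopes a change to the current transaction.

```sql
SET work_mem = '512MB';  -- session-level override
RESET work_mem;          -- back to the configured default; no saved value needed

BEGIN;
SET LOCAL work_mem = '512MB';  -- reverted automatically at COMMIT/ROLLBACK
COMMIT;
```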
I'm just starting the process of trying to tune new hardware, which is
2x quad core xeon, 48GB RAM, 8x300GB SAS 15K drives in RAID 1+0,
2x72GB 15K SAS drives in RAID 1 for WAL and system. It is a PERC 6/i
card with BBU. Write-back cache is enabled. The system volume is
ext3. The large data
I am. I was giving mean numbers
On Wed, Aug 18, 2010 at 12:56 PM, Craig James
craig_ja...@emolecules.com wrote:
On 8/18/10 12:24 PM, Samuel Gendler wrote:
With barriers off, I saw a transaction rate of about 1200. With
barriers on, it was closer to 1050. The test had a concurrency of 40
On Wed, Aug 18, 2010 at 1:25 PM, Yeb Havinga yebhavi...@gmail.com wrote:
Samuel Gendler wrote:
When running pgbench on a db which fits easily into RAM (10% of RAM =
-s 380), I see transaction counts a little less than 5K. When I go to
90% of RAM (-s 3420), transaction rate dropped to around
I've got this explain: http://explain.depesz.com/s/Xh9
And these settings:
default_statistics_target = 50 # pgtune wizard 2010-08-17
maintenance_work_mem = 1GB # pgtune wizard 2010-08-17
constraint_exclusion = on # pgtune wizard 2010-08-17
checkpoint_completion_target = 0.9 # pgtune wizard
Answered my own question. Cranking work_mem up to 350MB revealed that
the in-memory sort requires more memory than the disk sort.
On Wed, Aug 18, 2010 at 10:23 PM, Samuel Gendler
sgend...@ideasculptor.com wrote:
I've got this explain: http://explain.depesz.com/s/Xh9
And these settings
On Fri, Jul 9, 2010 at 2:08 AM, Russell Smith mr-r...@pws.com.au wrote:
On 09/07/10 02:31, Ryan Wexler wrote:
The only other difference between the boxes is the postgresql version. The
new one has 8.4-2 from the yum install instructions on the site:
Sent from my iPhone
On Jul 9, 2010, at 18:25, Josh Berkus j...@agliodbs.com wrote:
So while adding (b) to core alone would be very useful for some users,
ironically it's generally for the more advanced users which are not
the
ones we're trying to help on this thread.
It would seem from
On Thu, Jul 8, 2010 at 8:11 PM, Craig Ringer
cr...@postnewspapers.com.au wrote:
If you're not using a connection pool, start using one.
Do you really need 100 *active* working query threads at one time? Because
if you do, you're going to need a scary-big disk subsystem and a lot of
I've been reading this list for a couple of weeks, so I've got some
sense of what you folks are likely to recommend, but I'm curious what
is considered an ideal storage solution if building a database system
from scratch. I just got an exploratory call from my boss, asking
what my preference
I know this problem crops up all the time and I have read what I could
find, but I'm still not finding an answer to my problem. This is all
postgres 8.3. Yes, I've enabled constraint_exclusion. Yes, there are
indexes on the partitions, not just on the parent.
I've got a table with 1 month