lzo is much, much, (much) faster than zlib. Decompression speed is even more awesome...
Note, I've tried several times to contact the author to get clarification on licensing terms and have been unable to get a response.
lzop and the LZO library are distributed under the terms of the GNU
I get very different (contradictory) behavior. Server with fast RAID, 32GB RAM,
2 x 4-core 3.16GHz Xeon 54xx CPUs, CentOS 5.2, PostgreSQL 8.3.6.
That's a very different setup from my (much less powerful) box, so that
would explain it...
No disk wait time during any test. One test beforehand was
Some of our queries to the database are taking a long time to return
results.
fsync: off (even when we tested with this parameter on, we observed the same
slowness)
If your queries take a long time to return results, I suppose you are
talking about SELECTs.
fsync = off will not make SELECT
On Fri, 31 Jul 2009 19:04:52 +0200, Tom Lane wrote:
Greg Stark writes:
On Thu, Jul 30, 2009 at 11:30 PM, Tom Lane wrote:
I did some tracing and verified that pg_dump passes data to deflate()
one table row at a time. I'm not sure about the performance
implications of that, but it does seem l
The query:
select events_events.id FROM events_events
left join events_event_types on events_events.eventType_id = events_event_types.id
where events_event_types.severity = 70
and events_events.cleared = 'f'
order by events_events.dateTime DESC
The main problem seems to be lack of a suitabl
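A sketch of the kind of index that might help, reusing the table and column
names from the query above (untested; whether it wins depends on how selective
cleared = 'f' and severity = 70 actually are):
-- hypothetical partial index: uncleared events, ordered by dateTime,
-- so the planner can walk it backwards and stop early
CREATE INDEX events_uncleared_datetime_idx
    ON events_events (dateTime)
    WHERE cleared = 'f';
-- hypothetical index so the join can find the wanted severity quickly
CREATE INDEX events_event_types_severity_idx
    ON events_event_types (severity);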
If love is an uncommon word, there's no help for queries of this type
being slow unless the GIN index can return the results in order. But
if love is a common word, then it would be faster to do an index scan
by timestamp on the baserel and then treat comment_tsv @@
plainto_tsquery('love') as a
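Roughly, the two strategies being compared look like this; the table name
comments and the timestamp column created_at are assumptions for illustration:
-- rare word: let a GIN index find the few matches, then sort them
-- CREATE INDEX comments_tsv_gin ON comments USING gin (comment_tsv);
-- common word: walk a timestamp index in order and filter, stopping at the LIMIT
-- CREATE INDEX comments_created_at_idx ON comments (created_at);
SELECT *
FROM comments
WHERE comment_tsv @@ plainto_tsquery('love')
ORDER BY created_at DESC
LIMIT 30;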
A smartphone... you're right, I didn't think of that, but the hardware I
described is very much like that of a modern smartphone!!!
Are you saying that PostgreSQL+PostGIS can actually run on a
smartphone??? Intriguing...
Did anyone ever actually try that???
While the performance of A
They didn't give me complete information, but it should be some kind of
industrial PC with a 600MHz CPU. Memory should be neither huge nor small,
maybe a couple of GBytes; the hard disk should be some type of industrial
Compact Flash of maybe 16 GBytes.
It should work perfectly OK.
Reme
An octo-core server with 32GB of RAM, running PostgreSQL 8.3.6.
Running only PostgreSQL, Slony-I and pgbouncer.
Just for testing purposes, I tried a setting with 26GB of shared_buffers.
I quickly noticed that performance wasn't very good and the
server started to swap, slowly but surely.
(but
Blocking round trips to another process on the same server should be
fairly cheap--that is, writing to a socket (or pipe, or localhost TCP
connection) where the other side is listening for it; and then
blocking in return for the response. The act of writing to an FD that
another process is wait
What can I do to prevent the index from getting bloated, or in whatever
state it was in?
What else can I do to further improve queries on this table? Someone
suggested posting details of my conf file. Which settings are most
likely to
be useful for this?
If you often do range queries on
When I try to do queries on this
table I always find them slower than what I need and what I believe
should be possible.
-> Bitmap Index Scan on ad_log_date_all (cost=0.00..72750.51 rows=2488252 width=0) (actual time=49776.332..49776.332 rows=2268490 loops=1)
Index Cond: ((date(
Hi,
I was reading a benchmark that sets out block sizes against raw IO
performance for a number of different RAID configurations involving high
end SSDs (the Mtron 7535) on a powerful RAID controller (the Areca
1680IX with 4GB RAM). See
http://jdevelopment.nl/hardware/one-dvd-per-second/
python w/ psycopg (or psycopg2), which wraps libpq. Same results w/
either version.
I've seen psycopg2 saturate a 100 Mbps ethernet connection (direct
connection with crossover cable) between postgres server and client during
a benchmark... I had to change the benchmark to not retrieve a
My list can contain 1 .. 10 records and table contains 300
records and is growing.
Ah. No IN(), then ;)
Temp table + ANALYZE seems your only option...
In 8.3 or 8.4 I think that IN() or temp table produce exactly the same
result.
Andrus.
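For reference, the temp-table approach being discussed looks roughly like this
(the values and the index list are placeholders; multi-row VALUES needs 8.2+):
CREATE TEMP TABLE wanted (dokumnr int);
INSERT INTO wanted VALUES (123), (456), (789);  -- the ids you would have put in IN (...)
ANALYZE wanted;                                 -- give the planner real row counts
SELECT dok.*
FROM dok
JOIN wanted USING (dokumnr);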
Oh, I just thought about something,
My list can contain 1 .. 10 records and table contains 300
records and is growing.
Ah. No IN(), then ;)
Temp table + ANALYZE seems your only option...
I noticed that query
SELECT dok.*
FROM dok
JOIN (SELECT DISTINCT dokumnr FROM temptbl ) x USING(dokumnr);
is slow in 8.1.4
I cannot use explain analyze since this query uses results from
temporary table temptbl which is not available.
Generally if you know your temptbl will always contai
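One way to get an EXPLAIN ANALYZE anyway is to rebuild the temp table with
representative data in the same session, analyze it, and then run the query;
the sample data below is invented:
CREATE TEMP TABLE temptbl (dokumnr int);
INSERT INTO temptbl
    SELECT (random() * 100000)::int FROM generate_series(1, 1000);
ANALYZE temptbl;
EXPLAIN ANALYZE
SELECT dok.*
FROM dok
JOIN (SELECT DISTINCT dokumnr FROM temptbl) x USING (dokumnr);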
Maybe this is an obviously dumb thing to do, but it looked reasonable to
me.
Looks reasonable here too - except I'm not sure what I'd do with 2
million rows of sorted table in my console. I'm guessing you're piping
the output into something.
Probably it's psql that is choking from bu
I am struggling with the following query, which fetches a random subset
of 200 questions that match certain tags within certain languages.
However, the query takes forever to evaluate, even though I have a
"limit 200" appended. Any ideas on how to optimize it?
QUERY: ===
First off, any thoughts on tuning inserts into large tables? I have a large
table with an insert like this:
insert into public.bigtab1 (text_col1, text_col2, id) values ...
QUERY PLAN
--
Result (cost=0.00..0.01 rows=1 width=0)
(1 ro
SELECT ... FROM bigtable ORDER BY intprimarykey OFFSET 100 LIMIT 100
I think pagination is overrated.
If the query produces, for instance, something like 100 rows or less,
more often than not, getting all the rows will take the exact same time as
getting a portion of the rows... in
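For the record, the usual alternative to a growing OFFSET is keyset
pagination: remember the last key shown and continue from there. A sketch,
using the same column name as above; :last_seen is a placeholder:
SELECT *
FROM bigtable
WHERE intprimarykey > :last_seen   -- highest key from the previous page
ORDER BY intprimarykey
LIMIT 100;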
Not yet no, but that's a good suggestion and I do intend to give it a
whirl. I get about 27MB/s from raid 1 (10 is about the same) so
hopefully I can up the throughput to the speed of about one disk with
sw raid.
FYI I get more than 200 MB/s out of a Linux Software RAID5 of 3 SATA
drives (
Thank you very much for the great sample.
I tried to create a test case from this to match the production db:
1.2 million orders
3.5 million order details
13400 products with char(20) primary keys, mostly containing EAN-13 codes
the last 3 years of data
every order usually has 1..3 detail lines
same product
On Fri, 21 Nov 2008 21:07:02 +0100, Tom Lane <[EMAIL PROTECTED]> wrote:
PFC <[EMAIL PROTECTED]> writes:
Index on orders_products( product_id ) and orders_products( order_id ):
=> Same plan
Note that in this case, a smarter planner would use the new ind
The log file shows that mostly only these queries are slow:
SELECT ...
FROM dok JOIN rid USING (dokumnr)
JOIN ProductId USING (ProductId)
WHERE rid.ProductId LIKE :p1 || '%' AND dok.SaleDate>=:p2
:p1 and :p2 are parameters different for different queries.
dok contains several years of d
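Hypothetical indexes matching that query shape (a prefix LIKE plus a date
filter). In a non-C locale the pattern_ops opclass is what lets LIKE 'foo%'
use a btree; bpchar_pattern_ops is used here on the assumption that ProductId
is char(n), as mentioned elsewhere in the thread:
CREATE INDEX rid_productid_prefix_idx ON rid (ProductId bpchar_pattern_ops);
CREATE INDEX dok_saledate_idx ON dok (SaleDate);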
Server has 2 GB RAM.
It has SATA RAID 0,1 integrated controller (1.5Gbps) and SAMSUNG HD160JJ
mirrored disks.
You could perhaps run a little check on the performance of the RAID, is
it better than linux software RAID ?
Does it leverage NCQ appropriately when running queries in para
How can I VACUUM FULL pg_shdepend automatically so that other users can
work at the same time?
Your table is horribly bloated.
You must use VACUUM FULL + REINDEX (as superuser) on it; unfortunately,
however, it is blocking.
Therefore, you should wait for Sunday night to do this, when noo
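Spelled out, the blocking maintenance being suggested is simply this (run as
superuser during a quiet window, since both commands take exclusive locks):
VACUUM FULL pg_catalog.pg_shdepend;
REINDEX TABLE pg_catalog.pg_shdepend;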
OK so vmstat says you are IO-bound, this seems logical if the same plan
has widely varying timings...
Let's look at the usual suspects :
- how many dead rows in your tables ? are your tables data, or bloat ?
(check vacuum verbose, etc)
- what's the size of the dataset re
Query below seems to use indexes everywhere in most optimal way.
dokumnr column is of type int
The speed of this query varies wildly:
In the live db the fastest response I have got is 8 seconds.
Re-running same query after 10 seconds may take 60 seconds.
Re-running it again after 10 seconds may take 114
OK, I see your problem. Try this :
read this : http://www.postgresql.org/docs/current/static/pgtrgm.html
locate and \i the pg_trgm.sql file
CREATE TABLE dict( s TEXT );
I loaded the English-German dictionary into a test table. I didn't parse
it, so it's just a bunch of 418552 strings, englis
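Presumably the demonstration then continues along these lines (not shown in
the excerpt): build a trigram index on the strings and query by similarity.
CREATE INDEX dict_s_trgm_idx ON dict USING gist (s gist_trgm_ops);
SELECT s, similarity(s, 'postgre') AS sml
FROM dict
WHERE s % 'postgre'      -- pg_trgm's similarity operator
ORDER BY sml DESC
LIMIT 10;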
I've been searching for performance metrics and tweaks for a few weeks
now. I'm trying to determine if the length of time to process my queries
is accurate or not and I'm having a difficult time determining that. I
know postgres performance is very dependent on hardware and settings and
I
update songs set views = 0;
UPDATE 54909
Time: 101907.837 ms
time is actually less than 10 minutes, but it is still very long :(
Wow.
test=> CREATE TABLE test (id SERIAL PRIMARY KEY, value INTEGER);
test=> INSERT INTO test (value) SELECT n FROM generate_series( 1,10 )
AS n;
Tem
Dear List,
I would like to improve seq scan performance. :-)
I have many columns in the table but I search on only one of them. It is
indexed with a btree with text_pattern_ops. The search method is: r like
'%aaa%'
When I make another table with only this column's values, the search time is
bett
PFC, I have to say these kind of posts make me a fan of yours. I've
read many of your storage-related replies and have found them all very
educational. I just want to let you know I found your assessment of the
impact of Flash storage perfectly-worded and unbelievably insig
*) Is the flash random write problem going to be solved in hardware or by
specialized solid-state write-caching techniques? At least
currently, it seems like software is filling the role.
Those flash chips are page-based, not unlike a hard disk, i.e. you cannot
erase and write a byte, you mus
by the way, there is a foreign key on another table that references the
primary key col0 on table test.
Is there an index on the referencing field in the other table ? Postgres
must find the rows referencing the deleted rows, so if you forget to index
the referencing column, this can tak
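An illustration with made-up names: the foreign key declaration itself does
not create an index on the referencing column, so every DELETE on test has to
scan the child table unless you add one yourself.
-- existing constraint, for context:
-- ALTER TABLE child ADD FOREIGN KEY (col0) REFERENCES test (col0);
CREATE INDEX child_col0_idx ON child (col0);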
I want to know if PostgreSQL has limitations on concurrent access,
because a lot of people will access this database at the same time.
PostgreSQL has excellent concurrency provided you use it correctly.
But what do you mean by concurrent access ?
* Number o
SELECT relpages, reltuples FROM pg_class WHERE relname = 'users';
 relpages | reltuples
----------+-----------
    54063 |      2307
(1 row)
This is a horribly bloated table.
The output of the same query on the old server, which is fast:
 relpages | reltuples
----------+-----------
       42 |
I am using the c-library interface and for these particular transactions
I preload PREPARE statements. Then as I get requests, I issue a BEGIN,
followed by at most 300 EXECUTES and then a COMMIT. That is the
general scenario. What value beyond 300 should I try?
Thanks.
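At the SQL level, the pattern being described looks like this (statement and
table names are invented; the C code drives the same thing through libpq):
PREPARE add_row (int, text) AS
    INSERT INTO bigtab (id, payload) VALUES ($1, $2);
BEGIN;
EXECUTE add_row(1, 'first');
EXECUTE add_row(2, 'second');
-- ... up to a few hundred EXECUTEs per transaction ...
COMMIT;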
Do you have PREPARE sta
On Wed, 04 Jun 2008 00:36:09 +0200, andrew klassen <[EMAIL PROTECTED]>
wrote:
Running postgres 8.2.5
I have a table that has 5 indices, no foreign keys or any
dependency on any other table. If I delete the database and
start entering entries, everything works very well until I get
to some poin
Thank you for your reply. I don't really need to count rows in
transactions table, I just thought this was a good example to show how
slow the query was.
Usually you're more interested in the performance of the queries you need
to make rather than the ones you don't need to make ;)
But b
On Wed, 21 May 2008 16:10:53 +0200, H. Hall <[EMAIL PROTECTED]>
wrote:
Does anyone know if there is a source that provides "Big O" notation for
postgres's aggregate functions and operations? For example is count(*)
= O(1) or O(n)?
Do the developers for postgres use Big O when selecting a
On Wed, 21 May 2008 15:09:49 +0200, Richard Huxton <[EMAIL PROTECTED]>
wrote:
Luke Lonergan wrote:
The problem is that the implied join predicate is not being
propagated. This is definitely a planner deficiency.
IIRC only equality conditions are propagated and gt, lt, between aren't.
I
On Tue, 20 May 2008 22:03:30 +0200, kevin kempter
<[EMAIL PROTECTED]> wrote:
Version 8.3.1
On May 20, 2008, at 1:51 PM, kevin kempter wrote:
Hi all;
I have 2 tables where I basically want to delete from the first table
(seg_id_tmp7) any rows where the entire row already exists in the
The real question in my mind is why this turns into a bottleneck before
the similar task of cleaning the 16MB XLOG segment does. I expected
that one would need to be cracked before the CLOG switch time could
possibly be an issue, but reports from the field seem to suggest
otherwise.
H
The problem seems to arise when a SELECT that returns a lot of rows is
Does the SELECT return a lot of rows, or does it scan a lot of rows ?
(for instance, if you use aggregates, it might scan lots of data but only
return few rows).
The problem is that when the SELECTs are run the main a
On Wed, 14 May 2008 06:40:40 +0200, fernando castano
<[EMAIL PROTECTED]> wrote:
Hi all,
This sql is taking too long for the size of my tiny db. Any tips from
this alias? I tried moving the sort to the first left outer join
(between projects and features tables) using a nested subquery, b
You say that like you don't mind having PCI in a server whose job is
to perform massive queries over large data sets.
I am in my 4th week at a new job. Trying to figure out what I am working
with.
LOOL, ok, hehe, not exactly the time to have a "let's change everything"
fit ;)
From what I
Will it pay to go to a controller with higher memory for existing
machines? The one machine I am about to redo has PCI which seems to
somewhat limit our options.
Urgh.
You say that like you don't mind having PCI in a server whose job is to
perform massive queries over large data s
Hi, all. I want to ask what type of index is better to create for
bigint types. I have table with bigint (bigserial) primary key. What
type is better to use for it? I tried btree and hash, but didn't
notice any differences in execution time. For GiST and GIN there is a
trouble that I must create o
When you create a foreign key to a table is there an index that is
created on the foreign key automatically?
No, Postgres doesn't do it for you, because if you create (ref_id)
references table.id, you will perhaps create an index on (ref_id, date)
which would then fill the purpose (and ot
What is a "period" ? Is it a month, or something more "custom" ?
Can periods overlap ?
No periods can never overlap. If the periods did you would be in
violation of many tax laws around the world. Plus you would not know
how much money you are making or losing.
I was wondering
On Tue, 06 May 2008 10:21:43 +0200, Frank van Vugt <[EMAIL PROTECTED]>
wrote:
L.S.
I'm noticing a difference in planning between a join and an in() clause,
before trying to create an independent test-case, I'd like to know if
there's
an obvious reason why this would be happening:
Is the
I've had to write queries to get trial balance values out of the GL
transaction table and I'm not happy with their performance. The table has
76K rows, growing by about 1000 rows per working day, so the performance is
not that great: it takes about 20 to 30 seconds to get all the records
for the t
I created several indices for the primary table, yes. Sure I can do a
table for a volatile column, but then I'll have to create a new such
table for each derived column -- that's why I tried to add a column to
the existing table. Yet seeing this is really slow, and I need to do
many der
Hello,
Executive summary :
Last year I wrote a database benchmark which simulates a forum.
It works on Postgres and MySQL.
It could be useful.
I have no time to touch this, so it is rotting on my harddrive.
Who wants to adopt it ? I will put it on pgfoundry.
I can spend a few hours documenting
You got the order slightly wrong I guess.
1) hardware
Would only come first if your RAM is really too small, or you use RAID5
on write-heavy tables, or what limits you is transaction fsync (hint :
8.3).
Adding RAM is cheap.
2) rewriting my queries and table structures
An index scan looks through the index and pulls in each page as it sees
it.
A bitmap index scan looks through the index and makes a sorted list of
all
the pages it needs and then the bitmap heap scan reads all the pages.
If your data is scattered then you may as well do the index scan, but
Do you mean, that the overhead is an artefact of timing the query? In
that case, the query should run faster than its evaluation with EXPLAIN
ANALYZE, correct?
Is there a way to test this assumption regarding the speed of
gettimeofday? I'm on a Macbook and have no idea about the performan
Our ~600,000,000
row table is changed very infrequently and is on a 12-disk software
raid-6
for historical reasons, using an LSI Logic / Symbios Logic SAS1068 PCI-X
Fusion-MPT SAS. Our ~50,000,000 row staging table is on a 12-disk
hardware
raid-10 using a Dell PowerEdge Expandable RAID con
On Thu, 24 Apr 2008 03:14:54 +0200, Vlad Arkhipov <[EMAIL PROTECTED]>
wrote:
I found a strange issue in a very simple query. Statistics for all columns
are at level 1000, but I also tried other levels.
create table g (
id bigint primary key,
isgroup boolean not null);
create table a (
gr
I should say that this is on postgresql 7.4.16 (debian stable).
Whoa.
I cannot understand why the following two queries differ so much in
execution time (almost ten times)
Post EXPLAIN ANALYZE for both, and also post table definitions (with
indexes), use \d table. This will allow
about 2300 connections in idle
(ps auxwww | grep postgres | idle)
[...]
The server that connects to the db is an apache server using persistent
connections. MaxClients is 2048 thus the high number of connections
needed. Application was written in PHP using the Pear DB class.
that's correct, there are nightly (at least at the moment) processes that
insert around 2-3 million rows and delete about the same amount. I can see
that
those 'checkpoints are occurring too frequently' messages are only logged
during that timeframe.
Perhaps you should increase the quantity of
On Sun, 20 Apr 2008 17:15:36 +0200, Francisco Reyes
<[EMAIL PROTECTED]> wrote:
PFC writes:
- If you process up to some percentage of your RAM worth of data,
hashing is going to be a lot faster
Thanks for the excellent breakdown and explanation. I will try and get
sizes of the
On Fri, 18 Apr 2008 11:36:02 +0200, Gregory Stark <[EMAIL PROTECTED]>
wrote:
"Francisco Reyes" <[EMAIL PROTECTED]> writes:
Is there any disadvantage of using "group by" to obtain a unique list?
On a small dataset the difference was about 20 percent.
Group by
HashAggregate (cost=369.61..
- why am I still getting a seq scan ?
You'll seq scan tmp1 obviously, and also the other table since you fetch a
very large part of it in the process.
It's the only way to do this query since there is no WHERE to restrict the
number of rows and the DISTINCT applies on columns from both tab
On Wed, 16 Apr 2008 23:14:11 +0200, samantha mahindrakar
<[EMAIL PROTECTED]> wrote:
Hi
I am finding the following query is working a bit slow:
EXECUTE '(SELECT ARRAY(SELECT DISTINCT date_part(''day'',
measurement_start)
FROM ' || gettablestring(dates)|| '
WHERE lane_id IN (' || lanesidar
On Wed, 16 Apr 2008 06:07:04 +0200, Thomas Spreng <[EMAIL PROTECTED]> wrote:
On 16.04.2008, at 01:24, PFC wrote:
The queries in question (select's) occasionally take up to 5 mins even
if they take ~2-3 sec under "normal" conditions, there are no
sequential scans
The queries in question (select's) occasionally take up to 5 mins even
if they take ~2-3 sec under "normal" conditions, there are no sequential
scans done in those queries. There are not many users connected (around
3, maybe) to this database usually since it's still in a testing phase.
I
Will this help with performance ?
Depends if the bloat is in part of your working set. If debloating can
make the working set fit in RAM, or lower your IOs, you'll get a boost.
Now, is the bloat in the tables (which tables ?) or in the indexes
(which indexes ?), or in the toast tabl
If you suspect your tables or indexes are bloated, restore your
dump to a test box.
Use fsync=off during restore, you don't care about integrity on the
test box.
This will avoid slowing down your production database.
Then look at the size of the restored database.
If it i
Hi
We currently have a 16CPU 32GB box running postgres 8.2.
When I do a pg_dump with the following parameters "/usr/bin/pg_dump -E
UTF8 -F c -b" I get a file of 14GB in size.
But the database is 110GB in size on the disk. Why the big difference
in size? Does this have anything to do with
Well if you're caching per-connection then it doesn't really matter
whether
you do it on the client side or the server side, it's pretty much
exactly the
same problem.
Actually I thought about doing it on the server since it would then also
work with connection pooling.
Doing it on the c
I tried removing the index from the part_master table and got the same
result
Since all the data is in the partitions, the part_master table is empty,
so the index is not useful for your query.
myDB=# explain SELECT min(logdate) FROM part_master;
Proposals :
1- Use plpgsql t
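Not necessarily what the truncated proposal had in mind, but a common
workaround on releases where min() over an inheritance tree cannot use the
child indexes is to take the min of per-partition minimums, so each child can
use its own logdate index. Partition names here are hypothetical:
SELECT min(m) FROM (
    SELECT min(logdate) AS m FROM part_2008_01
    UNION ALL
    SELECT min(logdate) AS m FROM part_2008_02
    -- ... one arm per partition ...
) AS per_partition;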
Thanks a lot, all of you - this is excellent advice. With the data
clustered and statistics at a more reasonable value of 100, it now
reproducibly takes even less time - 20-57 ms per query.
1000x speedup with proper tuning - always impressive, lol.
IO seeks are always your w
Perfect - thanks Arjen. Using your value of 200 decreased the time to
15 seconds, and using a value of 800 makes it almost instantaneous. I'm
really not concerned about space usage; if having more statistics
increases performance this much, maybe I'll just default it to 1000?
Strangely, th
Hi, I've started my first project with Postgres (after several years of
using Mysql), and I'm having an odd performance problem that I was
hoping someone might be able to explain the cause of.
My query
- select count(*) from gene_prediction_view where gene_ref = 523
- takes
Well, this table has a primary key index on the first column, called acctid,
which is an integer; the calldate column instead is a TIMESTAMPTZ, and in
fact I'm using (calldate)::date in the ON clause because the
time part of that column is always different, and in the nesting I have
t
I have a performance problem with a script that does massive bulk
inserts into 6 tables. When the script starts the performance is really
good but will degrade minute after minute and take almost a day to
finish!
Looks like foreign key checks slow you down.
- Batch INSERTS in trans
When traffic to our PostgreSQL-backed website spikes, the first resource
we see being exhausted is the DB slots on the master server (currently
set to about 400).
I expect that as new Apache/mod_perl children are being put to use, they
are creating new database connections.
I'm interested in re
On Fri, 4 Apr 2008, Ow Mun Heng wrote:
select * from table
where A=X
and B = Y
and C = Z
and D = AA
and E = BB
With that kind of WHERE condition, Postgres will use a Bitmap Index Scan
to combine your indices. If, however, postgres notices while looking at
the statistics gathered during A
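A sketch of what is being described, with invented names: several
single-column indexes that the planner can combine with a BitmapAnd, versus
one multicolumn index that matches the whole WHERE clause directly.
CREATE INDEX t_a_idx ON t (a);
CREATE INDEX t_b_idx ON t (b);
CREATE INDEX t_c_idx ON t (c);
-- or, if this exact combination is queried often:
CREATE INDEX t_a_b_c_idx ON t (a, b, c);
EXPLAIN SELECT * FROM t WHERE a = 1 AND b = 2 AND c = 3;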
Why do you claim that 'More platters also means slower seeks
and generally slower performance.'?
More platters -> more heads -> heavier head assembly -> slower seek time
But..
More platters -> higher density -> less seek distance (in mm of head
movement) -> faster seek time
Also, sometimes offline TOASTing is evil :
Say you have a forum, you want the posts table to be CLUSTER'ed on
(topic_id, post_id) so displaying 1 page with 30 posts on it uses 1 seek,
not 30 seeks. But CLUSTER doesn't touch the data that has been pushed
offline in the toast table. So, in tha
Hm, so this table has 10 million entries and it does not fit in 32GB of
RAM ?
Could you investigate :
- average size of rows in both tables
- a quick description of your table columns especially the average size of
your TEXT fields, especially the large one(s) like comments etc (don't
bother
is this 'normal'? (loaded question I know)
Depends. If you are on the edge, disk-wise, yes a big fat query can push
it over and make it fall.
Should I be looking to offload expensive reporting queries to read-only
replicants of my database?
You could do this, especially if the heavy q
create index foo1 on bla (a);
create index foo2 on bla (b);
create index foo3 on bla (a,b);
You say you need faster INSERT performance. Getting rid of some indexes
is a way, but can you tell a bit more about your hardware setup ?
For instance, if you only have one HDD, put an extra HDD in
Hello,
I am using Postgres with PHP and persistent connections.
For simple queries, parsing & preparing time is often longer than actual
query execution time...
I would like to execute a bunch of PREPARE statements to prepare my most
often used small queries on connection
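The idea, roughly (query and names invented): prepare the hot statements once
per connection, then only EXECUTE them for each request.
PREPARE get_user (int) AS
    SELECT * FROM users WHERE id = $1;
-- later, once per request:
EXECUTE get_user(42);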
i am using postgresql 8.1.8,
Following configurations:
shared_buffers = 5000
work_mem = 65536
maintenance_work_mem = 65536
effective_cache_size = 16000
random_page_cost = 0.1
The CPU wait percentage goes up to 50%, and query result
Today, I looked at 'top' on my PG server and saw a pid that reported 270
hours of CPU time. Considering this is a very simple query, I was
surprised to say the least. I was about to just kill the pid, but I
figured I'd try and see exactly what it was stuck doing for so long.
If you are
It's the time to parse statements, plan, execute, roundtrips with
the client, context switches, time for your client library to escape
the data and encode it and for postgres to decode it, etc. In a word :
OVERHEAD.
I know there is some overhead, but that much when running it batched.
I was doing some testing on "insert" compared to "select into". I
inserted 100 000 rows (with 8 column values) into a table, which took 14
seconds, compared to a select into, which took 0.8 seconds.
(FYI, the inserts were batched, autocommit was turned off and it all
happened on the local m
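Two common ways to shrink that per-row overhead, sketched with an invented
table (multi-row VALUES needs 8.2+):
-- one statement carrying many rows
INSERT INTO loadtest (a, b) VALUES
    (1, 'x'),
    (2, 'y'),
    (3, 'z');
-- or COPY, which avoids per-row statement overhead entirely
COPY loadtest (a, b) FROM STDIN;
-- ...followed by tab-separated rows and a terminating \.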
We have the opportunity to benchmark our application on a large server. I
have to prepare the Postgres configuration and I'd appreciate some
comments on it as I am not experienced with servers of such a scale.
Moreover the configuration should be fail-proof as I won't be able to
atte
Well, that's not completely trivial => the plan might depend upon the
concrete value of $1,$2 and $3.
When you use PREPARE, it doesn't. I could live with that.
The purpose of this would be to have a library of "persistent prepared
statements" (just like lightweight functions) for y
Suppose a web application with persistent database connections.
I have some queries which take longer to plan than to execute !
I wish there was a way to issue a PREPARE (like "PERSISTENT PREPARE").
Now all Postgres connections would know that prepared statement foo( $1,
I did not find a solution so far, and for bulk data transfers I have now
programmed a workaround.
But that is surely based on some component installed on the server,
isn't
it?
Correct. I use a pyro-remote server. On request this remote server copies
the relevant rows into a temporary table, u
I have this "poll results" table with just 3 integer fields, which is
never updated, only inserted/deleted...
Did the Devs consider an option to have VACUUM reduce the row header
sizes for tuples that are long committed and are currently visible to all
transactions ? (even if this makes th
To me, the worst catch-22 we face in this area is that we'd like the
optimizer's choices of plan to be stable and understandable, but the
real-world costs of queries depend enormously on short-term conditions
such as how much of the table has been sucked into RAM recently by
other queries. I ha
What version of PostgreSQL are you using?
I think newbies should be pushed a bit to use the latest versions, maybe
with some advice on how to setup the apt sources (in debian/ubuntu) to get
them.
How much memory will be available to PostgreSQL?
How many connections will be made to Po
Question: Does anyone have any idea what bottleneck I am hitting? An
index's performance should in theory scale as the log of the number of
rows - what am I missing here?
These can help people on the list to help you :
- Your hardware config (CPU, RAM, disk) ?
- E