johannes graën wrote:
> Hi Pavel, *,
>
> you were right with ANALYZing the DB first. However, even after doing
> so, I frequently see Seq Scans where an index was used before. This
> usually co-occurs with parallelization and looked different before
> upgrading to 10. I can provide an example for
Pavel Stehule wrote:
> 2017-06-29 7:17 GMT+02:00 Yevhenii Kurtov :
>
> > I just tried UNION queries and got following error:
> >
> > ERROR: FOR UPDATE is not allowed with UNION/INTERSECT/EXCEPT
>
> it is sad :(
I think we could lift this restriction for UNION ALL,
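Until that restriction is lifted, the locking can sometimes be pushed down into each branch, since FOR UPDATE is allowed inside a subquery in FROM. A sketch (table and column names are hypothetical):

```sql
-- Lock matching rows in each table separately, then combine the results.
-- FOR UPDATE applies within each subquery, not to the UNION as a whole.
SELECT * FROM (SELECT id, payload FROM jobs_a WHERE state = 'ready' FOR UPDATE) a
UNION ALL
SELECT * FROM (SELECT id, payload FROM jobs_b WHERE state = 'ready' FOR UPDATE) b;
```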
Vladimir Sitnikov wrote:
> Alvaro> Something like
> Alvaro> INSERT INTO .. VALUES ('col1', 'col2'), ('col1', 'col2'), ('col1', 'col2')
> Frits> I did not try that, to be honest.
>
> pgjdbc does automatically rewrite insert values(); into insert ...
> values(),(),(),() when reWriteBatchedInserts=true. I
Frits Jalvingh wrote:
> So, I am still very interested in getting normal inserts faster, because
> that will gain speed for all work.. If Oracle can do it, and Postgres is
> able to insert fast with copy- where lies the bottleneck with the insert
> command? There seems to be quite a performance
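For reference, the two usual ways to cut per-row overhead on the server side (an illustrative sketch; the table name is hypothetical):

```sql
-- One round trip, one parse, many rows: multi-row VALUES,
-- which is exactly what pgjdbc's reWriteBatchedInserts produces.
INSERT INTO measurements (sensor, reading) VALUES
    ('s1', 1.5),
    ('s2', 2.7),
    ('s3', 0.9);

-- COPY skips the per-row INSERT machinery entirely and is usually fastest.
COPY measurements (sensor, reading) FROM STDIN WITH (FORMAT csv);
```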
Hustler DBA wrote:
> I am seeing this strange behavior; I don't know if it is by design in
> Postgres.
>
> I have an index on a column which is defined as "character varying(255)".
> When the value I am searching for is of a certain length, the optimizer
> uses the index but when the value is
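One way to investigate this (hypothetical names): compare the plans and the row estimates for the two literals. If the estimate jumps for one of them, the statistics, rather than the value length itself, usually explain the seq scan:

```sql
EXPLAIN ANALYZE SELECT * FROM t WHERE code = 'short-value';
EXPLAIN ANALYZE SELECT * FROM t WHERE code = 'a-considerably-longer-value';

-- If the estimates are badly off, collect more detailed statistics:
ALTER TABLE t ALTER COLUMN code SET STATISTICS 1000;
ANALYZE t;
```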
Kouber Saparev wrote:
> I tried to DELETE about 7 million rows at once, and the query went up to
> 15% of the RAM (120 GB in total), which pushed some indexes out and the
> server load went up to 250, so I had to kill the query.
>
> The involved table has neither foreign keys referring
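A common mitigation is deleting in bounded batches so that no single transaction dirties that much of shared memory at once. A sketch with hypothetical names; repeat until zero rows are affected, vacuuming between rounds:

```sql
-- Delete at most 100k rows per transaction; driven from a client-side loop.
DELETE FROM big_table
WHERE id IN (
    SELECT id FROM big_table
    WHERE created_at < now() - interval '90 days'
    LIMIT 100000
);
```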
Andrew Dunstan wrote:
> On 03/31/2016 03:12 PM, Igor Neyman wrote:
> > > We are going to build system based on PostgreSQL database for huge
> > > number of individual users (few thousands). Each user will have his own
> > > account, for authorization we will use Kerberos (MIT or Windows).
>
Scott Marlowe wrote:
> On Wed, Mar 2, 2016 at 9:11 AM, Moreno Andreo
> wrote:
> > ... or maybe add some more RAM to have more disk caching (if you're on
> > *nix) this worked for me in the past... even if IMHO it's more a
> > temporary "patch" while upgrading (if
Tomas Vondra wrote:
> Also, I don't think it makes much sense to set
>
>(checkpoint_warning > checkpoint_timeout)
>
> as it kinda defeats the whole purpose of the warning.
I agree, but actually, what is the sense of checkpoint_warning? I think
it was useful back when we didn't have
Jim Nasby wrote:
Yeah, this is not something that would be very easy to accomplish, because a
buffer can get evicted and written to disk at any point. It wouldn't be too
hard to read every unlogged table during recovery and see if there are any
pages that were written after the last
Matheus de Oliveira wrote:
On Mon, Feb 9, 2015 at 1:58 PM, bkrug bk...@usatech.com wrote:
Couldn't postgres reserve a special XID that is never available for normal
transactions but that indicates that any transaction can see it because it
is so old? Then instead of constantly having to
bkrug wrote:
The problem I'm facing is that I have many large (several GB) tables that are
not being changed (they are several days old) but auto-vacuum keeps scanning
and updating them every time the xid wraps around and thus my rsync back-up
process sees that the disk files have changed and
bkrug wrote:
Matheus de Oliveira wrote
It changed in recent versions (9.3 or 9.4, I don't recall exactly which)
and moved to tuple header, but what you described is exactly what was
done,
the xid was 2.
Should the relfrozenxid of pg_class then equal 2 for very old and already
vacuumed
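relfrozenxid can be checked directly; 2 is FrozenTransactionId, and age() shows how far each table is from a wraparound-forced vacuum:

```sql
-- Tables with the oldest frozen xids come first.
SELECT relname, relfrozenxid, age(relfrozenxid)
FROM pg_class
WHERE relkind = 'r'
ORDER BY age(relfrozenxid) DESC
LIMIT 20;
```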
Jesper Krogh wrote:
On 10/11/2014, at 22.40, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
Josh Berkus wrote:
All,
pg version: 9.3.5
RHEL 6.5
128GB/32 cores
Configured with shared_buffers=16GB
Java/Tomcat/JDBC application
Server has an issue that whenever we get lock
Josh Berkus wrote:
All,
pg version: 9.3.5
RHEL 6.5
128GB/32 cores
Configured with shared_buffers=16GB
Java/Tomcat/JDBC application
Server has an issue that whenever we get lock waits (transaction lock
waits, usually on an FK dependency) lasting over a minute or more than
10 at once,
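On 9.3, the waiters and their blockers can be pulled from pg_locks joined to pg_stat_activity, roughly like this (a sketch; on 9.6 and later, pg_blocking_pids() is simpler):

```sql
-- Pair each ungranted lock with a granted lock on the same object.
SELECT waiting.pid   AS waiting_pid,
       waiting.query AS waiting_query,
       holder.pid    AS blocking_pid,
       holder.query  AS blocking_query
FROM pg_locks wl
JOIN pg_stat_activity waiting ON waiting.pid = wl.pid
JOIN pg_locks hl ON hl.granted
                AND hl.locktype = wl.locktype
                AND hl.transactionid IS NOT DISTINCT FROM wl.transactionid
                AND hl.relation     IS NOT DISTINCT FROM wl.relation
JOIN pg_stat_activity holder ON holder.pid = hl.pid
WHERE NOT wl.granted
  AND holder.pid <> waiting.pid;
```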
Matt Clarkson wrote:
The LWLOCK_STATS below suggest that ProcArrayLock might be the main
source of locking that's causing throughput to take a dive as the client
count increases beyond the core count.
Any thoughts or comments on these results are welcome!
Do these results change if you use
Ying He wrote:
Thank you Josh. Won't double post again. Just thought reorg mailing list is
quite inactive.
Well, that tells you something about its maintenance state and what sort
of help you can expect if you find yourself in trouble with it.
--
Álvaro Herrera
Sergey Konoplev wrote:
On Fri, Dec 20, 2013 at 7:59 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
Eh. Why can't you just do something like
WITH moved AS (
DELETE FROM src WHERE ..
RETURNING *
) INSERT INTO dst SELECT * FROM moved;
Alvaro, I think it could
Dave Johansen wrote:
On Thu, Dec 19, 2013 at 10:27 AM, Scott Marlowe
scott.marl...@gmail.com wrote:
That's pretty much it. What I did was to create the new month table
and day tables, alter my triggers to reflect this, then move the data
with insert into / select from query for each
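The move Dave describes can also be done in a single statement with a writable CTE (9.1+), which avoids a window where rows exist in both tables (hypothetical table and column names):

```sql
-- Move December's rows from the parent into the new day/month partition
-- atomically: the DELETE and INSERT are one statement.
WITH moved AS (
    DELETE FROM log_master
    WHERE ts >= '2013-12-01' AND ts < '2014-01-01'
    RETURNING *
)
INSERT INTO log_2013_12 SELECT * FROM moved;
```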
Metin Doslu wrote:
When we send concurrent Select queries to these tables, query performance
doesn't scale out with the number of CPU cores. We find that complex Select
queries scale out better than simpler ones. We also find that increasing
the block size from 8 KB to 32 KB, or increasing
Claudio Freire wrote:
On Wed, Aug 7, 2013 at 3:34 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Note that there's no particular need to specify desc in the index
definition. This same index can support searches in either direction
on the called column.
Yeah, but it's faster if it's in the
Tom Lane wrote:
Jeff Janes jeff.ja...@gmail.com writes:
On Thu, Aug 1, 2013 at 5:16 PM, Sergey Burladyan eshkin...@gmail.com
wrote:
If I'm not mistaken, there may be two code paths like this here:
(1) mergejoinscansel -> scalarineqsel -> ineq_histogram_selectivity ->
get_actual_variable_range
Sergey Burladyan wrote:
I also find this trace for other query:
explain select * from xview.user_items_v v where ( v.item_id = 132358330 );
#0 0x7ff766967620 in read () from /lib/libc.so.6
#1 0x7ff7689cfc25 in FileRead ()
#2 0x7ff7689ea2f6 in mdread ()
#3
AJ Weber wrote:
On 1/23/2013 2:13 PM, Jeff Janes wrote:
Scheduling a manual vacuum should be fine (but keep in mind that
vacuum has very different default cost_delay settings than autovacuum
does. If the server is completely idle that shouldn't matter, but if
it is only mostly idle, you
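The defaults Jeff refers to: autovacuum throttles itself (autovacuum_vacuum_cost_delay = 20ms in those releases), while a manual VACUUM runs with vacuum_cost_delay = 0, i.e. flat out. To make a scheduled manual vacuum behave gently (table name hypothetical):

```sql
SET vacuum_cost_delay = 20;   -- milliseconds; 0 (the default) means unthrottled
VACUUM ANALYZE busy_table;
```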
Rodrigo Rosenfeld Rosas wrote:
On 07-11-2012 22:58, Tom Lane wrote:
Rodrigo Rosenfeld Rosas rr.ro...@gmail.com writes:
Ok, I could finally strip part of my database schema that will allow you
to run the explain query and reproduce the issue.
There is a simple SQL dump in plain format
Albe Laurenz wrote:
I am configuring streaming replication with hot standby
with PostgreSQL 9.1.3 on RHEL 6 (kernel 2.6.32-220.el6.x86_64).
PostgreSQL was compiled from source.
It works fine, except that starting the standby took forever:
it took the system more than 80 minutes to replay
Excerpts from kuopo's message of Sun Nov 21 11:15:52 -0300 2010:
In my experiment, I need about 1~3 min to finish the analyze operation
on the big table (which depends on the value of vacuum_cost_delay). I
am not surprised because this table is a really big one (now, it has
over 200M records).
Excerpts from kuopo's message of Thu Nov 18 04:10:36 -0300 2010:
Hi,
Thanks for your response. I've checked it again and found that the
main cause is the execution of ANALYZE. As I have mentioned, I have
two tables: table A is a big one (around 10M~100M records) for log
data and table B is
Excerpts from kuopo's message of Fri Nov 12 05:01:24 -0300 2010:
Hi,
I have a question about the behavior of autovacuum. When I have a big
table A which is being processed by autovacuum, I also manually use
(full) vacuum to clean another table B. Then I found that I always got
something
Excerpts from Greg Smith's message of Thu Oct 21 14:04:17 -0300 2010:
What I would like to do is beef up the documentation with some concrete
examples of how to figure out if your cache and associated write path
are working reliably or not. It should be possible to include does
this
Excerpts from Samuel Gendler's message of Mon Oct 18 03:13:01 -0300 2010:
On Sat, Oct 16, 2010 at 8:29 AM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
Excerpts from Samuel Gendler's message of Sat Oct 16 02:35:46 -0300 2010:
An issue with automatically analyzing the entire hierarchy
Excerpts from Samuel Gendler's message of Sat Oct 16 02:35:46 -0300 2010:
An issue with automatically analyzing the entire hierarchy is 'abstract'
table definitions. I've got a set of tables for storing the same data at
different granularities of aggregation. Within each granularity, I've
Excerpts from Robert Haas's message of Wed Jun 09 15:47:55 -0400 2010:
In going back through emails I had marked as possibly needing another
look before 9.0 is released, I came across this issue again. As I
understand it, analyze (or analyse) now collects statistics for both
the parent
Excerpts from Josh Berkus's message of Mon Sep 13 20:53:51 -0400 2010:
If you select from pg_stat_user_tables, the counters should be
reasonably close unless your default_statistics_target is way off and
then pg_class.reltuples would be wrong.
At least in 8.3, running ANALYZE does not
Excerpts from David Kerr's message of Wed Sep 08 18:29:59 -0400 2010:
Thanks for the insight. we're currently in performance testing of the
app. Currently, the JVM is the bottleneck, once we get past that
i'm sure it will be the database at which point I'll have the kind
of data you're
Excerpts from Jann Röder's message of Mon Aug 23 00:23:38 -0400 2010:
Hash Join  (cost=516.66..17710110.47 rows=8358225 width=16)
  Hash Cond: ((b.itemid)::bpchar = a.itemid)
  ->  Seq Scan on b  (cost=0.00..15110856.68 rows=670707968 width=16)
  ->  Hash  (cost=504.12..504.12 rows=1003
Excerpts from David Kerr's message of Mon Aug 23 18:15:56 -0400 2010:
Howdy all,
We're doing some performance testing, and when we scaled it our app up to
about 250 concurrent users
we started seeing a bunch of processes sitting in PARSE WAITING state.
Can anyone give me insight on what
Excerpts from David Kerr's message of Mon Aug 23 18:47:02 -0400 2010:
unlink(base/pgsql_tmp/pgsql_tmp28335.12593) = 0
unlink(base/pgsql_tmp/pgsql_tmp28335.6041) = 0
unlink(base/pgsql_tmp/pgsql_tmp28335.3030) = 0
unlink(base/pgsql_tmp/pgsql_tmp28335.14737) = 0
which isn't the fastest
Excerpts from Tom Lane's message of Mon Aug 16 23:33:29 -0400 2010:
Benjamin Krajmalnik k...@servoyant.com writes:
A little background - I have various multi-column indexes whenever I
have queries which restrict the output based on the values of the 2
fields (for example, a client code and
Excerpts from Matthew Wakeling's message of Tue Aug 10 11:40:16 -0400 2010:
I am trying to retrieve, for many sets of rows grouped on a couple of
fields, the value of an ungrouped field where the row has the highest
value in another ungrouped field.
I think this does what you want (schema
Excerpts from Tom Lane's message of Tue Jul 27 20:05:02 -0400 2010:
Peter Hussey pe...@labkey.com writes:
2) How is work_mem used by a query execution?
Well, the issue you're hitting is that the executor is dividing the
query into batches to keep the size of the in-memory hash table below
Excerpts from Patrick Donlin's message of Thu Jul 15 11:12:53 -0400 2010:
I'll read over that wiki entry, but for now here is the EXPLAIN ANALYZE
output assuming I did it correctly. I have run vacuumdb --full --analyze, it
actually runs as a nightly cron job.
These plans seem identical
Excerpts from Merlin Moncure's message of Sat Jul 03 18:53:46 -0400 2010:
What about my suggestion doesn't work for your requirements? (btw,
let me disagree with my peers and state pl/perl is lousy for this type
of job, only sql/and pl/sql can interact with postgresql variables
natively for
Excerpts from Rajesh Kumar Mallah's message of Thu Jun 24 13:25:32 -0400 2010:
What prompted me to post to list is that the server transitioned from
being IO bound to CPU bound and 90% of syscalls being
lseek(XXX, 0, SEEK_END) = YYY
It could be useful to find out what file is being
Excerpts from Craig James's message of Thu Jun 24 19:03:00 -0400 2010:
select relname, pg_relation_size(relname) from pg_class
where pg_get_userbyid(relowner) = 'emol_warehouse_1'
and relname not like 'pg_%'
order by pg_relation_size(relname) desc;
ERROR: relation
Excerpts from Craig James's message of Thu Jun 24 19:24:44 -0400 2010:
On 6/24/10 4:19 PM, Alvaro Herrera wrote:
Excerpts from Craig James's message of Thu Jun 24 19:03:00 -0400 2010:
select relname, pg_relation_size(relname) from pg_class
where pg_get_userbyid(relowner
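The ERROR above is typically because pg_relation_size(text) re-resolves the name, which fails for relations that are not visible in the search path (or that need quoting). Passing the OID from the same pg_class row avoids the second lookup entirely:

```sql
SELECT relname, pg_relation_size(pg_class.oid)
FROM pg_class
WHERE pg_get_userbyid(relowner) = 'emol_warehouse_1'
  AND relname NOT LIKE 'pg_%'
ORDER BY pg_relation_size(pg_class.oid) DESC;
```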
Excerpts from Anj Adu's message of Tue Jun 22 17:44:39 -0400 2010:
This query seems unreasonable slow on a well-indexed table (13 million
rows). Separate indexes are present on guardid_id , from_num and
targetprt columns.
Maybe you need to vacuum or reindex?
Excerpts from Scott Marlowe's message of Sun Jun 20 16:13:15 -0400 2010:
On Sun, Jun 20, 2010 at 11:44 AM, Jesper Krogh jes...@krogh.cc wrote:
Hi.
I have been wondering if anyone has been experimenting with really
aggressive autovacuuming.
I have been using moderately aggressive
Excerpts from jgard...@jonathangardner.net's message of Wed Jun 16 02:30:30 -0400 2010:
NOTE: If I do one giant commit instead of lots of littler ones, I get
much better speeds for the slower cases, but I never exceed 5,500
which appears to be some kind of wall I can't break through.
If
Excerpts from Tom Lane's message of Mon Jun 14 23:57:11 -0400 2010:
Scott Carey sc...@richrelevance.com writes:
Great points. There is one other option that is decent for the WAL:
If splitting out a volume is not acceptable for the OS and WAL --
absolutely split those two out into their
Excerpts from Tom Lane's message of Thu Jun 10 11:46:25 -0400 2010:
Yes, the folks at commandprompt need to be told about this. Loudly.
It's a serious packaging error.
Just notified Lacey, the packager (not so loudly, though); she's working
on new packages, and apologizes for the
Excerpts from Matthew Wakeling's message of Tue Jun 01 05:55:35 -0400 2010:
On Sun, 23 May 2010, David Jarvis wrote:
The measurement table indexes (on date and weather station) were not being
used because the only given date ranges (e.g., 1900 - 2009) were causing the
planner to do a full
Excerpts from David Jarvis's message of Tue Jun 01 14:01:22 -0400 2010:
Sorry, Alvaro.
I was contemplating using a GIN or GiST index as a way of optimizing the
query.
My fault -- I didn't read the whole thread.
Instead, I found that re-inserting the data in order of station ID (the
doing a DBT2/DBT5 test run with different autovac
settings post-8.4 to see if we should specifically change the vacuum
threshold.
Right.
--
Alvaro Herrera  http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
--
Sent via pgsql-performance
and leave threshold alone.
in the last 16 hours.
How many autovac workers are there?
; is there a way in
Solaris to report what each file descriptor is pointing to? (In linux
I'd look at /proc/pid/fd)
We don't call pollsys anywhere. Something in Solaris must be doing it
under the hood.
Tom Lane wrote:
Alvaro Herrera alvhe...@commandprompt.com writes:
We don't call pollsys anywhere. Something in Solaris must be doing it
under the hood.
pg_usleep calls select(), and some googling indicates that select() is
implemented as pollsys() on recent Solaris versions. So Josh's
of the pgstat.stat file?
Maybe you could try resetting stats, so that the file goes back to an
initial size and is slowly repopulated. I'd suggest monitoring the size
of the stats file, just in case there's something abnormal with it.
baz;
assuming that table foo has a column bar which is already of type baz.
Tom Lane wrote:
Alvaro Herrera alvhe...@commandprompt.com writes:
Maybe it would make more sense to try to reorder the fsync calls
instead.
Reorder to what, though? You still have the problem that we don't know
much about the physical layout on-disk.
Well, to block numbers as a first
Tom Lane wrote:
Alvaro Herrera alvhe...@commandprompt.com writes:
Tom Lane wrote:
Reorder to what, though? You still have the problem that we don't know
much about the physical layout on-disk.
Well, to block numbers as a first step.
fsync is a file-based operation, and we know
Tom Lane wrote:
Alvaro Herrera alvhe...@commandprompt.com writes:
Tom Lane wrote:
That's not going to do anything towards reducing the actual I/O volume.
Although I suppose it might be useful if it just cuts the number of
seeks.
Oh, they had no problems with I/O volume
is searched again. But that's fast, because it's a hash
table.
George Sexton wrote:
If I'm cold starting the system, would it vacuum all 330 databases and then
wait 720 minutes and then do them all again, or would it distribute the
databases more or less evenly over the time period?
the latter
there's no reason not to try asynchronous commit.
Yeah; since the transactions only ever write commit records to WAL, it
wouldn't matter a bit that they are lost on crash. And you should see
an improvement, because they wouldn't have to flush at all.
Alvaro Herrera wrote:
Kevin Grittner wrote:
Anyway, given that these are replication
targets, and aren't the database of origin for any data of their
own, I guess there's no reason not to try asynchronous commit.
Yeah; since the transactions only ever write commit records to WAL
Kevin Grittner wrote:
Alvaro Herrera alvhe...@commandprompt.com wrote:
Actually, a transaction that performed no writes doesn't get a
commit WAL record written, so it shouldn't make any difference at
all.
Well, concurrent to the web application is the replication. Would
asynchronous
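Asynchronous commit can be scoped to exactly the sessions that tolerate losing the last few commits on a crash; durability is relaxed, consistency is not. A sketch (the role name is hypothetical):

```sql
SET synchronous_commit = off;            -- this session only
-- or persistently, for a specific application role:
ALTER ROLE replica_app SET synchronous_commit = off;
```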
data:/dev/sdc is RAID5 - 30 x 3.5 15k SAS disk
These reside on the DS3200 disk subsystem with 512MB BB cache per
controller and redundant drive loops.
Hmm, so maybe the performance benefit is not from it being on a separate
array, but from it being RAID1 instead of RAID5?
in the current implementation
in production.
Not often, maybe once every couple of months, but just enough that I'm
not ready to try and use it there yet. And I can't force the same
failure in testing, at least not yet.
uh. Is there a report of the crash somewhere with details, say stack
traces and such?
Scott Marlowe wrote:
On Thu, Jan 21, 2010 at 8:51 AM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
Scott Marlowe wrote:
On Thu, Jan 14, 2010 at 12:17 PM, Carlo Stonebanks
stonec.regis...@sympatico.ca wrote:
4) Is this the right PG version for our needs?
8.3 is very
-oriented storage.
Alvaro Herrera wrote:
No amount of tinkering is going to change the fact that a seqscan is the
fastest way to execute these queries. Even if you got it to be all in
memory, it would still be much slower than the other systems which, I
gather, are using columnar storage and thus
Craig James wrote:
Heikki Linnakangas wrote:
The parameter is gone in 8.4, BTW.
Both max_fsm_relations and max_fsm_pages?
Yes, both are gone.
,
you don't know much. It's only when you've tested a number of times
without failure that you've gained any real knowledge.
Of course, you're only truly safe when you've tested infinite times,
which may take a bit longer than management expects.
of that, we had Command Prompt create a full
set of hash datatypes (SHA*, and I think md5). That stuff should be
on pgFoundry; if it's not drop me a note at jna...@cashnetusa.com
and I'll get it added.
It's at project shatypes.
take some time to propagate.)
of extra
bits for site-specific use, where hard-coding booleans does not, and
since the executable is a binary it instantly becomes a huge problem for
everyone but me.
Did you try hiding the bitmask operations inside a function as Tom
suggested?
://archives.postgresql.org/pgsql-performance/2009-09/msg00059.php
No EXPLAINs anywhere to be seen.
Robert Haas wrote:
Scott, did you check whether a toast table got created here and what
the size of it was?
A table with only bool columns (and, say, one int8 column) would not
have a toast table. Only varlena columns produce toast tables.
? Of course, it would
have to ensure that global objects are also recovered, but we could
simply ignore commands for other databases.
:52.48 postgres:
cribq cribq [local] idle
I am writing moderately large (~3k) records to my database a few times
a second. Even when I stop doing that, the process continues to take
up all of that memory.
Am I reading this right? Why is it using so much memory?
shared_buffers?
they are increasingly rare, there are still situations in which the heap
tuple machinery messes up and the xmin/xmax/etc fields of the tuple are
the best/only way to find out what happened and thus fix the bug. If
you freeze early, there's just no way to know.
of the flat files, but to speed
them up. If we're worried about speed in the pg_authid flatfile, and
come up with a solution to that problem, what will we do with the
pg_database flatfile when it grows too large? We can't just get rid of
it, because autovacuum needs to access it.
anymore...)
As long as there's a simple API, there should be no problem.
(Except that it would be nice to be able to build the file incrementally
... If we have to write out a million lines each time a millionth user
is created, there will still be a bottleneck at CREATE USER time.)
cache
file containing only entries for the global tables, and load that before
we have identified the database we want to join (after which, we'll load
another cache file for the local entries).
This sounds good, because autovacuum could probably use this too.
to ship syslog to a remote
machine. Since it uses UDP sockets, it wouldn't block when overloaded
but rather lose messages (besides, it means it has low overhead).
to identify each user and speed up system?
No. The users still need to exist in the PG auth system.
I'm sure this is just some missing optimization. Feel free to work on
the code to improve performance for these cases.
volume group, which is not
used that much for now.
You know, that's the first thing that came to me when I read you're using
DRBD. Have you tried setting temp_tablespaces to a non-replicated disk?
accounts
to every human being in the planet or what? I mean, what's the point of
this test?
toruvinn wrote:
On Wed, 15 Jul 2009 16:02:09 +0200, Alvaro Herrera
alvhe...@commandprompt.com wrote:
My bet is on the pg_auth flat file. I doubt we have ever tested the
behavior of that code with 1 billion users ...
I was always wondering, though, why PostgreSQL uses this approach
there
are several differences between the two instances, but I wonder
whether query planning improvements in 8.4 could essentially account
for it.
Of course. Great news. Congratulations.
the suppress_redundant_updates_trigger() function
that has been included in 8.4 (it should be easy to backport)
Aaron Turner wrote:
I'm trying to figure out how to optimize this query (yes, I ran
vacuum/analyze):
musecurity=# explain DELETE FROM muapp.pcap_store WHERE pcap_storeid
NOT IN (SELECT pcap_storeid FROM muapp.pcap_store_log);
What PG version is this?
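Independent of the version, NOT IN (subquery) is hard for the planner to optimize, and any NULL in pcap_store_log.pcap_storeid would make the NOT IN never match, deleting nothing. NOT EXISTS usually gets an anti-join plan; a sketch based on the query shown:

```sql
DELETE FROM muapp.pcap_store ps
WHERE NOT EXISTS (
    SELECT 1
    FROM muapp.pcap_store_log l
    WHERE l.pcap_storeid = ps.pcap_storeid
);
```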
1 - 100 of 426 matches