can do exactly what you're asking. However, you should
be able to achieve the same result by setting a default schema for the
user that you're connecting as. See the docs for ALTER ROLE and SET.
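A minimal sketch of that approach, assuming a role named app_user and a schema named app (both names hypothetical, not from the thread):

```sql
-- Make "app" the first schema searched whenever app_user connects:
ALTER ROLE app_user SET search_path = app, public;

-- Or scope the setting to a single database:
ALTER ROLE app_user IN DATABASE mydb SET search_path = app, public;
```

The setting takes effect on the role's next connection; existing sessions keep their old search_path.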
--
Bill Moran
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To ma
eem to be re-used.
It's possible that the early pages don't have enough usable space for the
updated rows. Depending on your update patterns, you may end up with bloat
scattered across many pages, with no individual page having enough space to
be reused. That seems
unlikely as the blo
.
See ALTER TABLE and the available settings to tweak autovacuum behavior.
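The per-table knobs look like this (table name and threshold values are illustrative, not from the thread):

```sql
-- Trigger autovacuum on this table after ~2% of its rows are dead,
-- instead of the default 20%, and re-analyze after ~1% have changed:
ALTER TABLE big_table SET (
    autovacuum_vacuum_scale_factor  = 0.02,
    autovacuum_analyze_scale_factor = 0.01
);
```

Lower scale factors make autovacuum visit a heavily updated table more often, which keeps bloat from accumulating between runs.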
--
Bill Moran
esting process.
In any event, if your testing doesn't help any; you'll probably need to
include answers to at least the above questions before the list will be
much help.
That is, of course, unless someone familiar with pointcloud has seen this
exact problem and already knows the answer ...
--
Bill Moran
E can use
to do index searches instead of always having to do sequential scans
or push the LIKE matching to another part of the plan tree. Based on
your described situation, I have a theory that it might improve things
quite a bit.
--
Bill Moran
age
it, and it's not part of the issue.
> Sent from my Samsung Galaxy smartphone.
> Original message From: Bill Moran Date: 2017/5/24
> 20:24 (GMT+08:00) To: pgsql-general@postgresql.org Subject: [GENERAL] Current best
> practice for maximum shared_buffers settings on big hardware?
>
> A f
lopers care to comment on any work
that's been done since 2012 to make large values work better?
--
Bill Moran
On Fri, 5 May 2017 19:52:42 +0100
Tony Finch wrote:
> Bill Moran wrote:
> >
> > There's a well-written article I saw recently that directly addresses
> > your question ... I'm too lazy to find it, but google will probably
> > turn it up for you.
>
>
ut failed. Can you give me some keywords to find
> this well-written article?
I can't seem to find it again. Sorry.
--
Bill Moran
when our caching requirements exceeded
what Postgres could do in that capacity. We never switched to
anything else for blob storage, as Postgres was always sufficient.
--
Bill Moran
h something
specifically designed for that purpose.
Of course, if you need structured, relational data to be stored reliably,
you can't do much better than Postgres.
--
Bill Moran
his:
> >> > select * from dati256 where id >4300 limit 100;
> >> > I don't understand why are there WAL writings during read only
> >> transactions.
These are hint bits. The mechanism and behavior are known and documented:
https://wiki.postgresql.
eeding to access the actual
table; but I'm just speculating.
In my experience, queries like these rarely benefit from filter indexes,
because most of the time involved is in the grouping and aggregate processing,
and the index does nothing to help with that. But, again, without EXPLAIN output
s a crash
on the PostgreSQL end when you failover.
--
Bill Moran
perfect if they don't know what is being done with
the data on the drives.
Whether it's good enough depends heavily on what your expectation is.
Before trusting it to meet your needs, I would spend some time simulating
failures and seeing what actually happens.
--
Bill Moran
ucardo for replication, but this feature doesn't work, we need
> to manually do it on the
> SLAVE and some other steps to have both DB sync.
>
> We already know how to do it, bucardo works.
>
> Just wondering if PSQL can handle this automatically?
Postgres' built-in s
ally, I feel like the existing behavior is preferable. Software
teams need to take the time to understand the locking implications of their
actions or they'll have nothing but trouble anyway.
As I've seen time and again: writing an application that handles low load
and low concurrency is f
On Wed, 08 Feb 2017 10:44:24 -0500
Tom Lane wrote:
> Albe Laurenz writes:
> > Bill Moran wrote:
> >> What I feel is the best way to mitigate the situation, is to have some
> >> setting that limits the maximum RAM any backend can consume.
>
> > I'd dele
ply not
finding it for some reason?
Assuming this doesn't exist (I haven't found it) my next question is
whether there's a philosophical or technical reason that such a feature
doesn't exist? Should I take this discussion to -hackers?
--
Bill Moran
any more.
Anyone know? Or, alternatively, anyone have another option to get the
same job done?
--
Bill Moran
t be a bug in Postgres, does anyone have
any suggestions on what other ways the stats could be reset that I need to check
on? Has anyone else experienced this to lend credence to the possibility that
it's a bug? I have no clue how to reproduce it, as the occurrence is rare and still
seems rando
just noticed it on a
particularly problematic day last time I looked.
--
Bill Moran
gets
interrupted. I'm trying to understand if the partial runs are at least
making _some_ progress so the next vacuum has less to do, or if this is
a serious problem that I need to fiddle with tuning to fix.
--
Bill Moran
> return false if count is less it returns true.
An exclusion constraint might be a better solution.
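A sketch of that approach, using a hypothetical booking table (the btree_gist extension is needed so plain columns can participate in the constraint):

```sql
CREATE EXTENSION IF NOT EXISTS btree_gist;

-- Reject any row whose room/time range overlaps an existing booking,
-- enforced atomically by the server rather than by a count-then-insert check:
CREATE TABLE booking (
    room   int,
    during tsrange,
    EXCLUDE USING gist (room WITH =, during WITH &&)
);
```

Unlike an application-level count, the constraint can't race against a concurrent insert.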
--
Bill Moran
eneral theme, a lot of developers don't
seem to think it's useful for them to know SQL, and therefore don't
bother trying -- or even actively resist learning.
So if the overall theme is "knowing this makes things better", I would
buy multiple copies of the book and mysteriousl
um_md5", "st_size")
> WHERE "checksum_md5" IS NOT NULL
>
> *Question:*
>
> What else can I do to improve the Performance of the Query?
From the explain, it looks like the biggest pain point is the
inode_segments table, specifically, this condition:
s.full_path ~ '^/userfiles/account/[0-9]+/[a-z]+/[0-9]+'
It's doing a full scan of every record in that table, which is
a large number, and that regex can't be cheap over that kind of
volume.
If you do:
SELECT count(*)
FROM inode_segments
WHERE full_path ~ '^/userfiles/account/[0-9]+/[a-z]+/[0-9]+';
how many tuples actually match that condition? If the number
is a large percentage of the total table, then I'm not sure
how to help you, but if the percentage is small, you might
be able to speed things up by adding an index:
CREATE INDEX is_fp_trunc_idx ON inode_segments (substring(full_path FROM 1 FOR 19));
Then adding this condition to the where clause:
substring(s.full_path FROM 1 FOR 19) = '/userfiles/account/'
There are other index combinations that may help as well,
depending on the nature of the values in that table, but,
in general, anything you can do to reduce the number of
records that have to be examined in that table is liable
to speed things up.
--
Bill Moran
f they differ if you cannot recreate the correct one
> > exactly from source-controllled DDL? Or know how they are supposed to
> > differ if this is a migration point?
>
>
--
Bill Moran
TO mro;
>
> It complicates the usage of pg_dump to compare the structures of the two
> similar databases like DEV and PROD, two development branches etc.
I don't think pg_dump was ever intended to serve that purpose.
dbsteward, on the other hand, does what you want:
https://gith
o you have any idea what other systems to try?
http://dbsteward.org/
--
Bill Moran
hat writes completed successfully, otherwise, Postgres would
be able to recover after a restart.
Beyond that, running Postgres on a filesystem that frequently fills up
is going to be problematic all around anyway. If you don't improve the
hardware situation, you're going to continue to have problems like this.
--
Bill Moran
u the former, whereas
CITEXT will give you both.
I don't think your syntax will work, though. I'm guessing that
PRIMARY KEY pk_stock_code lower(stock_code) will, though.
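If the expression-based primary key syntax doesn't pan out, the usual workaround is a unique expression index, which enforces the same case-insensitive rule (table and column names assumed from the thread):

```sql
-- One row per stock code, regardless of letter case:
CREATE UNIQUE INDEX stock_code_lower_idx ON stock (lower(stock_code));
```

Queries then need to compare against lower(stock_code) to use the index.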
--
Bill Moran
to figure out _which_ rows
to fetch faster on the small table than the large table, which is what
you would expect, since a smaller index should be faster than a large one.
However, when it goes to actually fetch the row data, it takes
significantly longer on the small table, despite the fact that it's
only fetching 1/3 as many rows. It is, however, doing 2.5x as many
disk reads to get those rows: For the large table, it reads 61MB from
disk, but it reads 160MB to get all the data for the smaller table.
How the data was inserted into each table could lead to similar data
being clustered on common pages on the large table, while it's spread
across many more pages on the small table.
That still doesn't explain it all, though. 2.5x the disk
activity normally wouldn't equate to 28x the time required. Unless
your disks are horrifically slow? Does this server have a lot of
other activity against the disks? I.e. are other people running
queries that you would have to contend with, or is the server a VM
sharing storage with other VMs, or even a combined use server that
has to share disk access with (for example) a web or mail server
as well? Is the performance difference consistently ~28x?
Other things: what is shared_buffers set to? The queries would seem
to indicate that this server has less than 1M of those two tables
cached in memory at the time you ran those queries, which seems to
suggest that either you've got shared_buffers set very low, or that
there are a lot of other tables that other queries are accessing at
the time you're running these. Perhaps installing pg_buffercache to
have a look at what's using your shared_buffers would be helpful.
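For example, a quick look at the top consumers of shared_buffers, along the lines of the query in the pg_buffercache documentation:

```sql
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Which relations in the current database occupy the most buffers?
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
```

If the two tables from your queries barely appear in the output, that supports the theory that other activity is crowding them out of cache.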
--
Bill Moran
n of the schema, as a primary key implies that other tables can
rely on this set of columns as a unique identifier for rows. This
metadata may be used by external programs, but is also utilized internally
by the server in some cases."
--
Bill Moran
On Thu, 28 Jan 2016 00:37:54 +0100
Ivan Voras wrote:
> On 28 January 2016 at 00:13, Bill Moran wrote:
>
> > On Wed, 27 Jan 2016 23:54:37 +0100
> > Ivan Voras wrote:
> >
> > > So, question #1: WTF? How could this happen, on a regularly vacuumed
> > >
bles. How often is something that will require some guesswork
and/or experimenting, but I would recommend at least once per hour. Since
you're only vacuuming selected tables, the performance impact should be
minimal.
You'll have to do a VACUUM FULL on the bloated tables _once_ to get the siz
On Tue, 19 Jan 2016 23:53:19 -0300
Alvaro Herrera wrote:
> Bill Moran wrote:
>
> > As far as a current solution: my solution would be to decompose the
> > JSON into an optimized table. I.e.:
> >
> > CREATE TABLE store1 (
> > id SERIAL PRIMARY KEY,
> >
ritical remarks regarding patches and/or technical work are
necessary to ensure a quality product; however, critical remarks
directed at individuals are not constructive and therefore not
acceptable." or something ...
--
Bill Moran
t
will always involve _some_ pain, but less is better.
I've done the job of #3 with other groups, and 99% of the time
there was nothing to do. The one incident I had to handle was
terrible, but at least I had some guidance on how to deal with
it.
--
Bill Moran
go, you may want to try
something more along the lines of this for your check:
SELECT true WHERE NOT EXISTS (SELECT 1 FROM keyz WHERE xfk <> ALL (akeys($1)));
Not tested, so it's possible that I have some typo or something; but overall
I've found that the NOT EXISTS construct
seriously doubt that trying to make your UUIDs generate in a
predictable fashion will produce any measurable improvement, and I
see no evidence in the articles you cited that claims otherwise
have any real basis or were made by anyone knowledgeable enough
to know.
--
Bill Moran
sync, which is the only thing diskchecker.pl tests for.
> >
>
> I was thinking that since the disk have a 32M write-cache (with not
> battery) it would lie to the OS (and postgres) about when data are really
> on disk (not in the disk write cache). But maybe that thinking was wrong
On Sun, 13 Dec 2015 22:23:19 +0100
Gerhard Wiesinger wrote:
> On 13.12.2015 21:14, Bill Moran wrote:
> > Wait ... this is a combined HTTP/Postgres server? You didn't mention that
> > earlier, and it's kind of important.
> >
> > What evidence do you have
fically memory-inefficient
manner. Since you mention that you see nothing in the PG logs, that makes it
even more likely (to me) that you're looking entirely in the wrong place.
I'd be willing to bet a steak dinner that if you put the web server on a
different server than the DB, that the memory problems would follow the
web server and not the DB server.
--
Bill Moran
On Sun, 13 Dec 2015 16:35:08 +0100
Gerhard Wiesinger wrote:
> Hello Bill,
>
> Thank you for your response, comments inline:
>
> On 13.12.2015 16:05, Bill Moran wrote:
> > On Sun, 13 Dec 2015 09:57:21 +0100
> > Gerhard Wiesinger wrote:
> >> some further de
postgres [local] SELECT
> > -- idle again, memory still allocated
> > 26851 postgres 20 0 2365732 920668 918748 S 0.0 22.7 1:22.54
> > postgres: postgres postgres [local] idle
> >
> > Memory will only be released if psql is exited. According to the
> > PostgreSQL design memory should be freed when the transaction completed.
> >
> > top commands on FreeBSD: top -SaPz -o res -s 1
> > top commands on Linux: top -o RES d1
> >
> > Config: VMs with 4GB of RAM, 2 vCPUs
> > shared_buffers = 2048MB # min 128kB
> > effective_cache_size = 2GB
> > work_mem = 892MB
> > wal_buffers = 8MB
> > checkpoint_segments = 16
--
Bill Moran
The third solution is probably _really_ the correct one, from
a pedantic standpoint, but it's a bit more work to implement.
--
Bill Moran
On Wed, 2 Dec 2015 09:31:44 -0800
Christophe Pettus wrote:
>
> On Dec 2, 2015, at 9:25 AM, Bill Moran wrote:
>
> > No. See the section on row level locks here:
> > http://www.postgresql.org/docs/9.4/static/explicit-locking.html
>
> That wasn't quite my ques
If it's
happening frequently, you'll want to investigate what process is holding
the locks for so long and see what can be done about it.
--
Bill Moran
knowledge) they're the best answers available at this time. I'd really
like to build the alternate TOAST storage, but I'm not in a position to
start on a project that ambitious right ... I'm not even really keeping
up with the project I'm currently supposed to be doing.
--
Bill Moran
E EXTERNAL;
?
The default storage for a JSONB field is EXTENDED. Switching it to
EXTERNAL will disable compression. You'll have to insert your data over
again, since this change doesn't alter any existing data, but see
if that change improves performance.
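Concretely, something along these lines (the store1 table name is taken from earlier in the thread; the doc column name is assumed for illustration):

```sql
-- Keep the JSONB out-of-line in TOAST but skip compression:
ALTER TABLE store1 ALTER COLUMN doc SET STORAGE EXTERNAL;

-- Existing values keep their old storage until rewritten, e.g.:
UPDATE store1 SET doc = doc;
```

The trade-off is larger on-disk size in exchange for skipping decompression on every access.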
--
Bill Moran
rest of the data on the other
database server configured with higher shared_buffers.
I know these probably aren't the kind of answers you're looking
for, but I don't have anything better to suggest; and the rest
of the mailing list seems to be devoid of ideas as well.
--
Bill Mo
On Wed, 4 Nov 2015 16:43:57 +0100
Bertrand Roos wrote:
>
> Le 04/11/2015 14:55, Bill Moran a écrit :
> > On Wed, 4 Nov 2015 14:32:37 +0100
> > Bertrand Roos wrote:
> >> I try to configure auto-analyse task with postgresql 9.4.
> >> I have the following
les every 5 minutes (based on the config you show) ... so are
you doing the inserts then checking the table without leaving enough time
in between for the system to wake up and notice the change?
--
Bill Moran
the
problem occurs, and the contents of the pg_locks table when
the problem is occurring.
--
Bill Moran
of the server, or have some command
that tests to ensure the server is started and blocks until it is
before running the create command.
The only point I'm unclear on is whether you've confirmed that
Postgres actually _is_ started once the server is up (albeit without
the CREATE statem
things like user's groups, and it does a pretty good
job of letting us coordinate activities. Basic membership on the
site is free and includes participating in as many groups as you
desire. (it only costs something if you want to host your own group).
Hope to see you soon.
--
Bill Moran
ackers list
so the developers can chime in. My opinion is that this is a bug, but it's
an obscure enough bug that it's not surprising that it's gone unfixed for
a while.
--
Bill Moran
the
session data in a Postgres table with great success.
--
Bill Moran
he best way to synchronise JUST the changes on a table
> between servers please?
Sounds like a problem custom-made to be solved by Slony:
http://slony.info/
--
Bill Moran
sort by oid then in the
> oid group sort by relname?
> Can somebody explain what does the database done for hashAggregate?
It combines the values for oid and relname for each returned row, generates
a hash key for them, then uses that hash key to aggregate (compute the GROUP BY,
essentially, in
th
> and we'll see where it goes ...
> >
> > regards, tom lane
>
>
>
--
Bill Moran
.
> However, I know from experience that's not entirely true, (although it's not
> always easy to measure all aspects of your I/O bandwidth).
>
> Am I missing something?
--
Bill Moran
far as when things get evicted from memory, you'll have to look at the
source code, but it's your typical "keep the most commonly needed data in
memory" algorithms.
What problem are you seeing? What is your performance requirement, and what
is the observed performance? I as
ng that if your database has a lot of indexes, the pg_dump
might actually be faster.
But the only way to know is to try it out on your particular system.
--
Bill Moran
dd capacity at
minimal cost.
There's nothing like a VM where you never know what the performance
will be because you never know when some other VMs (completely unrelated
to you and/or your work) will saturate the IO with some ridiculous
grep recursive command or something.
--
Bill Moran
2768 Batches: 2 Memory Usage: 159kB
> -> Seq Scan on table84 table84 (cost=0.00..14600.96
> rows=189496 width=20) (actual time=0.059..1661.482 rows=5749 loops=1)
> Total runtime: 13458.301 ms
> (12 rows)
>
> Thank you again for your advice and I hope that with your help I'll be able
> to solve this issue.
>
> Best regards.
> Lukasz
--
Bill Moran
be a BEFORE trigger FOR EACH ROW. Otherwise, the returned value won't do
anything. It should read like this:
CREATE TRIGGER trigger_name
BEFORE INSERT ON table_ebscb_spa_log02
FOR EACH ROW EXECUTE PROCEDURE on_ai_myTable();
If you created it with AFTER INSERT or FOR EACH STATEMENT, then the trigger
won't work as desired.
The other thing about assignment being := was already mentioned.
--
Bill Moran
uld emit a warning when you used the non-standard way
> of escaping single quotes (unless you explicitly turned that off)
>
>
> Could you please provide below information.
> How to change standard_conforming_strings value of postgresql.conf? I have
> checked but this option is
On Wed, 17 Jun 2015 10:33:37 -0300
Anderson Valadares wrote:
> 2015-06-15 18:19 GMT-03:00 Bill Moran :
>
> > On Mon, 15 Jun 2015 17:48:54 -0300
> > Anderson Valadares wrote:
> >
> > > 2015-06-12 19:56 GMT-03:00 Bill Moran :
> > >
> > > > P
de: Canceled on identification as a pivot, during
> commit attempt.
> HINT: The transaction might succeed if retried.
>
> and
>
> ERROR: could not serialize access due to read/write dependencies among
> transactions
> DETAIL: Reason code: Canceled on commit attempt with conflict in from
> prepared pivot.
> HINT: The transaction might succeed if retried.
>
> Thanks!
--
Bill Moran
ase function that can be used
to compress data; you'd have to write your own or do it at the application
level.
--
Bill Moran
On Mon, 15 Jun 2015 17:48:54 -0300
Anderson Valadares wrote:
> 2015-06-12 19:56 GMT-03:00 Bill Moran :
>
> > Please do not remove the mailing list from replies. See below.
> >
> > On Fri, 12 Jun 2015 09:21:19 -0300
> > Anderson Valadares wrote:
> >
> &g
Please do not remove the mailing list from replies. See below.
On Fri, 12 Jun 2015 09:21:19 -0300
Anderson Valadares wrote:
> 2015-06-08 20:33 GMT-03:00 Bill Moran :
>
> > On Mon, 8 Jun 2015 11:59:31 -0300
> > Anderson Valadares wrote:
> >
> > > Hi
> >
hould I increase the statistic target to 500, or even
> to 1000?
> Is there something else I can trigger to get the appropriate plan?
>
> Comments/explanations would be appreciated
> Daniel
>
>
> -Original Message-
> From: pgsql-general-ow...@postgresql.org
> [m
b, gis_iti_itinerario iti_sub
> where dvi_sub.via_cod_viagem = via_sub.via_cod_viagem
>and via_sub.via_status_viagem = 'A'
>and via_sub.via_dt_hora_ini > now() - interval '9 hours'
>and iti_sub.lin_cod_linha = 389
>and iti_sub.iti
Db is a failover manager which relies on virtual IP
> management, not the one I described above.
pgpool has this capacity.
--
Bill Moran
> but follows a power law (some user_id would return millions of records while
> others only one).
> This is the farthest I can go at this point. Maybe someone can provide me
> with more explanations regarding planner's behavior and ways to go further
> to make it work properl
ms
> > Execution time: 1.390 ms
> > (15 rows)
> >
> > regards, tom lane
>
> Wow, sorry I screwed up the query. It should be:
>
> ORDER BY c.created_at DESC
>
> Not b, or as you noted its trivial to index. Sorry!
Creating an index on c.created_at sped things up by a factor of over
1000, which caused the case you defined to run in ~0.5ms for me.
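For reference, the index in question amounts to something like this (c is the poster's table alias; substitute the real table name):

```sql
-- Matches the ORDER BY c.created_at DESC in the corrected query:
CREATE INDEX ON c (created_at DESC);
```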
--
Bill Moran
62 ms.
> 1 row retrieved.
>
> This time, the history tab shows that both took the same time to run (an
> improvement!?)
If your environment is providing such wildly variant results, then
you need to start running multiple tests instead of assuming that a single
run of a query is indicative of a pattern.
--
Bill Moran
s=51340 width=66)"
> "Index Cond: (group_id = subset.id)"
> Total query runtime: 3986 ms. 5978 rows retrieved.
>
>
> select * from newtable where group_id IN (select * from subset)
> "Hash Join (cost=41.25..138092255.85 rows=1935067087 width=66)"
> " Hash Cond: (newtable.group_id = subset.id)"
> " -> Append (cost=0.00..84877869.72 rows=3870134173 width=66)"
> "-> Seq Scan on newtable (cost=0.00..0.00 rows=1 width=66)"
> "-> Seq Scan on newtable_01 (cost=0.00..946235.96 rows=46526896
> width=66)"
> ...
> "-> Seq Scan on newtable_86 (cost=0.00..986527.64 rows=44269664
> width=66)"
> " -> Hash (cost=38.75..38.75 rows=200 width=8)"
> "-> HashAggregate (cost=36.75..38.75 rows=200 width=8)"
> " -> Seq Scan on subset (cost=0.00..31.40 rows=2140 width=8)"
> Execution Cancelled after 766702 ms !
>
> I tried the same with "SET enable_seqscan = OFF" and got an index scan of all
> tables;
>
>
>
--
Bill Moran
On Fri, 22 May 2015 12:44:40 -0400
Tom Lane wrote:
> Bill Moran writes:
> > Tom Lane wrote:
> >> Other questions you'd have to think about: what is the data type of
> >> 0x; what do you do with 0x (too big
> >> even f
On Fri, 22 May 2015 11:27:49 -0500
Dennis Jenkins wrote:
> On Fri, May 22, 2015 at 10:02 AM, Tom Lane wrote:
>
> > Alban Hertroys writes:
> > > On 22 May 2015 at 04:46, Bill Moran wrote:
> > >> With all that being said, if I were to build a patch, would i
On Fri, 22 May 2015 11:02:47 -0400
Tom Lane wrote:
> Alban Hertroys writes:
> > On 22 May 2015 at 04:46, Bill Moran wrote:
> >> With all that being said, if I were to build a patch, would it be likely
> >> to be accepted into core?
>
> > Wouldn't yo
On Thu, 21 May 2015 13:57:24 -0400
Tom Lane wrote:
> Bill Moran writes:
> > My other question: is there a specific reason why PostgreSQL doesn't support
> > this syntax, aside from "nobody has bothered to add such support"? Because
> > I'm considering w
I'm working on a project converting a bunch of code from another database
system to PostgreSQL.
One of the issues is that the previous system accepted integers and binary
data in the same hex format as C ... i.e. 0xff
I understand that the proper way to handle this in postgres is x'ff', but
the
x27;t find
any information on why that site is down or where it might have
gone to.
Is this a temporary outage? Or has the RPM data moved somewhere
else and isn't documented yet? Any help is appreciated.
--
Bill Moran
tand how the connection pool works.
You could confirm this by turning on full query logging in Postgres and see
which connection does what. If it turns out to be the case, then you'll have
to sort out how your code is getting confused.
If it's not the case, then I don't have any oth
ell enough. There are certainly cases when you want to create very complex
logic in the database and plpgsql is liable to make that difficult. But
there are a lot of cases where having to manage pointers and a build
environment and all the things that go with C aren't justified, because
plpgsql
irely possible I missed a memo, so I am
> > open to a more detailed explanation of the inefficiencies involved.
> >
>
> The Postgres source is written in C, not in plpgsql. C has a good
> optimizing compiler and plpgsql doesn't.
Maybe that's a roundabout way of
ror and the
data won't be accepted. The same thing happens if you try to
store invalid XML in an XML field (such as XML without proper
closing tags, etc). It seems that this strictness causes a lot
of people to avoid those data types, as there seem to be a lot
of people who would rather have garb
you can not
guarantee that, and it doesn't hold true for all functions.
In general, it's inappropriate for a function to be able to manipulate
a transaction beyond aborting it. And the abort has to bubble up so
that other statements involved in the transaction are also notified.
--
Bill Moran
r of tries). Is this
> possible without going into pgsql source code?
I suspect that savepoints will accomplish what you want:
http://www.postgresql.org/docs/9.4/static/sql-savepoint.html
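A sketch of the pattern (table name hypothetical): a savepoint lets one failed statement be rolled back and retried without losing the rest of the transaction.

```sql
BEGIN;
INSERT INTO audit_log VALUES (1, 'first attempt');

SAVEPOINT before_retry;
-- ... a statement here fails ...
ROLLBACK TO SAVEPOINT before_retry;  -- undoes only the failed part

INSERT INTO audit_log VALUES (1, 'second attempt');
COMMIT;  -- the work done before the savepoint survives
```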
--
Bill Moran
r your memory problems. I'd suggest to set it
> > to 16MB, and see if you can avoid "on disk" sorting. If not - gradually
> > increase work_mem.
> >
> > Regards,
> >
> >
> > Igor Neyman
> >
>
>
> Thanks Igor,
>
> I will tr
On Wed, 1 Apr 2015 06:26:36 -0700 (MST)
TonyS wrote:
> On Wed, April 1, 2015 8:48 am, Bill Moran [via PostgreSQL] wrote:
> >
>
> >>>> Running "analyze verbose;" and watching top, the system starts out
> >>>> using no swap data and about 4GB
) was terminated by
> signal 9: Killed
> 2015-04-01 06:24:38 EDT DETAIL: Failed process was running: analyze verbose;
> 2015-04-01 06:24:38 EDT LOG: terminating any other active server processes
>
> I started this process at 11PM, so it ran for about 7.5 hours before
> crashing.
8
>
> But if I do:
>
> DELETE From sessions WHERE "SESSIONTIMESTAMP" < '2010-01-01 10:02:02'
>
> It DOES work.
>
> Why the db doesn't recognize the name of the table without quotes?.
See:
http://www.postgresql.org/docs/9.4/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS
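In short, unquoted identifiers fold to lower case, so the two forms name different columns (table and column taken from your example):

```sql
CREATE TABLE sessions ("SESSIONTIMESTAMP" timestamp);

DELETE FROM sessions WHERE SESSIONTIMESTAMP < '2010-01-01';
-- ERROR: column "sessiontimestamp" does not exist

DELETE FROM sessions WHERE "SESSIONTIMESTAMP" < '2010-01-01';
-- works: the quotes preserve the upper-case name used at creation
```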
--
Bill Moran
a tuples as the data in the table changes. It's not
unusual for the table to be 2x the size of the actual data on
a heavily updated table.
--
Bill Moran
he situation on
the list. That's why I'm just looking for a lawyer who understands
the situation and can advise me.
>
> On Wed, Mar 11, 2015 at 1:28 PM, Bill Moran
> wrote:
>
> >
> > I've been asked to sign a legal document related to a PostgreSQL-
>
ntacting $random_legal_service.
--
Bill Moran
and see if the rows are still nonremovable. I bet you $5.34 that
everything works fine after that, which would indicate that the folks
who made the snapshot didn't do it correctly.
--
Bill Moran