->  Foreign Scan on orders_fdw
        (cost=0.00..294718.76 rows=2284376 width=8)
->  Hash  (cost=32605.64..32605.64 rows=1500032 width=8)
      ->  Foreign Scan on customer_fdw
              (cost=0.00..32605.64 rows=150
s about to go up in flames, which is of course not true.
maybe we could do "consider increasing to ensure good performance"
or so ...
regards,
hans
--
Hans-Jürgen Schönig
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de, http://www.cybertec.at
5% or so. but this requires a value as low as 32 MB
of shared buffers or so.
many thanks,
hans
On 18 Aug 2015, at 11:19, Albe Laurenz laurenz.a...@wien.gv.at wrote:
Hans-Jürgen Schönig wrote:
in addition to that you have the “problem” of transactions. if you failover
in the middle
of a transaction, strange things might happen from the application point of
view.
the good thing
On 18 Aug 2015, at 10:32, Albe Laurenz laurenz.a...@wien.gv.at wrote:
Victor Wagner wrote:
Rationale
=========
Since the introduction of WAL-based replication into PostgreSQL, it has been
possible to create high-availability and load-balancing clusters.
However, there is no support for
On 20 Feb 2014, at 01:38, Tom Lane t...@sss.pgh.pa.us wrote:
Haribabu Kommi kommi.harib...@gmail.com writes:
I want to propose a new feature called priority table or cache table.
This is the same as a regular table except that the pages of these tables have
higher priority than normal tables. These
i think there is one more thing which would be really good in GIN and which
would solve a ton of issues.
atm GIN entries are sorted by item pointer.
if we could sort them by a column it would fix a couple of real-world issues
such as ...
SELECT ... FROM foo WHERE tsearch_query ORDER BY
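A sketch of why column-ordered GIN entries would help here (plain Python, purely illustrative of the idea, not of GIN internals; the query is treated as an OR of keywords for simplicity): if each keyword's posting list were pre-sorted by the ORDER BY column, a top-k query could merge the lists and stop after k distinct matches instead of fetching and sorting everything.

```python
import heapq

def top_k_by_price(posting_lists, k):
    """Merge posting lists that are each pre-sorted by (price, item_id)
    and stop as soon as k distinct matches are produced -- no full sort."""
    seen = set()
    out = []
    for price, item in heapq.merge(*posting_lists):
        if item in seen:          # an item may appear in several lists
            continue
        seen.add(item)
        out.append((price, item))
        if len(out) == k:
            break
    return out

# two keyword posting lists, each already sorted by price
cheap_stuff = [(1, 'a'), (5, 'c'), (9, 'e')]
other_stuff = [(2, 'b'), (5, 'c'), (7, 'd')]
print(top_k_by_price([cheap_stuff, other_stuff], 3))  # [(1, 'a'), (2, 'b'), (5, 'c')]
```

with item-pointer ordering, by contrast, every match has to be fetched and sorted before the first row can be returned.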
On Jan 8, 2014, at 9:27 PM, Bruce Momjian wrote:
On Wed, Jan 8, 2014 at 05:39:23PM +, Simon Riggs wrote:
On 8 January 2014 09:07, Heikki Linnakangas hlinnakan...@vmware.com wrote:
I'm going to say right off the bat that I think the whole notion of
automatically disabling synchronous
because we won't have 10
pg_basebackup-style tools making use of this feature anyway.
of course, if you got 20 disks and a 1 gbit network this is useless - but many
people don't have that.
regards,
hans-jürgen schönig
On Aug 21, 2013, at 10:57 AM, Andres Freund wrote:
On 2013-08-21 08:10:42 +0200, PostgreSQL - Hans-Jürgen Schönig wrote:
On Aug 19, 2013, at 9:11 PM, Andres Freund wrote:
On 2013-08-19 20:15:51 +0200, Boszormenyi Zoltan wrote:
On 2013-08-19 19:20, Andres Freund wrote:
Hi
hello,
some time ago oleg and teodor have posted a PostgreSQL version of bloom filters.
as this appears to be a useful thing for many people i have ported this
prototype to PostgreSQL 9.2.
it seems to work as expected on OS X and Linux.
as it is a contrib module it lacks xlog support.
maybe
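For readers who have not met the data structure: a Bloom filter answers "possibly present" or "definitely absent" using a bit array and k hash functions. This is only a minimal sketch of the concept, not the ported contrib code.

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1024, k_hashes=3):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, key):
        # derive k bit positions from a single SHA-256 digest
        digest = hashlib.sha256(key.encode()).digest()
        for i in range(self.k):
            chunk = int.from_bytes(digest[4 * i:4 * i + 4], 'big')
            yield chunk % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        # may return a false positive, but never a false negative
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))

bf = BloomFilter()
bf.add("postgres")
print(bf.might_contain("postgres"))  # True -- no false negatives
```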
On Jul 1, 2012, at 5:44 PM, Magnus Hagander wrote:
On Sun, Jul 1, 2012 at 1:02 PM, Boszormenyi Zoltan z...@cybertec.at wrote:
Hi,
attached is a patch that does $SUBJECT.
It's a usability enhancement, to take a backup, write
a minimalistic recovery.conf and start the streaming
standby in
On Jun 16, 2012, at 8:27 AM, Magnus Hagander wrote:
I'm not sure if this is something I don't know how to do, or if it's
something we simply can't do, or if it's something we could do but the
syntax can't handle :-)
Basically, I'd like to combine a recursive and a non-recursive CTE in
the
On May 22, 2012, at 9:57 AM, Simon Riggs wrote:
On 22 May 2012 06:50, Tom Lane t...@sss.pgh.pa.us wrote:
Currently, the planner keeps paths that appear to win on the grounds of
either cheapest startup cost or cheapest total cost. It suddenly struck
me that in many simple cases (viz, those
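The rule Tom describes can be sketched as keeping only the Pareto frontier over (startup cost, total cost): a path survives only if no other path is at least as cheap on both dimensions. Names here are illustrative, not the planner's.

```python
def keep_useful_paths(paths):
    """Keep a path only if no other path is at least as cheap on BOTH
    startup cost and total cost (i.e. keep the Pareto frontier)."""
    kept = []
    for p in paths:
        if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in paths):
            kept.append(p)
    return kept

paths = [(0.0, 100.0),   # cheap startup, expensive overall (good for LIMIT)
         (10.0, 50.0),   # cheapest total cost
         (20.0, 90.0)]   # dominated on both axes -> pruned
print(keep_useful_paths(paths))  # [(0.0, 100.0), (10.0, 50.0)]
```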
hello,
does the problem show up on 2% of all systems after 2 weeks or so?
we had a similar problem on UNIX as well. it even materialized on 100 identical
boxes (on 2% of them). it pops up randomly and never stops …
i checked some code paths. some of those messages are direct output via stderr
On Mar 9, 2012, at 2:34 PM, Robert Haas wrote:
On Fri, Mar 9, 2012 at 5:42 AM, Hans-Jürgen Schönig
postg...@cybertec.at wrote:
we had some different idea here in the past: what if we had a procedure /
method to allow people to save the list of current buffers / cached blocks
to be written
we had some different idea here in the past: what if we had a procedure /
method to allow people to save the list of current buffers / cached blocks to
be written to disk (sorted). we could then reload this cache profile on
startup in the background or people could load a certain cache content
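A sketch of the save/reload idea (hypothetical names; PostgreSQL's real buffer descriptors are more involved): snapshot the (relation, block) pairs currently cached, store them deduplicated and sorted, and on startup group them per relation so a background reload can read sequentially.

```python
def save_cache_profile(buffers):
    """buffers: iterable of (relfilenode, blocknum) pairs currently in
    shared buffers. Return them deduplicated and sorted so that a
    reload on startup can issue sequential reads per relation."""
    return sorted(set(buffers))

def reload_order(profile):
    # group blocks per relation to mimic sequential prefetch on startup
    plan = {}
    for rel, blk in profile:
        plan.setdefault(rel, []).append(blk)
    return plan

snapshot = [(16384, 7), (16384, 2), (16501, 0), (16384, 2)]
profile = save_cache_profile(snapshot)
print(profile)                # [(16384, 2), (16384, 7), (16501, 0)]
print(reload_order(profile))  # {16384: [2, 7], 16501: [0]}
```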
On Oct 7, 2011, at 8:47 PM, Joshua D. Drake wrote:
On 10/07/2011 11:40 AM, Tom Lane wrote:
Robert Haas robertmh...@gmail.com writes:
Please find attached a patch implementing a basic version of
index-only scans.
I'm making some progress with this, but I notice what seems like a
missing
On Oct 10, 2011, at 4:21 AM, Alex Goncharov wrote:
[ Thanks all for the very productive discussion in the thread
libpq, PQdescribePrepared - PQftype, PQfmod, no PQnullable
which I originated. Very useful. Now on something different. ]
About two years ago, I had to research some
On Sep 19, 2011, at 5:16 PM, Tom Lane wrote:
Greg Stark st...@mit.edu writes:
That said, to help in the case I described you would have to implement
the tapesort algorithm on the GPU as well.
I think the real problem would be that we are seldom sorting just the
key values. If you have to
hello …
i have been thinking about this issue for quite a while ...
given your idea i am not sure how this can work at all.
consider:
begin;
insert 1
insert 2
commit
assume this ends up in the same node,
now you split it into two …
1 and 2 will have exactly the
- and: this is what would be
under your full control.
what do you think?
i got to think about it further but i can envision that this could be feasible
...
hans
On Sep 2, 2011, at 2:36 PM, Hannu Krosing wrote:
On Fri, 2011-09-02 at 14:01 +0200, PostgreSQL - Hans-Jürgen Schönig
wrote:
hello …
i
On Sep 2, 2011, at 2:59 PM, Hannu Krosing wrote:
On Fri, 2011-09-02 at 14:51 +0200, PostgreSQL - Hans-Jürgen Schönig
wrote:
hello …
the goal of the entire proxy thing is to make the right query go to the
right node / nodes.
we determine this by using a partitioning function and so
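The routing step can be sketched like this (illustrative code, not the actual proxy; `hash_key` is a stand-in for whatever partitioning function is configured): a query whose partition key is known goes to exactly one node; anything else is broadcast to all nodes.

```python
def hash_key(key):
    # simple deterministic hash (illustrative, not PostgreSQL's hash function)
    h = 0
    for ch in str(key):
        h = (h * 31 + ord(ch)) & 0xFFFFFFFF
    return h

def partition_node(key, n_nodes):
    """Partitioning function: stable hash of the key modulo node count."""
    return hash_key(key) % n_nodes

def route(key, n_nodes):
    """Route a query: one node if the partition key is known,
    otherwise broadcast to every node."""
    if key is None:
        return list(range(n_nodes))       # broadcast
    return [partition_node(key, n_nodes)]

print(route(42, 4))    # exactly one node
print(route(None, 4))  # [0, 1, 2, 3]
```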
hello …
i have just fallen over a nasty problem (maybe missing feature) with PL/Pythonu
…
consider:
-- add a document to the corpus
CREATE OR REPLACE FUNCTION textprocess.add_to_corpus(lang text, t text) RETURNS
float4 AS $$
from SecondCorpus import SecondCorpus
from
On Aug 17, 2011, at 2:19 PM, Jan Urbański wrote:
On 17/08/11 14:09, PostgreSQL - Hans-Jürgen Schönig wrote:
CREATE OR REPLACE FUNCTION textprocess.add_to_corpus(lang text, t text)
RETURNS float4 AS $$
from SecondCorpus import SecondCorpus
from SecondDocument import
On Aug 15, 2011, at 9:40 AM, Simon Riggs wrote:
On Mon, Aug 15, 2011 at 2:31 AM, Joachim Wieland j...@mcknight.de wrote:
In short, this is how it works:
SELECT pg_export_snapshot();
pg_export_snapshot
03A1-1
(1 row)
(and then in a different session)
On Aug 12, 2011, at 10:03 PM, Heikki Linnakangas wrote:
On 11.08.2011 23:06, Robert Haas wrote:
Comments, testing, review appreciated...
I would've expected this to use an index-only scan:
postgres=# CREATE TABLE foo AS SELECT generate_series(1,10) AS id;
SELECT 10
postgres=#
On Aug 7, 2011, at 11:01 AM, Simon Riggs wrote:
On Sun, Aug 7, 2011 at 8:57 AM, Hannu Krosing ha...@2ndquadrant.com wrote:
In 9.0 (as in earlier versions) a former standby host has to do a full
checkpoint before becoming available as an independent database instance
in either switchover or
On Jun 23, 2011, at 12:52 AM, Alvaro Herrera wrote:
Excerpts from Joshua D. Drake's message of mié jun 22 15:37:17 -0400 2011:
Per:
http://archives.postgresql.org/pgsql-hackers/2010-11/msg02043.php
It seems we did come up with a use case in the procpid discussion. The
ability to change
hello ...
2.4? we know that some versions of 2.4 cause problems due to broken
posix_fadvise. if i remember correctly we built some configure magic into
PostgreSQL to check for this bug. what does this check do?
many thanks,
hans
On Jun 15, 2011, at 6:12 PM, Merlin
price has a problem :(.
iphone can be a 20 cents bag or a sticker or a 900 euro thing signed by
whoever ...
so, words and the sort-number / price are not related in any way. price in
this case is no way to narrow down the problem (e.g. evaluate first or so).
many thanks,
hello ...
i have put some research into that some time ago and as far as i have seen
there is a 99% chance that no other database can do it the way we do it. it
seems nobody comes even close to it (especially not in the flexibility-arena).
oracle: disgusting workaround ...
hello all ...
given oleg's posting before i also wanted to fire up some KNN related question.
let us consider a simple example. i got some million lines and i want all rows
matching a tsquery sorted by price.
i did some tests:
test=# explain (analyze true, buffers true, costs true) SELECT id
hello ...
i got that one ...
idx_product_t_product_titleprice gist (to_tsvector('german'::regconfig,
title), int_price)
so, i have a combined index on text + number.
to me the plan seems fine ... it looks like a proper KNN traversal.
the difference between my plan and your plan seems to be
Still, having more data a user can probe would be nice.
I wonder why everyone avoids Microsoft's approach to the subject.
Apparently, they go in the 'auto-tune as much as possible' direction.
And in tests we did a while ago, involving asking a team from Microsoft and a
team from oracle to
On Feb 23, 2011, at 2:58 AM, Robert Haas wrote:
2011/2/22 PostgreSQL - Hans-Jürgen Schönig postg...@cybertec.at:
how does it work? we try to find suitable statistics for an arbitrary length
list of conditions so that the planner can use it directly rather than
multiplying all
Those are real problems, but I still want it. The last time I hit
this problem I spent two days redesigning my schema and adding
triggers all over the place to make things work. If I had been
dealing with a 30TB database instead of a 300MB database I would have
been royally up a creek.
cheapest and easiest solution if you run into this: add fake functions
which the planner cannot estimate properly.
use OR to artificially prop up estimates or use AND to artificially lower
them. there is actually no need to redesign the schema to get around it but
it is such an ugly
On Feb 23, 2011, at 3:46 PM, Robert Haas wrote:
On Wed, Feb 23, 2011 at 12:50 AM, Nathan Boley npbo...@gmail.com wrote:
Personally, I think the first thing we ought to do is add a real, bona
fide planner hint to override the selectivity calculation manually,
maybe something like this:
On Feb 24, 2011, at 2:09 AM, Josh Berkus wrote:
Personally, I think the first thing we ought to do is add a real, bona
fide planner hint to override the selectivity calculation manually,
maybe something like this:
WHERE (x < 5 AND y = 1) SELECTIVITY (0.1);
Then, having provided a method
hello everbody,
we have spent some time in finally attacking cross column correlation, as this
is an issue which has kept bugging us in a couple of applications for some years.
this is a WIP patch which can do:
special cross column correlation specific syntax:
I have toyed around with KNN a little and I am pretty impressed when it comes
to the results we have seen in the GIS world.
Given the infrastructure we have at the moment I wonder if KNN can help to
speedup queries like that:
SELECT ... WHERE fti_query ORDER BY numeric_col LIMIT x
The use case
hello all ...
it seems we have found a fairly nasty problem.
imagine a long transaction which piles up XX.XXX of locks (count on pg_locks)
inside the same transaction by doing some tasty savepoints, with hold cursors
and so on.
in this case we see that a normal count issued in a second database
On Oct 4, 2010, at 1:23 PM, Heikki Linnakangas wrote:
On 04.10.2010 14:02, Hans-Jürgen Schönig wrote:
it seems we have found a fairly nasty problem.
imagine a long transaction which piles up XX.XXX of locks (count on
pg_locks) inside the same transaction by doing some tasty savepoints
On Sep 28, 2010, at 10:15 AM, Markus Wanner wrote:
Hi,
On 09/28/2010 07:24 AM, Li Jie wrote:
I'm interested in this parallel project,
http://wiki.postgresql.org/wiki/Parallel_Query_Execution
But I can't find any discussion and current progress in the website, it
seems to stop for nearly
On Sep 23, 2010, at 3:29 PM, Robert Haas wrote:
On Tue, Sep 21, 2010 at 12:29 AM, David Fetter da...@fetter.org wrote:
On Mon, Sep 20, 2010 at 10:57:00PM -0400, Robert Haas wrote:
2010/9/3 Hans-Jürgen Schönig h...@cybertec.at:
On Sep 2, 2010, at 1:20 AM, Robert Haas wrote:
I agree. Explicit
, Robert Haas wrote:
On Tue, Sep 7, 2010 at 2:14 PM, Boszormenyi Zoltan z...@cybertec.at wrote:
Hi,
Robert Haas írta:
2010/9/3 PostgreSQL - Hans-Jürgen Schönig postg...@cybertec.at:
i tried this one with 5000 unindexed tables (just one col):
test=# \timing
Timing is on.
test=# prepare x
for this function call goes down to
basically zero.
i hope this is something which is useful to some folks out there.
many thanks,
hans
canon-pathkeys-as-rbtree-3-ctxdiff.patch
On Sep 8, 2010, at 4:18 PM, Stephen Frost wrote:
* Hans-Jürgen Schönig
On Sep 8, 2010, at 4:57 PM, Stephen Frost wrote:
* Robert Haas (robertmh...@gmail.com) wrote:
Neat. Have you checked what effect this has on memory consumption?
Also, don't forget to add it to
https://commitfest.postgresql.org/action/commitfest_view/open
Would be good to have the patch
hello everybody,
we came across an issue which turned out to be more serious than previously
expected.
imagine a system with, say, 1000 partitions (heavily indexed) or so. the time
taken by the planner is already fairly heavy in this case.
i tried this one with 5000 unindexed tables (just one
On Sep 3, 2010, at 2:04 PM, Stephen Frost wrote:
* PostgreSQL - Hans-Jürgen Schönig (postg...@cybertec.at) wrote:
did anybody think of a solution to this problem?
or more precisely: can there be a solution to this problem?
Please post to the correct list (-performance) and provide
On Sep 2, 2010, at 1:20 AM, Robert Haas wrote:
On Sep 1, 2010, at 10:21 AM, Greg Stark gsst...@mit.edu wrote:
For what it's worth I disagree with Tom. I think this is a situation
where we need *both* types of solution. Ideally we will be able to use
a plain Append node for cases where we
On Sep 3, 2010, at 4:40 PM, Tom Lane wrote:
PostgreSQL - Hans-Jürgen Schönig postg...@cybertec.at
writes:
imagine a system with, say, 1000 partitions (heavily indexed) or so. the
time taken by the planner is already fairly heavy in this case.
As the fine manual
On Sep 1, 2010, at 4:10 PM, Tom Lane wrote:
Boszormenyi Zoltan z...@cybertec.at writes:
we are experimenting with modifying table partitioning
so the ORDER BY clause can be pushed down to
child nodes on the grounds that:
This is really premature, and anything you do along those lines now
hello tom,
yeah, we have followed quite a lot of discussion as well ...
and yes, no patches.
as far as this problem is concerned: we are working on a patch and did some
prototyping inside the planner already (attached).
the code we have is pretty limited atm (such as checking for a sort clause
hello ...
here is the ultimate revelation of planner hints in postgres ...
let us praise oleg and teodor for solving a practical problem for practical
people ...
http://www.sai.msu.su/~megera/wiki/plantuner
try this one ...
it is excellent and definitely helpful for many many people out there.
hello all ...
i am bugged with a small issue which is basically like this ...
test=# create table t_test as select x, x % 5 as y from generate_series(1,
100) AS x;
SELECT
test=# create index idx_a on t_test (x) ;
CREATE INDEX
test=# ANALYZE ;
ANALYZE
test=# explain analyze select * from
as tom pointed out - this is not possible.
there is no limit 20 in my case - i just used it to indicate that limiting does
not make the index scan possible which it does in some other cases.
the partial sort thing simon pointed out is what is needed at this point.
many thanks,
to solve it no matter what it takes.
many thanks,
hans
On Jul 26, 2010, at 1:14 AM, Robert Haas wrote:
On Sun, Jul 25, 2010 at 6:40 PM, Greg Stark gsst...@mit.edu wrote:
2010/7/25 Robert Haas robertmh...@gmail.com:
2010/7/25 PostgreSQL - Hans-Jürgen Schönig postg
On Jul 25, 2010, at 11:56 AM, Martijn van Oosterhout wrote:
On Fri, Jul 23, 2010 at 10:04:00PM +0200, Hans-Jürgen Schönig wrote:
create table foo ( x date );
create table foo_2010 () INHERITS (foo)
create table foo_2009 () INHERITS (foo)
create table foo_2008 () INHERITS
hello everybody,
i have just come across some issue which has been bugging me for a while.
consider:
SELECT * FROM foo ORDER BY bar;
if we have an index on bar, we can nicely optimize away the sort step by
consulting the index - a btree will return sorted output.
under normal
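The point about btrees can be demonstrated with a toy ordered index (plain Python, just to illustrate the property the planner exploits): keys kept in order at insert time mean an in-order scan already yields sorted output, so no separate sort step is needed.

```python
import bisect

rows = [42, 7, 19, 3, 25]

# toy "btree": keep keys ordered as they are inserted
index = []
for key in rows:
    bisect.insort(index, key)

# an in-order index scan is already sorted -- the sort step disappears
print(index)  # [3, 7, 19, 25, 42]
assert index == sorted(rows)
```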
On Jul 15, 2010, at 6:43 PM, Magnus Hagander wrote:
On Thu, Jul 15, 2010 at 18:35, Simon Riggs si...@2ndquadrant.com wrote:
On Thu, 2010-07-15 at 17:38 +0200, Magnus Hagander wrote:
Is there an actual common use-case for having these commands available
for *non-psql* interfaces?
There
hello ...
a view is already nice but i think it is still too narrow.
the problem is: you don't want a view for every potential join.
in addition to that - ideally there is not much left of a view when it comes to
checking for costs.
so, i think, this is not the kind of approach leading to total
On Jul 15, 2010, at 5:20 PM, Simon Riggs wrote:
On Thu, 2010-07-15 at 11:05 -0400, Tom Lane wrote:
Simon Riggs si...@2ndquadrant.com writes:
The biggest turn off that most people experience when using PostgreSQL
is that psql does not support memorable commands.
I would like to implement
On Jul 15, 2010, at 6:20 PM, Thom Brown wrote:
On 15 July 2010 17:16, Marc G. Fournier scra...@hub.org wrote:
On Thu, 15 Jul 2010, Thom Brown wrote:
On 15 July 2010 17:07, Marc G. Fournier scra...@hub.org wrote:
On Thu, 15 Jul 2010, Thom Brown wrote:
If it's only a psql problem, why
hello everybody,
we are currently facing some serious issues with cross column correlation.
consider: 10% of all people have breast cancer. we have 2 genders (50:50).
if i select all the men with breast cancer, i will get basically nobody - the
planner will overestimate the output.
this is the
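The over-estimate can be reproduced with one line of arithmetic: under the planner's independence assumption the two selectivities are simply multiplied (the "actual" figure below is just an assumed stand-in for "basically nobody").

```python
total_rows = 1_000_000

sel_cancer = 0.10   # 10% of all people have breast cancer
sel_male   = 0.50   # 50% of all people are men

# independence assumption: P(male AND cancer) = P(male) * P(cancer)
estimated_rows = total_rows * sel_cancer * sel_male   # roughly 50,000
actual_rows = 0                                       # assumed: basically nobody

print(estimated_rows)  # wildly overestimated versus reality
```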
On Jul 14, 2010, at 12:40 PM, Heikki Linnakangas wrote:
On 14/07/10 13:12, PostgreSQL - Hans-Jürgen Schönig wrote:
hello everybody,
we are currently facing some serious issues with cross correlation issue.
consider: 10% of all people have breast cancer. we have 2 genders (50:50).
if i
.
many thanks,
hans
On Jul 14, 2010, at 4:01 PM, Tom Lane wrote:
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
On 14/07/10 13:12, PostgreSQL - Hans-Jürgen Schönig wrote:
maybe somehow like this ...
ALTER TABLE x SET CORRELATION STATISTICS FOR (id = id2
hello ...
look at the syntax i posted in more detail:
ALTER TABLE x SET CORRELATION STATISTICS FOR (x.id = y.id AND x.id2 =
y.id2)
it says X and Y ...
the selectivity of joins is what i am most interested in. cross correlation of
columns within the same table is just a byproduct.
On Jun 24, 2010, at 2:13 PM, Michael Meskes wrote:
I think, yes, it does make sense. Because we are talking
about porting a whole lot of COBOL applications.
COBOL???
yes, COBOL :).
it is much more common than people think.
it is not the first COBOL request for PostgreSQL hitting my desk.
hello ...
i have come across an interesting corner case this morning and i am not sure
if it is worth treating this as a bug or just bad luck.
imagine creating a directory along with a tablespace ...
hans-jurgen-schonigs-macbook:html hs$ mkdir /tmp/x
hans-jurgen-schonigs-macbook:html hs$ psql
On Nov 30, 2009, at 10:32 AM, Stefan Kaltenbrunner wrote:
Andres Freund wrote:
On Monday 30 November 2009 03:57:11 Itagaki Takahiro wrote:
Boszormenyi Zoltan z...@cybertec.at wrote:
we tried to discuss on a lower level what should be needed
for a partial replication based on streaming
Just a side note: in addition to its use for partial replication, this
might have potential for performance-prioritizing databases or
tablespaces.
hello ...
this is an absolutely non-starter. the WAL is designed to be hyper
ordered and hyper critical. once you fuck up order you will
On Nov 13, 2009, at 8:06 AM, Michael Meskes wrote:
On Thu, Nov 12, 2009 at 03:07:27PM -0500, Robert Haas wrote:
If you want to submit patches in a series like this one, they need
to be
considered standalone, I think. The Linux kernel devs work
differently
than us here.
Zoltan broke
hello tom ...
the reason for SELECT FOR UPDATE is very simple:
this is the typical lock obtained by basically every business
application if written properly (updating a product, whatever).
the problem with NOWAIT basically is that if a small transaction holds
a lock for a subsecond, you
Joshua Tolley wrote:
Way back in this thread[1] one of the arguments against allowing
some version of CREATE SYNONYM was that we couldn't create a synonym for
an object in a remote database. Will the SQL/MED work make this sort of
thing a possibility? I realize since it's not standard anyway,
why not just stream it in via set-returning functions and make sure
that we can mark a set returning function as STREAMABLE or so (to
prevent joins, whatever).
it is the easiest way to get it right and it helps in many other cases.
i think that the storage manager is definitely the wrong
hi ...
i think the easiest way to do this is to simply add a mechanism to
functions which allows a function to stream data through.
it would basically mean losing join support as you cannot read data
again in a way which is good enough for joining with the
function providing
Bruce Momjian wrote:
Greg Stark wrote:
I couldn't get async I/O to work on Linux. That is it worked but
performed the same as reading one block at a time. On solaris the
situation is reversed.
In what way is fadvise a kludge?
I think he is saying AIO gives us more flexibility,
On Sep 22, 2008, at 9:46 AM, Simon Riggs wrote:
On Sun, 2008-09-21 at 12:05 -0400, Tom Lane wrote:
... and it goes on to point out how to force immediate space
reclamation
if you need that. These statements apply independently of whether
any
particular value is toasted or not.
The
On Sep 22, 2008, at 12:02 PM, Simon Riggs wrote:
On Mon, 2008-09-22 at 04:57 -0400, Greg Smith wrote:
-As Greg Stark suggested, the larger the spindle count the larger the
speedup, and the larger the prefetch size that might make sense. His
suggestion to model the user GUC as
*snip*
Judging from that, the toasted table
cleanup may be part of ALTER TABLE DROP COLUMN.
That would only help if you were dropping the last potentially-toastable
column of a table. And implementing it would require introducing weird
corner cases into the tuple toaster, because it
this entire thing is not about cartesian products at all.
it is about kicking out expensive queries before they even start to
eat up tons of CPU.
imagine a user asking for "give me all phone calls in the US within the
past 10 years". you could kill the guy instantly because you know that
hello ...
I still support it. Regrettably, many SQL developers introduce product
joins and other unintentional errors. Why let problem queries through?
i think the killer is that we don't have to wait until the query dies
with a statement_timeout.
it is way more elegant to kill things
hello ...
i picked up csaba nagy's idea and implemented a very simple yet very
useful extension.
i introduced a GUC called statement_cost_limit which can be used to
error out if a statement is expected to be too expensive.
the advantage over statement_timeout is that we are actually able to
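A sketch of the check in Python (hypothetical names; the real patch hooks into the planner/executor): compare the planner's cost estimate against the GUC before execution and error out instead of letting the query run at all.

```python
class StatementTooExpensive(Exception):
    pass

def check_cost_limit(estimated_cost, statement_cost_limit):
    """Mimics the proposed GUC: 0 disables the check; otherwise error
    out before execution when the planner's estimate exceeds the limit."""
    if statement_cost_limit > 0 and estimated_cost > statement_cost_limit:
        raise StatementTooExpensive(
            f"estimated cost {estimated_cost} exceeds "
            f"statement_cost_limit {statement_cost_limit}")

check_cost_limit(1000.0, 0)        # disabled -> always passes
check_cost_limit(1000.0, 5000.0)   # under the limit -> passes
try:
    check_cost_limit(1_000_000.0, 5000.0)
except StatementTooExpensive as e:
    print("rejected:", e)
```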
On Aug 2, 2008, at 8:38 PM, Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
Hans-Jürgen Schönig wrote:
i introduced a GUC called statement_cost_limit which can be used to
error out if a statement is expected to be too expensive.
You clearly have far more faith in the cost
good morning,
I got a bug report for the following Unicode character in PostgreSQL
8.1.4: 0xedaeb8
ERROR: invalid byte sequence for encoding UTF8: 0xedaeb8
This one seemed to work properly in PostgreSQL 8.0.3.
I think the following code in PostgreSQL 8.1.4 has a bug in it.
File:
Tom Lane wrote:
Andrew Dunstan [EMAIL PROTECTED] writes:
I'd like to know some settings that we can use that will get Tru64
cleanly through the buildfarm set. If no one offers any, I propose that
we revert the getaddrinfo() test in configure and use our own on Tru64
until they do.
I have
Jonah H. Harris wrote:
This email is a preliminary design for the implementation of synonyms in
PostgreSQL. Comments and suggestions are welcomed.
BACKGROUND
Synonyms are database objects which can be used in place of their
referenced object in SELECT, INSERT, UPDATE, and DELETE SQL
i was just wondering about the status of hannu's concurrent vacuum patch.
are there any plans to integrate this or are there still improvements
which have to be made?
many thanks,
hans-juergen schoenig
i think if the protocol is enhanced again we should also consider adding
protocol level support for RESET CONNECTION.
i have sent this patch some time ago but i think it is not worth doing
the remaining protocol level changes (sql level support is finished) if
this is the only change on the
i had to deal with oracle in the past couple of days (*mega sigh*)
i have seen a very interesting feature which would make sense for
PostgreSQL users.
currently we have:
test=# \h PREPARE TRANSACTION
Command: PREPARE TRANSACTION
Description: prepare the current transaction for two-phase
Luke Lonergan wrote:
Dan,
On 9/29/05 3:23 PM, Daniel Duvall [EMAIL PROTECTED] wrote:
What about clustered filesystems? At first blush I would think the
overhead of something like GFS might kill performance. Could one
potentially achieve a fail-over config using multiple nodes with GFS,
hackers,
currently we have to hack tons of export scripts for various customers.
the problem is: if tables can be exported straightforwardly, COPY will
give you all you need, but when data has to be transformed while
exporting, things start becoming a bit more complex. usually people want
to have
Rod Taylor wrote:
the problem is: COPY cannot write data returned by a SELECT statement to a
file. our idea is to implement precisely that.
example:
COPY TO file_name USING some_select_statement;
I have run into plenty of cases where I wanted to dump part of a
structure and this could be used
PROTECTED]
[mailto:[EMAIL PROTECTED] on behalf of Hans-Jürgen Schönig
Sent: Wednesday, 21 September 2005 15:30
To: pgsql-hackers@postgresql.org; [EMAIL PROTECTED]
Subject: [HACKERS] feature proposal ...
hackers,
currently we have to hack tons of export scripts for various customers.
the problem
Joshua D. Drake wrote:
Hans-Jürgen Schönig wrote:
no because a view is not a heap ...
Why not use a function with a temporary table?
That way you can pass a table parameter that
is the temporary table with a select statement
that you can populate the temp table with.
Sincerely,
Joshua D
I was wondering whether it is possible to teach the planner to handle
DISTINCT in a more efficient way:
em=# explain select distinct lastname from import.testtest;
QUERY PLAN
in the past we have faced a couple of problems with corrupted system
tables. this seems to be a version independent problem which occurs on
the hackers list from time to time.
i have checked a broken file and i have seen that the corrupted page has
actually been zeroed out.
my question is: are there