On 26/12/10 20:57, Tom Lane wrote:
Jan Urbański wulc...@wulczer.org writes:
How about a format like this then:
# Comment
Section: Class 2F - SQL Routine Exception
macro_name sqlstate plpgsqlname is_error
That is: # and blank lines are comments, lines starting with
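A minimal sketch of a parser for such a format (the exact field layout and the `Section:` handling are assumptions about the proposal above, not the final errcodes file):

```python
def parse_errcodes(text):
    """Parse an errcodes-style listing: '#' and blank lines are comments,
    'Section:' lines name the current section, and every other line is a
    whitespace-separated record (macro name, sqlstate, plpgsql name, ...)."""
    entries = []
    section = None
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # comment or blank line
        if line.startswith("Section:"):
            section = line[len("Section:"):].strip()
            continue
        entries.append((section, line.split()))
    return entries
```

Each record keeps the section it appeared under, so a generator script could emit both the C macros and the plpgsql condition names from one source file.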
On 26/12/10 21:17, Tom Lane wrote:
Jan Urbański wulc...@wulczer.org writes:
Makes sense. Wait, no, errcodes.sgml includes the entries for success
and warnings, but the plpgsql conditions list does not. So we need a
separate column to differentiate.
OK. But not 0/1
On 23/12/10 12:16, Marti Raudsepp wrote:
On Thu, Dec 23, 2010 at 04:08, Jan Urbański wulc...@wulczer.org wrote:
* providing custom exceptions for SPI errors, so you can catch only
UniqueViolations and not have to muck around with SQLCODE
py-postgresql already has a mapping from error codes
Here's a patch implementing a validator function mentioned in
http://archives.postgresql.org/pgsql-hackers/2010-12/msg01991.php. It's
an incremental patch on top of the plpython-refactor patch sent earlier.
Git branch for this patch:
https://github.com/wulczer/postgres/tree/validator.
Cheers,
On 23/12/10 14:41, Jan Urbański wrote:
Here's a patch implementing a validator function mentioned in
http://archives.postgresql.org/pgsql-hackers/2010-12/msg01991.php. It's
an incremental patch on top of the plpython-refactor patch sent earlier.
Git branch for this patch:
https://github.com
Here's a patch implementing executing SPI in a subtransaction
mentioned in
http://archives.postgresql.org/pgsql-hackers/2010-12/msg01991.php. It's
an incremental patch on top of the plpython-refactor patch sent earlier.
Git branch for this patch:
Here's a patch implementing properly invalidating functions that have
composite type arguments after the type changes, as mentioned in
http://archives.postgresql.org/pgsql-hackers/2010-12/msg01991.php. It's
an incremental patch on top of the plpython-refactor patch sent earlier.
Git branch for
Here's a patch implementing traceback support for PL/Python mentioned in
http://archives.postgresql.org/pgsql-hackers/2010-12/msg01991.php. It's
an incremental patch on top of the plpython-refactor patch sent earlier.
Git branch for this patch:
https://github.com/wulczer/postgres/tree/tracebacks.
Here's a patch implementing table functions mentioned in
http://archives.postgresql.org/pgsql-hackers/2010-12/msg01991.php. It's
an incremental patch on top of the plpython-refactor patch sent earlier.
Git branch for this patch:
https://github.com/wulczer/postgres/tree/table-functions.
This
Here's a patch implementing custom parsers for data types mentioned in
http://archives.postgresql.org/pgsql-hackers/2010-12/msg01991.php. It's
an incremental patch on top of the plpython-refactor patch sent earlier.
Git branch for this patch:
Here's a patch implementing explicitly starting subtransactions mentioned in
http://archives.postgresql.org/pgsql-hackers/2010-12/msg01991.php. It's
an incremental patch on top of the spi-in-subxacts patch sent earlier.
Git branch for this patch:
Here's a patch implementing custom Python exceptions for SPI errors
mentioned in
http://archives.postgresql.org/pgsql-hackers/2010-12/msg01991.php. It's
an incremental patch on top of the explicit-subxacts patch sent earlier.
Git branch for this patch:
On 08/12/10 22:41, Peter Eisentraut wrote:
On tis, 2010-12-07 at 23:56 +0100, Jan Urbański wrote:
Peter suggested having a mail/patch per feature and the way I intend
to do that is instead of having a dozen branches, have one and after
I'm done rebase it interactively to produce incremental
Hi,
there seems to be a problem in the way we add exceptions to the plpy
module in PL/Python compiled with Python 3k.
Try this: DO $$ plpy.SPIError $$ language plpython3u;
I'm not a Python 3 expert, but I nicked some code from the Internet and
came up with this patch (passes regression tests on
On 18/12/10 18:56, Jan Urbański wrote:
I'm not a Python 3 expert, but I nicked some code from the Internet and
came up with this patch (passes regression tests on both Python 2 and 3).
I tried to be too cute with the regression test, it fails with Python
2.3.7 (the latest 2.3 release
On Wed, Dec 15, 2010 at 12:19:53AM +0100, Jan Urbański wrote:
Problem: what to do if hstore_plpython gets loaded, but hstore is not
yet loaded. hstore_plpython will want to DirectFunctionCall(hstore_in),
so loading hstore_plpython without loading hstore will result in an
ereport(ERROR
On 15/12/10 15:38, Tom Lane wrote:
Jan Urbański wulc...@wulczer.org writes:
OK, here's another master plan:
1) hstore_plpython, when loaded, looks for a type called hstore. If
you created a hstore type that does not come from hstore.so, and you
still load hstore_plpython,
On 15/12/10 16:11, Tom Lane wrote:
Robert Haas robertmh...@gmail.com writes:
I was asking what would satisfy you as regards a reliable way to
identify a type, not what you think we should do about this particular
proposal.
Okay: a preassigned OID is safe. I haven't seen any other safe
On 15/12/10 16:25, Dmitriy Igrishin wrote:
2010/12/15 Jan Urbański wulc...@wulczer.org
So how about just adding a text column to pg_type and an IDENTIFIER
keyword to CREATE TYPE. It's not guaranteed to be unique, but isn't it
pushing the argument to the extreme? Someone can change around bool
On 14/12/10 18:05, David E. Wheeler wrote:
On Dec 13, 2010, at 11:37 PM, Jan Urbański wrote:
A function said to be returning an hstore could return a dictionary, and if it
had only string keys/values, it would be converted into an hstore (and
if not, throw an ERROR).
It doesn't turn
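The dict-to-hstore direction could look roughly like this; a hypothetical sketch of the conversion rule described above (string-only keys/values, error otherwise), not the patch's actual code:

```python
def _quote(s):
    # hstore quoting: backslash-escape backslashes and double quotes
    return '"%s"' % s.replace('\\', '\\\\').replace('"', '\\"')

def dict_to_hstore(d):
    """Hypothetical sketch: serialize a dict to hstore input syntax,
    raising if a key or value is not a string, mirroring the
    'throw an ERROR' behaviour discussed above."""
    parts = []
    for k, v in d.items():
        if not isinstance(k, str) or not (v is None or isinstance(v, str)):
            raise ValueError("hstore keys/values must be strings")
        parts.append('%s=>%s' % (_quote(k), 'NULL' if v is None else _quote(v)))
    return ', '.join(parts)
```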
On 14/12/10 17:52, Tom Lane wrote:
Peter Eisentraut pete...@gmx.net writes:
On mån, 2010-12-13 at 08:50 +0100, Jan Urbański wrote:
It would be cool to be able to transparently use hstores as Python
dictionaries and vice versa. It would be easy enough with hstore as a
core type
Robert Haas robertmh...@gmail.com writes:
On Mon, Dec 13, 2010 at 9:16 PM, Tom Lane t...@sss.pgh.pa.us wrote:
It seems like what we need at this point is a detailed, non-arm-waving
design for what Jan would do in pl/python if hstore were in core. Then
we can look at it and see exactly what
It would be cool to be able to transparently use hstores as Python
dictionaries and vice versa. It would be easy enough with hstore as a
core type, but with hstore as an addon it's not that easy.
There was talk about including hstore in core, is there still chance for
that to happen in 9.1? I'd
On 08/12/10 18:45, Tom Lane wrote:
The real fix in my mind is to replace GEQO search with something
smarter. I wonder what happened to the SA patch that was reported
on at PGCon.
I got distracted with other things :( I'll try to plan the two queries
with SA and see what the results are. If
On 08/12/10 19:02, Jan Urbański wrote:
On 08/12/10 18:45, Tom Lane wrote:
The real fix in my mind is to replace GEQO search with something
smarter. I wonder what happened to the SA patch that was reported
on at PGCon.
I got distracted with other things :( I'll try to plan the two queries
On 08/12/10 21:18, Tom Lane wrote:
Jan Urbański wulc...@wulczer.org writes:
I'm pleasantly surprised that the SA code as it stands today, setting
the equilibrium factor to 8 and temperature reduction factor to 0.4, the
query takes 1799.662 ms in total.
Cool.
With the
Hi,
no, no patch(es) yet. I'm going through plpython.c trying as best I can
to improve things there. I'll have a patch (or patches) ready for the
January commitfest, but I thought I'd open up a discussion already to
spare me having to redo features because the way I attacked the problem
is a dead
On 07/12/10 21:33, Andres Freund wrote:
On Tuesday 07 December 2010 20:17:57 Jan Urbański wrote:
* execute SPI calls in a subtransaction, report errors back to Python
as exceptions that can be caught etc.
You're doing that unconditionally? I think the performance impact of this will
be too
On 07/12/10 23:00, Andrew Dunstan wrote:
On 12/07/2010 04:50 PM, Peter Eisentraut wrote:
The code is on https://github.com/wulczer/postgres, in the plpython
branch. I'll be rebasing it regularly, so don't be surprised by commit
hashes changing.
I think rebasing published repositories isn't
On 28/11/10 05:23, Andrew Dunstan wrote:
On 11/27/2010 10:28 PM, Tom Lane wrote:
Jan Urbański wulc...@wulczer.org writes:
I noticed that PL/Python uses a simple wrapper around malloc that does
ereport(FATAL) if malloc returns NULL. I find it a bit harsh, don't we
I noticed that PL/Python uses a simple wrapper around malloc that does
ereport(FATAL) if malloc returns NULL. I find it a bit harsh, don't we
normally do ERROR if we run out of memory?
And while looking at how PL/Perl does these things I find that one
failed malloc (in compile_plperl_function)
On 24/10/10 00:32, Jan Urbański wrote:
On 21/10/10 20:48, Alvaro Herrera wrote:
... and presumably somebody can fix the real bug that Jean-Baptiste hit,
too.
AFAICS the error comes from PLy_function_handler disconnecting from SPI
after calling into the Python code and then going ahead
On 04/11/10 20:43, Hannu Krosing wrote:
On Thu, 2010-11-04 at 11:07 -0600, Alex Hunsaker wrote:
On Thu, Nov 4, 2010 at 03:54, Hannu Krosing ha...@2ndquadrant.com wrote:
try:
plpy.execute("insert into foo values(1)")
except plpy.UniqueViolation, e:
plpy.notice(Ooops, you got yourself a
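Emulated with plain Python classes (the plpy exception hierarchy here is the proposed interface, assumed rather than final), the catch-only-UniqueViolation pattern looks like:

```python
# Stand-ins for the proposed plpy exception hierarchy; in the real thing
# these would live in the plpy module, one subclass per SQLSTATE.
class SPIError(Exception):
    pass

class UniqueViolation(SPIError):
    pass

def execute(query):
    # stand-in for plpy.execute(): pretend the INSERT hits a unique index
    raise UniqueViolation("duplicate key value violates unique constraint")

def try_insert():
    try:
        execute("insert into foo values(1)")
    except UniqueViolation:
        # only unique violations are handled; any other SPIError propagates
        return "caught unique violation"
```

The point of the feature is exactly this: catching one error class without inspecting SQLCODE by hand.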
On 04/11/10 14:09, Robert Haas wrote:
On Thu, Nov 4, 2010 at 6:05 AM, Itagaki Takahiro
itagaki.takah...@gmail.com wrote:
2010/11/4 KaiGai Kohei kai...@kaigai.gr.jp:
The attached patch is a contrib module to inject a few seconds
delay on authentication failed. It is also a proof of the concept
On 03/11/10 20:57, Alex Hunsaker wrote:
On Wed, Nov 3, 2010 at 10:28, Tom Lane t...@sss.pgh.pa.us wrote:
OK, applied.
Thanks!
I notice that plpython is also using the trigger relation's OID, but I
don't know that language well enough to tell whether it really needs to.
This thread was
On 25/10/10 03:59, Andrew Dunstan wrote:
On 10/24/2010 09:34 PM, Tom Lane wrote:
For both trigger and non-trigger functions, we compile this ahead of the
user-set function code:
our $_TD; local $_TD=shift;
Non-trigger functions get passed undef to correspond to this invisible
On 24/10/10 14:44, Sushant Sinha wrote:
I am using gin index on a tsvector and doing basic search. I see the
row-estimate of the planner to be horribly wrong. It is returning
row-estimate as 4843 for all queries whether it matches zero rows, a
medium number of rows (88,000) or a large number
I see that plperl uses a triple of (function oid, is_trigger flag, user
id) as a hash key for caching compiled functions. OTOH pltcl and plpgsql
both use (oid, trigger relation oid, user id). Is there any reason why
just using a bool as plperl does would be wrong?
I'm trying to write a validator
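The difference between the two key schemes can be sketched with a plain dict standing in for the backend hash table (names here are illustrative, not the actual plperl/pltcl code):

```python
# Plain dict standing in for the backend's hash table of compiled functions.
compiled_cache = {}

def get_compiled(fn_oid, trigger_rel_oid, user_id, compile_fn):
    # pltcl/plpgsql-style key: a function fired as a trigger on two different
    # relations gets two independent cache entries; plperl instead collapses
    # the relation oid into a single is_trigger boolean.
    key = (fn_oid, trigger_rel_oid, user_id)
    if key not in compiled_cache:
        compiled_cache[key] = compile_fn()
    return compiled_cache[key]
```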
On 21/10/10 20:48, Alvaro Herrera wrote:
Excerpts from Alvaro Herrera's message of jue oct 21 15:32:53 -0300 2010:
Excerpts from Jean-Baptiste Quenot's message of jue oct 21 09:20:16 -0300
2010:
I get this error when calling the function:
test=# select foobar();
ERROR: error fetching
On 24/07/10 15:20, Adriano Lange wrote:
Hi,
Hi!
I'd like to release the last version of my experimental join order
algorithm (TwoPO - Two Phase Optimization [1]):
http://git.c3sl.ufpr.br/gitweb?p=lbd/ljqo.git;a=summary
This algorithm is not production-ready, but an experimental set of
OK, here's a review, as much as I was able to do it without
understanding deeply how GIN works.
The patch is context, applies cleanly to HEAD, compiles without warnings
and passes regression tests.
Using the script from
http://archives.postgresql.org/pgsql-performance/2009-10/msg00393.php I
On 26/07/10 12:58, Oleg Bartunov wrote:
Jan,
On Sun, 25 Jul 2010, Jan Urbański wrote:
On 02/07/10 14:33, Teodor Sigaev wrote:
Patch implements much more accuracy estimation of cost for GIN index
scan than generic cost estimation function.
I was able to reproduce his issue, that is: select
On 23/07/10 20:55, Pavel Stehule wrote:
Hello
2010/7/23 Jan Urbański wulc...@wulczer.org:
On 21/07/10 14:43, Pavel Stehule wrote:
Hello
I am sending an actualised patch.
OK, thanks. This time the only thing I'm not happy about is the error
message from doing:
\ef func 0
\e /etc/passwd xxx
On 02/07/10 14:33, Teodor Sigaev wrote:
Patch implements much more accuracy estimation of cost for GIN index
scan than generic cost estimation function.
Hi,
I'm reviewing this patch, and to begin with it I tried to reproduce the
problem that originally came up on -performance in
On 21/07/10 14:43, Pavel Stehule wrote:
Hello
I am sending an actualised patch.
Hi, thanks!
I understand your criticism about line numbering. I have to
agree. With line numbering the patch is longer. I have one
significant reason for it.
CREATE OR REPLACE FUNCTION public.foo()
Hi,
here's a review of the \sf and \ef [num] patch from
http://archives.postgresql.org/message-id/162867791003290927y3ca44051p80e697bc6b19d...@mail.gmail.com
== Formatting ==
The patch has some small tabs/spaces and whitespace issues and it
applies with some offsets, I ran pgindent and
On 07/07/10 17:19, Peter Froehlich wrote:
On Wed, Jul 7, 2010 at 8:49 AM, Peter Eisentrautpete...@gmx.net wrote:
If you want to hack PL/Python, which is a Python interpreter embedded
into the PostgreSQL server, then this is the right place. (Yes, it's
mixed with all the rest.)
If you want to
Hi,
per $SUBJECT.
Cheers,
Jan
diff --git a/src/test/regress/pg_regress.c b/src/test/regress/pg_regress.c
index 9de4189..ebdf812 100644
--- a/src/test/regress/pg_regress.c
+++ b/src/test/regress/pg_regress.c
@@ -1870,6 +1870,7 @@ help(void)
printf(_((can be used
Jesper Krogh jes...@krogh.cc writes:
On 2010-05-29 15:56, Jan Urbański wrote:
AFAIK statistics for everything other than tsvectors are built based
on the values of whole rows.
Wouldn't it make sense to treat array types like the tsvectors?
Yeah, I have a personal TODO item to look
On 30/05/10 09:08, Jesper Krogh wrote:
On 2010-05-29 15:56, Jan Urbański wrote:
On 29/05/10 12:34, Jesper Krogh wrote:
I can fairly easy try out patches or do other kind of testing.
I'll try to come up with a patch for you to try and fiddle with these
values before Monday.
Here's
On 31/05/10 00:07, Tom Lane wrote:
Jan Urbański wulc...@wulczer.org writes:
I committed the attached revised version of the patch. Revisions are
mostly minor but I did make two substantive changes:
* The patch changed the target number of mcelems from 10 *
On 29/05/10 12:34, Jesper Krogh wrote:
On 2010-05-28 23:47, Jan Urbański wrote:
On 28/05/10 22:22, Tom Lane wrote:
Now I tried to substitute some numbers there, and so assuming the
English language has ~1e6 words H(W) is around 6.5. Let's assume the
statistics target to be 100.
I chose s
On 29/05/10 17:09, Tom Lane wrote:
Jan Urbański wulc...@wulczer.org writes:
Now I tried to substitute some numbers there, and so assuming the
English language has ~1e6 words H(W) is around 6.5. Let's assume the
statistics target to be 100.
I chose s as 1/(st + 10)*H(W)
On 29/05/10 17:34, Tom Lane wrote:
Jan Urbański wulc...@wulczer.org writes:
On 29/05/10 17:09, Tom Lane wrote:
There is definitely something wrong with your math there. It's not
possible for the 100'th most common word to have a frequency as high
as 0.06 --- the ones
On 28/05/10 04:47, Tom Lane wrote:
Jan Urbański wulc...@wulczer.org writes:
On 19/05/10 21:01, Jesper Krogh wrote:
In practice, just cranking the statistics estimate up high enough seems
to solve the problem, but doesn't
there seem to be something wrong in how the
On 28/05/10 04:47, Tom Lane wrote:
I re-scanned that paper and realized that there is indeed something
wrong with the way we are doing it. The paper says (last sentence in
the definition of the algorithm, section 4.2):
When a user requests a list of items with threshold s, we output
On 28/05/10 22:22, Tom Lane wrote:
The idea that I was toying with is to assume a Zipfian distribution of
the input (with some reasonable parameter), and use that to estimate
what the frequency of the K'th element will be, where K is the target
number of MCV entries or perhaps a bit more.
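Under a Zipfian assumption the K'th most common element has relative frequency f(K) = (1/K^s)/H(n, s); a small sketch of that estimate (parameter choices here are illustrative):

```python
from math import fsum

def zipf_kth_frequency(k, n, s=1.0):
    """Relative frequency of the k'th most common of n elements under a
    Zipfian distribution with exponent s: f(k) = (1/k**s) / H(n, s),
    where H(n, s) is the generalized harmonic number."""
    harmonic = fsum(1.0 / (i ** s) for i in range(1, n + 1))
    return (1.0 / (k ** s)) / harmonic
```

For a vocabulary of roughly 1e5 to 1e6 words this puts the 100'th most common word's frequency well under 1%, in line with the objection quoted earlier in the thread.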
On 19/05/10 21:01, Jesper Krogh wrote:
The document base is arount 350.000 documents and
I have set the statistics target on the tsvector column
to 1000 since 100 seems way off.
So for tsvectors the statistics target means more or less at any time
track at most 10 * target lexemes
On 24/03/10 21:06, Markus Wanner wrote:
Steve,
Steve Singer wrote:
$ git clone http://git.postgres-r.org/dtester
Initialized empty Git repository in
/local/home/ssinger/src/dtester/dtester/.git/
fatal: http://git.postgres-r.org/dtester/info/refs download error -
The requested URL returned
Arie Bikker wrote:
Hi all,
I've combined the review suggestions of Jan Urbański, Scott Bailey, and
others.
This was a lot harder than I had foreseen, and I took my time to do it
the right way (hope you agree!).
Hi,
I see the patch has been marked as Returned with Feedback on the 6th
Hi,
here's a review of the patch:
It applies with offsets, but worked fine for me. It works as advertised,
and I believe it is a solid step forward from the current situation.
As far as the coding goes, the PG_TRY/CATCH in xml_xmlpathobjtoxmltype
seems unnecessary in the XPATH_BOOLEAN branch,
Markus Wanner wrote:
I do want to expand the tests quite a bit -- do I work them all into
this same file, or how would I proceed? I think I'll need about 20
more tests, but I don't want to get in the way of your work on the
framework which runs them.
Well, first of all, another piece of
Andres Freund wrote:
On Wednesday 23 December 2009 02:23:55 Jan Urbański wrote:
Lastly, I'm lacking good testcases
If you want to see some queries which are rather hard to plan with random
search you can look at
http://archives.postgresql.org/message-id/200907091700.43411
Hi,
I've been playing with using a Simulated Annealing-type algorithm for
determining join ordering for relations. To get into context see
http://archives.postgresql.org/pgsql-hackers/2009-05/msg00098.php
(there's also a TODO in the wiki). There's a nice paper on that in
Hi,
ISTM that there's a superfluous curly brace in print_path (which only
gets compiled with -DOPTIMIZER_DEBUG).
Patch attached.
Jan
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index bfadcb0..6b2f86c 100644
---
Emmanuel Cecchet wrote:
Jan,
A couple of nitpicks first:
o) the route_tuple_to_child recurses to child tables of child tables,
which is undocumented and requires a check_stack_depth() call if it's
really desirable
The recursive call is as deep as the inheritance hierarchy. I am not
Emmanuel Cecchet wrote:
Hi Jan,
Here is the updated patch.
Note that the new code in trigger is a copy/paste of the before row
insert trigger code modified to use the pointers of the after row
trigger functions.
Hi,
ok, this version applied, compiled and ran the regression tests fine. I
Emmanuel Cecchet wrote:
Hi Jan,
Here is a new version of the patch with the following modifications:
- used oid list from pg_list.h
- properly handles triggers and generate an error if needed (updated doc
as well)
- added your test cases + extra bad trigger cases
Hi,
that got broken by
Hi,
I'll hopefully look at the next version of the patch tomorrow.
Emmanuel Cecchet wrote:
o test1.sql always segfaults for me, poking around with gdb suggests
it's a case of an uninitialised cache list (another reason to use the
builtin one).
I was never able to reproduce that
Emmanuel Cecchet wrote:
Hi all,
Hi!
partitioning option for COPY
Here's the review:
== Submission ==
The patch is contextual, applies cleanly to current HEAD, compiles fine.
The docs build cleanly.
== Docs ==
They're reasonably clear, although they still mention ERROR_LOGGING,
which was
Jan Urbański wrote:
Emmanuel Cecchet wrote:
Hi all,
Hi!
partitioning option for COPY
Attached are 3 files that demonstrate problems the patch has.
And the click-before-you-think prize winner is... me.
Test cases attached, see the comments for expected/actual results.
Jan
-- segfaults
Petr Jelinek wrote:
I made some more small adjustments - mainly renaming stuff after Tom's
comment on anonymous code blocks patch and removed one unused shared
dependency.
Hi,
the patch still has some issues with dependency handling:
postgres=# create role test;
CREATE ROLE
postgres=# create
OK, the previous problem went away, but I can still do something like that:
postgres=# create role test;
CREATE ROLE
postgres=# create role test2;
CREATE ROLE
postgres=# create database db;
CREATE DATABASE
postgres=# \c db
psql (8.5devel)
You are now connected to database db.
db=# alter default
Petr Jelinek wrote:
Jan Urbański wrote:
Dependencies suck, I know..
Cross-database dependencies do.
I had to make target role owner of the default acls which adds some side
effects like the fact that it blocks DROP ROLE so DROP OWNED BY has to
be used.
As for REASSIGN OWNED
Hi,
here's a (late, sorry about that) review:
== Trivia ==
Patch applies cleanly with a few 1 line offsets.
It's unified, not context, but that's trivial.
The patch adds some trailing whitespace, which is not good (git diff
shows it in red, it's easy to spot it). There's also one
hunk that's
Petr Jelinek wrote:
So I've been working on solution with which I am happy with (does not
mean anybody else will be also though).
Hi Petr,
I'm reviewing this patch and after reading it I have some comments.
Unfortunately, when I got to the compiling part, it turned out that the
attached patch
Patch -p1 attached.
Cheers,
Jan
diff --git a/doc/src/sgml/release-8.4.sgml b/doc/src/sgml/release-8.4.sgml
index 184bf47..50d9cb0 100644
--- a/doc/src/sgml/release-8.4.sgml
+++ b/doc/src/sgml/release-8.4.sgml
@@ -104,7 +104,7 @@
listitem
para
Properly show fractional seconds and
Tom Lane wrote:
Kevin Grittner kevin.gritt...@wicourts.gov writes:
I guess the question is whether there is anyone who has had a contrary
experience. (There must have been some benchmarks to justify adding
geqo at some point?)
The CVS history shows that geqo was integrated on 1997-02-19,
Alvaro Herrera wrote:
Andrew Dunstan wrote:
Alvaro Herrera wrote:
Hi,
I was having a look at this snippet:
http://wiki.postgresql.org/wiki/Google_Translate
and it turns out that it doesn't work if the result contains non-ASCII
chars. Does anybody know how to fix it?
alvherre=# select
Tom Lane wrote:
Tatsuo Ishii is...@postgresql.org writes:
I'm wondering if following behavior of PostgreSQL regarding lock
conflict is an expected one. Here's a scenario:
Session A:
BEGIN;
SELECT * FROM pg_class limit 1; -- acquires access share lock
Session B:
BEGIN;
Tom Lane wrote:
I started making the changes to increase the default and maximum stats
targets 10X, as I believe was agreed to in this thread:
http://archives.postgresql.org/pgsql-hackers/2008-12/msg00386.php
I came across this bit in ts_typanalyze.c:
/* We want statistic_target * 100
rahulg wrote:
I am facing problem in tracing in what events the selectivity
histogram in pg_statistic is stored/updated.
I went through the code in src/backend/commands/analyze.c and got to
see the code computing the histogram but when I tried to trace the
caller of analyze_rel or
[EMAIL PROTECTED] wrote:
Quoting Tom Lane [EMAIL PROTECTED]:
I wrote:
... One possibly
performance-relevant point is to use DatumGetTextPP for detoasting;
you've already paid the costs by using VARDATA_ANY etc, so you might
as well get the benefit.
Actually, wait a second. That code
Tom Lane wrote:
Jan Urbański [EMAIL PROTECTED] writes:
[EMAIL PROTECTED] wrote:
Well whaddya know. It turned out that my new company has a
'Fridays-are-for-any-opensource-hacking-you-like' policy, so I got a
full day to work on the patch.
Hm, does their name start with
Tom Lane wrote:
Jan Urbański [EMAIL PROTECTED] writes:
Simon Riggs wrote:
put it in a file called selfuncs_ts.c so it is similar to the existing
filename?
I followed the pattern of ts_parse.c, ts_utils.c and so on.
Also, I see geo_selfuncs.c. No big deal, though, I can
Simon Riggs wrote:
On Thu, 2008-08-14 at 22:27 +0200, Jan Urbański wrote:
Jan Urbański wrote:
+ * ts_selfuncs.c
Not sure why this is in its own file
I couldn't decide where to put it, so I came up with this.
put it in a file called selfuncs_ts.c so it is similar to the existing
Heikki Linnakangas wrote:
Jan Urbański wrote:
26763 3.5451 AllocSetCheck
Make sure you disable assertions before profiling.
Awww, darn. OK, here goes another set of results, without casserts this
time.
=== CVS HEAD ===
number of clients: 10
number of transactions per client: 10
Heikki Linnakangas wrote:
Jan Urbański wrote:
Not good... Shall I try sorting pg_statistics arrays on text values
instead of frequencies?
Yeah, I'd go with that. If you only do it for the new
STATISTIC_KIND_MCV_ELEMENT statistics, you shouldn't need to change any
other code.
OK, will do
Heikki Linnakangas wrote:
Jan Urbański wrote:
So right now the idea is to:
(1) pre-sort STATISTIC_KIND_MCELEM values
(2) build an array of pointers to detoasted values in tssel()
(3) use binary search when looking for MCELEMs during tsquery analysis
Sounds like a plan. In (2), it's even
Heikki Linnakangas wrote:
Jan Urbański wrote:
So right now the idea is to:
(1) pre-sort STATISTIC_KIND_MCELEM values
(2) build an array of pointers to detoasted values in tssel()
(3) use binary search when looking for MCELEMs during tsquery analysis
Sounds like a plan. In (2), it's even
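Steps (1) and (3) of the plan amount to sorting once and binary-searching thereafter; a sketch with Python's bisect standing in for the C implementation:

```python
from bisect import bisect_left

def build_mcelem_index(values):
    # (1) pre-sort the most-common-element values once, when stats are built
    return sorted(values)

def mcelem_contains(index, lexeme):
    # (3) binary search during tsquery selectivity estimation, instead of a
    # linear scan over all stored MCELEMs
    i = bisect_left(index, lexeme)
    return i < len(index) and index[i] == lexeme
```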
Jan Urbański wrote:
Heikki Linnakangas wrote:
Jan Urbański wrote:
So right now the idea is to:
(1) pre-sort STATISTIC_KIND_MCELEM values
(2) build an array of pointers to detoasted values in tssel()
(3) use binary search when looking for MCELEMs during tsquery analysis
Sounds like a plan
Alvaro Herrera wrote:
Jan Urbański wrote:
Heikki Linnakangas wrote:
Sounds like a plan. In (2), it's even better to detoast the values
lazily. For a typical one-word tsquery, the binary search will only
look at a small portion of the elements.
Hm, how can I do that? Toast is still a bit
Tim Hawes wrote:
Hello all,
I am trying to write an extension in C that returns a simple environment
variable. The code compiles without any complaint or warning, and it
loads fine into the database, however, when I run the function, I get
disconnected from the server.
Here is my C code:
Tim Hawes wrote:
@Jan:
It appears the cstring_to_text function is unique to the latest
PostgreSQL code. I do not have a def for that for PostgreSQL 8.2, and
Oh, I'm sorry, I forgot about that. cstring_to_text has been added only
recently (it's not even in 8.3, silly me).
Datum
Heikki Linnakangas wrote:
Jan Urbański wrote:
through it. The only tiny ugliness is that there's one function used
for qsort() and another for bsearch(), because I'm sorting an array of
texts (from pg_statistic) and I'm binary searching for a lexeme
(non-NULL terminated string with length
Peter Eisentraut wrote:
On Monday 11 August 2008 16:23:29 Jan Urbański wrote:
Often clients want their searches to be
accented-or-language-specific letters insensitive. So searching for
'łódź' returns 'lodz'. So the use case is there (in fact, the lack of
such facility made me consider
Andrew Dunstan wrote:
Pavel Stehule wrote:
One note - convert_to is correct. But we have to use to_ascii without
decode functions. It has the same behaviour - converting from bytea to text.
Text in incorrect encoding is de facto bytea. So the correct to_ascii
function prototypes are:
to_ascii(text)
Andrew Dunstan wrote:
Jan Urbański wrote:
Andrew Dunstan wrote:
Pavel Stehule wrote:
What you have not said is how you propose to convert UTF8 to ASCII.
Currently to_ascii() converts a small number of single byte charsets
to ASCII by folding the chars with high bits set, so what we get