On Jan 8, 2011, at 1:59 PM, Tom Lane wrote:
Hrm, the queries I wrote for this sort of thing use intarray:
I'm going to work on contrib/intarray first (before tsearch etc)
so that you can do whatever testing you want sooner.
No, of course not.
One of the things that first got me annoyed
On Jan 4, 2011, at 3:18 PM, Josh Berkus wrote:
Actually, there's been a *lot* of complaining about the GIN issues.
It's just that most of that complaining doesn't reach -hackers.
The common pattern I've seen in our practice and on IRC is:
1) user has GiST indexes
2) user tries converting
On Jan 7, 2011, at 4:19 PM, Tom Lane wrote:
Well, actually, I just committed it. If you want to test, feel free.
Note that right now only the anyarray @> and <@ operators are genuinely
fixed ... I plan to hack on tsearch and contrib pretty soon though.
Hrm, the queries I wrote for this sort of
On Jan 5, 2011, at 10:05 AM, Robert Haas wrote:
There's no consensus to publish a backend \i-like function. So there's
no support for the upgrade script organization you're promoting. Unless
the consensus changes again (but a commit has been done).
My understanding of the consensus is that
On Jan 4, 2011, at 12:46 AM, Dimitri Fontaine wrote:
David E. Wheeler da...@kineticode.com writes:
Just so long as you're aware that you might get more challenges on this
going forward.
Sure, thanks for the reminder. That said I also remember the reaction
when I used to scan the SHARE
On Jan 4, 2011, at 11:48 AM, Dimitri Fontaine wrote:
As Tom pointed out, you can do the same with naming conventions by having
scripts \i each other as appropriate.
This is a deprecated idea, though. We're talking about the
pg_execute_from_file() patch that has been applied, but without
On Jan 4, 2011, at 12:05 PM, Dimitri Fontaine wrote:
David E. Wheeler da...@kineticode.com writes:
* Prefer convention over configuration
The previous idea about the convention does not sit well with the very
recent proposal of ALTER EXTENSION ... UPGRADE TO VERSION ..., because
it would
On Dec 29, 2010, at 2:01 PM, Dimitri Fontaine wrote:
# lo
comment = 'managing Large Objects'
version = '9.1devel'
relocatable = true
upgrade_from_null = 'null = lo.upgrade.sql'
Here, any property that begins with 'upgrade_from_' is considered as an
upgrade setup and the
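The naming convention described above (every `upgrade_from_*` key in the control file names an upgrade rule) can be sketched as a few lines of Python; the helper name and dict representation here are mine, for illustration only:

```python
def upgrade_rules(control: dict) -> dict:
    # Collect every property whose name begins with 'upgrade_from_',
    # mapping the "from" version to its upgrade-script setting.
    prefix = "upgrade_from_"
    return {k[len(prefix):]: v
            for k, v in control.items()
            if k.startswith(prefix)}
```

For the `lo` control file above, this would yield `{"null": "null = lo.upgrade.sql"}`.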
On Jan 1, 2011, at 2:30 PM, Dimitri Fontaine wrote:
To support that is quite simple in fact, as the following commands will
do the trick:
CREATE WRAPPER EXTENSION ...;        -- don't run the script
ALTER OBJECT ... SET EXTENSION ...;  -- that's in the upgrade script
ALTER EXTENSION ...
On Jan 3, 2011, at 11:42 AM, Tom Lane wrote:
It is, but I don't see any alternative. As Dimitri said, the .so will
typically be installed by a packaging system, so we don't have any
opportunity to run SQL code beforehand. In any case ...
The new .so should not be installed until the
On Jan 3, 2011, at 11:49 AM, Dimitri Fontaine wrote:
David E. Wheeler da...@kineticode.com writes:
I rather doubt that WRAPPER will be accepted as a reserved word in the
grammar.
It's already in the grammar, and I didn't change its level.
Okay.
dim=# create wrapper extension lo;
CREATE
On Jan 3, 2011, at 11:51 AM, Tom Lane wrote:
1. Doesn't work if you're upgrading an installation that has more than
one database using the extension. There's only one library directory.
2. Not possible from a permissions standpoint. Even if you think it'd
be smart to have the postgres
On Jan 3, 2011, at 11:54 AM, Dimitri Fontaine wrote:
That's what I understood your original UPGRADE from NULL being. Did I
misread you?
Are the docs about the feature, available in HTML at my git repository
so that you don't have to read them in SGML, really *that* bad?
On Jan 3, 2011, at 11:46 AM, Dimitri Fontaine wrote:
Not what I have understood.
http://archives.postgresql.org/pgsql-hackers/2010-12/msg01014.php
http://archives.postgresql.org/pgsql-hackers/2010-12/msg01045.php
As there was no answer, I took it to mean that it was OK to
proceed.
On Jan 3, 2011, at 12:23 PM, Dimitri Fontaine wrote:
David E. Wheeler da...@kineticode.com writes:
The fact that the last two messages in the thread say something else
does not mean that they represent the consensus.
Yeah, but as I'm the one writing the code, I gave myself more than one
On Dec 31, 2010, at 5:00 AM, Joel Jacobson wrote:
Happy new year fellow pgsql-hackers!
This is the first alpha release of a new hopefully quite interesting little
tool, named snapshot.
Feedback welcomed.
This looks awesome, Joel! One question: Why the dependence on pg_crypto? If
it's
On Dec 31, 2010, at 10:15 AM, Joel Jacobson wrote:
2010/12/31 David E. Wheeler da...@kineticode.com
This looks awesome, Joel! One question: Why the dependence on pg_crypto? If
it's just for SHA1 support, and you're just using it to create hashes of
function bodies, I suspect that you
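If the pgcrypto dependency really is only for SHA-1 digests of function bodies, a client-side tool could compute the digest itself. A minimal sketch (the `body_hash` helper name is mine, not part of the tool):

```python
import hashlib

def body_hash(body: str) -> str:
    # SHA-1 hex digest of a function body, computed client-side
    # rather than via pgcrypto's digest() in the database.
    return hashlib.sha1(body.encode("utf-8")).hexdigest()
```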
On Dec 29, 2010, at 12:00 PM, Bruce Momjian wrote:
Don't people normally define the version number in the Makefile and pass
the version string into the C code and perhaps a psql variable?
There is no standard pattern AFAIK. A best practice would be welcome here.
David
--
Sent via
On Dec 29, 2010, at 12:23 PM, Tom Lane wrote:
We had a long discussion upthread of what version numbers to keep where.
IMHO the Makefile is about the *least* useful place to put a version
number; the more so if you want more than one. What we seem to need is
a version number in the .sql file
On Dec 29, 2010, at 1:27 PM, Robert Haas wrote:
I think there are really two tasks here:
1. Identify whether a newer set of SQL definitions than the one
installed is available. If so, the extension is a candidate for an
upgrade.
2. Identify whether the installed version of the SQL
On Dec 21, 2010, at 8:19 PM, Alex Hunsaker wrote:
And here is v3, fixes the above and also makes sure to properly
encode/decode SPI arguments. Tested on a latin1 database with latin1
columns and utf8 with utf8 columns. Also passes make installcheck (of
course) and changes one or two things
On Dec 20, 2010, at 11:53 AM, Kenneth Marshall wrote:
Here is an interesting description of some of the gotchas:
http://en.wikipedia.org/wiki/Windows-1252
FWIW, those are gotchas translating between Windows 1252 and Latin-1. Windows
1252's nerbles translate to UTF-8 just fine.
David
--
On Dec 19, 2010, at 12:20 AM, Alex Hunsaker wrote:
I would argue that it should output the same as the first example. That is,
PL/Perl should have decoded the latin-1 before passing the text to the Perl
function.
Yeah, I don't think you will find anyone who disagrees :) PL/TCL and
On Dec 17, 2010, at 10:46 PM, Alex Hunsaker wrote:
But that's a separate issue from the, erm, inconsistency with which PL/Perl
treats encoding and decoding of its inputs and outputs.
Yay! So I think we can finally agree that for Oleg's original test
case postgres was getting right. I hope
On Dec 17, 2010, at 9:32 PM, David Christensen wrote:
+1 on the original sentiment, but only for the case that we're dealing with
data that is passed in/out as arguments. In the case that the
server_encoding is UTF-8, this is as trivial as a few macros on the
underlying SVs for text-like
On Dec 18, 2010, at 7:04 PM, Robert Haas wrote:
- Did we decide to ditch the encoding parameter for extension scripts
and mandate UTF-8?
+1
It was certainly suggested. I think it's a good idea, at least with a first cut.
Best,
David
On Dec 17, 2010, at 9:31 AM, Tom Lane wrote:
Well, we did beat up Pavel over trying to shoehorn this facility into
the existing FOR syntax, so I can hardly blame him for thinking this
way. The question is whether we're willing to assume that FOREACH will
be limited to iterating over arrays,
On Dec 16, 2010, at 8:39 PM, Alex Hunsaker wrote:
No, URI::Escape is fine. The issue is that if you don't decode text to
Perl's internal form, it assumes that it's Latin-1.
So... you are saying \xc3\xa9 eq \xe9 or chr(233) ?
Not knowing what those mean, I'm not saying either one, to my
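For readers following along, the question above ("is \xc3\xa9 eq \xe9 or chr(233)?") is about which decoding applies to those bytes. A small illustration in Python (not the PL/Perl code itself, just the byte semantics):

```python
# b"\xc3\xa9" is "é" encoded as UTF-8; the single byte b"\xe9" is the
# same character in Latin-1, i.e. chr(233).
assert b"\xc3\xa9".decode("utf-8") == chr(233)
assert b"\xe9".decode("latin-1") == chr(233)

# Treating the UTF-8 bytes as Latin-1 instead yields two characters
# ("Ã©") -- the classic mojibake from skipping the decode step.
assert b"\xc3\xa9".decode("latin-1") == "\xc3\xa9"
```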
On Dec 17, 2010, at 5:04 PM, David E. Wheeler wrote:
see? Either uri_unescape() should be decoding that utf8() or you need
to do it *after* you call uri_unescape(). Hence the maybe it could be
considered a bug in uri_unescape().
Agreed.
On second thought, no. You can in fact encode
On Dec 16, 2010, at 8:19 AM, Tom Lane wrote:
I would think that we want to establish the same policy as we have for
dictionary files: they're assumed to be UTF-8. I don't believe there
should be an encoding option at all. If we didn't need one for
dictionary files, there is *surely* no
On Dec 16, 2010, at 6:39 PM, Alex Hunsaker wrote:
You might argue this is a bug with URI::Escape as I *think* all uri's
will be utf8 encoded. Anyway, I think postgres is doing the right
thing here.
No, URI::Escape is fine. The issue is that if you don't decode text to Perl's
internal form,
On Dec 13, 2010, at 11:37 PM, Jan Urbański wrote:
A function with a hstore parameter called x would get a Python dictionary as
its input. A function said to be returning a hstore could return a dictionary
and if it would have only string keys/values, it would be changed into a
hstore (and
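The dictionary-to-hstore direction can be sketched as a stringification helper. This is a hypothetical illustration (the function is mine, assuming hstore's standard quoting, where backslash and double quote are backslash-escaped):

```python
def dict_to_hstore(d: dict) -> str:
    # Render a dict of string keys/values as an hstore input literal.
    def quote(s: str) -> str:
        return '"' + s.replace('\\', '\\\\').replace('"', '\\"') + '"'
    return ', '.join(f'{quote(k)}=>{quote(v)}' for k, v in d.items())
```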
On Dec 14, 2010, at 9:31 AM, Robert Haas wrote:
Three different people developed patches, and I think we don't really
have unanimity on which way to go with it. I've kind of been thinking
we should wait for a broader consensus on which way to go with it...
There needs to be a discussion for
On Dec 14, 2010, at 11:52 AM, Jan Urbański wrote:
If the function is declared to return a hstore, it transforms the
dictionary to a hstore.
Oh, right. Duh.
Can you overload the stringification of a dictionary to return the hstore
string representation?
Mmm, interesting thought. I don't
On Dec 13, 2010, at 8:06 AM, Oleg Bartunov wrote:
My most serious pro about hstore in core is a better dump/restore
support. Also, since we have so much better hstore and people started
to use it in their projects, it'd be great to have built-in feature in
PostgreSQL, which mimic key-value
On Dec 13, 2010, at 12:04 PM, Dimitri Fontaine wrote:
So, who's in to finish up and commit this patch in this round? :)
I certainly am ready to support last minute changes, given some are
required. And if they are too big for the schedule, better shake the
patch out now rather than let it
On Dec 12, 2010, at 12:50 PM, Dimitri Fontaine wrote:
The only item with still some work to be done on it is the regression
tests support: we're not aiming at full coverage, as I understand it,
and installing contribs goes a long way towards testing extensions. Do
we want more? If so, please
On Dec 11, 2010, at 1:09 PM, David Fetter wrote:
Why is it in the makefile at all? If the makefile does need to know it,
why don't we have it scrape the number out of the control file? Or even
more to the point, since when do we need version numbers in extensions?
We *absolutely* need
On Dec 11, 2010, at 12:09 PM, Dimitri Fontaine wrote:
Yeah that works, as soon as VVV is the version we upgrade from.
That said, we need to find a way to lighten the process for extensions
where it's easy to have a single script to support upgrade from more
than once past release.
What
On Dec 11, 2010, at 2:27 PM, Andrew Dunstan wrote:
Yesterday I did a bit of work on allowing bytea values to be passed into and
out of plperl in binary format, effectively removing the need to escape and
de-escape them. (The work can be seen on the plperlargs branch of my
development repo
On Dec 11, 2010, at 5:58 PM, Andrew Dunstan wrote:
create function foo() ... with ( attribute [, ...] )
Currently allowed attributes are isStrict and isCachable. The mechanism is
effectively obsolete right now, but we could use it for what I have in mind
quite nicely.
Makes
On Dec 10, 2010, at 12:26 AM, Dimitri Fontaine wrote:
What if $extension.control exists? Is it a byproduct of the .in file
from previous `make` run or a user file? What if we have both the .in
and the make variable because people are confused? Or both the make
variables and a .control and not
On Dec 10, 2010, at 7:32 AM, Tom Lane wrote:
Are there any actual remaining use-cases for that sed step? It's
certainly vestigial as far as the contrib modules are concerned:
it would be simpler and more readable to replace MODULE_PATHNAME with
$libdir in the sources. Unless somebody can
On Dec 10, 2010, at 10:20 AM, Tom Lane wrote:
True. Consider a situation like an RPM upgrade: it's going to drop in a
new .so version, *and nothing else*. It's pure fantasy to imagine that
the RPM script is going to find all your databases and execute some SQL
commands against them. Since
On Dec 10, 2010, at 11:28 AM, Dimitri Fontaine wrote:
Well the Makefile support is just a facility to fill in the control file
automatically for you, on the grounds that you're probably already
maintaining your version number in the Makefile. Or that it's easy to
get it there, as in:
On Dec 10, 2010, at 11:47 AM, Tom Lane wrote:
Why would you choose to maintain it in the Makefile? In most cases
makefiles are the least likely thing to be changing during a minor
update. I would think that the right place for it is in the C code
(if we're trying to version .so files) or
On Dec 10, 2010, at 1:55 PM, Josh Berkus wrote:
I'd say that for anything in /contrib, it gets a new version with each
major version of postgresql, but not with each minor version. Thus,
say, dblink when 9.1.0 is release would be dblink 9.1-1. If in 9.1.4 we
fix a bug in dblink, then it
On Dec 10, 2010, at 1:50 PM, Dimitri Fontaine wrote:
(Actually, we could probably assume that the target version is
implicitly the current version, as identified from the control file,
and omit that from the script file names. That would avoid ambiguity
if version numbers can have more than
On Dec 10, 2010, at 2:32 PM, Dimitri Fontaine wrote:
David E. Wheeler da...@kineticode.com writes:
On Dec 10, 2010, at 1:50 PM, Dimitri Fontaine wrote:
I don't think we can safely design around one part version numbers here,
because I'm yet to see that happening in any extension I've had my
On Dec 10, 2010, at 2:40 PM, Tom Lane wrote:
Since you know the existing version number, you just run all that come
after. For example, if the current version is 1.12, then you know to run
foo-1.13.sql and foo-1.15.sql.
If we assume the target is the current version, then we only need the
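The selection rule Tom describes (run, in order, every upgrade script whose version is higher than the installed one) can be sketched as follows; the helper name and the dict representation are mine, and this assumes simple dotted numeric versions:

```python
def scripts_to_run(installed: str, scripts: dict) -> list:
    # scripts maps a version string to its upgrade-script file name,
    # e.g. {"1.13": "foo-1.13.sql", "1.15": "foo-1.15.sql"}.
    def vkey(v: str):
        # Compare versions numerically, part by part.
        return tuple(int(part) for part in v.split("."))
    return [name
            for ver, name in sorted(scripts.items(),
                                    key=lambda kv: vkey(kv[0]))
            if vkey(ver) > vkey(installed)]
```

With the installed version at 1.12 and scripts for 1.10, 1.13, and 1.15 available, this selects foo-1.13.sql and then foo-1.15.sql, matching the example above.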
On Dec 10, 2010, at 2:43 PM, Dimitri Fontaine wrote:
David E. Wheeler da...@kineticode.com writes:
You keep making extension authors have to do more work. I keep trying
to make it so they can do less. We want the barrier to be as low as
possible, which means a lot of DRY. Make it *possible
On Dec 10, 2010, at 2:55 PM, Dimitri Fontaine wrote:
Tom Lane t...@sss.pgh.pa.us writes:
If we assume the target is the current version, then we only need the
old-version number in the file name, so it doesn't matter how many
parts it has.
IIUC, that puts even more work on the shoulders of
On Dec 10, 2010, at 2:58 PM, Tom Lane wrote:
Maybe I misread David's meaning, but I thought he was saying that
there's no value in inventing all those control file entries in the
first place. Just hard-wire in ALTER EXTENSION UPGRADE the convention
that the name of an upgrade script to
On Dec 10, 2010, at 3:03 PM, Tom Lane wrote:
Yeah, it should be *to* 1.12. FWIW, this is how Bricolage upgrade scripts
are handled: version-string-named directories with the appropriate scripts
to upgrade *to* the named version number.
But you still have to know what you're upgrading
On Dec 10, 2010, at 4:15 PM, Tom Lane wrote:
Huh? It's in the pg_extension catalog.
How do you select which upgrade script to apply?
You run all those that contain version numbers higher than the
currently-installed one.
This of course assumes that one can correctly tell that one version
On Dec 10, 2010, at 4:34 PM, Cédric Villemain wrote:
Other possibilities include TRANSIENT, EPHEMERAL, TENUOUS.
EVANESCENT.
UNSAFE ?
LOLZ.
David
On Dec 10, 2010, at 4:39 PM, Tom Lane wrote:
This idea is not exactly free of disadvantages.
1. It assumes that the underlying .so supports not only the current
version, but every intermediate version of the SQL objects. For
example, say the previously installed version was 1.10, and we
On Dec 9, 2010, at 12:34 PM, Dimitri Fontaine wrote:
- add support for 'relocatable' boolean property in the control file,
as discussed on list
this controls what happens at create extension time, by doing a
relocation of the extension objects when the extension is relocatable
and
On Dec 8, 2010, at 8:13 AM, Oleg Bartunov wrote:
adding utf8::decode($_[0]) solves the problem:
knn=# CREATE OR REPLACE FUNCTION url_decode(Vkw varchar) RETURNS varchar AS $$
    use strict;
    use URI::Escape;
    utf8::decode($_[0]);
    return uri_unescape($_[0]);
$$ LANGUAGE plperlu;
On Dec 8, 2010, at 9:14 AM, Tim Bunce wrote:
Do you have any more improvements in the pipeline?
I'd like to add $arrayref = decode_array_literal('{2,3}') and
maybe $hashref = decode_hstore_literal('x=1, y=2').
I don't know how much work would be involved in those though.
Those would be
On Dec 8, 2010, at 12:42 PM, Dimitri Fontaine wrote:
Kineticode Billing da...@kineticode.com writes:
No, it's not. There are no unit tests at all. You can call the contrib
modules and their tests acceptance tests, but that's not the same
thing.
Ok, I need some more guidance here. All
On Dec 8, 2010, at 1:53 PM, Dimitri Fontaine wrote:
I don't see why. Most of them are dead simple and could easily be
Makefile variables.
And how does the information flow from the Makefile to the production
server, again?
`make` generates the file if it doesn't already exist.
David
On Dec 8, 2010, at 2:07 PM, Dimitri Fontaine wrote:
David E. Wheeler da...@kineticode.com writes:
And how does the information flow from the Makefile to the production
server, again?
`make` generates the file if it doesn't already exist.
Again, will retry when possible, but it has
On Dec 7, 2010, at 8:00 AM, Dimitri Fontaine wrote:
You write a very simple contrib module exclusively for testing. It
doesn't have to actually do anything other than create a couple of
objects. A domain perhaps.
What about unaccent? Or lo (1 domain, 2 functions)?
Sure. Doesn't have to
On Dec 7, 2010, at 1:17 PM, Dimitri Fontaine wrote:
Anyway, in a less blue-sky vein: we could fix some of these problems by
having an explicit relocatable-or-not property for extensions. If it is
relocatable, it's required to keep all its owned objects in the target
schema, and ALTER
On Dec 6, 2010, at 4:06 AM, Itagaki Takahiro wrote:
* contrib/citext raises an encoding error when COLLATE is specified
even if it is the same collation as the database default.
We might need some special treatment for C locale.
I've been wondering if this patch will support
On Dec 6, 2010, at 7:19 AM, Tom Lane wrote:
On the whole I'd prefer not to have any substitution functionality
hard-wired into pg_execute_file either, though I can see the argument
that it's necessary for practical use. Basically I'm concerned that
replace-equivalent behavior is not going to
On Dec 6, 2010, at 10:43 AM, Tom Lane wrote:
That's an interesting idea, but I'm not sure it's wise to design around
the assumption that we won't need substitutions ever. What I was
thinking was that we should try to limit knowledge of the substitution
behavior to the extension definition
On Dec 6, 2010, at 11:12 AM, Tom Lane wrote:
Well, I don't put any stock in the idea that it's important for existing
module .sql files to be usable as-is as extension definition files. If
it happens to fall out that way, fine, but we shouldn't give up anything
else to get that.
I agree,
On Dec 6, 2010, at 11:29 AM, Peter Eisentraut wrote:
This has been touched upon several times during the discussions on past
patches.
Essentially, the current patch only arranges that you can specify a sort
order for data. The system always breaks ties using a binary
comparison. This could
On Dec 6, 2010, at 11:36 AM, Tom Lane wrote:
There's a difference between whether an extension as such is considered
to belong to a schema and whether its contained objects do. We can't
really avoid the fact that functions, operators, etc must be assigned to
some particular schema.
Right,
the point or that the behaviour
ain't right.
Overall I think the docs could use a lot more fleshing out. Some of the stuff
in the wiki would help a lot. At some point, though, I'll work over the docs
myself and either send a patch to you or to the list (if it has been committed
to core).
David E
On Dec 3, 2010, at 8:38 AM, Dimitri Fontaine wrote:
David, and anyone feeling like reviewing or trying the patch, if you're
not already crawling into the v14 patch, you could as well begin with
this cleaner version — no behavior changes, some cleaner code, make
check passes, no bitrot against
Extensions Patch v15 Review
===========================
Submission review
-----------------
* Is the patch in context diff format?
Yes.
* Does it apply cleanly to the current git master?
Yes.
* Does it include reasonable tests, necessary doc patches, etc?
`make check` passes.
`make
On Nov 22, 2010, at 6:03 PM, Josh Berkus wrote:
... original patch. Sorry. Let's not fiddle with the names.
To be clear, as things stand now, the new command is:
ALTER TYPE name ADD VALUE new_enum_value [ { BEFORE | AFTER }
existing_enum_value ]
So while the term in the SQL statement
On Nov 23, 2010, at 11:48 AM, Robert Haas wrote:
So while the term in the SQL statement is VALUE, it's called a label in
the documentation. I think that's confusing. Does anyone else?
Yes. As between the two options, I favor changing the command. And
let's also paint it pink.
Would that
Patch attached.
Best,
David
enum_value.patch
On Nov 22, 2010, at 4:46 PM, Tom Lane wrote:
Oh my boots and buttons. I think we're splitting some very fine hairs
here. A few weeks back you were telling us that label wasn't a very good
word and shouldn't be sanctified in the SQL.
It isn't a very good word for the abstract value, IMO,
On Nov 20, 2010, at 9:31 PM, Terry Laurenzo wrote:
Assuming that the JSON datatype (at a minimum) normalizes text for storage,
then the text storage option accounts for about the most expensive path but
with none of the benefits of an internal binary form (smaller size, ability
to cheaply
On Nov 14, 2010, at 7:42 AM, Andrew Dunstan wrote:
It's fairly unscientific and inconclusive, and the discussion seems to have
died. I think since Tom and I did most of the work on this our voices should
count a little louder :-) , so I'm going to go with his suggestion of VALUE,
unless
On Nov 12, 2010, at 6:28 AM, Kevin Grittner wrote:
The CursesReporter moves up and down the lines to write results to
concurrently running tests. It's only useful on a terminal and
certainly gets confused by anything that moves the cursor (which a
plain 'print' certainly does).
Ah, well
On Nov 12, 2010, at 12:39 PM, Kevin Grittner wrote:
(2) If I wanted something to show in the TAP output, like the three
counts at the end of the test, what's the right way to do that? (I
suspect that printing with a '#' character at the front of the line
would do it, but that's probably not
On Nov 11, 2010, at 9:13 AM, Tom Lane wrote:
If we establish a precedent that WITHs can be thought of as executing
before the main command, we will eventually have to de-optimize existing
WITH behavior. Or else make up reasons why the inconsistency is okay in
some cases and not others, but
On Nov 11, 2010, at 7:02 AM, Itagaki Takahiro wrote:
MULTISET supports are more difficult. We have corresponding
type IDs for each array, but we might not want to add additional
IDs for multiset for each type. Any ideas for the issue?
Why not?
If we reuse type IDs of arrays for multisets,
On Nov 11, 2010, at 9:29 AM, Tom Lane wrote:
I can see that, but if one can't see the result of the write, or can't
determine whether or not it will be visible in advance, what's the point of
writeable CTEs?
The writeable CTE returns a RETURNING set, which you can and should use
in the
On Nov 11, 2010, at 10:05 AM, Tom Lane wrote:
So are you planning to implement multisets? It's a feature I'd love to see
What actual functionality does it buy? AFAICT from Itagaki-san's
description, it's an array only you ignore the specific element order.
So what? You can write functions
On Nov 11, 2010, at 10:19 AM, Darren Duncan wrote:
I think that it would be best to implement MULTISET in the same way that a
TABLE is implemented. Logically and structurally they are the same thing, but
that a MULTISET typically is used as a field value of a table row. Aka, a
table and a
On Nov 11, 2010, at 10:24 AM, Nicolas Barbier wrote:
Also, no dupes.
The multi in multiset indicates that duplicate elements are
explicitly allowed and tracked.
D'oh! Right.
D
On Nov 11, 2010, at 12:08 PM, Alvaro Herrera wrote:
That sounds like a composite type to me.
No, it's perpendicular in the sense that while a composite type allows
you to have different columns, this multiset thing lets you have rows
(I initially thought about them as sets of scalars, but
On Nov 10, 2010, at 5:31 AM, Kevin Grittner wrote:
For the Serializable Snapshot Isolation (SSI) patch I needed a test
suite which would handle concurrent sessions which interleaved
statements in predictable ways. I was told pgTAP wasn't a good
choice for that and went with Markus Wanner's
On Nov 10, 2010, at 9:48 AM, Andrew Dunstan wrote:
I don't know if dtester meets the other needs people have, or whether
this is a complementary approach, but it seemed worth mentioning.
Where is this available? Is it self-contained? And what does it require?
Python.
On Nov 10, 2010, at 2:15 PM, Andrew Dunstan wrote:
We already use some contrib stuff in the regression tests. (It really is time
we stopped calling it contrib.)
Call them core extensions. Works well considering Dimitri's work, which
explicitly makes them extensions. So maybe change the
On Nov 10, 2010, at 3:17 PM, Tom Lane wrote:
We've been calling it contrib for a dozen years, so that name is
pretty well baked in by now. IMO renaming it is pointless and will
accomplish little beyond creating confusion and making back-patches
harder.
*Shrug*. Just change the name in the
On Nov 9, 2010, at 12:12 AM, Dimitri Fontaine wrote:
WITH plan AS (
  EXPLAIN (format table) SELECT * FROM bar
)
INSERT INTO plan_audit
SELECT * FROM plan
WHERE actual_total_time > 12 * interval '100 ms';
Yeah, that would be nice, but my current implementation has a row for each
node, and a
On Nov 9, 2010, at 1:38 AM, Dmitriy Igrishin wrote:
* text[] = record_to_array(record)
* table(id, key, datatype, value) = record_to_table(record)
* text = record_get_field(record, text)
* record = record_set_field(record, text, anyelement)
??
I personally like it. But I propose to add as
On Nov 9, 2010, at 9:18 AM, Dmitriy Igrishin wrote:
Yep, but hstore is an additional module. Although, it's not a problem.
Yeah, but JSON will be in core, and with luck, before long, it will have the
same (or similar) capabilities.
Best,
David
On Nov 9, 2010, at 9:35 AM, Pavel Stehule wrote:
You realize you can pretty much do all this with hstore, right?
hstore has similar functionality, but is missing some details and adds a
lot of other functionality - it doesn't identify the type of a field.
Personally - it is nothing that I like - but
On Nov 9, 2010, at 9:34 AM, Tom Lane wrote:
I think there's a fairly fundamental contradiction involved here.
One of the basic design attributes of plpgsql is that it's strongly
typed. Sometimes that's a blessing, and sometimes it's not, but
it's a fact. There really isn't a good way to
On Nov 9, 2010, at 9:35 AM, Pavel Stehule wrote:
hstore has similar functionality, but is missing some details and adds a
lot of other functionality - it doesn't identify the type of a field.
Personally - it is nothing that I like - but can be better than
nothing.
What are you going to do with the
On Nov 9, 2010, at 12:18 PM, Peter Eisentraut wrote:
One possible way out is not to include these tests in the main test set
and instead require manual invocation.
Better ideas?
I've been talking to Nasby and Dunstan about adding a Test::More/pgTAP-based
test suite to the core. It wouldn't