I have actually been working on the task discussed in this thread (most
relevant parts quoted below) for a while now, and hope to have something
concrete that you can use by the end of this summer.
My in-development Muldis D language is homoiconic as a core feature; its
source code is data to i
On Sat, May 11, 2013 at 07:22:55PM -0400, Robert Haas wrote:
> On Sat, May 11, 2013 at 12:27 PM, Tom Lane wrote:
> > By the time you've got an expression tree, the problem is mostly solved,
> > at least so far as parser extension is concerned.
>
> Right.
>
> > More years ago than I care to admit
On Sunday, May 12, 2013 4:53 AM Robert Haas wrote:
> On Sat, May 11, 2013 at 12:27 PM, Tom Lane wrote:
> > By the time you've got an expression tree, the problem is mostly solved,
> > at least so far as parser extension is concerned.
>
> Right.
>
> > More years ago than I care to admit, I work
On Sun, May 12, 2013 at 1:18 PM, Jim Nasby wrote:
> FWIW, I've wanted the ability to plug into the parser not for an extension,
> but so that I could programmatically enforce certain coding conventions.
Depending on the exact requirements, that probably wouldn't be too
difficult. It'd likely entail
On 5/11/13 11:27 AM, Tom Lane wrote:
David Fetter writes:
On Sat, May 11, 2013 at 11:17:03AM -0400, Robert Haas wrote:
Some kind of extendable parser would be awesome. It would need to tie
into the rewriter also.
No, I don't have a clue what the design looks like.
That's a direction sever
On Sat, May 11, 2013 at 12:27 PM, Tom Lane wrote:
> By the time you've got an expression tree, the problem is mostly solved,
> at least so far as parser extension is concerned.
Right.
> More years ago than I care to admit, I worked on systems that had
> run-time-extensible parsers at Hewlett-Pac
David Fetter writes:
> On Sat, May 11, 2013 at 11:17:03AM -0400, Robert Haas wrote:
>> Some kind of extendable parser would be awesome. It would need to tie
>> into the rewriter also.
>>
>> No, I don't have a clue what the design looks like.
> That's a direction several of the proprietary RDBMS
On Sat, May 11, 2013 at 11:17:03AM -0400, Robert Haas wrote:
> On Thu, May 9, 2013 at 7:36 AM, Michael Paquier
> wrote:
>
> >> Some of this is getting solved by making PostgreSQL more pluggable in
> >> ways that isolate the proprietary stuff, i.e. make people not have to
> >> touch the PostgreSQL
On Thu, May 9, 2013 at 7:36 AM, Michael Paquier
wrote:
>> Some of this is getting solved by making PostgreSQL more pluggable in
>> ways that isolate the proprietary stuff, i.e. make people not have to
>> touch the PostgreSQL core code much, if at all, in order to provide
>> whatever special featu
On Thu, May 9, 2013 at 9:12 AM, David Fetter wrote:
> On Wed, May 08, 2013 at 06:08:28PM -0500, Jim Nasby wrote:
> > I believe that makes it significantly harder for them to actually
> > contribute code back that doesn't give them a business advantage, as
> > well as making it harder to justify h
On Wed, May 08, 2013 at 06:08:28PM -0500, Jim Nasby wrote:
> On 5/1/13 7:36 PM, Robert Haas wrote:
> >On Mon, Apr 29, 2013 at 4:33 PM, Jim Nasby wrote:
> >>>On 4/28/13 7:50 AM, Craig Ringer wrote:
> >
> >I find it frustrating that I've never seen an @paraccel email address
> >here
> >
On 5/1/13 7:36 PM, Robert Haas wrote:
On Mon, Apr 29, 2013 at 4:33 PM, Jim Nasby wrote:
>On 4/28/13 7:50 AM, Craig Ringer wrote:
>>
>>I find it frustrating that I've never seen an @paraccel email address here
>>and that few of the other vendors of highly customised Pg offshoots are
>>contribut
Robert Haas writes:
> ... There are in fact a
> pretty large number of companies - EnterpriseDB, obviously, but there
> are many, many others - that are choosing to build businesses around
> PostgreSQL precisely because it *isn't* GPL. Personally, I think
> that's a good thing for our community i
On 05/02/2013 08:36 AM, Robert Haas wrote:
> I've talked with a bunch of other companies over the last few years
> who also chose PostgreSQL for licensing reasons. I'm pretty happy
> about that.
I think it's a pretty good thing too, personally ... but I do wish
they'd contribute a bit more to th
> I find it frustrating that I've never seen an @paraccel email address
> here and that few of the other vendors of highly customised Pg offshoots
> are contributing back. It's almost enough to make me like the GPL.
Well, Paraccel never ended up contributing any code, but in years back
(2006-2008
On Mon, Apr 29, 2013 at 4:33 PM, Jim Nasby wrote:
> On 4/28/13 7:50 AM, Craig Ringer wrote:
>>
>> I find it frustrating that I've never seen an @paraccel email address here
>> and that few of the other vendors of highly customised Pg offshoots are
>> contributing back. It's almost enough to make m
On 4/28/13 7:50 AM, Craig Ringer wrote:
I find it frustrating that I've never seen an @paraccel email address here and
that few of the other vendors of highly customised Pg offshoots are
contributing back. It's almost enough to make me like the GPL.
FWIW, I think there's a pretty large barrie
On 04/25/2013 10:55 PM, Andrew Dunstan wrote:
>
> It's an Amazon product based on release 8.0, but with many many
> features removed (e.g. Indexes!)
More specifically, it's a hacked-up, column-store-ized Pg for OLAP and
analytics work. As I understand it, Amazon didn't develop it themselves;
they bo
On 04/25/2013 07:42 AM, Tom Lane wrote:
Christopher Manning writes:
Fabrízio and Tom,
I know that you can use --variable="FETCH_COUNT=1" from the
psql command line, but internally that uses a CURSOR to batch the rows and
[Redshift doesn't support CURSOR](
https://forums.aws.amazon.com/thr
On 04/25/2013 10:42 AM, Tom Lane wrote:
Christopher Manning writes:
Fabrízio and Tom,
I know that you can use --variable="FETCH_COUNT=1" from the
psql command line, but internally that uses a CURSOR to batch the rows and
[Redshift doesn't support CURSOR](
https://forums.aws.amazon.com/thr
Christopher Manning writes:
> Fabrízio and Tom,
> I know that you can use --variable="FETCH_COUNT=1" from the
> psql command line, but internally that uses a CURSOR to batch the rows and
> [Redshift doesn't support CURSOR](
> https://forums.aws.amazon.com/thread.jspa?threadID=122664&tstart=0)
Fabrízio and Tom,
I know that you can use --variable="FETCH_COUNT=1" from the
psql command line, but internally that uses a CURSOR to batch the rows and
[Redshift doesn't support CURSOR](
https://forums.aws.amazon.com/thread.jspa?threadID=122664&tstart=0) so it's
not an option when using psql
On Tue, Apr 23, 2013 at 1:05 PM, Tom Lane wrote:
>
> Isn't there already a way to set FETCH_COUNT from the command line?
> (ie, I think there's a generic variable-assignment facility that could
> do this)
>
Christopher,
Tom is right... from the psql [1] command line we can do that:
$ bin/psql -
Christopher Manning writes:
> I'm proposing to add a --single-row option to psql that would allow the
> result rows of a query to be streamed to a file without collecting them in
> memory first.
Isn't there already a way to set FETCH_COUNT from the command line?
(ie, I think there's a generic var
Hello
It is redundant to the current FETCH_COUNT implementation, so I don't see
the sense in using them together. Maybe we could drop FETCH_COUNT and
replace it with --single-row mode, which would probably simplify the code.
Regards
Pavel
2013/4/23 Christopher Manning
> psql currently collects the query result ro
psql currently collects the query result rows in memory before writing them
to a file, which can cause out-of-memory problems for large results in
low-memory environments like EC2. I can't use COPY TO STDOUT or FETCH_COUNT
since I'm using Redshift and it doesn't support [writing to STDOUT](
http://doc
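For libpq clients, the row-at-a-time behavior the proposed --single-row
option describes is already exposed as single-row mode: each row arrives in
its own PGresult instead of the whole result set being buffered in memory.
A minimal sketch, assuming a reachable server whose connection parameters
come from the environment and a hypothetical table t:

```c
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* Connection parameters (PGHOST, PGDATABASE, ...) from environment. */
    PGconn *conn = PQconnectdb("");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Send the query without waiting for the full result set. */
    if (!PQsendQuery(conn, "SELECT * FROM t"))
    {
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Must be called after PQsendQuery and before the first PQgetResult:
     * ask libpq to hand back one row per PGresult instead of
     * accumulating the entire result set in memory. */
    PQsetSingleRowMode(conn);

    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL)
    {
        /* Each data row arrives as PGRES_SINGLE_TUPLE; the final
         * PGRES_TUPLES_OK result carries zero rows. */
        if (PQresultStatus(res) == PGRES_SINGLE_TUPLE && PQnfields(res) > 0)
            printf("%s\n", PQgetvalue(res, 0, 0));  /* stream row out */
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}
```

This is roughly what a psql --single-row option would do internally, and it
avoids the server-side cursor that FETCH_COUNT relies on (the part Redshift
doesn't support).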