Re: [HACKERS] Re: new set of psql patches for loading (saving) data from (to) text, binary files

2017-08-14 Thread Peter Eisentraut
On 3/18/17 12:51, Pavel Stehule wrote:
> I rewrote these patches - they allow binary export/import from psql and
> the code is very simple. The size of the patch is bigger due to the
> included 4KB binary file (8KB in hex format).

This patch needs (at least) a rebase for the upcoming commit fest.

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: new set of psql patches for loading (saving) data from (to) text, binary files

2017-04-06 Thread Pavel Stehule
2017-04-06 14:47 GMT+02:00 Stephen Frost :

> Greetings,
>
> * Pavel Stehule (pavel.steh...@gmail.com) wrote:
> > 2017-04-06 3:34 GMT+02:00 Stephen Frost :
> > > Having the template not require the row/column place-holders included
> > > strikes me as more likely to be confusing.  My initial thinking around
> > > this was that users who actually want independent files would simply
> > > issue independent queries, while users who want to take a bunch of int4
> > > columns and dump them into a single binary file would be able to do so
> > > easily.
> > >
> > > I'm not against adding the ability for a single query result to be saved
> > > into independent files, but it strikes me as feature creep on this basic
> > > capability.  Further, I don't see any particular reason why splitting up
> > > the output from a query into multiple files is only relevant for binary
> > > data.
> >
> > The files can simply be joined together outside psql.
>
> Just as multiple queries could be done to have the results put into
> independent files.
>
> > Personally I prefer the relation: single field, single file, in a special
> > \g command - because I can simply turn off all formatting and the result
> > should be correct every time.
>
> Not sure why you think there would be a formatting issue or why the
> result might not be 'correct'.
>
> > Stephen, do you have a use case for your request?
>
> The initial patch forced a single value result.  Including such a
> restriction doesn't seem necessary to me.  As for use-case, I've
> certainly written code to work with binary-result data from PG
> previously and it seems entirely reasonable that someone might wish to
> pull data into a file with psql and then process it.  I've been
> wondering if we should consider how binary-mode COPY works, but that
> format ends up being pretty inefficient due to the repeated 32-bit
> length value for every field.
>
> My initial reaction was primarily that I didn't see value in the
> somewhat arbitrary restriction being imposed on usage of this.
>

ok.

It is hard to design any solution, because there is no intersection on these
basic simple things.

Regards

Pavel


>
> Thanks!
>
> Stephen
>


Re: [HACKERS] Re: new set of psql patches for loading (saving) data from (to) text, binary files

2017-04-06 Thread Stephen Frost
Greetings,

* Pavel Stehule (pavel.steh...@gmail.com) wrote:
> 2017-04-06 3:34 GMT+02:00 Stephen Frost :
> > Having the template not require the row/column place-holders included
> > strikes me as more likely to be confusing.  My initial thinking around
> > this was that users who actually want independent files would simply
> > issue independent queries, while users who want to take a bunch of int4
> > columns and dump them into a single binary file would be able to do so
> > easily.
> >
> > I'm not against adding the ability for a single query result to be saved
> > into independent files, but it strikes me as feature creep on this basic
> > capability.  Further, I don't see any particular reason why splitting up
> > the output from a query into multiple files is only relevant for binary
> > data.
> 
> The files can simply be joined together outside psql.

Just as multiple queries could be done to have the results put into
independent files.

> Personally I prefer the relation: single field, single file, in a special
> \g command - because I can simply turn off all formatting and the result
> should be correct every time.

Not sure why you think there would be a formatting issue or why the
result might not be 'correct'.

> Stephen, do you have a use case for your request?

The initial patch forced a single value result.  Including such a
restriction doesn't seem necessary to me.  As for use-case, I've
certainly written code to work with binary-result data from PG
previously and it seems entirely reasonable that someone might wish to
pull data into a file with psql and then process it.  I've been
wondering if we should consider how binary-mode COPY works, but that
format ends up being pretty inefficient due to the repeated 32-bit
length value for every field.
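
For context on the overhead Stephen describes: in PostgreSQL's binary COPY format every tuple carries a 16-bit field count and every field a 32-bit length word. A small self-contained sketch (generic Python, not psql or patch code) makes the cost for int4 columns concrete:

```python
import struct

# Sketch of the per-tuple framing in PostgreSQL's binary COPY format:
# a 16-bit field count, then a 32-bit length word before every field.
def encode_row(values):
    out = struct.pack("!h", len(values))        # field count
    for v in values:
        payload = struct.pack("!i", v)          # int4 in network order
        out += struct.pack("!i", len(payload))  # 32-bit length prefix
        out += payload
    return out

row = encode_row([1, 2, 3, 4])
# 2 bytes count + 4 * (4-byte length + 4-byte int4) = 34 bytes of framing
# and data for only 16 bytes of actual values -- over 100% overhead.
print(len(row))  # 34
```

This is why a raw dump of int4 values (the use case discussed above) would be roughly half the size of the equivalent binary COPY stream.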

My initial reaction was primarily that I didn't see value in the
somewhat arbitrary restriction being imposed on usage of this.

Thanks!

Stephen




Re: [HACKERS] Re: new set of psql patches for loading (saving) data from (to) text, binary files

2017-04-05 Thread Pavel Stehule
2017-04-06 3:34 GMT+02:00 Stephen Frost :

> Andres,
>
> * Andres Freund (and...@anarazel.de) wrote:
> > On 2017-04-05 21:07:59 -0400, Stephen Frost wrote:
> > > * Andres Freund (and...@anarazel.de) wrote:
> > > > I don't yet have a good idea how to deal with moving individual cells
> > > > into files, so they can be loaded.  One approach would be to have
> > > > something like
> > > >
> > > > \storequeryresult filename_template.%row.%column
> > > >
> > > > which'd then print the current query buffer into the relevant file after
> > > > doing replacement on %row and %column.
> > >
> > > I don't actually agree that there's a problem having the output from a
> > query stored directly in binary form into a single file.  The above
> > > approach seems to imply that every binary result must go into an
> > > independent file, and perhaps that would be useful in some cases, but I
> > > don't see it as required.
> >
> > Well, it'd not be enforced - it'd depend on your template.  But for a
> > lot of types of files, it'd not make sense to store multiple
> > columns/rows in a file. Particularly for ones where printing them out to
> > files is actually meaningful (i.e. binary ones).
>
> Having the template not require the row/column place-holders included
> strikes me as more likely to be confusing.  My initial thinking around
> this was that users who actually want independent files would simply
> issue independent queries, while users who want to take a bunch of int4
> columns and dump them into a single binary file would be able to do so
> easily.
>
> I'm not against adding the ability for a single query result to be saved
> into independent files, but it strikes me as feature creep on this basic
> capability.  Further, I don't see any particular reason why splitting up
> the output from a query into multiple files is only relevant for binary
> data.
>

The files can simply be joined together outside psql.

Personally I prefer the relation: single field, single file, in a special
\g command - because I can simply turn off all formatting and the result
should be correct every time.

Stephen, do you have a use case for your request?

Regards

Pavel


> Thanks!
>
> Stephen
>


Re: [HACKERS] Re: new set of psql patches for loading (saving) data from (to) text, binary files

2017-04-05 Thread Stephen Frost
Andres,

* Andres Freund (and...@anarazel.de) wrote:
> On 2017-04-05 21:07:59 -0400, Stephen Frost wrote:
> > * Andres Freund (and...@anarazel.de) wrote:
> > > I don't yet have a good idea how to deal with moving individual cells
> > > into files, so they can be loaded.  One approach would be to have
> > > something like
> > > 
> > > \storequeryresult filename_template.%row.%column
> > > 
> > > which'd then print the current query buffer into the relevant file after
> > > doing replacement on %row and %column.
> > 
> > I don't actually agree that there's a problem having the output from a
> > query stored directly in binary form into a single file.  The above
> > approach seems to imply that every binary result must go into an
> > independent file, and perhaps that would be useful in some cases, but I
> > don't see it as required.
> 
> Well, it'd not be enforced - it'd depend on your template.  But for a
> lot of types of files, it'd not make sense to store multiple
> columns/rows in a file. Particularly for ones where printing them out to
> files is actually meaningful (i.e. binary ones).

Having the template not require the row/column place-holders included
strikes me as more likely to be confusing.  My initial thinking around
this was that users who actually want independent files would simply
issue independent queries, while users who want to take a bunch of int4
columns and dump them into a single binary file would be able to do so
easily.

I'm not against adding the ability for a single query result to be saved
into independent files, but it strikes me as feature creep on this basic
capability.  Further, I don't see any particular reason why splitting up
the output from a query into multiple files is only relevant for binary
data.

Thanks!

Stephen




Re: [HACKERS] Re: new set of psql patches for loading (saving) data from (to) text, binary files

2017-04-05 Thread Andres Freund
Hi,

On 2017-04-05 21:07:59 -0400, Stephen Frost wrote:
> * Andres Freund (and...@anarazel.de) wrote:
> > I don't like the API here much.  Loading requires knowledge of some
> > magic $1 value and allows only a single column, printing doesn't mean
> > much when there's multiple columns / rows.
> 
> > I think the loading side of things should be redesigned into a more
> > general facility for providing query parameters.  E.g. something like
> > \setparam $1 'whateva'
> > \setparamfromfile $2 'somefile'
> > \setparamfromprogram $3 cat /frakbar
> >
> > which then would get used in the next query sent to the server.  That'd
> > allow importing multiple columns, and it'd be useful for other purposes
> > than just loading binary data.
> 
> I tend to agree that the loading side should probably be thought through
> more.
> 
> > I don't yet have a good idea how to deal with moving individual cells
> > into files, so they can be loaded.  One approach would be to have
> > something like
> > 
> > \storequeryresult filename_template.%row.%column
> > 
> > which'd then print the current query buffer into the relevant file after
> > doing replacement on %row and %column.
> 
> I don't actually agree that there's a problem having the output from a
> query stored directly in binary form into a single file.  The above
> approach seems to imply that every binary result must go into an
> independent file, and perhaps that would be useful in some cases, but I
> don't see it as required.

Well, it'd not be enforced - it'd depend on your template.  But for a
lot of types of files, it'd not make sense to store multiple
columns/rows in a file. Particularly for ones where printing them out to
files is actually meaningful (i.e. binary ones).

- Andres




Re: [HACKERS] Re: new set of psql patches for loading (saving) data from (to) text, binary files

2017-04-05 Thread Stephen Frost
Andres,

* Andres Freund (and...@anarazel.de) wrote:
> I don't like the API here much.  Loading requires knowledge of some
> magic $1 value and allows only a single column, printing doesn't mean
> much when there's multiple columns / rows.

> I think the loading side of things should be redesigned into a more
> general facility for providing query parameters.  E.g. something like
> \setparam $1 'whateva'
> \setparamfromfile $2 'somefile'
> \setparamfromprogram $3 cat /frakbar
>
> which then would get used in the next query sent to the server.  That'd
> allow importing multiple columns, and it'd be useful for other purposes
> than just loading binary data.

I tend to agree that the loading side should probably be thought through
more.

> I don't yet have a good idea how to deal with moving individual cells
> into files, so they can be loaded.  One approach would be to have
> something like
> 
> \storequeryresult filename_template.%row.%column
> 
> which'd then print the current query buffer into the relevant file after
> doing replacement on %row and %column.

I don't actually agree that there's a problem having the output from a
query stored directly in binary form into a single file.  The above
approach seems to imply that every binary result must go into an
independent file, and perhaps that would be useful in some cases, but I
don't see it as required.

> I don't think we can find an API we agree upon in the next 48h...

Now that there's more than one opinion being voiced on the API, I tend
to agree with this.  Hopefully we can keep the discussion moving
forward, however, as I do see value in this capability in general.

Thanks!

Stephen




Re: [HACKERS] Re: new set of psql patches for loading (saving) data from (to) text, binary files

2017-04-05 Thread Andres Freund
On 2017-03-18 17:51:48 +0100, Pavel Stehule wrote:
> What is done:
> 
> create table foo(a bytea);
> 
> -- import
> insert into foo values($1)
> \gloadfrom ~/xxx.jpg bytea
> 
> -- export
> \pset format binary
> select a from foo
> \g ~/xxx2.jpg
> 
> tested on importing a 55MB binary file
> 
> Comments, notes?

I don't like the API here much.  Loading requires knowledge of some
magic $1 value and allows only a single column, printing doesn't mean
much when there's multiple columns / rows.

I think the loading side of things should be redesigned into a more
general facility for providing query parameters.  E.g. something like
\setparam $1 'whateva'
\setparamfromfile $2 'somefile'
\setparamfromprogram $3 cat /frakbar

which then would get used in the next query sent to the server.  That'd
allow importing multiple columns, and it'd be useful for other purposes
than just loading binary data.
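
Andres's \setparam family was only a proposal and was never implemented; as a rough sketch of the client-side bookkeeping it implies (all names here are hypothetical, not a real psql API):

```python
# Hypothetical sketch of a client-side parameter store for the proposed
# \setparam / \setparamfromfile commands; illustrative only.
params = {}

def setparam(name, value):
    # \setparam $1 'whateva'
    params[name] = value

def setparamfromfile(name, path):
    # \setparamfromfile $2 'somefile' -- the file content becomes the value
    with open(path, "rb") as f:
        params[name] = f.read()

def take_params(n):
    # Collect $1..$n in order for the next query (cf. how a libpq
    # PQexecParams call takes an ordered parameter array), then clear
    # the store, mirroring "used in the next query sent to the server".
    values = [params[f"${i}"] for i in range(1, n + 1)]
    params.clear()
    return values

setparam("$1", "whateva")
print(take_params(1))  # ['whateva']
```

The point of the design is that parameters accumulate across meta-commands and are consumed as a batch by the next query, which is what would allow importing multiple columns at once.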


I don't yet have a good idea how to deal with moving individual cells
into files, so they can be loaded.  One approach would be to have
something like

\storequeryresult filename_template.%row.%column

which'd then print the current query buffer into the relevant file after
doing replacement on %row and %column.
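
\storequeryresult was likewise never implemented; the %row/%column behavior it describes is plain template substitution, e.g. (illustrative Python, hypothetical function name):

```python
# Hypothetical sketch of the filename-template expansion the proposed
# \storequeryresult command would perform per result cell.
def expand_template(template, row, column):
    return template.replace("%row", str(row)).replace("%column", str(column))

# Cell at row 3, column 1 would land in its own file:
print(expand_template("filename_template.%row.%column", 3, 1))
# filename_template.3.1
```

A template without the place-holders would send every cell to the same file, which is the "not enforced" behavior Andres describes just below.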

I don't think we can find an API we agree upon in the next 48h...

- Andres




Re: [HACKERS] Re: new set of psql patches for loading (saving) data from (to) text, binary files

2017-04-05 Thread Andres Freund
Hi,

On 2017-03-18 17:51:48 +0100, Pavel Stehule wrote:
> What is done:
> 
> create table foo(a bytea);
> 
> -- import
> insert into foo values($1)
> \gloadfrom ~/xxx.jpg bytea
> 
> -- export
> \pset format binary
> select a from foo
> \g ~/xxx2.jpg
> 
> tested on importing a 55MB binary file
> 
> Comments, notes?
> 
> Available import formats are limited to text, bytea, xml - these formats
> are safe for receiving data via the type's recv function.

I don't think we have design agreement on this at this point.  Given the
upcoming code freeze, I think we'll have to hash this out during the
next development cycle.  Any counterarguments?

- Andres




[HACKERS] Re: new set of psql patches for loading (saving) data from (to) text, binary files

2017-03-18 Thread Pavel Stehule
Hi


> > > There's a whitespace-only hunk that shouldn't be included.
> > >
> > > I don't agree with the single-column/single-row restriction on these.  I
> > > can certainly see a case where someone might want to, say, dump out a
> > > bunch of binary integers into a file for later processing.
> > >
> > > The tab-completion for 'gstore' wasn't correct (you didn't include the
> > > double-backslash).  The patch also has conflicts against current master
> > > now.
> > >
> > > I guess my thinking about moving this forward would be to simplify it to
> > > just '\gb' which will pull the data from the server side in binary
> > > format and dump it out to the filename or command given.  If there's a
> > > new patch with those changes, I'll try to find time to look at it.
> >
> > ok I'll prepare patch
>
> Great, thanks!
>

I rewrote these patches - they allow binary export/import from psql and the
code is very simple. The size of the patch is bigger due to the included 4KB
binary file (8KB in hex format).
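
As a sanity check of the size arithmetic (a generic illustration, not code from the patch): hex encoding stores each byte as two ASCII characters, so a 4KB blob doubles to 8KB of text.

```python
# Hex encoding doubles the size, as noted for the embedded test file.
data = bytes(4096)               # a 4KB binary blob
encoded = data.hex()             # hex text form, as embedded in the patch
print(len(data), len(encoded))   # 4096 8192
```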

What is done:

create table foo(a bytea);

-- import
insert into foo values($1)
\gloadfrom ~/xxx.jpg bytea

-- export
\pset format binary
select a from foo
\g ~/xxx2.jpg

tested on importing a 55MB binary file

Comments, notes?

Available import formats are limited to text, bytea, xml - these formats
are safe for receiving data via the type's recv function.

Regards

Pavel



>
> Stephen
>
diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index 2a9c412020..f26d406f89 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -1945,6 +1945,38 @@ CREATE INDEX
 
 
   
+\gloadfrom filename  format  
+
+
+
+ Sends the current query input buffer to the server. The current
+ query should have one query parameter $1.
+ The content of the file filename will be used as
+ the value of this parameter.
+
+
+
+ When format is not specified, the data are
+ passed in text format. When format is specified, the data are
+ passed in binary format. The available formats are:
+ text, bytea
+ or xml. In the following example an XML document is
+ imported into the table my_table:
+
+=> CREATE TABLE my_table(id serial, doc xml);
+=> INSERT INTO my_table(doc) VALUES($1) RETURNING id
+-> \gloadfrom ~/Documents/data.xml xml
+ id 
+
+  1
+(1 row)
+
+
+
+  
+
+
+  
 \gset [ prefix ]
 
 
@@ -2366,8 +2398,8 @@ lo_import 152801
   aligned, wrapped,
   html, asciidoc,
   latex (uses tabular),
-  latex-longtable, or
-  troff-ms.
+  latex-longtable, troff-ms or
+  binary.
   Unique abbreviations are allowed.  (That would mean one letter
   is enough.)
   
@@ -2404,6 +2436,12 @@ lo_import 152801
   also requires the LaTeX
   longtable and booktabs packages.
   
+
+  
+  The binary format simply outputs the values in
+  binary format.
+  
+
   
   
 
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 4f4a0aa9bd..51ed3df58c 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -28,6 +28,7 @@
 #endif
 
 #include "catalog/pg_class.h"
+#include "catalog/pg_type.h"
 #include "portability/instr_time.h"
 
 #include "libpq-fe.h"
@@ -936,6 +937,50 @@ exec_command(const char *cmd,
 		status = PSQL_CMD_SEND;
 	}
 
+	/*
+	 * \gloadfrom filename -- send query and use content of file as parameter
+	 */
+	else if (strcmp(cmd, "gloadfrom") == 0)
+	{
+		char	   *fmt = NULL;
+		char	   *fname = psql_scan_slash_option(scan_state,
+   OT_FILEPIPE, NULL, false);
+
+		if (!fname)
+		{
+			psql_error("\\%s: missing required argument\n", cmd);
+			success = false;
+		}
+		else
+		{
+			/* try to get separate format arg */
+			fmt = psql_scan_slash_option(scan_state,
+		OT_NORMAL, NULL, true);
+
+			if (fmt)
+			{
+				if (strcmp(fmt, "text") == 0)
+					pset.gloadfrom_fmt = TEXTOID;
+				else if (strcmp(fmt, "bytea") == 0)
+					pset.gloadfrom_fmt = BYTEAOID;
+				else if (strcmp(fmt, "xml") == 0)
+					pset.gloadfrom_fmt = XMLOID;
+				else
+				{
+					psql_error("\\%s: only [text, bytea, xml] format can be specified\n", cmd);
+					success = false;
+				}
+			}
+			else
+				pset.gloadfrom_fmt = 0;		/* UNKNOWNOID */
+
+			expand_tilde(&fname);
+			pset.gloadfrom = pg_strdup(fname);
+		}
+		free(fname);
+		status = PSQL_CMD_SEND;
+	}
+
 	/* \gset [prefix] -- send query and store result into variables */
 	else if (strcmp(cmd, "gset") == 0)
 	{
@@ -2518,6 +2563,9 @@ _align2string(enum printFormat in)
 		case PRINT_TROFF_MS:
 			return "troff-ms";
 			break;
+		case PRINT_BINARY:
+			return "binary";
+			break;
 	}
 	return "unknown";
 }
@@ -2589,9 +2637,11 @@ do_pset(const char *param, const char *value, printQueryOpt *popt, bool quiet)
 			popt->topt.format = PRINT_LATEX_LONGTABLE

[HACKERS] Re: new set of psql patches for loading (saving) data from (to) text, binary files

2017-03-17 Thread Pavel Stehule
2017-03-16 22:01 GMT+01:00 Stephen Frost :

> Pavel,
>
> * Pavel Stehule (pavel.steh...@gmail.com) wrote:
> > 2017-03-15 17:21 GMT+01:00 Stephen Frost :
> > > I started looking through this to see if it might be ready to commit and
> > > I don't believe it is.  Below are my comments about the first patch, I
> > > didn't get to the point of looking at the others yet since this one had
> > > issues.
> > >
> > > * Pavel Stehule (pavel.steh...@gmail.com) wrote:
> > > > 2017-01-09 17:24 GMT+01:00 Jason O'Donnell :
> > > > > gstore/gbstore:
> > >
> > > I don't see the point to 'gstore'- how is that usefully different from
> > > just using '\g'?  Also, the comments around these are inconsistent, some
> > > say they can only be used with a filename, others say it could be a
> > > filename or a pipe+command.
> >
> > \gstore ensures the raw row data is dumped. It can be replaced by \g with
> > some other setting, but if the query is not unique, then the result can be
> > messy. That is not possible with \gbstore.
>
> I don't understand what you mean by "the result can be messy."  We have
> lots of options for controlling the output of the query and those can be
> used with \g just fine.  This seems like what you're doing is inventing
> something entirely new which is exactly the same as setting the right
> options which already exist and that seems odd to me.
>
> Is it any different from setting \a and \t and then calling \g?  If not,
> then I don't see why it would be useful to add.
>

I am searching for some comfortable way - I agree, it can be redundant with
already available functionality.


>
> > More interesting is \gbstore, which uses the binary API - it can be used
> > for bytea fields or for XML fields with implicitly correct encoding
> > conversion. \gbstore is not possible to replace with \g.
>
> Yes, having a way to get binary data out using psql and into a file is
> interesting and I agree that we should have that capability.
>
> Further, what I think we will definitely need is a way to get binary
> data out using psql at the command-line too.  We have the -A and -t
> switches which correspond to \a and \t, we should have something for
> this too.  Perhaps what that really calls for is a '\b' and a '-B'
> option to go with it which will flip psql into binary mode, similar to
> the other Formatting options.  I realize it might seem a bit
> counter-intuitive, but I can actually see use-cases for having binary
> data spit out to $PAGER (when you have a $PAGER that handles it
> cleanly, as less does, for example).
>

It is an interesting idea. I am not sure if it is more a formatting option or
a general psql option. But it can be interesting for other purposes too.

One idea for importing files to Postgres via psql:

we can introduce \gloadfrom, which can replace parameters with file contents -
and this statement can work in text or binary mode, controlled by the proposed
option.

some like

insert into foo values('Pavel','Stehule', $1) \gloadfrom ~/avatar.jpg
insert into doc(id, doc) values(default, $1) \gloadfrom ~/mydoc.xml

Regards

Pavel


>
> > > There's a whitespace-only hunk that shouldn't be included.
> > >
> > > I don't agree with the single-column/single-row restriction on these.  I
> > > can certainly see a case where someone might want to, say, dump out a
> > > bunch of binary integers into a file for later processing.
> > >
> > > The tab-completion for 'gstore' wasn't correct (you didn't include the
> > > double-backslash).  The patch also has conflicts against current master
> > > now.
> > >
> > > I guess my thinking about moving this forward would be to simplify it to
> > > just '\gb' which will pull the data from the server side in binary
> > > format and dump it out to the filename or command given.  If there's a
> > > new patch with those changes, I'll try to find time to look at it.
> >
> > ok I'll prepare patch
>
> Great, thanks!
>
> Stephen
>


[HACKERS] Re: new set of psql patches for loading (saving) data from (to) text, binary files

2017-03-16 Thread Stephen Frost
Pavel,

* Pavel Stehule (pavel.steh...@gmail.com) wrote:
> 2017-03-15 17:21 GMT+01:00 Stephen Frost :
> > I started looking through this to see if it might be ready to commit and
> > I don't believe it is.  Below are my comments about the first patch, I
> > didn't get to the point of looking at the others yet since this one had
> > issues.
> >
> > * Pavel Stehule (pavel.steh...@gmail.com) wrote:
> > > 2017-01-09 17:24 GMT+01:00 Jason O'Donnell :
> > > > gstore/gbstore:
> >
> > I don't see the point to 'gstore'- how is that usefully different from
> > just using '\g'?  Also, the comments around these are inconsistent, some
> > say they can only be used with a filename, others say it could be a
> > filename or a pipe+command.
> 
> \gstore ensures the raw row data is dumped. It can be replaced by \g with
> some other setting, but if the query is not unique, then the result can be
> messy. That is not possible with \gbstore.

I don't understand what you mean by "the result can be messy."  We have
lots of options for controlling the output of the query and those can be
used with \g just fine.  This seems like what you're doing is inventing
something entirely new which is exactly the same as setting the right
options which already exist and that seems odd to me.

Is it any different from setting \a and \t and then calling \g?  If not,
then I don't see why it would be useful to add.

> More interesting is \gbstore, which uses the binary API - it can be used
> for bytea fields or for XML fields with implicitly correct encoding
> conversion. \gbstore is not possible to replace with \g.

Yes, having a way to get binary data out using psql and into a file is
interesting and I agree that we should have that capability.

Further, what I think we will definitely need is a way to get binary
data out using psql at the command-line too.  We have the -A and -t
switches which correspond to \a and \t, we should have something for
this too.  Perhaps what that really calls for is a '\b' and a '-B'
option to go with it which will flip psql into binary mode, similar to
the other Formatting options.  I realize it might seem a bit
counter-intuitive, but I can actually see use-cases for having binary
data spit out to $PAGER (when you have a $PAGER that handles it
cleanly, as less does, for example).

> > There's a whitespace-only hunk that shouldn't be included.
> >
> > I don't agree with the single-column/single-row restriction on these.  I
> > can certainly see a case where someone might want to, say, dump out a
> > bunch of binary integers into a file for later processing.
> >
> > The tab-completion for 'gstore' wasn't correct (you didn't include the
> > double-backslash).  The patch also has conflicts against current master
> > now.
> >
> > I guess my thinking about moving this forward would be to simplify it to
> > just '\gb' which will pull the data from the server side in binary
> > format and dump it out to the filename or command given.  If there's a
> > new patch with those changes, I'll try to find time to look at it.
> 
> ok I'll prepare patch

Great, thanks!

Stephen




[HACKERS] Re: new set of psql patches for loading (saving) data from (to) text, binary files

2017-03-15 Thread Pavel Stehule
Hi

2017-03-15 17:21 GMT+01:00 Stephen Frost :

> Pavel,
>
> I started looking through this to see if it might be ready to commit and
> I don't believe it is.  Below are my comments about the first patch, I
> didn't get to the point of looking at the others yet since this one had
> issues.
>
> * Pavel Stehule (pavel.steh...@gmail.com) wrote:
> > 2017-01-09 17:24 GMT+01:00 Jason O'Donnell :
> > > gstore/gbstore:
>
> I don't see the point to 'gstore'- how is that usefully different from
> just using '\g'?  Also, the comments around these are inconsistent, some
> say they can only be used with a filename, others say it could be a
> filename or a pipe+command.
>

\gstore ensures the raw row data is dumped. It can be replaced by \g with
some other setting, but if the query is not unique, then the result can be
messy. That is not possible with \gbstore.

More interesting is \gbstore, which uses the binary API - it can be used
for bytea fields or for XML fields with implicitly correct encoding
conversion. \gbstore is not possible to replace with \g.


>
> There's a whitespace-only hunk that shouldn't be included.
>
> I don't agree with the single-column/single-row restriction on these.  I
> can certainly see a case where someone might want to, say, dump out a
> bunch of binary integers into a file for later processing.
>
> The tab-completion for 'gstore' wasn't correct (you didn't include the
> double-backslash).  The patch also has conflicts against current master
> now.
>
> I guess my thinking about moving this forward would be to simplify it to
> just '\gb' which will pull the data from the server side in binary
> format and dump it out to the filename or command given.  If there's a
> new patch with those changes, I'll try to find time to look at it.
>

ok I'll prepare patch


>
> I would recommend going through a detailed review of the other patches
> also before rebasing and re-submitting them also, in particular look to
> make sure that the comments are correct and consistent, that there are
> comments where there should be (generally speaking, whole functions
> should have at least some comments in them, not just the function header
> comment, etc).
>
> Lastly, I'd suggest creating a 'psql.source' file for the regression
> tests instead of just throwing things into 'misc.source'.  Seems like we
> should probably have more psql-related testing anyway and dumping
> everything into 'misc.source' really isn't a good idea.
>
> Thanks!
>
> Stephen
>


[HACKERS] Re: new set of psql patches for loading (saving) data from (to) text, binary files

2017-03-15 Thread Stephen Frost
Pavel,

I started looking through this to see if it might be ready to commit and
I don't believe it is.  Below are my comments about the first patch, I
didn't get to the point of looking at the others yet since this one had
issues.

* Pavel Stehule (pavel.steh...@gmail.com) wrote:
> 2017-01-09 17:24 GMT+01:00 Jason O'Donnell :
> > gstore/gbstore:

I don't see the point to 'gstore'- how is that usefully different from
just using '\g'?  Also, the comments around these are inconsistent, some
say they can only be used with a filename, others say it could be a
filename or a pipe+command.

There's a whitespace-only hunk that shouldn't be included.

I don't agree with the single-column/single-row restriction on these.  I
can certainly see a case where someone might want to, say, dump out a
bunch of binary integers into a file for later processing.

The tab-completion for 'gstore' wasn't correct (you didn't include the
double-backslash).  The patch also has conflicts against current master
now.

I guess my thinking about moving this forward would be to simplify it to
just '\gb' which will pull the data from the server side in binary
format and dump it out to the filename or command given.  If there's a
new patch with those changes, I'll try to find time to look at it.

I would recommend going through a detailed review of the other patches
as well before rebasing and re-submitting them; in particular, look to
make sure that the comments are correct and consistent, and that there are
comments where there should be (generally speaking, whole functions
should have at least some comments in them, not just the function header
comment, etc).

Lastly, I'd suggest creating a 'psql.source' file for the regression
tests instead of just throwing things into 'misc.source'.  Seems like we
should probably have more psql-related testing anyway and dumping
everything into 'misc.source' really isn't a good idea.

Thanks!

Stephen




Re: [HACKERS] Re: new set of psql patches for loading (saving) data from (to) text, binary files

2017-01-30 Thread Michael Paquier
On Wed, Jan 11, 2017 at 12:32 AM, Pavel Stehule  wrote:
> Thank you for review

Moved to next CF 2017-03.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: new set of psql patches for loading (saving) data from (to) text, binary files

2017-01-10 Thread Pavel Stehule
Hi

Thank you for review

2017-01-09 17:24 GMT+01:00 Jason O'Donnell :

> The following review has been posted through the commitfest application:
> make installcheck-world:  tested, passed
> Implements feature:   tested, passed
> Spec compliant:   not tested
> Documentation:tested, failed
>
> Pavel,
>
> gstore/gbstore:
>
> The functionality worked as expected - one row, one column results of
> queries can be sent to a file or shell.  It would be nice if a test case
> was included showing that results with more than one row or column will
> fail.
>

fixed


>
> The documentation included is awkward to read.  How about:
>
> "Sends the current query input buffer to the server and stores
> the result to an output file specified in the query or pipes the output
> to a shell command.  The file or command is written to only if the query
> successfully returns exactly one, non-null row and column.  If the
> query fails or does not return data, an error is raised. "
>

super

>
>
> Parameterized Queries:
>
> The functionality proposed works as expected.  Throughout the
> documentation, code and test cases the word "Parameterized" is spelled
> incorrectly: "PARAMETRIZED_QUERIES"
>

fixed

>
>
> set_from_file/set_from_bfile:
>
> The functionality proposed worked fine; I was able to set variables in SQL
> from files.  Minor typo in the documentation:
> "The content is escapeaed as bytea value."
>

fixed


>
> Hope this helps!
>
> Jason O'Donnell
> Crunchy Data
>
> The new status of this patch is: Waiting on Author
>
>
diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index 9915731..4f95f86 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -1944,6 +1944,28 @@ hello 10
 
   
 
+
+  
+\gstore [ filename ]
+\gstore [ |command ]
+\gbstore [ filename ]
+\gbstore [ |command ]
+
+
+ Sends the current query input buffer to the server and stores
+ the result to an output file specified in the query or pipes the output
+ to a shell command.  The file or command is written to only if the query
+ successfully returns exactly one, non-null row and column.  If the
+ query fails or does not return data, an error is raised.
+
+=> SELECT avatar FROM users WHERE id = 123
+-> \gbstore ~/avatar.png
+
+
+
+  
+
+
   
 \h or \help [ command ]
 
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 4139b77..33f4559 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -929,6 +929,27 @@ exec_command(const char *cmd,
 		status = PSQL_CMD_SEND;
 	}
 
+	/* \gstore [filename], \gbstore [filename] -- send query and store result in (binary) file */
+	else if (strcmp(cmd, "gstore") == 0 ||
+			  (strcmp(cmd, "gbstore") == 0))
+	{
+		char	   *fname = psql_scan_slash_option(scan_state,
+   OT_FILEPIPE, NULL, false);
+
+		if (!fname)
+			pset.gfname = pg_strdup("");
+		else
+		{
+			expand_tilde(&fname);
+			pset.gfname = pg_strdup(fname);
+		}
+
+		pset.raw_flag = true;
+		pset.binres_flag = (strcmp(cmd, "gbstore") == 0);
+		free(fname);
+		status = PSQL_CMD_SEND;
+	}
+
 	/* help */
 	else if (strcmp(cmd, "h") == 0 || strcmp(cmd, "help") == 0)
 	{
@@ -1064,7 +1085,6 @@ exec_command(const char *cmd,
 		free(opt2);
 	}
 
-
 	/* \o -- set query output */
 	else if (strcmp(cmd, "o") == 0 || strcmp(cmd, "out") == 0)
 	{
diff --git a/src/bin/psql/common.c b/src/bin/psql/common.c
index e1b04de..a6aaebe 100644
--- a/src/bin/psql/common.c
+++ b/src/bin/psql/common.c
@@ -854,6 +854,85 @@ StoreQueryTuple(const PGresult *result)
 	return success;
 }
 
+/*
+ * StoreRawResult: the returned value (possibly binary) is displayed
+ * or stored in a file.  The result must be exactly one row, one column.
+ */
+static bool
+StoreRawResult(const PGresult *result)
+{
+	bool		success = true;
+
+	if (PQntuples(result) < 1)
+	{
+		psql_error("no rows returned for \\gstore or \\gbstore\n");
+		success = false;
+	}
+	else if (PQntuples(result) > 1)
+	{
+		psql_error("more than one row returned for \\gstore or \\gbstore\n");
+		success = false;
+	}
+	else if (PQnfields(result) < 1)
+	{
+		psql_error("no columns returned for \\gstore or \\gbstore\n");
+		success = false;
+	}
+	else if (PQnfields(result) > 1)
+	{
+		psql_error("more than one column returned for \\gstore or \\gbstore\n");
+		success = false;
+	}
+	else if (PQgetisnull(result, 0, 0))
+	{
+		psql_error("returned value is null for \\gstore or \\gbstore\n");
+		success = false;
+	}
+	else
+	{
+		char	   *value;
+		int			length;
+		FILE	   *fout = NULL;
+		bool		is_pipe = false;
+
+		value = PQgetvalue(result, 0, 0);
+		length = PQgetlength(result, 0, 0);
+
+		if (pset.gfname && *(pset.gfname)

[HACKERS] Re: new set of psql patches for loading (saving) data from (to) text, binary files

2017-01-09 Thread Jason O'Donnell
The following review has been posted through the commitfest application:
make installcheck-world:  tested, passed
Implements feature:   tested, passed
Spec compliant:   not tested
Documentation:tested, failed

Pavel,

gstore/gbstore:

The functionality worked as expected - one row, one column results of queries 
can be sent to a file or shell.  It would be nice if a test case was included 
showing that results with more than one row or column will fail.

The documentation included is awkward to read.  How about:

"Sends the current query input buffer to the server and stores
the result to an output file specified in the query or pipes the output 
to a shell command.  The file or command is written to only if the query 
successfully returns exactly one, non-null row and column.  If the 
query fails or does not return data, an error is raised. "


Parameterized Queries:

The functionality proposed works as expected.  Throughout the documentation, 
code and test cases the word "Parameterized" is spelled incorrectly: 
"PARAMETRIZED_QUERIES"


set_from_file/set_from_bfile:

The functionality proposed worked fine; I was able to set variables in SQL from 
files.  Minor typo in the documentation:
"The content is escapeaed as bytea value."

Hope this helps!

Jason O'Donnell
Crunchy Data

The new status of this patch is: Waiting on Author
