Re: Upgrade procedure

2019-10-30 Thread rihad

From: rihad

Hi, all. Why is it normally suggested to stop the server, upgrade it, 
then start it? Wouldn't it be easier & quicker to simply upgrade the 
package in-place and restart the service? On OSen that allow 
modification of currently running binaries, which is most Unix OS, M$ 
Windows being a notable exception )


That might be possible on a minor upgrade, but quite probably not on a 
major version upgrade. I'm reasonably sure I've read that a major 
upgrade *can* change underlying data/structures for tables and other 
things. I don't think you want version-X writing to the tables on disk 
while version-Y writes a new layout to the same files at the same 
time. 





Why would that matter if the server gets restarted after replacing the 
binaries? Aren't the previous version's binaries "hard-wired" into memory 
while they are running? AFAIK on FreeBSD at least no attempt is made to 
stop the corresponding server or restart it when a package is upgraded 
by pkg(8).





Can you please suggest how to configure hot_standby_feedback?

2019-10-30 Thread M Tarkeshwar Rao
Hi all,



Can you please suggest how to configure hot_standby_feedback?



Regards

Tarkeshwar





RE: Getting following error in using cursor to fetch the records from a large table in c language

2019-10-30 Thread M Tarkeshwar Rao
Hi Laurenz,

You are absolutely right. This is the issue with us.
If we retry the query, will it be successful?

Can you please suggest how to configure hot_standby_feedback?

Regards
Tarkeshwar

-Original Message-
From: Laurenz Albe  
Sent: Wednesday, October 30, 2019 11:20 PM
To: M Tarkeshwar Rao ; 
'pgsql-gene...@postgresql.org' 
Subject: Re: Getting following error in using cursor to fetch the records from 
a large table in c language

On Wed, 2019-10-30 at 16:59 +, M Tarkeshwar Rao wrote:
> Getting following error in using cursor to fetch the records from a large 
> table in c language.
> Can you please suggest why it is coming and what is the remedy for this.
> 
> Error Details
> -
> Failed to execute the sql command close:
> mycursor_4047439616_1571970686004430275FATAL:  terminating connection 
> due to conflict with recovery
> DETAIL:  User query might have needed to see row versions that must be 
> removed.
> HINT:  In a moment you should be able to reconnect to the database and repeat 
> your command.

This is not a problem with your program.

Your query is running against a standby server with "hot_standby" on, and there 
are conflicts between replication and your query.

If you increase "max_standby_streaming_delay" in "postgresql.conf" on the 
standby, your query will be given more time to complete.
This will, however, cause replay of the replicated changes to be delayed.

Yours,
Laurenz Albe
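For reference, a minimal SQL sketch of the standby-side settings under discussion (the delay value is illustrative; the same settings can instead be put in postgresql.conf on the standby and reloaded):

-- run on the standby; both settings take effect on reload
ALTER SYSTEM SET hot_standby_feedback = on;              -- standby reports the oldest snapshot it still needs to the primary
ALTER SYSTEM SET max_standby_streaming_delay = '300s';   -- give conflicting queries this long before cancelling them
SELECT pg_reload_conf();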



Re: Automatically parsing in-line composite types

2019-10-30 Thread Mitar
Hi!

On Wed, Oct 30, 2019 at 3:06 PM Merlin Moncure  wrote:
> It looks it up from the database.

Yes, this is how I started doing it in my prototype as well.

> Correct. Only declared (via CREATE TYPE) composite types will work due
> to protocol limitations.

Exactly. This is where I got stuck, so this is why I started this thread. :-(

> So if you decided to scratch an itch and create a postgres
> BSON type, no one would likely use it, since the chances of adoption
> in core are slim to none.

Yea. :-( So we get back to square one. :-(

One other approach I was investigating was developing a Babel-like
transpiler for PostgreSQL SQL, so that I could have plugins which
would convert SQL queries to automatically encode values in JSON, and
then parse them back out once results arrive. Because yes, as you note,
JSON is the only stable and supported format available across all
installations (except for the wire format, which has limitations). So
mapping to it and back, without the developer having to think about it,
might be the best solution.
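A rough SQL sketch of that idea (the table and column names are hypothetical): the inner subquery produces an anonymous composite type, and wrapping it in to_json() on the way out gives the client something it can decode without knowing the row type in advance.

SELECT to_json(t) AS author_with_posts
FROM (
    SELECT a.id, a.name, array_agg(p.title) AS post_titles
    FROM authors a
    JOIN posts p ON p.author_id = a.id
    GROUP BY a.id, a.name
) AS t;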


Mitar

-- 
http://mitar.tnode.com/
https://twitter.com/mitar_m




RE: Upgrade procedure

2019-10-30 Thread Kevin Brannen
From: rihad

>Hi, all. Why is it normally suggested to stop the server, upgrade it, then 
>start it? Wouldn't it be easier & quicker to simply upgrade the package 
>in-place and restart the service? On OSen that allow modification of currently 
>running binaries, which is most Unix OS, M$ Windows being a notable exception )



That might be possible on a minor upgrade, but quite probably not on a major 
version upgrade. I'm reasonably sure I've read that a major upgrade *can* 
change underlying data/structures for tables and other things. I don't think 
you want version-X writing to the tables on disk while version-Y writes a new 
layout to the same files at the same time. 

As always, see the fine manual, especially the part on upgrades and release 
notes.

Kevin



Re: Automatically parsing in-line composite types

2019-10-30 Thread Andres Freund
Hi,

On 2019-10-29 14:33:00 -0400, Tom Lane wrote:
> Mitar  writes:
> > I think RowDescription should be extended to provide full recursive
> > metadata about all data types. That would be the best way to do it.
> 
> [ shrug... ]  In a world where stability of the wire protocol were
> of zero value, maybe we would do that.  In the real world, don't
> hold your breath.

Hm. Wouldn't it be fairly easy to allow the client to specify how much
metadata they'd want? I.e. opt in to getting more complete metadata?

Presumably a lot of clients/applications wouldn't want the server to do
the extra work / use bandwidth for the full details anyway, so making a
more expansive RowDescription be explicitly opt-in would be good, even
if there were zero compatibility concerns.

There's different ways we could do the opt-in. We could use the "_pq_."
startup option stuff to opt in, we could make it an optional parameter
to D messages (it'd be mildly hacky because unfortunately
describe_target is not a counted text), we could use an additional
describe_type etc...

Greetings,

Andres Freund




Re: Automatically parsing in-line composite types

2019-10-30 Thread Merlin Moncure
On Wed, Oct 30, 2019 at 11:15 AM Mitar  wrote:
>
> Hi!
>
> On Wed, Oct 30, 2019 at 8:37 AM Merlin Moncure  wrote:
> > Check out libpqtypes: https://github.com/pgagarinov/libpqtypes
>
> Interesting. I have looked at the code a bit but I do not find how it
> determines the type for inline compound types, like the ones that
> appear in my original SQL query example. Could you maybe point me to
> the piece of code there handling that? Because to my
> understanding/exploration that information is simply not exposed to
> the client in any way. :-(

It looks it up from the database.  See implementation in
https://github.com/pgagarinov/libpqtypes/blob/master/source/src/handler.c
 (look for macro LOOKUP_TYPES) and usage in
https://github.com/pgagarinov/libpqtypes/blob/master/source/src/regression-test.c.

> > it does exactly what you want. It's a wrapper for libpq that provides
> > client side parsing for the binary protocol with array and composite
> > type parsing.
>
> It looks to me that it does parsing of composite types only if they
> are registered composite types. But not for example ones you get if
> you project a subset of fields from a table in a subquery. That has no
> registered composite type?

Correct. Only declared (via CREATE TYPE) composite types will work due
to protocol limitations.  For custom C programming this is fine, but
limiting if you are interested in writing a driver that can handle any
type of object the database can throw at you (and don't forget
custom types at the C level!).

> Also, how are you handling discovery of registered types? Do you read
> that on-demand from the database? They are not provided over the wire?

The client application has to declare at connection time which types
it is interested in; they are then looked up at the SQL level.  This
would be fairly typical of C applications, I think; but mostly this
works around the limitations of the binary protocol.

> > Virtually any
> > non-C client application really ought to be using json rather than the
> custom binary structures libpqtypes would provide.
>
> I thought that initially, too, but then found out that JSON has some
> heavy limitations because the implementation in PostgreSQL is
> standards-based. There is also no hook to do custom encoding of non-JSON
> values, so binary blobs are converted in an ugly way (base64 would be
> better). You also lose a lot of meta-information, because everything
> non-JSON gets converted to strings automatically, like knowing what is
> a date. I think MongoDB with BSON made much more sense here. It looks
> like a perfect balance between the simplicity of the JSON structure and
> adding a few more useful data types.
>
> But yes, JSON is also great because clients often have optimized JSON
> readers, which can beat any other binary serialization format. In
> node.js, it is simply the fastest way there is to transfer data:

Sure, JSON has only very, very basic type support.  In a practical
sense this means type safety has to be built above the json layer
(e.g. {"field": "foo", "type": "shape", "data": "(1,1), (2,2)"}), exactly
as we do with the textual sql protocol.
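A hedged SQL sketch of that pattern, mirroring the example above: the value is shipped as JSON, and the "type" tag lets the client re-parse it above the JSON layer (field names are illustrative).

SELECT jsonb_build_object(
           'field', 'bounding_box',
           'type',  'shape',
           'data',  '(1,1), (2,2)'
       ) AS typed_value;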

The postgres hacker community will tend to target major standards as a
matter of culture and prudence... this used to annoy me, now I support
it strongly.  Since the type system is extensible, it's theoretically
possible to implement bson support or some other funky type-safe
format.  I would personally argue (perhaps with good company) against
such a type being incorporated in core, or even in contrib. In fact, I
argued (win some, lose some, heh!) that jsonb should be in contrib,
not core; we should be moving stuff OUT of the regular namespace and
into extensions, not the other way around...

Aside: now that stored procedures are good to go, there is one feature
left that IMNSHO postgres desperately needs,  and that is external
package repository management (on the order of CPAN, npm, etc) so that
3rd party repositories handle grooming, precompiling, packaging,
downloading, and deploying (via SQL) of extensions that interact with
the database at the C level.  Enterprise environments and managed
postgres providers will never allow custom-compiled C extensions, which
is the kiss of death; even if I wanted to use those extensions, I
can't.  So if you decided to scratch an itch and create a postgres
BSON type, no one would likely use it, since the chances of adoption
in core are slim to none.

I had seen your article, and liked it.  During the development of what
was to become the jsonb type, I had argued quite strenuously not to
have it completely displace the old json type variant on performance
and other grounds.

merlin




Re: Getting following error in using cursor to fetch the records from a large table in c language

2019-10-30 Thread Laurenz Albe
On Wed, 2019-10-30 at 16:59 +, M Tarkeshwar Rao wrote:
> Getting following error in using cursor to fetch the records from a large 
> table in c language.
> Can you please suggest why it is coming and what is the remedy for this.
> 
> Error Details
> -
> Failed to execute the sql command close:
> mycursor_4047439616_1571970686004430275FATAL:  terminating connection due to 
> conflict with recovery
> DETAIL:  User query might have needed to see row versions that must be 
> removed.
> HINT:  In a moment you should be able to reconnect to the database and repeat 
> your command.

This is not a problem with your program.

Your query is running against a standby server with "hot_standby" on,
and there are conflicts between replication and your query.

If you increase "max_standby_streaming_delay" in "postgresql.conf" on
the standby, your query will be given more time to complete.
This will, however, cause replay of the replicated changes to be delayed.

Yours,
Laurenz Albe





Re: Upgrade procedure

2019-10-30 Thread Francisco Olarte
On Wed, Oct 30, 2019 at 6:00 PM rihad  wrote:
> Hi, all. Why is it normally suggested to stop the server, upgrade it,
> then start it? Wouldn't it be easier & quicker to simply upgrade the
> package in-place and restart the service? On OSen that allow
> modification of currently running binaries, which is most Unix OS, M$
> Windows being a notable exception )

Not sure exactly why, and whether postmaster protects itself from
this, but in many years of using linux, which supports it, I've
encountered my share of (minor) problems when doing that, typically
due to the program not having loaded all the modules it can use (
firefox is a classic for me, as I have it normally open while I do my
background updates, nothing a restart does not solve, but I would get
nervous doing that to the database ).

That being said, I've upgraded my TEST servers without stopping them (
although lately the upgrade scripts do a restart at the end ), but for
production I prefer to download everything and prepare as much as I
can, and do a stop / finish-upgrade / start; the last ( file extracting )
phase is normally much faster than the server restart machinery, so not
much is gained by doing it in parallel.

And another thing: MODIFICATION of currently running binaries is BAD,
and IIRC many OSes use them for paging and will not be happy ( maybe
these days they transparently switch to normal swap ). What you
normally want is unlinking and replacing, i.e. modification of the names
aka dir contents ( not being able to do this is one of my strongest
dislikes of Windows; being able to makes so many things so much simpler ).

Francisco Olarte.




Getting following error in using cursor to fetch the records from a large table in c language

2019-10-30 Thread M Tarkeshwar Rao
Hi all,



Getting following error in using cursor to fetch the records from a large table 
in c language.

Can you please suggest why it is coming and what is the remedy for this.



Error Details

-

Failed to execute the sql command close:

mycursor_4047439616_1571970686004430275FATAL:  terminating connection due to 
conflict with recovery

DETAIL:  User query might have needed to see row versions that must be removed.

HINT:  In a moment you should be able to reconnect to the database and repeat 
your command.



Sample Code snippet used



theCursorDec = (RWCString)"DECLARE " + mySqlCursor + " CURSOR FOR " + theSql;

myFetchSql = "FETCH " + fetchStr + " IN " + mySqlCursor;


// Begin the transaction and open the cursor
// (presumably the DECLARE statement built above is what gets executed here)
PQexec(connection, "BEGIN");
PQexec(connection, theCursorDec.data());


// Fetch records from the cursor. Getting first N tuples
mySqlResultsPG = PQexec(connection, myFetchSql);

if (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK)
{
    myNumColumns = PQnfields(mySqlResultsPG);
    ntuples = PQntuples(mySqlResultsPG);
    myTotalNumberOfRowsInQueryResult = ntuples;
    myCurrentRowNum = 0;
}



Regards

Tarkeshwar



Upgrade procedure

2019-10-30 Thread rihad
Hi, all. Why is it normally suggested to stop the server, upgrade it, 
then start it? Wouldn't it be easier & quicker to simply upgrade the 
package in-place and restart the service? On OSen that allow 
modification of currently running binaries, which is most Unix OS, M$ 
Windows being a notable exception )



Thanks.





Re: Automatically parsing in-line composite types

2019-10-30 Thread Fabio Ugo Venchiarutti




On 30/10/2019 16:15, Mitar wrote:

Hi!

On Wed, Oct 30, 2019 at 8:37 AM Merlin Moncure  wrote:

Check out libpqtypes: https://github.com/pgagarinov/libpqtypes


Interesting. I have looked at the code a bit but I do not find how it
determines the type for inline compound types, like the ones that
appear in my original SQL query example. Could you maybe point me to
the piece of code there handling that? Because to my
understanding/exploration that information is simply not exposed to
the client in any way. :-(


it does exactly what you want. It's a wrapper for libpq that provides
client side parsing for the binary protocol with array and composite
type parsing.


It looks to me that it does parsing of composite types only if they
are registered composite types. But not for example ones you get if
you project a subset of fields from a table in a subquery. That has no
registered composite type?

Also, how are you handling discovery of registered types? Do you read
that on-demand from the database? They are not provided over the wire?


Virtually any
non-C client application really ought to be using json rather than the
custom binary structures libpqtypes would provide.


I thought that initially, too, but then found out that JSON has some
heavy limitations because the implementation in PostgreSQL is
standards-based. There is also no hook to do custom encoding of non-JSON
values, so binary blobs are converted in an ugly way (base64 would be
better). You also lose a lot of meta-information, because everything
non-JSON gets converted to strings automatically, like knowing what is
a date. I think MongoDB with BSON made much more sense here. It looks
like a perfect balance between the simplicity of the JSON structure and
adding a few more useful data types.

But yes, JSON is also great because clients often have optimized JSON
readers, which can beat any other binary serialization format. In
node.js, it is simply the fastest way there is to transfer data:

https://mitar.tnode.com/post/in-nodejs-always-query-in-json-from-postgresql/


Mitar




Then perhaps, as opposed to wedging this into the tabular paradigm, a 
transition to more targeted support for hierarchical result 
representation would be preferable, just done directly by the backend and 
rendered by libpq... (perhaps still encapsulated as a DataRow field so as 
not to break the traditional model. Or perhaps a special RowDescription-like 
message in the backend protocol? Not my place to strongly push proposals 
there).



There's a lot of room for optimisation if done natively (think label 
deduplication at the source. Not sure if BSON works this way too).



There's also the problem of independent implementations of the 
protocol...AFAIK the JDBC client is not a wrapper to libpq and they'd 
also have to break their result surfacing paradigms to make it work...



Sounds like an enormous risk & undertaking for the hackers TBH, and I 
currently see another limiting factor to the idea's popularity: as it 
stands, advanced SQL is daunting for much of the industry, and IMHO 
the queries to generate arbitrarily structured & lightweight inline 
types/relations are relatively verbose and deeply nested (eg: last time 
I checked, stripping/renaming some attributes from a relation required 
subselecting them, and - perhaps due to PEBCAK - I can't think of a way to 
create results as associative arrays indexed by attributes).
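A rough illustration of that verbosity, on hypothetical tables: keeping or renaming just a couple of attributes before aggregation already forces an extra level of subselect.

SELECT c.name AS company,
       json_agg(e) AS employees
FROM companies c
JOIN LATERAL (
    SELECT p.id, p.full_name AS name    -- keep and rename only these columns
    FROM people p
    WHERE p.company_id = c.id
) AS e ON true
GROUP BY c.name;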


For this to gain traction, a more streamlined syntax/functions/operators 
for precision work may be necessary, or the result would only satisfy a 
narrow set of users who are already intimate with the state of affairs.



Can't help thinking that the current JSON-over-field pinhole may already 
be at the sweet spot between usefulness and inter-operability with 
existing systems. Just the SQL side of it could be less noisy and, yes, 
the data type pigeonhole problem could benefit from something like a GUC 
setting to selectively break standard JSON compatibility, or function 
arguments, or something else.





--
Regards

Fabio Ugo Venchiarutti
OSPCFC Network Engineering Dpt.
Ocado Technology






RE: Can you please tell us how set this prefetch attribute in following lines.

2019-10-30 Thread M Tarkeshwar Rao
Thanks, Thompson. Your inputs were very valuable; we implemented this
successfully and the results are very good.

But I am now getting the following error message. Can you please suggest why
this is happening and what the remedy is.

Error Details
-
Failed to execute the sql command close: 
mycursor_4047439616_1571970686004430275FATAL:  terminating connection due to 
conflict with recovery
DETAIL:  User query might have needed to see row versions that must be removed.
HINT:  In a moment you should be able to reconnect to the database and repeat 
your command.

Regards
Tarkeshwar

-Original Message-
From: Reid Thompson  
Sent: Thursday, October 17, 2019 9:49 PM
To: pgsql-general@lists.postgresql.org
Cc: Reid Thompson 
Subject: Re: Can you please tell us how set this prefetch attribute in 
following lines.

On Thu, 2019-10-17 at 11:16 +, M Tarkeshwar Rao wrote:
> 
>  
> 
> Hi all,
>  
> How to fetch a certain number of tuples from a postgres table.
>  
> I do the same in oracle using the following lines, by setting the prefetch attribute.
>  
> For oracle
> // Prepare query
> if( OCIStmtPrepare( myOciStatement, myOciError, (text *)aSqlStatement, ... ) )
> // Get statement type (the out-parameter names below are placeholders;
> // the original variable names were stripped from the mail in transit)
> OCIAttrGet( (void *)myOciStatement, OCI_HTYPE_STMT, &myStmtType, 0,
> OCI_ATTR_STMT_TYPE, myOciError );
> // Set prefetch count
> OCIAttrSet( myOciStatement, OCI_HTYPE_STMT, &myPrefetchCount, 0,
> OCI_ATTR_PREFETCH_ROWS, myOciError );
> // Execute query
> status = OCIStmtExecute( myOciServerCtx, myOciStatement, myOciError,
> iters, 0, NULL, NULL, OCI_DEFAULT );
>  
>  
> For Postgres
>  
> Can you please tell us how to set this prefetch attribute in the following
> lines. Does PQexec return all the rows from the table?
>  
> mySqlResultsPG = PQexec(connection, aSqlStatement);
> if((PQresultStatus(mySqlResultsPG) == PGRES_FATAL_ERROR ) || 
> (PQstatus(connection) != CONNECTION_OK)){} if 
> ((PQresultStatus(mySqlResultsPG) == PGRES_COMMAND_OK) || 
> (PQresultStatus(mySqlResultsPG) == PGRES_TUPLES_OK))
> {
> myNumColumns = PQnfields(mySqlResultsPG);
> myTotalNumberOfRowsInQueryResult = PQntuples(mySqlResultsPG);
> myCurrentRowNum = 0 ;
> }
> 
>  
> Regards
> Tarkeshwar
>  

declare a cursor and fetch

https://books.google.com/books?id=Nc5ZT2X5mOcC&pg=PA405&lpg=PA405&dq=pqexec+fetch&source=bl&ots=8P8w5JemcL&sig=ACfU3U0POGGSP0tYTrs5oxykJdOeffaspA&hl=en&sa=X&ved=2ahUKEwjevbmA2KPlAhXukOAKHaBIBcoQ6AEwCnoECDEQAQ#v=onepage&q=pqexec%20fetch&f=false
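A minimal sketch of that approach in plain SQL (the table name is illustrative); this is essentially what the PQexec calls quoted above build up as strings:

BEGIN;
DECLARE mycursor CURSOR FOR SELECT * FROM my_big_table;
FETCH 1000 FROM mycursor;   -- repeat until fewer than 1000 rows come back
FETCH 1000 FROM mycursor;
CLOSE mycursor;
COMMIT;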




Re: Automatically parsing in-line composite types

2019-10-30 Thread Mitar
Hi!

On Wed, Oct 30, 2019 at 8:37 AM Merlin Moncure  wrote:
> Check out libpqtypes: https://github.com/pgagarinov/libpqtypes

Interesting. I have looked at the code a bit but I do not find how it
determines the type for inline compound types, like the ones that
appear in my original SQL query example. Could you maybe point me to
the piece of code there handling that? Because to my
understanding/exploration that information is simply not exposed to
the client in any way. :-(

> it does exactly what you want. It's a wrapper for libpq that provides
> client side parsing for the binary protocol with array and composite
> type parsing.

It looks to me that it does parsing of composite types only if they
are registered composite types. But not for example ones you get if
you project a subset of fields from a table in a subquery. That has no
registered composite type?

Also, how are you handling discovery of registered types? Do you read
that on-demand from the database? They are not provided over the wire?

> Virtually any
> non-C client application really ought to be using json rather than the
> custom binary structures libpqtyps would provide.

I thought that initially, too, but then found out that JSON has some
heavy limitations because the implementation in PostgreSQL is
standards-based. There is also no hook to do custom encoding of non-JSON
values, so binary blobs are converted in an ugly way (base64 would be
better). You also lose a lot of meta-information, because everything
non-JSON gets converted to strings automatically, like knowing what is
a date. I think MongoDB with BSON made much more sense here. It looks
like a perfect balance between the simplicity of the JSON structure and
adding a few more useful data types.

But yes, JSON is also great because clients often have optimized JSON
readers, which can beat any other binary serialization format. In
node.js, it is simply the fastest way there is to transfer data:

https://mitar.tnode.com/post/in-nodejs-always-query-in-json-from-postgresql/


Mitar

-- 
http://mitar.tnode.com/
https://twitter.com/mitar_m




Re: Automatically parsing in-line composite types

2019-10-30 Thread Merlin Moncure
On Tue, Oct 29, 2019 at 12:58 PM Mitar  wrote:
>
> Hi!
>
> On Tue, Oct 29, 2019 at 9:06 AM Fabio Ugo Venchiarutti
>  wrote:
> > You can use subqueries and array_agg() to deepen your output tree all
> > the way to a stack overflow, a single _to_json() call at the
> > top will recursively traverse and convert whatever you feed it.
>
> Yes, what you are describing is exactly the sad state of things: the
> only way to meaningfully retrieve inline composite types which are
> made when one aggregates things like that, or when you subselect a set
> of fields from a table in a sub-query, is that you then convert the
> whole thing to JSON and transmit it in that way. Because this is the
> only way you can parse things on the client. Because if you leave it
> as raw composite type encoding, you cannot really parse that on the
> client correctly in all cases without knowing what types are stored
> inside those composite types you are getting.
>
> But JSON is not a lossless transport format: it does not support the full
> floating point spec (no inf, NaNs) and for many types of fields it
> just converts to a string representation of that, which can be
> problematic. For example, if you have binary blobs.
>
> So no, JSON is a workaround, but it is sad that we should have to use
> it. PostgreSQL seems to be almost there with the support for composite
> types and nested query results, only it seems you cannot really parse
> it out. I mean, why does PostgreSQL even have its own binary format for
> results, when it could just transmit everything as JSON? :-) But that
> does not really work for many data types.
>
> I think RowDescription should be extended to provide full recursive
> metadata about all data types. That would be the best way to do it.

Check out libpqtypes: https://github.com/pgagarinov/libpqtypes

it does exactly what you want. It's a wrapper for libpq that provides
client side parsing for the binary protocol with array and composite
type parsing.  This library was written before the json train left the
station; its only use today (assuming you don't like the cute
'PQexecf') would be scenarios where performance requirements are
extreme and the client application is written in C (so you can benefit
from direct binary copying vs string serialization).  Virtually any
non-C client application really ought to be using json rather than the
custom binary structures libpqtypes would provide.  I co-wrote the
library, it works wonderfully, but I've since moved on to json.

JSON also undercuts the argument that the database ought to provide
deep type facilities on both sides of the protocol IMO; I used to be
interested in that, now I'm not; the world is moving on.

merlin




Re: Restore single table

2019-10-30 Thread Ritesh Jha
psql -d databaseName -f tablename.sql

On Wed, Oct 30, 2019, 5:51 PM Sonam Sharma  wrote:

> Can someone please help me with the command to restore a single table
> using psql?
>
> Took the backup using below command :
> pg_dump -t (tablename) dbname > tablename.sql
>


Restore single table

2019-10-30 Thread Sonam Sharma
Can someone please help me with the command to restore a single table using
psql?

Took the backup using below command :
pg_dump -t (tablename) dbname > tablename.sql