Re: Get table catalog from pg_indexes

2022-11-27 Thread Igor Korot
Ron,

On Sun, Nov 27, 2022 at 4:10 PM Ron  wrote:
>
> On 11/27/22 15:55, Adrian Klaver wrote:
> > On 11/27/22 13:31, Ron wrote:
> >> On 11/27/22 11:22, Igor Korot wrote:
> >>> Hi, ALL,
> >>> Table pg_indexes does not contain a field for a catalog.
> >>>
> >>> So how do I get that?
> >>>
> >>> SELECT 1 FROM pg_indexes WHERE indexname = $1 AND tablename = $2 AND
> >>> schemaname = $3
> >>
> >> You did not look hard enough, or Google "postgresql pg_indexes".
> >>
> >> test=# \d pg_indexes
> >>  View "pg_catalog.pg_indexes"
> >> Column   | Type | Collation | Nullable | Default
> >> +--+---+--+-
> >> *schemaname* | name | |  |
> >> *tablename*  | name | |  |
> >> *indexname*  | name | |  |
> >>   tablespace | name |   |  |
> >>   indexdef   | text |   |  |
> >>
> >> https://www.postgresql.org/docs/current/view-pg-indexes.html
> >
> > What the OP was looking for was a field in the above named catalogname or
> > datname, per:
>
> I've never heard of a database referred to as a catalog.  (That's always
> been where a database's metadata -- i.e. the pg_catalog schema -- is stored.)

In the ODBC terminology the DB is usually referenced as catalog.

Thank you.

>
> --
> Angular momentum makes the world go 'round.
>
>




Re: Get table catalog from pg_indexes

2022-11-27 Thread Igor Korot
David,

On Sun, Nov 27, 2022 at 12:55 PM David G. Johnston
 wrote:
>
> On Sun, Nov 27, 2022 at 11:42 AM Igor Korot  wrote:
>>
>>
>> It doesn't say anything about "current" DB - only the DB.
>
>
> Yes, but you must be connected to some database in order to execute this 
> command: "the database" refers to this database you are connected to.

Yes, I am and I get that.

>
> The catalogs are not information_schema.
>
>>
>> However, I think I can try "SELECT 1 FROM .pg_indexes...".
>> Will this work?
>>
>
> What is that even supposed to mean?  It also seems simple enough to just try,
> such that asking "will this work" is a waste of time.  Just try it.

Apparently this query fails to execute.
I am connected to the "draft" database and running

SELECT 1 FROM draft.pg_indexes;

gives:

[quote]
ERROR:  schema "draft" does not exist
[/quote]

Thank you.
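For reference, cross-database qualification like the above cannot work in PostgreSQL; the closest equivalent is to report the catalog alongside each row of the current database's view. A sketch, where $1..$3 are the bound index, table, and schema names:

```sql
-- pg_indexes always reflects the current database only; report the
-- catalog name via current_database() instead of trying to qualify it.
SELECT current_database() AS table_catalog,
       schemaname, tablename, indexname
FROM pg_indexes
WHERE indexname = $1
  AND tablename = $2
  AND schemaname = $3;
```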

>
> David J.




Re: Get table catalog from pg_indexes

2022-11-27 Thread Igor Korot
Thx, Adrian.

On Sun, Nov 27, 2022 at 3:56 PM Adrian Klaver  wrote:
>
> On 11/27/22 13:31, Ron wrote:
> > On 11/27/22 11:22, Igor Korot wrote:
> >> Hi, ALL,
> >> Table pg_indexes does not contain a field for a catalog.
> >>
> >> So how do I get that?
> >>
> >> SELECT 1 FROM pg_indexes WHERE indexname = $1 AND tablename = $2 AND
> >> schemaname = $3
> >
> > You did not look hard enough, or Google "postgresql pg_indexes".
> >
> > test=# \d pg_indexes
> >  View "pg_catalog.pg_indexes"
> > Column   | Type | Collation | Nullable | Default
> > +--+---+--+-
> > *schemaname* | name | |  |
> > *tablename*  | name | |  |
> > *indexname*  | name | |  |
> >   tablespace | name |   |  |
> >   indexdef   | text |   |  |
> >
> > https://www.postgresql.org/docs/current/view-pg-indexes.html
>
> What the OP was looking for was a field in the above named catalogname or
> datname, per:
>
> https://www.postgresql.org/docs/current/catalog-pg-database.html
>
> Table "pg_catalog.pg_database"
>  Column |   Type| Collation | Nullable | Default
> ---+---+---+--+-
>   oid   | oid   |   | not null |
>   datname   | name  |   | not null |
> ...
>
> In other words, to filter the pg_indexes results by database/catalog name.
> Since pg_indexes is scoped to the database you are in when you run the
> query, that is not going to happen.
>
> >
> >
> >
> >
> > --
> > Angular momentum makes the world go 'round.
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>




Re: Get table catalog from pg_indexes

2022-11-27 Thread Igor Korot
Hi, Christopher,

On Sun, Nov 27, 2022 at 12:45 PM Christophe Pettus  wrote:
>
>
>
> > On Nov 27, 2022, at 10:42, Igor Korot  wrote:
> >
> > It doesn't say anything about "current" DB - only the DB.
>
> In PostgreSQL terminology, a single PostgreSQL instance (the cluster) can 
> have multiple databases.

Just like any other DBMS.

> The database the connection is currently open to is the current database.

Are you saying I can't run any query on another DB? Or connect to that DB
and run a select?

Thank you.




Re: Get table catalog from pg_indexes

2022-11-27 Thread Igor Korot
Hi, Erik,

On Sun, Nov 27, 2022 at 11:59 AM Erik Wienhold  wrote:
>
> > On 27/11/2022 18:22 CET Igor Korot  wrote:
> >
> > Table pg_indexes does not contain a field for a catalog.
> >
> > So how do I get that?
> >
> > SELECT 1 FROM pg_indexes WHERE indexname = $1 AND tablename = $2 AND
> > schemaname = $3
>
> Use SELECT current_database() if you need to know the catalog.
> pg_indexes only covers the current database[1].

From the link you referenced:

[quote]
The view pg_indexes provides access to useful information about each
index in the database.
[/quote]

It doesn't say anything about "current" DB - only the DB.

However, I think I can try "SELECT 1 FROM .pg_indexes...".
Will this work?

Thank you.

>
> [1] https://www.postgresql.org/docs/current/view-pg-indexes.html
>
> --
> Erik




Get table catalog from pg_indexes

2022-11-27 Thread Igor Korot
Hi, ALL,
Table pg_indexes does not contain a field for a catalog.

So how do I get that?

SELECT 1 FROM pg_indexes WHERE indexname = $1 AND tablename = $2 AND
schemaname = $3

Thank you.




Re: Printf-like function

2022-11-11 Thread Igor Korot
Thank you.

On Fri, Nov 11, 2022 at 11:12 PM Tom Lane  wrote:
>
> Igor Korot  writes:
> > Is there a printf-like function inside libpq that can be used
> > to create a proper query string with the proper quoting of the
> > literal?
>
> No.  You'd be better off to transmit the literal value as a
> parameter.
>
> regards, tom lane
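In libpq terms, the suggestion above looks like the sketch below: the literal travels as a parameter to PQexecParams(), so no quoting or escaping is needed. The connection string and the table/column names here are placeholder assumptions, not anything from the thread:

```c
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* Placeholder connection string. */
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* The literal is passed out-of-band; embedded quotes are harmless. */
    const char *paramValues[1] = { "O'Reilly" };
    PGresult *res = PQexecParams(conn,
        "SELECT 1 FROM authors WHERE name = $1",  /* hypothetical table */
        1,      /* one parameter */
        NULL,   /* let the server infer the parameter type */
        paramValues,
        NULL,   /* text format: lengths not needed */
        NULL,   /* all parameters in text format */
        0);     /* text-format results */
    if (PQresultStatus(res) != PGRES_TUPLES_OK)
        fprintf(stderr, "%s", PQerrorMessage(conn));
    PQclear(res);
    PQfinish(conn);
    return 0;
}
```

This requires a reachable server, so it is a sketch rather than a self-checking example.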




Printf-like function

2022-11-11 Thread Igor Korot
Hi, ALL,
Is there a printf-like function inside libpq that can be used
to create a proper query string with the proper quoting of the
literal?

Or will I need to use sprintf()?

Thank you.




List user databases

2022-11-09 Thread Igor Korot
Hi, ALL,
According to 
https://chartio.com/resources/tutorials/how-to-list-databases-and-tables-in-postgresql-using-psql/,
there are generally 3 system DBs.

However, I'm confused by the word "generally".
How many system databases can there be on a PG server?

Thank you.
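For context: a stock initdb leaves behind postgres, template0, and template1; only the two templates are flagged as such in the catalog, so a sketch of listing user databases is:

```sql
-- template0/template1 are marked datistemplate; note the stock
-- "postgres" database is an ordinary database and will still appear.
SELECT datname FROM pg_database WHERE NOT datistemplate;
```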




Re: Attaching database

2022-10-18 Thread Igor Korot
Hi, guys,
After reading the documentation on
https://www.postgresql.org/docs/current/postgres-fdw.html
and checking the example I have a different question.

The presentation in the referenced link doesn't explain how to get the
table list on the
remote server, or the information on a specific table.

The example talks about connecting and creating a linked table, but it
doesn't explain
how to learn about the tables on the remote server.

Is it possible, or will I have to know this beforehand?

(Sorry for the top-posting).

Thank you.
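On the discovery question above, postgres_fdw does support pulling remote table definitions in bulk with IMPORT FOREIGN SCHEMA. A sketch, where the server and schema names are placeholders and the CREATE SERVER / CREATE USER MAPPING steps are assumed to be done already:

```sql
-- Import every table definition from the remote "public" schema:
IMPORT FOREIGN SCHEMA public
  FROM SERVER remote_srv
  INTO local_schema;

-- The imported tables then show up in the local catalogs:
SELECT foreign_table_schema, foreign_table_name
FROM information_schema.foreign_tables;
```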


On Sat, Oct 15, 2022 at 5:57 PM Alex Theodossis  wrote:
>
> Hi,
>
> you can only run inquiries now on information_schema for the database you
> are connected to.
>
> If you are looking/searching in pg_database though, you can see information
> globally.
>
> Something changed recently (I am using Dbeaver); I was able to attach to
> a single database and see all my databases; run queries, etc.
>
> With the latest update, now you have to have a separate connection to
> each database.
>
> Navicat now does the same thing, so my thinking is they must have
> changed the drivers.
>
> Regards,
>
> On 10/15/22 02:06, Julien Rouhaud wrote:
> > Hi,
> >
> > On Fri, Oct 14, 2022 at 11:16:44PM -0500, Igor Korot wrote:
> >> Sorry for resurrecting this old thread...
> >> If attaching the DB creates a new connection which will be completely
> >> independent - how is the INFORMATION_SCHEMA.table@table_catalog
> >> field handled?
> >>
> >> Let's say I open a connection to the DB (finance) and then attach another DB
> >> (finance_2021).
> >>
> >> So, when I call SELECT table_schema, table_name FROM 
> >> INFORMATION_SCHEMA.table
> >> I will get all tables from (finance) DB only.
> >> And to get all tables from (finance_2021) I need to make this catalog 
> >> current
> >> and reissue the query.
> >>
> >> Am I right?
> > No.  In postgres, databases are completely disjoint containers and once you
> > have a connection on a given database it will stay on that database, 
> > there's no
> > way around that.
> >
> > Using postgres_fdw allows you to create a local table that will point to
> > another table, possibly on another database or even another server, but it 
> > will
> > still be a (foreign) table, that has to be created in the current database 
> > in
> > the schema of your choice.
> >
> > Depending on your use case, maybe what you could do is create a finance_2021
> > schema, and create all the foreign tables you need in that schema pointing 
> > to
> > the finance_2021 database.  Any table existing in both "finance" and
> > "finance_2021" will then be visible in information_schema.tables, with a
> > different table_schema.  If you have multiple schemas in each database, then
> > find a way to make it work, maybe adding a _2021 suffix on each schema or
> > something like that.
> >
> > You can then maybe use the search_path (see
> > https://www.postgresql.org/docs/current/ddl-schemas.html#DDL-SCHEMAS-PATH) 
> > to
> > use by default one of the set of tables rather than the other.  But in any
> > case, it's only a workaround that has to be implemented on your client, as 
> > you
> > will always be connected on the same database, and see both set of object in
> > information_schema.
> >
> >
> --
> Alex Theodossis
> a...@dossi.info
> 347-514-5420
>
>
>




Re: Attaching database

2022-10-15 Thread Igor Korot
Hi, Adrian,



On Sat, Oct 15, 2022, 10:51 AM Adrian Klaver 
wrote:

> On 10/15/22 08:20, Adrian Klaver wrote:
> > On 10/14/22 21:46, Igor Korot wrote:
> >
> >
> >> Making catalog current means switching between DBs.
> >> Remember initially I connected to (finance) DB, which made the
> >> (finance) catalog
> >> current.
> >> Then I "opened a second connection" to (finance_2021), which made
> >> that current catalog and so that select would give me all tables from
> >> (finance_2021).
> >
> > No you didn't.
> >
> > Rough outline of what happens:
> >
> > 1) You connected to the database finance.
> > 2) While in the finance database you queried the foreign tables that are
> > linked to finance_2021.
> > 3) The queries made connections the  to finance_2021 for the purposes of
> > making the data visible in the foreign tables in finance.
> > 4) The client you did this did not 'leave' the finance database, so the
>  ^
>  in
> > only information_schema you have access to is in the finance database.
>

So any and all operations/queries performed before, during, or after that
will be done on the (finance) catalog, because this is the "main" connection,
right?

Thank you.

>
> >>
> >> I hope now its clearer.
> >>
> >> Thank you.
> >>
> >>>
> >>> David J.
> >>
> >>
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Attaching database

2022-10-14 Thread Igor Korot
Hi, David et al,

On Fri, Oct 14, 2022 at 11:39 PM David G. Johnston
 wrote:
>
> On Fri, Oct 14, 2022 at 9:17 PM Igor Korot  wrote:
>>
>> Hi, guys,
>>
>> On Sun, Oct 25, 2020 at 1:27 PM Peter J. Holzer  wrote:
>> >
>> > On 2020-10-19 20:21:05 +0200, Pavel Stehule wrote:
>>
>> > In case this isn't clear:
>> >
>> > It is the server (or more specifically, the foreign data wrapper) which
>> > opens that connection. To the client it looks like it's just accessing a
>> > normal table within the same database.
>>
>> Sorry for resurrecting this old thread...
>
>
> Then why did you do it?  Couldn't you have sent a new email, without copying
> possibly-no-longer-interested people, with better-thought-out
> self-contained content that simply notes you are somehow using an FDW?
>
>>
>> If attaching the DB creates a new connection which will be completely
>> independent - how is the INFORMATION_SCHEMA.table@table_catalog
>> field handled?
>>
>> Let's say I open a connection to the DB (finance) and then attach another DB
>> (finance_2021).
>>
>> So, when I call SELECT table_schema, table_name FROM INFORMATION_SCHEMA.table
>
>
> Call this how exactly? There are three information_schema instances that you 
> can be talking about, though only the one in the local database is going to 
> be called that.  If you are dealing with FDWs you would have to have 
> different names involved.
>
>>
>> I will get all tables from (finance) DB only.
>> And to get all tables from (finance_2021) I need to make this catalog current
>> and reissue the query.
>>
>> Am I right?
>
>
> Do it and find out?  Then if still confused, show what you attempted.  But I
> don't know where this concept of "making the catalog current" you speak of
> comes from.  That isn't a thing that I am aware of.  Where do you see this
> documented?

Making catalog current means switching between DBs.
Remember initially I connected to (finance) DB, which made the (finance) catalog
current.
Then I "opened a second connection" to (finance_2021), which made
that current catalog and so that select would give me all tables from
(finance_2021).

I hope now its clearer.

Thank you.

>
> David J.




Re: Attaching database

2022-10-14 Thread Igor Korot
Hi, guys,

On Sun, Oct 25, 2020 at 1:27 PM Peter J. Holzer  wrote:
>
> On 2020-10-19 20:21:05 +0200, Pavel Stehule wrote:
> > po 19. 10. 2020 v 20:18 odesílatel Igor Korot  napsal:
> > On Mon, Oct 19, 2020 at 12:51 PM Pavel Stehule 
> > wrote:
> > > It is a different thing - postgresql_fdw does nested connect - it uses
> > > client-server protocol.
> > >
> > > For postgres connect and sql engine process is one entity - and 
> > engine is
> > > written without a possibility to reconnect to another database.
> >
> > So if I understand correctly the postgresql_fdw is creating a second
> > connection and uses it as kind of "virtualizing mechanism"
> > in order to give access to the second database.
> >
> >
> > yes - it does new connect to somewhere (local or remote server, but 
> > mechanism
> > is absolutely same)
>
> In case this isn't clear:
>
> It is the server (or more specifically, the foreign data wrapper) which
> opens that connection. To the client it looks like it's just accessing a
> normal table within the same database.

Sorry for resurrecting this old thread...
If attaching the DB creates a new connection which will be completely
independent - how is the INFORMATION_SCHEMA.table@table_catalog
field handled?

Let's say I open a connection to the DB (finance) and then attach another DB
(finance_2021).

So, when I call SELECT table_schema, table_name FROM INFORMATION_SCHEMA.table
I will get all tables from (finance) DB only.
And to get all tables from (finance_2021) I need to make this catalog current
and reissue the query.

Am I right?

Thank you.

>
> hp
>
> --
>_  | Peter J. Holzer| Story must make more sense than reality.
> |_|_) ||
> | |   | h...@hjp.at |-- Charles Stross, "Creative writing
> __/   | http://www.hjp.at/ |   challenge!"




Re: Is there Postgres ODBC binary for OSX?

2022-09-30 Thread Igor Korot
Hi,

On Fri, Sep 30, 2022 at 10:41 AM Daniel Gustafsson  wrote:
>
> > On 30 Sep 2022, at 17:36, Igor Korot  wrote:
>
> > Is building it myself the only option?
>
> A quick googling indicates that there are options in the macOS package
> managers:
>
> https://formulae.brew.sh/formula/psqlodbc#default

Thx for the reply.

I don't have brew installed.
I was hoping to just download a dmg and run it
on my Mac - one produced by the PG devs...

Guess I was too hopeful...

Thank you.

>
> --
> Daniel Gustafsson   https://vmware.com/
>




Is there Postgres ODBC binary for OSX?

2022-09-30 Thread Igor Korot
Hi, ALL,
Is building it myself the only option?

Thank you.




Is ODBC list dead?

2022-08-12 Thread Igor Korot
Hi,
If the list IS dead - where can I get help on the topic...

Thank you.




Is ODBC list still alive?

2022-08-11 Thread Igor Korot
Thank you.




Re: Logging the query executed on the server

2022-07-24 Thread Igor Korot
Hi, Steve,

On Sun, Jul 24, 2022 at 12:51 AM Steve Baldwin  wrote:
>
> Hi Igor,
>
> Before you issue your query, try something like this:
>
> (from psql, but hopefully you get the idea)
>
> b2bcreditonline=# set log_min_duration_statement to 0;
> SET
> b2bcreditonline=# set log_statement to 'all';
> SET
>
> Ref: https://www.postgresql.org/docs/current/sql-set.html, 
> https://www.postgresql.org/docs/current/config-setting.html
>
> Then submit your query and it will be sent to the server log. You can get the
> name of the current logfile with:
>
> b2bcreditonline=# select pg_current_logfile();
>  pg_current_logfile
> 
>  /log/pg.csv

2 things:
1. How do I turn this off? ;-)
2. The log does show the query but it shows it with the placeholders.
Is there a way to see the actual query?

Thank you.
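On both questions: the session settings can be reverted with RESET, and when logging is on, queries run through the extended protocol (as ODBC typically does) are logged with placeholders plus, in most cases, a DETAIL line listing the bound parameter values:

```sql
-- Revert the two session settings to the server defaults:
RESET log_min_duration_statement;
RESET log_statement;
```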


>
> HTH,
>
> Steve
>
> On Sun, Jul 24, 2022 at 3:26 PM Igor Korot  wrote:
>>
>> Hi,
>> Is it possible to log the query that will be executed
>> on the server?
>>
>> I'm writing an application that connects to the server
>> through ODBC and libpq.
For some reason the ODBC interface is failing - it doesn't
return any rows.
>>
So I'm thinking that if I have proof that the query I am
actually executing is the same as the one I run through
psql - I will know where to look.
>>
>> I am actually binding some parameters and trying to
>> execute the query.
>>
>> Thank you.
>>
>>




Logging the query executed on the server

2022-07-23 Thread Igor Korot
Hi,
Is it possible to log the query that will be executed
on the server?

I'm writing an application that connects to the server
through ODBC and libpq.
For some reason the ODBC interface is failing - it doesn't
return any rows.

So I'm thinking that if I have proof that the query I am
actually executing is the same as the one I run through
psql - I will know where to look.

I am actually binding some parameters and trying to
execute the query.

Thank you.




Re: Getting the table ID

2022-07-19 Thread Igor Korot
Hi, guys,

On Tue, Jul 19, 2022 at 4:42 AM Walter Dörwald  wrote:
>
> On 19 Jul 2022, at 5:10, Igor Korot wrote:
>
> Hi, guys,
>
> In database theory, each table is identified as "schema_name"."table_name".
>
> When I tried to look at how to get the table id inside the PostgreSQL,
> I saw that I needed to look at the pg_class table.
>
> SELECT oid FROM pg_class WHERE relname = "table_name";
>
> However that query will give a non-unique table id (see the first sentence).
>
> So how do I get the table id based on the "schema_name.table_name"?
>
> There is a pg_namespace table - is this where the schema should come from?
> If yes - how?
> Looking at that table I don't see any unique fields...
> Or is this something that is hidden?
>
> In fact I'm trying to run following query:
>
> SELECT c.oid FROM pg_class c, pg_namespace nc WHERE nc.oid =
> c.relnamespace AND c.relname = ? AND nc.nspname = ?;
>
> from my ODBC based program, but it returns 0 rows on SQLFetch.
>
> I know PostgreSQL does not use '?' for query parameters
> but I thought that since it's ODBC, everything should work.
>
> Nevertheless, all bindings were successful, but no rows are returned.
>
> Is this query correct?
>
> Thank you.
>
> That's more or less the same query that I am using:
>
> select
> r.oid as oid,
> n.nspname || '.' || r.relname as name
> from
> pg_catalog.pg_namespace n
> join
> pg_catalog.pg_class r on n.oid = r.relnamespace
> where
> (r.relkind = 'r') and
> (n.nspname not like 'pg_%') and
> (n.nspname != 'information_schema') and
> (n.nspname = 'email') and
> (r.relname = 'emailhistory')
>
> Maybe your problem has to do with uppercase/lowercase schema and/or table
> names?

Below is my C++ code based on the ODBC library:

[code]
SQLHSTMT stmt = 0;
SQLHDBC hdbc;
SQLLEN cbName, cbTableName = SQL_NTS, cbSchemaName = SQL_NTS;
long id;
int result = 0;
std::wstring query;
SQLWCHAR *qry = NULL, *tname = NULL, *sname = NULL;
query = L"SELECT c.oid FROM pg_class c, pg_namespace nc WHERE nc.oid = c.relnamespace AND c.relname = ? AND nc.nspname = ?;";
qry = new SQLWCHAR[query.length() + 2];
tname = new SQLWCHAR[tableName.length() + 2];
sname = new SQLWCHAR[schemaName.length() + 2];
memset( tname, '\0', tableName.length() + 2 );
memset( sname, '\0', schemaName.length() + 2);
uc_to_str_cpy( sname, schemaName );
uc_to_str_cpy( tname, tableName );
memset( qry, '\0', query.length() + 2 );
uc_to_str_cpy( qry, query );
SQLRETURN retcode = SQLAllocHandle( SQL_HANDLE_DBC, m_env, &hdbc );
if( retcode != SQL_SUCCESS && retcode != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 0 );
result = 1;
}
else
{
SQLSMALLINT OutConnStrLen;
retcode = SQLDriverConnect( hdbc, NULL, m_connectString, SQL_NTS,
NULL, 0, &OutConnStrLen, SQL_DRIVER_NOPROMPT );
if( retcode != SQL_SUCCESS && retcode != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 2, hdbc );
result = 1;
}
else
{
auto dbName = new SQLWCHAR[pimpl->m_dbName.length() + 2];
memset( dbName, '\0', pimpl->m_dbName.length() + 2 );
uc_to_str_cpy( dbName, pimpl->m_dbName );
retcode = SQLSetConnectAttr( hdbc, SQL_ATTR_CURRENT_CATALOG, dbName, SQL_NTS );
delete[] dbName;
dbName = nullptr;
if( retcode != SQL_SUCCESS && retcode != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 2 );
result = 1;
retcode = SQLEndTran( SQL_HANDLE_DBC, m_hdbc, SQL_ROLLBACK );
}
else
{
retcode = SQLAllocHandle( SQL_HANDLE_STMT, hdbc, &stmt );
if( retcode != SQL_SUCCESS && retcode != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 2, hdbc );
result = 1;
}
else
{
retcode = SQLPrepare( stmt, qry, SQL_NTS );
if( retcode != SQL_SUCCESS && retcode != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 1, stmt );
result = 1;
}
else
{
SQLSMALLINT dataType[2], decimalDigit[2], nullable[2];
SQLULEN parameterSize[2];
retcode = SQLDescribeParam( stmt, 1, &dataType[0], &parameterSize[0],
&decimalDigit[0], &nullable[0] );
if( retcode != SQL_SUCCESS && retcode != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 1, stmt );
result = 1;
}
else
{
retcode = SQLBindParameter( stmt, 1, SQL_PARAM_INPUT, SQL_C_DEFAULT,
dataType[0], parameterSize[0], decimalDigit[0], tname, 0,
&cbTableName );
if( retcode != SQL_SUCCESS && retcode != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 1, stmt );
result = 1;
}
}
if( !result )
{
retcode = SQLDescribeParam( stmt, 2, &dataType[1], &parameterSize[1],
&decimalDigit[1], &nullable[1] );
if( retcode != SQL_SUCCESS && retcode != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 1, stmt );
result = 1;
}
else
{
retcode = SQLBindParameter( stmt, 2, SQL_PARAM_INPUT, SQL_C_DEFAULT,
dataType[1], parameterSize[1], decimalDigit[1], sname, 0,
&cbSchemaName );
if( retcode != SQL_SUCCESS && retcode != SQL_SUCCESS_WITH_INFO )
{
GetErrorMessage( errorMsg, 1, stmt );
result = 1;
}
}
}
if( !result )
{
retcode = SQLExecute( stmt );
if( retcode != SQL

Getting the table ID

2022-07-18 Thread Igor Korot
Hi, guys,

In database theory, each table is identified as "schema_name"."table_name".

When I tried to look at how to get the table id inside the PostgreSQL,
I saw that I needed to look at the pg_class table.

SELECT oid FROM pg_class WHERE relname = "table_name";

However that query will give a non-unique table id (see the first sentence).

So how do I get the table id based on the "schema_name.table_name"?

There is a pg_namespace table - is this where the schema should come from?
If yes - how?
Looking at that table I don't see any unique fields...
Or is this something that is hidden?

In fact I'm trying to run following query:

SELECT c.oid FROM pg_class c, pg_namespace nc WHERE nc.oid =
c.relnamespace AND c.relname = ? AND nc.nspname = ?;

from my ODBC based program, but it returns 0 rows on SQLFetch.

I know PostgreSQL does not use '?' for query parameters
but I thought that since it's ODBC, everything should work.

Nevertheless, all bindings were successful, but no rows are returned.

Is this query correct?

Thank you.
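For reference, the lookup itself is sound once the parameters arrive; in native form (the ODBC driver maps ? to $n placeholders) it can be written as below, and the cast form resolves a qualified name in one step. The schema/table names here are just examples:

```sql
-- Native-placeholder form ($1 = relname, $2 = nspname):
SELECT c.oid
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = $1
  AND n.nspname = $2;

-- Or let the parser resolve the qualified name directly:
SELECT 'some_schema.some_table'::regclass::oid;
```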




Re: Get the table creation DDL

2022-07-11 Thread Igor Korot
Hi,

On Mon, Jul 11, 2022 at 7:56 AM Rob Sargent  wrote:
>
> On 7/11/22 06:31, Mladen Gogala wrote:
> > On 7/10/22 22:52, Rob Sargent wrote:
> >> Are you alone or on a team?

No, this is my personal project.

Thank you.

> >
> > What are your pronouns?
> >
> This did make me chuckle, but no I am just asking whether or not the OP
> is currently part of a team.
>




Re: Get the table creation DDL

2022-07-10 Thread Igor Korot
Hi,

On Sun, Jul 10, 2022 at 8:14 PM Mladen Gogala  wrote:
>
> On 7/10/22 21:13, Igor Korot wrote:
>
> How should I do that?
>
> Thank you.
>
> Oh boy! I give up.

Does he mean I need to make it as a GitHub module?
Can I even do that given that you use GitLab and my project is on GitHub?

Thank you.

>
> --
> Mladen Gogala
> Database Consultant
> Tel: (347) 321-1217
> https://dbwhisperer.wordpress.com




Re: Get the table creation DDL

2022-07-10 Thread Igor Korot
Hi,

On Sun, Jul 10, 2022 at 8:09 PM Rob Sargent  wrote:
>
>
>
> > On Jul 10, 2022, at 6:16 PM, Igor Korot  wrote:
> >
> > Hi,
> >
> >> On Sun, Jul 10, 2022 at 7:02 PM Mladen Gogala  
> >> wrote:
> >>
> >> On 7/10/22 17:00, Igor Korot wrote:
> >>
> >> I understand.
> >> The Problem is that I need to put this inside the C/ODBC interface for
> >> my project.
> >>
> >> I'm sure it is not a problem when people are working out of psql or
> >> writing some scripts,
> >> but for me it is painful to go and try to recreate it.
> >>
> >> Now, I'm not sure if this extension can be freely re-used (query
> >> extracted and placed
> >> inside someone else's project).
> >>
> >> Thank you.
> >>
> >> Igor, https://github.com/MichaelDBA/pg_get_tabledef provides 
> >> "pg_get_tabledef" function which can be called from SQL and therefore used 
> >> from ODBC/C. This "extension" is nothing but PL/pgSQL source code of the
> >> function that returns DDL. That's about it. This is how it works:
> >
> > I understand.
> >
> > The question here - is this "extension" part of the standard
> > PostgreSQL install?
> > And if not - can I copy and paste that code into my program?
> >
> > Thank you.
> >
> >>
> >> mgogala@umajor Downloads]$ psql -h postgres -f 
> >> pg_get_tabledef-main/pg_get_tabledef.sql
> >> Password for user mgogala:
> >> DO
> >> CREATE FUNCTION
> >> [mgogala@umajor Downloads]$ psql -h postgres
> >> Password for user mgogala:
> >> psql (13.6, server 14.4)
> >> WARNING: psql major version 13, server major version 14.
> >> Some psql features might not work.
> >> Type "help" for help.
> >>
> >> mgogala=# select pg_get_tabledef('mgogala','emp');
> >>   pg_get_tabledef
> >> -
> >> CREATE  TABLE mgogala.emp (+
> >>   empno smallint NOT NULL, +
> >>   ename character varying(10) NULL,+
> >>   job character varying(9) NULL,   +
> >>   mgr smallint NULL,   +
> >>   hiredate timestamp without time zone NULL,   +
> >>   sal double precision NULL,   +
> >>   comm double precision NULL,  +
> >>   deptno smallint NULL,+
> >>   CONSTRAINT emp_pkey PRIMARY KEY (empno), +
> >>   CONSTRAINT fk_deptno FOREIGN KEY (deptno) REFERENCES dept(deptno)+
> >> ) TABLESPACE pg_default;   +
> >>+
> >>
> >> (1 row)
> >>
> >> So, you clone the Git repository, run the "CREATE FUNCTION" script and, 
> >> voila, you can get the DDL for the desired table. Here is the same stuff 
> >> produced by the psql utility:
> >>
> >> mgogala=# \d emp
> >>   Table "mgogala.emp"
> >>  Column  |Type | Collation | Nullable | Default
> >> --+-+---+--+-
> >> empno| smallint|   | not null |
> >> ename| character varying(10)   |   |  |
> >> job  | character varying(9)|   |  |
> >> mgr  | smallint|   |  |
> >> hiredate | timestamp without time zone |   |  |
> >> sal  | double precision|   |  |
> >> comm | double precision|   |  |
> >> deptno   | smallint|   |  |
> >> Indexes:
> >>"emp_pkey" PRIMARY KEY, btree (empno)
> >> Foreign-key constraints:
> >>"fk_deptno" FOREIGN KEY (deptno) REFERENCES dept(deptno)
> >>
> >> And here is u

Re: Get the table creation DDL

2022-07-10 Thread Igor Korot
Hi,

On Sun, Jul 10, 2022 at 7:02 PM Mladen Gogala  wrote:
>
> On 7/10/22 17:00, Igor Korot wrote:
>
> I understand.
> The Problem is that I need to put this inside the C/ODBC interface for
> my project.
>
> I'm sure it is not a problem when people are working out of psql or
> writing some scripts,
> but for me it is painful to go and try to recreate it.
>
> Now, I'm not sure if this extension can be freely re-used (query
> extracted and placed
> inside someone else's project).
>
> Thank you.
>
> Igor, https://github.com/MichaelDBA/pg_get_tabledef provides 
> "pg_get_tabledef" function which can be called from SQL and therefore used 
> from ODBC/C. This "extension" is nothing but PL/pgSQL source code of the function
> that returns DDL. That's about it. This is how it works:

I understand.

The question here - is this "extension" part of the standard
PostgreSQL install?
And if not - can I copy and paste that code into my program?

Thank you.

>
> mgogala@umajor Downloads]$ psql -h postgres -f 
> pg_get_tabledef-main/pg_get_tabledef.sql
> Password for user mgogala:
> DO
> CREATE FUNCTION
> [mgogala@umajor Downloads]$ psql -h postgres
> Password for user mgogala:
> psql (13.6, server 14.4)
> WARNING: psql major version 13, server major version 14.
>  Some psql features might not work.
> Type "help" for help.
>
> mgogala=# select pg_get_tabledef('mgogala','emp');
>pg_get_tabledef
> -
>  CREATE  TABLE mgogala.emp (+
>empno smallint NOT NULL, +
>ename character varying(10) NULL,+
>job character varying(9) NULL,   +
>mgr smallint NULL,   +
>hiredate timestamp without time zone NULL,   +
>sal double precision NULL,   +
>comm double precision NULL,  +
>deptno smallint NULL,+
>CONSTRAINT emp_pkey PRIMARY KEY (empno), +
>CONSTRAINT fk_deptno FOREIGN KEY (deptno) REFERENCES dept(deptno)+
>  ) TABLESPACE pg_default;   +
> +
>
> (1 row)
>
> So, you clone the Git repository, run the "CREATE FUNCTION" script and, 
> voila, you can get the DDL for the desired table. Here is the same stuff 
> produced by the psql utility:
>
> mgogala=# \d emp
>Table "mgogala.emp"
>   Column  |Type | Collation | Nullable | Default
> --+-+---+--+-
>  empno| smallint|   | not null |
>  ename| character varying(10)   |   |  |
>  job  | character varying(9)|   |  |
>  mgr  | smallint|   |  |
>  hiredate | timestamp without time zone |   |  |
>  sal  | double precision|   |  |
>  comm | double precision|   |  |
>  deptno   | smallint|   |  |
> Indexes:
> "emp_pkey" PRIMARY KEY, btree (empno)
> Foreign-key constraints:
> "fk_deptno" FOREIGN KEY (deptno) REFERENCES dept(deptno)
>
> And here is using the function from an ODBC connection:
>
> [mgogala@umajor Downloads]$ isql mgogala-pg
> +---+
> | Connected!|
> |   |
> | sql-statement |
> | help [tablename]  |
> | quit  |
> |   |
> +---+
> SQL> select pg_get_tabledef('mgogala','emp');
> +-+
> | pg_get_tabledef 
>   
>   
>   

Re: Get the table creation DDL

2022-07-10 Thread Igor Korot
Hi,

On Sun, Jul 10, 2022 at 2:27 PM Mladen Gogala  wrote:
>
> On 7/10/22 12:47, Igor Korot wrote:
>
> So there is no "query" per se?
> Also how do I pass the table name?
>
> Thank you.
>
> You can create one from the catalog tables. Personally, I would use 
> INFORMATION_SCHEMA to avoid pg_class and pg_attribute. However, there is an 
> extension which does that for you. Somebody else has already done the hard 
> work.

I understand.
The Problem is that I need to put this inside the C/ODBC interface for
my project.

I'm sure it is not a problem when people are working out of psql or
writing some scripts,
but for me it is painful to go and try to recreate it.

Now, I'm not sure if this extension can be freely re-used (query
extracted and placed
inside someone else's project).

Thank you.

>
> --
> Mladen Gogala
> Database Consultant
> Tel: (347) 321-1217
> https://dbwhisperer.wordpress.com




Re: Get the table creation DDL

2022-07-10 Thread Igor Korot
Hi,

On Sun, Jul 10, 2022 at 12:47 PM Ray O'Donnell  wrote:
>
> On 10/07/2022 17:47, Igor Korot wrote:
> > Hi, Michael,
> >
> > On Sun, Jul 10, 2022 at 11:13 AM Michael Nolan  wrote:
> >>
> >>
> >>
> >> On Sun, Jul 10, 2022 at 10:28 AM Igor Korot  wrote:
> >>>
> >>> Hi,
> >>> Is there a query I can execute that will give me CREATE TABLE() command
> >>> used to create a table?
> >
> > So there is no "query" per se?
> > Also how do I pass the table name?
> >
>
> If you connect to the database with psql including the -E option, then do
>
>  \d <tablename>

It means it is possible to have an actual query getting it...

Thank you.
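[For reference, a simplified approximation of the kind of catalog query
that psql's -E option reveals for \d — this is not psql's exact SQL,
and 'public.emp' is an example table name:]

```sql
-- Rough equivalent of the column listing behind "\d emp";
-- not the exact statement psql issues.
SELECT a.attname AS column_name,
       pg_catalog.format_type(a.atttypid, a.atttypmod) AS data_type,
       a.attnotnull AS not_null
FROM pg_catalog.pg_attribute a
WHERE a.attrelid = 'public.emp'::regclass
  AND a.attnum > 0
  AND NOT a.attisdropped
ORDER BY a.attnum;
```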

>
> It will show you the SQL used to generate the output... this may help.
>
> Ray.
>
>
> --
> Raymond O'Donnell // Galway // Ireland
> r...@rodonnell.ie




Re: Get the table creation DDL

2022-07-10 Thread Igor Korot
Hi,

On Sun, Jul 10, 2022 at 11:47 AM Igor Korot  wrote:
>
> Hi, Michael,
>
> On Sun, Jul 10, 2022 at 11:13 AM Michael Nolan  wrote:
> >
> >
> >
> > On Sun, Jul 10, 2022 at 10:28 AM Igor Korot  wrote:
> >>
> >> Hi,
> >> Is there a query I can execute that will give me CREATE TABLE() command
> >> used to create a table?
>
> So there is no "query" per se?
> Also how do I pass the table name?
>
> Thank you.
>
> >>
> >> Thank you.
> >
> >
> > Use pg_dump --schema-only

In addition:

Can I send it to execute with PQexec() or SQLExecDirect()?

It is not a query, but an external command, so I'm wondering...

Thank you.

> > --
> > Mike Nolan




Re: Get the table creation DDL

2022-07-10 Thread Igor Korot
Hi, Michael,

On Sun, Jul 10, 2022 at 11:13 AM Michael Nolan  wrote:
>
>
>
> On Sun, Jul 10, 2022 at 10:28 AM Igor Korot  wrote:
>>
>> Hi,
>> Is there a query I can execute that will give me CREATE TABLE() command
>> used to create a table?

So there is no "query" per se?
Also how do I pass the table name?

Thank you.

>>
>> Thank you.
>
>
> Use pg_dump --schema-only
> --
> Mike Nolan




Get the table creation DDL

2022-07-10 Thread Igor Korot
Hi,
Is there a query I can execute that will give me CREATE TABLE() command
used to create a table?

Thank you.




Re: How to write such a query?

2022-01-05 Thread Igor Korot
Hi, Amul,

On Thu, Jan 6, 2022 at 12:46 AM Amul Sul  wrote:
>
> See prepare statement : 
> https://www.postgresql.org/docs/current/sql-prepare.html

The documentation is talking about a way to do it like:

SELECT a, b, c FROM foo WHERE id = $1;

which is equivalent to the

SELECT a, b, c FROM foo WHERE id = ?;

i.e. using unnamed parameter.

Thank you.
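[A concrete sketch of the numbered-parameter form described in that
documentation, using a server-side prepared statement — 'foo' and its
columns are the example names from this thread:]

```sql
-- Server-side prepared statement: parameters are numbered ($1, $2, ...),
-- not named.
PREPARE get_foo(integer) AS
    SELECT a, b, c FROM foo WHERE id = $1;

EXECUTE get_foo(42);

DEALLOCATE get_foo;
```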

>
> On Thu, Jan 6, 2022 at 12:10 PM Igor Korot  wrote:
> >
> > Hi, ALL,
> > In SQLite you can write:
> >
> > SELECT a, b, c FROM foo WHERE id = :id;
> >
> > where ":id" is the named parameter.
> >
> > The query above is similar to
> >
> > SELECT a,b,c FROM foo WHERE id = ?;
> >
> > except that the parameter has a name.
> >
> > Is there a way to write such a SELECT statement with the
> > named parameter in PostgreSQL?
> >
> > Thank you.
> >
> >




Re: How to write such a query?

2022-01-05 Thread Igor Korot
Hi, David,

On Thu, Jan 6, 2022 at 1:00 AM David G. Johnston
 wrote:
>
> On Wednesday, January 5, 2022, Igor Korot  wrote:
>>
>>
>> Is there a way to write such a SELECT statement with the
>> named parameter in PostgreSQL?
>
>
> The server, and its prepared SQL capability, doesn’t understand named 
> parameters.  Only numbered ones.
>
> That said, there are a number of different ways to write and execute SQL 
> available to you and each of those provides different extended capabilities.  
> For instance, psql and its variables feature.

It looks like your answer contradicts Ron's...

Thank you.

>
> David J.
>




Re: How to write such a query?

2022-01-05 Thread Igor Korot
Hi, Ron,

On Thu, Jan 6, 2022 at 1:01 AM Ron  wrote:
>
> On 1/6/22 12:39 AM, Igor Korot wrote:
> > Hi, ALL,
> > In SQLite you can write:
> >
> > SELECT a, b, c FROM foo WHERE id = :id;
> >
> > where ":id" is the named parameter.
> >
> > The query above is similar to
> >
> > SELECT a,b,c FROM foo WHERE id = ?;
> >
> > except that the parameter has a name.
> >
> > Is there a way to write such a SELECT statement with the
> > named parameter in PostgreSQL?
> Absolutely.  Of course, the exact method depends on the client.  Are you
> referring to psql?
>
> If so, then here's an example:
> $ psql12 -v S=456789012 test
> psql (12.8 (Ubuntu 12.8-1.pgdg18.04+1))
> Type "help" for help.
>
> test=# select * from employee where ssn = :'S';
>  ssn|  name   |  ssn_int
> ---+-+---
>   456789012 | Fred Flintstone | 456789012
> (1 row)
>
> In this case, column ssn is of type varchar(9).

Is the syntax available since 9.0+? Or later?
I'm writing C++.

Thank you.

>
> --
> Angular momentum makes the world go 'round.
>
>




How to write such a query?

2022-01-05 Thread Igor Korot
Hi, ALL,
In SQLite you can write:

SELECT a, b, c FROM foo WHERE id = :id;

where ":id" is the named parameter.

The query above is similar to

SELECT a,b,c FROM foo WHERE id = ?;

except that the parameter has a name.

Is there a way to write such a SELECT statement with the
named parameter in PostgreSQL?

Thank you.




Re: Splitting libpq build

2021-10-03 Thread Igor Korot
Hi, Tom,

On Sun, Oct 3, 2021 at 5:27 PM Tom Lane  wrote:
>
> Igor Korot  writes:
> > Imagine you are a developer/maintainer of some software. This software
> > needs to connect to different DBMSes, perform some operations and then
> > disconnect.
> > It is very easy to say that I have SQLite as a submodule in my Git tree,
> > I have mysql-connector as a submodule, but I can't have libpq this way
> > because it is hidden deep down inside PostgreSQL sources.
>
> To be blunt, that is an absolutely horrid way to distribute software.
> You should not, not, not be redistributing other people's code,
> because that puts you on the hook to keep up with their bug fixes ---
> particularly security-critical fixes, which do happen from time to time.
> That's a loop you don't want to be in.

I am not.
I don't know if you are familiar, but Git has a notion of submodules,
which means all I as a developer do is say "My program is using moduleA,
moduleB and moduleC". And if the user configures my program not to use
system libraries but to use mine instead (maybe because mine is newer?),
I'll let them.

>
> In this case I don't even see what's the point.  If somebody wants to
> use your code with Postgres, it's just about certain that they already
> have an installed copy of libpq, from wherever they get Postgres from.
> The same argument applies to mysql or SQLite.  So you should be
> expecting to link against already-present libraries, not bring your
> own.  Do you bring your own libc, or openssl, or C compiler?

Imagine a situation where someone is using Ubuntu 16 and libpq 5.
I can provide a newer version of libpq because it has security fixes
and some additional features.
Remember - release life cycles for different distros are different.
And what about Windows?
Usually when you work with Windows you link statically.
Therefore it doesn't matter much - you provide the code that the user builds
and that's it.

>
> I can tell you for certain that if your package were to be picked up
> and redistributed by Red Hat, or Debian, or any other open-source
> vendor, the very first thing they'd do is strip it of any such copied
> code.  They don't want the management hassle of having to update
> multiple copies of libpq or mysql-connector or other stuff.

Agreed.
But my repository will not contain a copy of libpq - it will merely be
pointing to the original source code of the library.
See the difference?
And if the user wants to clone my repository - great.
He will just grab the code from the PostgreSQL repo and build it as
part of the build process.

Thank you.

>
> regards, tom lane




Re: Splitting libpq build

2021-10-03 Thread Igor Korot
Hi, Adrian,

On Sun, Oct 3, 2021 at 3:28 PM Adrian Klaver  wrote:
>
> On 10/3/21 1:20 PM, Igor Korot wrote:
> > Hi Tom,
> >
> > On Sun, Oct 3, 2021 at 3:11 PM Tom Lane  wrote:
> >>
> >> Igor Korot  writes:
> >>> I wonder who made the decision not to provide an independent build
> >>> of the client library of PostgreSQL? And what was the reason behind it?
>
> > But building the whole tree is just that - useless.
> > All I need is the libpq to connect to the remote server.
> >
> > So I don't need the server to be built - I need a client.
>
> That is available from many of the packaging systems.
>
> >
> >>
> >> Having said that, it does work on non-Windows machines to do
> >>  ./configure ...
> >>  cd src/interfaces/libpq
> >>  make -s install
> >
> > Well, asking people to download the whole source tree just to build
> > client library 1000 times and then 1000 times delete the source code...
>
> If you are deploying to the same client you only need to build once.
>
> If not then you need to be more specific about what you are doing?

Imagine you are a developer/maintainer of some software. This software
needs to connect to different DBMSes, perform some operations and then
disconnect.

It is very easy to say that I have SQLite as a submodule in my Git tree,
I have mysql-connector as a submodule, but I can't have libpq this way
because it is hidden deep down inside PostgreSQL sources.

So in order to package the software and let the users build it on their
machine, I need to explicitly document that the user needs to get the
PostgreSQL sources, build them, copy the headers and libraries as
appropriate and then build the software I provide.

Do you know of a better way of handling this?

Oh, and of course I need this on all 3 major platforms - Windows, *nix and OSX.

Thank you.

>
> >
> > I don't know - I'd rather have to install just the library.
> >
>
> > So instead of simplifying the life of maintainers in the different distros
> > you made the life easier for yourself. ;;-)
>
> Not following:
>
> apt-cache show libpq5
> Package: libpq5
> Source: postgresql-14
> Version: 14.0-1.pgdg20.04+1
> Architecture: amd64
> Maintainer: Debian PostgreSQL Maintainers
> 
> Installed-Size: 748
> Depends: libc6 (>= 2.25), libgssapi-krb5-2 (>= 1.17), libldap-2.4-2 (>=
> 2.4.7), libssl1.1 (>= 1.1.0)
> Multi-Arch: same
> Homepage: http://www.postgresql.org/
> Priority: optional
> Section: libs
> Filename: pool/main/p/postgresql-14/libpq5_14.0-1.pgdg20.04+1_amd64.deb
> Size: 169756
> SHA256: 59be24749b3ee996af762dfad6ac8e915498f0bdb431612308548bac6a5324e0
> SHA1: d002223d362a45959a7832bdddbd2ec04c4611ba
> MD5sum: 2834edac0af08b7ffb4cc5093bb236b6
> Description-en: PostgreSQL C client library
>   libpq is a C library that enables user programs to communicate with
>   the PostgreSQL database server.  The server can be on another machine
>   and accessed through TCP/IP.  This version of libpq is compatible
>   with servers from PostgreSQL 8.2 or later.
>   .
>   This package contains the run-time library, needed by packages using
>   libpq.
>   .
>   PostgreSQL is an object-relational SQL database management system.
> Description-md5: 4109d08c168c8058d09c0cc8bf8b18b1
>
> >
> > Thank you.
> >
> >>
> >>  regards, tom lane
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com




Re: Splitting libpq build

2021-10-03 Thread Igor Korot
Hi Tom,

On Sun, Oct 3, 2021 at 3:11 PM Tom Lane  wrote:
>
> Igor Korot  writes:
> > I wonder who made the decision not to provide an independent build
> > of the client library of PostgreSQL? And what was the reason behind it?
>
> It wasn't worth the maintenance trouble.  It still isn't.  On any
> remotely-modern machine, a build of the whole tree is only going
> to take a minute or two.  And libpq doesn't change so fast that
> you'd need to do this often.

But building the whole tree is just that - useless.
All I need is the libpq to connect to the remote server.

So I don't need the server to be built - I need a client.

>
> Having said that, it does work on non-Windows machines to do
> ./configure ...
> cd src/interfaces/libpq
> make -s install

Well, asking people to download the whole source tree just to build
client library 1000 times and then 1000 times delete the source code...

I don't know - I'd rather have to install just the library.

>
> The Windows case is exactly what we killed as not being worth
> the maintenance effort, so I doubt you'll find much interest
> in resurrecting that one.

So instead of simplifying the life of maintainers in the different distros
you made the life easier for yourself. ;-)

Thank you.

>
> regards, tom lane




Splitting libpq build

2021-10-03 Thread Igor Korot
Hi, ALL,
I wonder who made the decision not to provide an independent build
of the client library of PostgreSQL? And what was the reason behind it?

It's very annoying to build the whole server just to create a client library
to connect to a remote DB, and then kill the build after the install as
useless.

Could someone please recommend the best way to build it?

Thank you.




Re:

2021-09-01 Thread Igor Korot
Hi,

On Wed, Sep 1, 2021 at 9:37 AM obi reddy  wrote:
>
> Windows version:10
> Postgresql version:13.
>
> On Wed, 1 Sep 2021, 6:47 pm Igor Korot,  wrote:
>>
>> Hi,
>>
>> On Wed, Sep 1, 2021 at 7:57 AM obi reddy  wrote:
>> >
>> > 1.How to set replication in windows machine.

Did you see this:
https://blog.crunchydata.com/blog/postgres-streaming-replication-on-windows-a-quick-guide?

>> > 2. How to take incremental backup in windows machine.

And this: 
https://docs.microsoft.com/en-us/previous-versions/office/developer/exchange-server-2010/dd877022(v=exchg.140)#:~:text=Using%20an%20Incremental%20backup%20to,after%20the%20last%20Full%20backup.=A%20Full%20backup%20taken%20from,active%20location%20and%20vice%20versa.?

Thank you.

>>
>> What Windows version do you use?
>> What PostgreSQL version?
>>
>> Thank you.
>>
>> >
>> > Thank you 
>> > Obireddy.G
>> >
>> > On Wed, 1 Sep 2021, 6:21 pm Igor Korot,  wrote:
>> >>
>> >> Hi,
>> >>
>> >>
>> >>
>> >> On Wed, Sep 1, 2021, 07:32 obi reddy  wrote:
>> >>>
>> >>> Hi everyone
>> >>>
>> >>> is there any one working on PostgreSQL windows machine.
>> >>
>> >>
>> >> You mean server installed on Windows? Or client? If the client - which 
>> >> one?
>> >>
>> >> And what is your question?
>> >>
>> >> Thank you.
>> >>
>> >>>
>> >>> Thank you & Regards
>> >>> Obireddy.G




Re:

2021-09-01 Thread Igor Korot
Hi,

On Wed, Sep 1, 2021 at 7:57 AM obi reddy  wrote:
>
> 1.How to set replication in windows machine.
> 2. How to take incremental backup in windows machine.

What Windows version do you use?
What PostgreSQL version?

Thank you.

>
> Thank you 
> Obireddy.G
>
> On Wed, 1 Sep 2021, 6:21 pm Igor Korot,  wrote:
>>
>> Hi,
>>
>>
>>
>> On Wed, Sep 1, 2021, 07:32 obi reddy  wrote:
>>>
>>> Hi everyone
>>>
>>> is there any one working on PostgreSQL windows machine.
>>
>>
>> You mean server installed on Windows? Or client? If the client - which one?
>>
>> And what is your question?
>>
>> Thank you.
>>
>>>
>>> Thank you & Regards
>>> Obireddy.G




Re:

2021-09-01 Thread Igor Korot
Hi,



On Wed, Sep 1, 2021, 07:32 obi reddy  wrote:

> Hi everyone
>
> is there any one working on PostgreSQL windows machine.
>

You mean server installed on Windows? Or client? If the client - which one?

And what is your question?

Thank you.


> Thank you & Regards
> Obireddy.G
>


Re:

2021-07-02 Thread Igor Korot
Hi,

On Fri, Jul 2, 2021, 5:24 AM Atul Kumar  wrote:

> Hi,
>
> I have one query like below :
>
>
>  SELECT
> m.iMemberId "memberId",
> m.cFirstName "firstName",
> m.cLastName "lastName",
> m.cFirstName || ' ' ||
> m.cLastName "fullName",
> m.cPlayerStateId "stateId",
> DECODE(m.cBirthdateVerify, 1,
> 'Yes', 'No') "birthdateVerify",
> TO_CHAR(m.dBirthDate,
> 'MM/DD/') "dateOfBirth",
> p.cPosition "position",
> p.cJerseyNumber "number",
> DECODE(daps.status, 2, 'PT',
> 1, 'FT', NULL) "daps",
> op.cCitizenship "citizenship",
> op.cNotes "notes",
> NVL(op.cCountryOfBirth,
> op.cCountryOfBirthOther) "countryOfBirth"
> FROM sam_gameroster r
> INNER JOIN sam_guestParticipant p ON
> p.iMemberId = r.iMemberId AND p.iContainerId = r.iContainerId
> INNER JOIN sam_member m ON m.iMemberId
> = p.iMemberId
> INNER JOIN sam_container c ON
> c.iContainerId = r.iContainerId
> LEFT JOIN sam_container lc ON
> c.iContainerLinkId = lc.iContainerId
> LEFT JOIN sam_participant op ON
> op.iassignContainerId = NVL(c.iContainerLinkId, c.iContainerId) AND
> op.imemberID = m.imemberId
> LEFT JOIN (
> SELECT pp.iMemberId,
> MAX(CASE WHEN
> pp.cDpFtStatus = 'PT' THEN 2
> WHEN
> pp.cDpFtStatus = 'FT' THEN 1
> ELSE 0 END) status
> FROM sam_participant pp
> WHERE pp.igroupid =
> getGroupId() GROUP BY pp.iMemberId
> ) daps ON daps.iMemberId = r.iMemberId
> LEFT JOIN sam_playersuspension ps ON
> ps.iMemberId = m.iMemberId AND ps.iEventId = r.iEventId
> WHERE r.iEventId = '7571049' AND
> r.iContainerId = '15257396' AND ((ps.iIsAutoCreated = 1 AND
> ps.iIsTombstone = 1) OR ps.iPlayerSuspensionId IS NULL)
> ORDER BY LOWER(m.cLastName),
> LOWER(m.cFirstName)
>
>
>
>
>
> And the execution of above query is
>
>
>
>
> QUERY PLAN
>
> 
>  Sort  (cost=783789.11..783789.11 rows=1 width=377) (actual
> time=12410.619..12410.619 rows=0 loops=1)
>Sort Key: (lower((m.clastname)::text)), (lower((m.cfirstname)::text))
>Sort Method: quicksort  Memory: 25kB
>Buffers: shared hit=525065
>->  Merge Right Join  (cost=781822.64..783789.10 rows=1 width=377)
> (actual time=12410.609..12410.609 rows=0 loops=1)
>  Merge Cond: (pp.imemberid = r.imemberid)
>  Buffers: shared hit=525065
>  ->  GroupAggregate  (cost=781820.08..783074.57 rows=55308
> width=11) (actual time=12410.251..12410.251 rows=1 loops=1)
>Group Key: pp.imemberid
>Buffers: shared hit=524884
>->  Sort  (cost=781820.08..781960.36 rows=56113
> width=10) (actual time=12410.234..12410.234 rows=3 loops=1)
>  Sort Key: pp.imemberid
>  Sort Method: quicksort  Memory: 207217kB
>  Buffers: shared hit=524884
>  ->  Seq Scan on sam_participant pp
> (cost=0.00..777393.87 rows=56113 width=10) (actual
> time=0.284..10871.913 rows=2335154 loops=1)
>Filter: ((igroupid)::integer =
> (current_setting('env.groupid'::text))::integer)
>Rows Removed by Filter: 8887508
>Buffers: shared hit=524884
>  ->  Materialize  (cost=2.56..23.14 rows=1 width=184) (actual
> time=0.354..0.354 rows=0 loops=1)
>Buffers: shared hit=181
>->  Nested Loop Left Join  (cost=2.56..23.14 rows=1
> width=184) (actual time=0.352..0.353 rows=0 loops=1)
>  Join Filter: (ps.ieventid = r.ieventid)
>  Filter: (((ps.iisautocreated = '1'::numeric) AND
> (ps.iistombstone = '1'::numeric)) OR (ps.iplayersuspensionid IS NULL))
>  Buffers: shared hit=181
>  ->  Nested Loop Left Join  

Re: How can I insert the image as a blob in the table

2021-04-11 Thread Igor Korot
Hi, David,

On Sun, Apr 11, 2021 at 6:57 PM David G. Johnston
 wrote:
>
> On Sunday, April 11, 2021, Igor Korot  wrote:
>>
>> Hi, David,
>>
>>
>> On Sun, Apr 11, 2021 at 6:24 PM David G. Johnston
>>  wrote:
>>
>> > If you can decide on what client interface you want to use there should be 
>> > existing resources on the web walking through how to do this using that 
>> > client interface.
>>
>> Something like this:
>>
>> INSERT INTO images(image) VALUES( loadfile('/home/igor/my_image') );
>>
>> Looking to run it from pgAdmin in the Terminal.
>
>
> As SQL is executed on the server there is the major issue of the server being 
> unable to resolve /home on the client machine.  But maybe pgAdmin has its own 
> way to deal with this.  Have you read its documentation?

I don't know - server is running on the same OSX machine as pgAdmin will be.

Thank you.

>
> David J.
>




Re: How can I insert the image as a blob in the table

2021-04-11 Thread Igor Korot
Hi, David,


On Sun, Apr 11, 2021 at 6:24 PM David G. Johnston
 wrote:
>
> On Sun, Apr 11, 2021 at 2:04 PM Igor Korot  wrote:
>>
>> Hi, ALL,
>> I have an image on my hard drive and I'd like to store it in the BLOB
>> column of the images table.
>>
>> Is there a simple SQL to do that in PostgreSQL?
>>
>
> SQL proper has no concept of "your hard drive".  You need to define what you 
> plan to use as an in between layer that can produce SQL AND see "your hard 
> drive" (i.e., what database client).  Also, BLOB isn't an actual thing in 
> PostgreSQL proper (though generic client APIs may use that term), so I 
> presume you mean to store the contents in a "bytea" typed column.  You could 
> also convert the binary to a encoded text compatible type (e.g., base64) and 
> use a text field instead.
>
> If you can decide on what client interface you want to use there should be 
> existing resources on the web walking through how to do this using that 
> client interface.

Something like this:

INSERT INTO images(image) VALUES( loadfile('/home/igor/my_image') );

Looking to run it from pgAdmin in the Terminal.

Thank you.
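[For reference: loadfile() does not exist in PostgreSQL. But since the
server here runs on the same machine as the client, the built-in
pg_read_binary_file() can play that role — a sketch assuming a bytea
column and sufficient privileges (by default only superusers may read
arbitrary server-side paths; the path below is an example):]

```sql
-- Sketch: server-side file load into a bytea column. The file must be
-- readable by the server process; path and table are examples.
CREATE TABLE images (id serial PRIMARY KEY, image bytea);

INSERT INTO images (image)
SELECT pg_read_binary_file('/home/igor/my_image');
```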

>
> David J.
>




How can I insert the image as a blob in the table

2021-04-11 Thread Igor Korot
Hi, ALL,
I have an image on my hard drive and I'd like to store it in the BLOB
column of the images table.

Is there a simple SQL to do that in PostgreSQL?

Thank you.




Re: Error message

2021-03-15 Thread Igor Korot
Thx, David.
I need to get some sleep. ;-)



On Tue, Mar 16, 2021 at 12:35 AM David G. Johnston
 wrote:
>
> On Monday, March 15, 2021, Igor Korot  wrote:
>>
>> [quote]
>> As with PQexec, the result is normally a PGresult object whose
>> contents indicate server-side success or failure. A null result
>> indicates out-of-memory or inability to send the command at all. Use
>> PQerrorMessage to get more information about such errors.
>> [/quote]
>>
>> But this function is not the same as PQexec().
>>
>> So what does it return if it succeeds?
>
>
> Exactly what the syntax line and that first sentence says it does, a PGresult.
>
> David J.
>




Error message

2021-03-15 Thread Igor Korot
Hi, ALL,

Documentation for PQprepare states:

[quote]
As with PQexec, the result is normally a PGresult object whose
contents indicate server-side success or failure. A null result
indicates out-of-memory or inability to send the command at all. Use
PQerrorMessage to get more information about such errors.
[/quote]

But this function is not the same as PQexec().

So what does it return if it succeeds?

Thank you.




Re: copy command - something not found

2020-12-29 Thread Igor Korot
Hi
You could try to do "set +x" before running the script...

Thank you.


On Tue, Dec 29, 2020, 2:23 PM David G. Johnston 
wrote:

> On Tue, Dec 29, 2020 at 1:01 PM Susan Hurst <
> susan.hu...@brookhurstdata.com> wrote:
>
>> Actually, the -c was in an example of a copy command that I found while
>> working at my last job. I tried executing the command without the -c and
>> got the same results as before, so I suppose I really don't know what it
>> means.
>>
>> Can you enlighten me
>>
> Read the fine manual before running stuff that you don't understand.
>
> https://www.postgresql.org/docs/current/app-psql.html
>
> Then, provide a self-contained script that demonstrates the problem
> because at this point I am either unable to follow or untrusting of the
> written description of the problem.
>
> Or consider using less indirection until you get something that is working
> and then add your layers incrementally testing as you go along.
>
> David J.
>
>


Re: Different bitness

2020-11-08 Thread Igor Korot
Thx, Tom.

On Sun, Nov 8, 2020 at 2:21 PM Tom Lane  wrote:
>
> Igor Korot  writes:
> > ld: fatal: file ../postgresql/src/interfaces/libpq/libpq.so: wrong ELF
> > class: ELFCLASS32
>
> That path doesn't seem to quite agree with this one:
>
> > igor@solaris:~/dbhandlerSol/libpq/src/interfaces/libpq$ file libpq.so
>
>
> regards, tom lane




Different bitness

2020-11-08 Thread Igor Korot
Hi, ALL,

[code]

CC -m64   -std=c++11 -o
dist/Debug/OracleDeveloperStudio-Solaris-x86/liblibpostgres.so
build/Debug/OracleDeveloperStudio-Solaris-x86/database_postgres.o
-L../postgresql/src/interfaces/libpq -lpq -m64 -std=c++11 -G -KPIC
-norunpath -h liblibpostgres.so
ld: fatal: file ../postgresql/src/interfaces/libpq/libpq.so: wrong ELF
class: ELFCLASS32
*** Error code 2
[/code]

However testing the library reveals:

[code]

igor@solaris:~/dbhandlerSol/libpq/src/interfaces/libpq$ file libpq.so
libpq.so:   ELF 64-bit LSB dynamic lib AMD64 Version 1 [SSE2 SSE],
dynamically linked, not stripped
[/code]

What is wrong here?

I did clean the default 32-bit build and then rebuilt with:

[code]
CFLAGS="-m64" LDFLAGS="-m64" ./configure && cd src/interface/libpq && gmake
[/code]

What am I missing?

Thank you.




Re: Building for 64-bit platform

2020-11-08 Thread Igor Korot
Thx.

On Sun, Nov 8, 2020 at 12:15 AM Tom Lane  wrote:
>
> Igor Korot  writes:
> > I build libpq with the standard configure/dmake.
> > Now I realize I need to build it for a 64-bit platform.
>
> > My questions are:
> > Is it enough to just do
>
> Do "make distclean" at the top level, then re-configure with the
> new options and re-make.
>
> Maybe you can get away with a partial rebuild, but there is no way
> that it's worth your time to experiment.  On any machine built in
> the last two decades, you could have already finished a full rebuild
> in the time it took me to type this.  On the other hand, if you do
> a partial rebuild and it turns out to be broken, you could waste
> many hours figuring that out.
>
> regards, tom lane




Building for 64-bit platform

2020-11-07 Thread Igor Korot
Hi,
I build libpq with the standard configure/dmake.
Now I realize I need to build it for a 64-bit platform.

My questions are:
Is it enough to just do

[code]
cd libpq
CFLAGS="-m64" LDFLAGS="-m64" ./configure
dmake
[/code]

or I have to do:

[code]
dmake clean
[/code]

?

2. Is my configure line above correct?
Or PostgreSQL configure contains special flags for 64 bit builds already?

Thank you.




Re: Attaching database

2020-10-19 Thread Igor Korot
Hi, Pavel,

On Mon, Oct 19, 2020 at 12:51 PM Pavel Stehule  wrote:
>
> Hi
>
> po 19. 10. 2020 v 19:40 odesílatel Igor Korot  napsal:
>>
>> Hi,
>> IIUC, PostgreSQL does not support attaching a database to an existing
>> connection.
>> However I was told that I can use this:
>> https://www.postgresql.org/docs/9.3/postgres-fdw.html.
>>
>> Is it the same thing? Why is there no simple
>>
>> ATTACH <database> AS <name>
>>
>
> It is a different thing - postgresql_fdw does nested connect - it uses 
> client-server protocol.
>
> For postgres connect and sql engine process is one entity - and engine is 
> written without a possibility to reconnect to another database.

So if I understand correctly, postgres_fdw creates a second connection
and uses it as a kind of "virtualizing mechanism" in order to give
access to the second database.

Am I right?

Thank you.
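[For reference, a minimal postgres_fdw setup sketch — all server,
database, schema, and credential names below are examples:]

```sql
-- Sketch: expose tables of a second database through the current
-- connection. Names and credentials are examples.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER other_db
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'otherdb');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER other_db
    OPTIONS (user 'igor', password 'secret');

CREATE SCHEMA other_public;
IMPORT FOREIGN SCHEMA public
    FROM SERVER other_db INTO other_public;

-- Tables of "otherdb" are now reachable as other_public.tablename.
```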

>
> Regards
>
> Pavel
>
>
>
>>
>> ?
>>
>> Thank you.
>>
>>




Attaching database

2020-10-19 Thread Igor Korot
Hi,
IIUC, PostgreSQL does not support attaching a database to an existing
connection.
However I was told that I can use this:
https://www.postgresql.org/docs/9.3/postgres-fdw.html.

Is it the same thing? Why is there no simple

ATTACH <database> AS <name>

?

Thank you.




Re: Failed to compile libpq

2020-10-15 Thread Igor Korot
Tom,

On Thu, Oct 15, 2020 at 12:16 AM Tom Lane  wrote:
>
> Igor Korot  writes:
> > On Thu, Oct 15, 2020 at 12:01 AM Tom Lane  wrote:
> >> Kinda looks like you're using some non-GNU make.
>
> > Correct.
> > It is from Solaris Studio compiler.
>
> > What should I do?
>
> Try "gmake".  If it's not already on the system, you'll need to
> install it.

Thx.
That's solved it.

>
> regards, tom lane




Re: Failed to compile libpq

2020-10-14 Thread Igor Korot
Tom et al,

On Thu, Oct 15, 2020 at 12:01 AM Tom Lane  wrote:
>
> Igor Korot  writes:
> > Configure succeeded, but running "make" failed with
>
> > [quote]
> > make: Fatal error in reader ../../../src/Makefile.global, line 45:
> > Unexpected end of line seen
> > [/quote]
>
> Kinda looks like you're using some non-GNU make.

Correct.
It is from Solaris Studio compiler.

What should I do?

Thank you.

>
> regards, tom lane




Failed to compile libpq

2020-10-14 Thread Igor Korot
Hi, ALL,
I just tried to compile libpq on the latest Solaris x86 (12.4) with the
Solaris compiler (Solaris Studio) version 12.6.

Configure succeeded, but running "make" failed with

[quote]
make: Fatal error in reader ../../../src/Makefile.global, line 45:
Unexpected end of line seen
[/quote]

Is there a way to fix it?
Below is the complete log from the Terminal session:

[quote]
igor@solaris:~/dbhandlerSol/libpq$ ./configure --enable-debug
--enable-nls --with-openssl
checking build system type... i386-pc-solaris2.11
checking host system type... i386-pc-solaris2.11
checking which template to use... solaris
checking whether to build with 64-bit integer date/time support... yes
checking whether NLS is wanted... yes
checking for default port number... 5432
checking for block size... 8kB
checking for segment size... 1GB
checking for WAL block size... 8kB
checking for WAL segment size... 16MB
checking for gcc... no
checking for cc... cc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... no
checking whether cc accepts -g... yes
checking for cc option to accept ISO C89... none needed
checking whether the C compiler still works... yes
checking how to run the C preprocessor... cc -Xa -E
checking allow thread-safe client libraries... yes
checking whether to build with Tcl... no
checking whether to build Perl modules... no
checking whether to build Python modules... no
checking whether to build with GSSAPI support... no
checking whether to build with PAM support... no
checking whether to build with BSD Authentication support... no
checking whether to build with LDAP support... no
checking whether to build with Bonjour support... no
checking whether to build with OpenSSL support... yes
Re: Can't query system tables during transaction

2020-10-04 Thread Igor Korot
Hi,

On Sun, Oct 4, 2020 at 3:30 PM Tom Lane  wrote:
>
> Igor Korot  writes:
> > I'm trying to execute following:
>
> > SELECT 1 FROM pg_class c, pg_namespace n WHERE n.oid = c.relnamespace
> > AND c.relname = 'foo' AND n.nspname = public;
>
> I suppose you meant to put quotes around 'public'?

I suppose so as well. ;-)

>
> > I'm getting the following error:
> > ERROR:  current transaction is aborted, commands ignored until end of
> > transaction block
>
> This has nothing to do with the current command, but with failure
> of some previous command in the transaction.

Thank you.
I will try to track down the error.

>
> regards, tom lane




Can't query system tables during transaction

2020-10-04 Thread Igor Korot
Hi, ALL,
I'm trying to execute following:

SELECT 1 FROM pg_class c, pg_namespace n WHERE n.oid = c.relnamespace
AND c.relname = 'foo' AND n.nspname = public;

inside the transaction.

I'm getting the following error:

ERROR:  current transaction is aborted, commands ignored until end of
transaction block

Does this mean I can't query system tables during the transaction?
What is the problem here if it's not and how do I find out the reason?
And if it is - how to work around it?

I can probably commit it and start a new transaction, but I fear I will
have the same issue there...

Thank you.

If it matters - I'm working with C++ and libpq.
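The aborted-transaction state described above can be worked around without abandoning the whole transaction by using savepoints. A minimal sketch (the catalog query is the one from this thread; the failing statement is only there to illustrate the recovery):

```sql
BEGIN;

SAVEPOINT before_check;

-- Suppose this fails (unquoted identifier):
SELECT 1 FROM pg_class c, pg_namespace n
 WHERE n.oid = c.relnamespace
   AND c.relname = 'foo'
   AND n.nspname = public;   -- ERROR: column "public" does not exist

-- Instead of rolling back everything, return to the savepoint:
ROLLBACK TO SAVEPOINT before_check;

-- Subsequent commands, including catalog queries, work again:
SELECT 1 FROM pg_class c, pg_namespace n
 WHERE n.oid = c.relnamespace
   AND c.relname = 'foo'
   AND n.nspname = 'public';

COMMIT;
```

From libpq this is just another PQexec() of "SAVEPOINT ..." before each risky statement and "ROLLBACK TO SAVEPOINT ..." after a failure.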




Re: How to write such a query

2020-09-18 Thread Igor Korot
Ken,

On Fri, Sep 18, 2020 at 3:35 PM Ken Tanzer  wrote:

> On Fri, Sep 18, 2020 at 1:26 PM Ron  wrote:
>
>> On 9/18/20 3:18 PM, Igor Korot wrote:
>>
> Thank you for the info.
>> My problem is that I want to emulate Access behavior.
>>
>> As I said - Access does it without changing the query internally (I
>> presume).
>>
>> I want to do the same with PostgreSQL.
>>
>> I'm just trying to understand how to make it work for any query
>>
>> I can have 3,4,5 tables, query them and then update the Nth record in the
>> resulting recordset.
>>
>> Access does it, PowerBuilder does it.
>>
>> I just want to understand how.
>>
>>
>> They do it by hiding the details from you.
>>
>>
> That's true.  And Igor--people are asking you some good questions about
> why and design and such that you'd probably be well-advised to think about
> and respond to.
>
> So I'm not saying you should do this, but responding to your question
> specifically, and what the "details" are that Ron alludes to, one way to
> get the result you're asking about is to run your query adding on row
> numbers (pay attention to your ordering!), and then reference that result
> set from an update to get the primary key you want.  So I didn't test it,
> but something roughly like this:
>
> WITH tmp AS (SELECT X.field1, Y.field2,row_number() OVER () from X, Y
> WHERE X.id = Y.id ) UPDATE x SET ...  FROM tmp WHERE
>  tmp.row_number=5 AND x.field1=tmp.field1;
>

I didn't know that the row_number() function existed and that it is available
across different DBMSes.

I will test that query later.

Thank you.

Now one other little thing: could you point me to the documentation that
explains the meaning of the "window function"?
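For reference, a window function computes a value over a set of rows related to the current row without collapsing them the way GROUP BY does (the relevant docs are the "Window Functions" tutorial chapter and the OVER clause in the SELECT reference). A minimal sketch against a made-up table:

```sql
-- row_number() assigns 1, 2, 3, ... within the window defined by OVER.
-- With an empty OVER () the numbering follows whatever order the rows
-- happen to be produced in, so in practice specify ORDER BY:
SELECT id,
       name,
       row_number() OVER (ORDER BY id) AS rn
  FROM some_table;   -- hypothetical table
```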



> Cheers,
> Ken
>
>
>
> --
> AGENCY Software
> A Free Software data system
> By and for non-profits
> *http://agency-software.org/ <http://agency-software.org/>*
> *https://demo.agency-software.org/client
> <https://demo.agency-software.org/client>*
> ken.tan...@agency-software.org
> (253) 245-3801
>
> Subscribe to the mailing list
>  to
> learn more about AGENCY or
> follow the discussion.
>


Re: How to write such a query

2020-09-18 Thread Igor Korot
Hi, Ken,

On Fri, Sep 18, 2020 at 2:46 PM Ken Tanzer  wrote:

> > How to find what the primary key (or UNIQUE identifier) value is
>> > for row 5 in the recordset?
>>
>> You're missing the point: as mentioned before, there is no "row 5". To
>> update the 5th record that you've fetched, you increment a counter each
>> time
>> you fetch a row, and when you read #5, do an UPDATE X SET field1 =
>> 'blarg'
>> WHERE id = ;
>>
>>
> It seems worth mentioning for the benefit of the OP's question that there _is_
> a way to get a row number within a result set.  Understanding and making
> good use of that is an additional matter.
>
> SELECT X.field1, Y.field2*,row_number() OVER ()*  from X, Y WHERE X.id =
> Y.id -- ORDER BY ?
>
> That row number is going to depend on the order of the query, so it might
> or might not have any meaning.  But if you queried with a primary key and a
> row number, you could then tie the two together and make an update based on
> that.
>

Thank you for the info.
My problem is that I want to emulate Access behavior.

As I said - Access does it without changing the query internally (I
presume).

I want to do the same with PostgreSQL.

I'm just trying to understand how to make it work for any query

I can have 3,4,5 tables, query them and then update the Nth record in the
resulting recordset.

Access does it, PowerBuilder does it.

I just want to understand how.

Thank you.


> Cheers,
> Ken
> --
> AGENCY Software
> A Free Software data system
> By and for non-profits
> *http://agency-software.org/ *
> *https://demo.agency-software.org/client
> *
> ken.tan...@agency-software.org
> (253) 245-3801
>
> Subscribe to the mailing list
>  to
> learn more about AGENCY or
> follow the discussion.
>


Re: How to write such a query

2020-09-18 Thread Igor Korot
Hi, Adrian,

On Fri, Sep 18, 2020 at 12:58 PM Adrian Klaver
 wrote:
>
> On 9/18/20 10:46 AM, Igor Korot wrote:
> > Hi, Johnathan,
> >
> > On Fri, Sep 18, 2020 at 12:34 PM Jonathan Strong
> > mailto:jonathanrstr...@gmail.com>> wrote:
> >
> > Are you looking to arbitrarily update the field in the fifth row, or
> > can the row that needs to be updated be isolated by some add'l
> > attribute? What's the use case?
> >
> >
> > What do you mean?
> > I don't have any other attributes.
> >
> > I want to understand how to emulate MS Access behavior, where you have a
> > form
> > with the arbitrary query, then you can go to any record in that form and
> > update any field.
> >
> > Is it even possible from the "pure SQL" POV? Or Access is doing some
> > VBA/DB/4GL magic?
> >
>
> When you are updating a record in a form the framework (Access in your
> case) is using some identifier from that record to UPDATE that
> particular record in the database. From when I used Access, I seem to
> remember it would not give you INSERT/UPDATE capability on a form unless
> you had specified some unique key for the records. So you need to find
> what the key(generally a PRIMARY KEY) is and use that to do the UPDATE.

But now the question becomes

How to find what the primary key (or UNIQUE identifier) value is
for row 5 in the recordset?

Thank you.

>
> > Thank you.
> >
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com




Re: How to write such a query

2020-09-18 Thread Igor Korot
Hi, Johnathan,

On Fri, Sep 18, 2020 at 12:34 PM Jonathan Strong 
wrote:

> Are you looking to arbitrarily update the field in the fifth row, or can
> the row that needs to be updated be isolated by some add'l attribute?
> What's the use case?
>

What do you mean?
I don't have any other attributes.

I want to understand how to emulate MS Access behavior, where you have a
form
with the arbitrary query, then you can go to any record in that form and
update any field.

Is it even possible from the "pure SQL" POV? Or Access is doing some
VBA/DB/4GL magic?

Thank you.


> - Jon
>
> <https://www.linkedin.com/in/jonstrong/>
> <https://www.jonathanrstrong.com>
>
> *Jonathan Strong*
>
> CIO / CTO / Consultant
>
> *P:* 609-532-1715 *E:* jonathanrstr...@gmail.com
>
> *Quora Top Writer <https://www.quora.com/profile/Jonathan-R-Strong>*
>
>
> On Fri, Sep 18, 2020 at 1:27 PM Igor Korot  wrote:
>
>> Hi,
>> Consider following
>>
>> [code]
>> CREATE TABLE X(id INTEGER PRIMARY KEY, field1 char(50), field2 int);
>> CREATE TABLE Y(id INTEGER PRIMARY KEY, field1 char, field2 double(10, 2));
>> SELECT X.field1, Y.field2 from X, Y WHERE X.id = Y.id;
>> [/code]
>>
>> Assuming that the SELECT return 10 rows, I want to update X.field1
>> in row 5.
>>
>> How do I write a WHERE clause in the
>>
>> [code]
>> UPDATE X.field1 SET X.field1 = '' WHERE
>> [/code]
>>
>> Thank you.
>>
>>
>>


Re: How to write such a query

2020-09-18 Thread Igor Korot
Hi, Paul

On Fri, Sep 18, 2020 at 12:34 PM Paul Förster  wrote:
>
> Hi Igor,
>
> > On 18. Sep, 2020, at 19:29, Igor Korot  wrote:
> >
> > Hi,
> > Consider following
> >
> > [code]
> > CREATE TABLE X(id INTEGER PRIMARY KEY, field1 char(50), field2 int);
> > CREATE TABLE Y(id INTEGER PRIMARY KEY, field1 char, field2 double(10, 2));
> > SELECT X.field1, Y.field2 from X, Y WHERE X.id = Y.id;
> > [/code]
> >
> > Assuming that the SELECT return 10 rows, I want to update X.field1
> > in row 5.
> >
> > How do I write a WHERE clause in the
> >
> > [code]
> > UPDATE X.field1 SET X.field1 = '' WHERE
> > [/code]
> >
> > Thank you.
>
> update x set field1='' where id=5;

How do you know that row #5 will have an X.id of 5?

Thank you.

>
> Cheers,
> Paul




How to write such a query

2020-09-18 Thread Igor Korot
Hi,
Consider following

[code]
CREATE TABLE X(id INTEGER PRIMARY KEY, field1 char(50), field2 int);
CREATE TABLE Y(id INTEGER PRIMARY KEY, field1 char, field2 double(10, 2));
SELECT X.field1, Y.field2 from X, Y WHERE X.id = Y.id;
[/code]

Assuming that the SELECT return 10 rows, I want to update X.field1
in row 5.

How do I write a WHERE clause in the

[code]
UPDATE X.field1 SET X.field1 = '' WHERE
[/code]

Thank you.
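A note on the DDL as posted: double(10, 2) is MySQL syntax, not valid PostgreSQL; the usual equivalents are numeric(10, 2) or double precision. A sketch of the corrected schema plus the row_number()-based update suggested elsewhere in the thread (the ORDER BY is what makes "row 5" well defined):

```sql
CREATE TABLE X(id INTEGER PRIMARY KEY, field1 char(50), field2 int);
CREATE TABLE Y(id INTEGER PRIMARY KEY, field1 char, field2 numeric(10, 2));

-- Number the joined rows deterministically, then update the fifth one:
WITH tmp AS (
    SELECT X.id, row_number() OVER (ORDER BY X.id) AS rn
      FROM X JOIN Y ON X.id = Y.id
)
UPDATE X
   SET field1 = ''
  FROM tmp
 WHERE tmp.rn = 5
   AND X.id = tmp.id;
```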




Re: Need free PG odbc driver for Windows 10

2020-07-28 Thread Igor Korot
Hi,

On Tue, Jul 28, 2020 at 11:40 AM David Gauthier
 wrote:
>
> Hi:
>
> I need a free odbc driver for PG to be installed on Windows 10 that my user 
> community can easily install.  By "easily install" I mean no binaries, no 
> zipped file, etc... just point-click-install (with the usual confirmations 
> and accepting default destinations for the code and such).
>
> Devart used to give this away and I have a copy on my laptop.  But apparently 
> it's not free anymore.
>
> If it matters, the PG DB they will be accessing is on linux, psql (9.3.2, 
> server 11.3)

If I understand correctly, there is a free ODBC driver on the
PostgreSQL download page.

Thank you.

>
> Thanks !




Re: Connecting Powerbuilder / EAserver to PostgreSQL server 11 via ODBC or JDBC

2020-06-26 Thread Igor Korot
Hi,



On Fri, Jun 26, 2020, 8:31 AM Matthias Apitz  wrote:

>
> Hello,
>
> After the very successful migration of our Library Management System
> (which uses ESQL/C, DBI, JDBC) together with PostgreSQL 11.4, we want to
> do the
> same with another LMS written in Powerbuild, running in an EAServer and
> currently using Sybase ASE as DBS.
>
> There is an error situation already on the connection phase, the ODBC
> debug logs show that the connection establishment and authentication to
> the PostgreSQL server is fine (also the disconnect), but the EAServer
> makes an error out of this and returns to the Powerbuilder software that
> the connection is invalid,
> raising some error 999 without saying much in the log files what this
> could mean or is caused from.
>

So what operation is erroring out?
Can you try to isolate it?

Thank you.


> I know this is difficult to analyze with all this proprietary software
> stack, but my first question here is: anybody out here who could manage
> such an architecture successful working?
>
> And please do not send hints of the type, rewrite everything in Java or
> Visual Basic, as this is not an option :-)
>
> Thanks
>
> matthias
> --
> Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/
> +49-176-38902045
> Public GnuPG key: http://www.unixarea.de/key.pub
>
>
>


Re: Problems modifying view

2019-11-14 Thread Igor Korot
Hi,

On Thu, Nov 14, 2019 at 7:54 AM stan  wrote:
>
> I am trying to add columns to a view using CREATE OR REPLACE VIEW, and I am
> getting the following error:
>
> ERROR:  cannot change name of view column "descrip" to "contact_person_1"
>
> I suppose  I can drop the view, and recreate it, but that seems to indicate
> that the create or replace functionality is not functioning the way I would
> expect.
>
> Am I missing something here?

What version?
What OS server is running on?
What client are you running?

Thank you.

>
> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>
>
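The limitation here is documented under CREATE VIEW: CREATE OR REPLACE VIEW may add columns at the end of the column list, but it cannot rename or reorder existing columns. A sketch (view and column names are made up to mirror the error above):

```sql
CREATE VIEW v AS SELECT 1 AS descrip;

-- Fails: an existing output column cannot be renamed this way.
CREATE OR REPLACE VIEW v AS SELECT 1 AS contact_person_1;
-- ERROR:  cannot change name of view column "descrip" to "contact_person_1"

-- Workaround: drop and recreate the view (note this discards grants
-- and anything else that depends on it):
DROP VIEW v;
CREATE VIEW v AS SELECT 1 AS contact_person_1;
```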




Re: Recommended front ends?

2019-08-07 Thread Igor Korot
Hi,

On Wed, Aug 7, 2019 at 1:57 PM stan  wrote:
>
> I am in the process of defining an application for a very small company
> that uses Postgresql for the backend DB. This DB will eventually run on a
> hosted machine. As you imagine all of the employees have Windows machines
> for their normal work tasks. Frankly I am not very strong on Windows, so I
> am wondering what the consensus is for creating forms and reports?
>
> My first though is Libre Office as that is cross platform, and i can test
> on my development Linux machine. However, i am getting a bit of push-back
> from the user as he is having issues with installing Libre Office on his
> computer. he says it does not play well with MS Office. Also we seem to be
> having some bugs with Libre Office Base in early development.
>
> What is the community wisdom here?

On top of what has already been said - make sure that the product you are
about to start working on has clear and concise requirements.

What is expected from the software?
Does it need to go out and access the web?
Is the company split between different areas of the country/state?
Does it need to support Windows only?
Will there be a need for a handheld device or bar code scanner?
Will printing be involved?

The list is preliminary and can go on and on. It's just the first things that
come to mind.

Get the requirements from the company management, make sure you understand them,
check your knowledge of the different tools available and their support for
the requested features, and start working.

Good luck!!

Thank you.

>
> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>
>




Re: Recommended front ends?

2019-08-07 Thread Igor Korot
Hi,

On Wed, Aug 7, 2019 at 1:57 PM stan  wrote:
>
> I am in the process of defining an application for a very small company
> that uses Postgresql for the backend DB. This DB will eventually run on a
> hosted machine. As you imagine all of the employees have Windows machines
> for their normal work tasks. Frankly I am not very strong on Windows, so I
> am wondering what the consensus is for creating forms and reports?
>
> My first though is Libre Office as that is cross platform, and i can test
> on my development Linux machine. However, i am getting a bit of push-back
> from the user as he is having issues with installing Libre Office on his
> computer. he says it does not play well with MS Office. Also we seem to be
> having some bugs with Libre Office Base in early development.
>
> What is the community wisdom here?

What language/tools are you most comfortable with?

Thank you.

>
> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>
>




Re: Query plan: SELECT vs INSERT from same select

2019-07-23 Thread Igor Korot
Hi,

On Tue, Jul 23, 2019 at 3:29 PM Alexander Voytsekhovskyy
 wrote:
>
> I have quite complicated query:
>
> SELECT axis_x1, axis_y1, SUM(delivery_price)  as v_1 FROM (
> SELECT to_char(delivery_data.delivery_date, '-MM') as axis_x1, 
> clients.id_client as axis_y1, delivery_data.amount * production_price.price * 
> groups.discount as delivery_price
>
> FROM delivery_data
> JOIN client_tt ON (client_tt.id_client_tt = delivery_data.id_client_tt)
> JOIN clients ON (client_tt.id_client = clients.id_client)
> JOIN production ON (production.id = delivery_data.id_product)
> JOIN groups ON (groups.id = delivery_data.delivery_group_id AND 
> client_tt.id_group = groups.id AND groups.id = clients.id_group)
> LEFT JOIN production_price on (delivery_data.id_product = 
> production_price.id_production AND groups.price_list_id = 
> production_price.price_list_id AND delivery_data.delivery_date BETWEEN 
> production_price.date_from AND production_price.date_to)
>
> WHERE delivery_data.delivery_date between '2019-03-01' AND '2019-06-30'
> AND delivery_data.delivery_group_id IN (...short list of values...)
> AND delivery_data.id_product IN ()) AS tmpsource
>
> WHERE TRUE
> GROUP BY GROUPING SETS ((axis_x1, axis_y1), (axis_x1), (axis_y1), ())
>
> It runs well, took 1s and returns 4000 rows.
>
> You can see explain analyze verbose here:
> https://explain.depesz.com/s/AEWj
>
> The problem is, when i wrap it to
>
> A)
> INSERT INTO norepl_1542_result (axis_x1, axis_y1, v_1)
> SELECT  SAME QUERY
>
> OR even
>
> B)
> WITH rows AS (
> ... SAME SELECT QUERY ...
> )
> INSERT INTO norepl_1542_result (axis_x1, axis_y1, v_1)
> SELECT * FROM rows
>
> The query time dramatically jumps to 500+ seconds.
>
> You can see explain analyze verbose here
> https://explain.depesz.com/s/AEWj
>
> As you can see, 100% of the time goes to the same SELECT query; there are no
> issues with the INSERT part.
>
> I have played with it a lot and it reproduces every time.
>
> So my question is, why does wrapping the SELECT query in an INSERT ... SELECT
> dramatically change the query plan and make it 500x slower?

Which version of PostgreSQL do you have?
Which OS is it running on?

Thank you.
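One known cause of this kind of slowdown (assuming PostgreSQL 9.6 or later): a bare SELECT is eligible for a parallel plan, while any query that writes data - including INSERT ... SELECT and a CTE feeding an INSERT - is not, so the planner falls back to a serial plan. Comparing the two plans directly shows whether that is what happens here; the queries are elided as in the original post:

```sql
-- Plan for the bare SELECT (may show a "Gather" node with parallel workers):
EXPLAIN (ANALYZE, VERBOSE, BUFFERS)
SELECT ... ;          -- the original aggregate query

-- Plan for the wrapped version (parallelism is disabled for writing queries):
EXPLAIN (ANALYZE, VERBOSE, BUFFERS)
INSERT INTO norepl_1542_result (axis_x1, axis_y1, v_1)
SELECT ... ;          -- the same query
```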




Re: Inserting into the blob

2019-06-10 Thread Igor Korot
Hi, David,

On Mon, Jun 10, 2019 at 10:45 PM David G. Johnston
 wrote:
>
> On Mon, Jun 10, 2019 at 8:32 PM Rob Sargent  wrote:
>>
>> > On Jun 10, 2019, at 6:40 AM, Igor Korot  wrote:
>> >
>> > Hi, Adrian,
>> >
>> >> On Mon, Jun 10, 2019 at 7:03 PM Adrian Klaver  
>> >> wrote:
>> >>
>> >>> On 6/10/19 9:30 AM, Igor Korot wrote:
>> >>>
>> >>> According to 
>> >>> https://stackoverflow.com/questions/16048649/postgresql-9-x-pg-read-binary-file-inserting-files-into-bytea,
>> >>> the file needs to be on the server inside PGDATA directory.
>> >>> It is not a problem in general, but just curious - is there a more
>> >>> generic solution (to get the file from the client)?
>> >>
>> >> This would depend on what it is you are trying to accomplish:
>> >>
>> >> 1) Are you really wanting to insert a file at a time at the psql command
>> >> line?
>> >
>> > Yes.
>> Gnarly. I suppose you could open the pdf in emacs and tell emacs to  NOT 
>> render it. Cut the entire buffer and paste it, properly quoted, into your 
>> psql command line. But \lo stuff seems much more likely to work.
>>
>
> This is what I've used for text:
>
> \set file_content `cat './file.txt'`
> SELECT :'file_content';
>
> For smallish files I'd probably just stick with the same theme but encode the 
> binary data as Base64 and then decode it into the bytea field.
>
> For not-so-small files probably better off storing the content elsewhere and 
> inserting location data into the database.
>
> I have not yet had the desire to incorporate the large object API into my 
> designs.

You are lucky you didn't work with the face databases...
Or photographs...

Thank you.

>
> David J.
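The Base64 approach suggested above would look roughly like this in psql. The table is hypothetical; `base64` is the common Unix utility (the GNU `-w0` flag avoids line wrapping, though PostgreSQL's decode() tolerates newlines in base64 input), and decode() turns the text back into bytea on the server:

```sql
CREATE TABLE images(name text PRIMARY KEY, data bytea);

-- Read and encode the file on the *client* side, then decode it server-side:
\set img_b64 `base64 -w0 < image.png`
INSERT INTO images(name, data)
VALUES ('image.png', decode(:'img_b64', 'base64'));
```

This keeps everything on the client, so nothing has to be placed under PGDATA as pg_read_binary_file() would require.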




Re: Inserting into the blob

2019-06-10 Thread Igor Korot
Hi, Adrian,

On Mon, Jun 10, 2019 at 7:03 PM Adrian Klaver  wrote:
>
> On 6/10/19 9:30 AM, Igor Korot wrote:
>
> >
> > According to 
> > https://stackoverflow.com/questions/16048649/postgresql-9-x-pg-read-binary-file-inserting-files-into-bytea,
> > the file needs to be on the server inside PGDATA directory.
> > It is not a problem in general, but just curious - is there a more
> > generic solution (to get the file from the client)?
>
> This would depend on what it is you are trying to accomplish:
>
> 1) Are you really wanting to insert a file at a time at the psql command
> line?

Yes.

>
> 2) If not then is there a program you are using/writing that will insert
> the data?

More like the program will query for the data...

Thank you.

>
> >
> > Thank you.
> >
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com




Re: Inserting into the blob

2019-06-10 Thread Igor Korot
Hi, Adrian,

On Mon, Jun 10, 2019 at 8:38 AM Adrian Klaver  wrote:
>
> On 6/9/19 10:06 AM, Igor Korot wrote:
> > Hi, Adrian,
> >
> > On Sun, Jun 9, 2019 at 11:14 PM Adrian Klaver  
> > wrote:
> >>
> >> On 6/9/19 8:28 AM, Igor Korot wrote:
> >>> Hi, ALL,
> >>> How do I insert the png file (content, not a name) into the BLOB field
> >>> in the table in a PostgreSQL DB?
> >>
> >> Are you talking about this data type?:
> >>
> >> https://www.postgresql.org/docs/11/datatype-binary.html
> >>
> >> Or this?:
> >>
> >> https://www.postgresql.org/docs/11/largeobjects.html
> >
> > Which one is best to hold images of unknown size?
>
> Probably bytea as it is easier to work with overall.

OK.

>
> >
> >>
> >> What client/language are you using?
> >
> > psql if possible.
>
> Take a look at(NOTE: need to be superuser):
>
> https://www.postgresql.org/docs/current/functions-admin.html
>
> pg_read_binary_file is similar to pg_read_file, except that the result
> is a bytea value; accordingly, no encoding checks are performed. In
> combination with the convert_from function, this function can be used to
> read a file in a specified encoding:

According to 
https://stackoverflow.com/questions/16048649/postgresql-9-x-pg-read-binary-file-inserting-files-into-bytea,
the file needs to be on the server inside PGDATA directory.
It is not a problem in general, but just curious - is there a more
generic solution (to get the file from the client)?

Thank you.

>
> >
> > Thank you.
> >
> >>
> >>
> >>>
> >>> Thank you.
> >>>
> >>>
> >>>
> >>
> >>
> >> --
> >> Adrian Klaver
> >> adrian.kla...@aklaver.com
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com




Re: Inserting into the blob

2019-06-09 Thread Igor Korot
Hi, Adrian,

On Sun, Jun 9, 2019 at 11:14 PM Adrian Klaver  wrote:
>
> On 6/9/19 8:28 AM, Igor Korot wrote:
> > Hi, ALL,
> > How do I insert the png file (content, not a name) into the BLOB field
> > > in the table in a PostgreSQL DB?
>
> Are you talking about this data type?:
>
> https://www.postgresql.org/docs/11/datatype-binary.html
>
> Or this?:
>
> https://www.postgresql.org/docs/11/largeobjects.html

Which one is best to hold images of unknown size?

>
> What client/language are you using?

psql if possible.

Thank you.

>
>
> >
> > Thank you.
> >
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
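For the large-object alternative mentioned above, psql has client-side meta-commands that read and write the file on the client, so nothing needs to live on the server. A sketch (file names and the OID are hypothetical):

```sql
-- Imports the client-side file into pg_largeobject and prints the new OID:
\lo_import 'image.png'

-- Store the printed OID in a table column of type oid to reference it later:
CREATE TABLE pics(name text PRIMARY KEY, img oid);
INSERT INTO pics VALUES ('image.png', 123456);  -- 123456 = OID printed above

-- Export it back out on the client side:
\lo_export 123456 'copy_of_image.png'
```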




Inserting into the blob

2019-06-09 Thread Igor Korot
Hi, ALL,
How do I insert the png file (content, not a name) into the BLOB field
in the table in a PostgreSQL DB?

Thank you.




Re: Back Slash \ issue

2019-05-03 Thread Igor Korot
Hi,

On Fri, May 3, 2019 at 11:20 AM Michael Nolan  wrote:
>
>
>
> I'm still not clear what the backslash is for, it is ONLY to separate first 
> and last name?  Can you change it to some other character?
>
> Others have suggested you're in a Windows environment, that might limit your 
> options.   How big is the file, is it possible to copy it to another server 
> to manipulate it?

Why not use something like Perl to process the data and then feed the
processed file to PostgreSQL?

Thank you.

> --
> Mike Nolan




Compilation fails with Solaris Studio 12.6

2019-04-14 Thread Igor Korot
Hi,
[quote]
igor@solaris:~/dbhandlerSol/postgresql$ ./configure
checking build system type... i386-pc-solaris2.11
checking host system type... i386-pc-solaris2.11
checking which template to use... solaris
checking whether to build with 64-bit integer date/time support... yes
checking whether NLS is wanted... no
checking for default port number... 5432
checking for block size... 8kB
checking for segment size... 1GB
checking for WAL block size... 8kB
checking for WAL segment size... 16MB
checking for gcc... no
checking for cc... cc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... no
checking whether cc accepts -g... yes
checking for cc option to accept ISO C89... none needed
checking whether the C compiler still works... yes
checking how to run the C preprocessor... cc -Xa -E
checking allow thread-safe client libraries... yes
checking whether to build with Tcl... no
checking whether to build Perl modules... no
checking whether to build Python modules... no
checking whether to build with GSSAPI support... no
checking whether to build with PAM support... no
checking whether to build with BSD Authentication support... no
checking whether to build with LDAP support... no
checking whether to build with Bonjour support... no
checking whether to build with OpenSSL support... no
checking whether to build with SELinux support... no
checking whether to build with systemd support... no
checking for grep that handles long lines and -e... /usr/bin/ggrep
checking for egrep... /usr/bin/ggrep -E
checking for non-GNU ld... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... no
checking for ranlib... ranlib
checking for strip... strip
checking whether it is possible to strip libraries... no
checking for ar... ar
checking for a BSD-compatible install... /usr/bin/ginstall -c
checking for tar... /usr/bin/tar
checking whether ln -s works... yes
checking for gawk... gawk
checking for a thread-safe mkdir -p... /usr/bin/gmkdir -p
checking for bison... no
configure: WARNING:
*** Without Bison you will not be able to build PostgreSQL from Git nor
*** change any of the parser definition files.  You can obtain Bison from
*** a GNU mirror site.  (If you are using the official distribution of
*** PostgreSQL then you do not need to worry about this, because the Bison
*** output is pre-generated.)
checking for flex... no
configure: WARNING:
*** Without Flex you will not be able to build PostgreSQL from Git nor
*** change any of the scanner definition files.  You can obtain Flex from
*** a GNU mirror site.  (If you are using the official distribution of
*** PostgreSQL then you do not need to worry about this because the Flex
*** output is pre-generated.)
checking for perl... /usr/bin/perl
configure: using perl 5.22.1
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking if compiler needs certain flags to reject unknown flags...
-Werror -Wunknown-warning-option -Wunused-command-line-argument
checking whether pthreads work with -mt -lpthread... yes
checking for joinable pthread attribute... PTHREAD_CREATE_JOINABLE
checking if more special flags are required for pthreads... -D_REENTRANT
checking for PTHREAD_PRIO_INHERIT... yes
checking pthread.h usability... yes
checking pthread.h presence... yes
checking for pthread.h... yes
checking for strerror_r... yes
checking for getpwuid_r... yes
checking for gethostbyname_r... yes
checking whether strerror_r returns int... yes
checking for main in -lm... yes
checking for library containing setproctitle... no
checking for library containing dlopen... none required
checking for library containing socket... none required
checking for library containing shl_load... no
checking for library containing getopt_long... none required
checking for library containing crypt... none required
checking for library containing shm_open... none required
checking for library containing shm_unlink... none required
checking for library containing fdatasync... none required
checking for library containing sched_yield... none required
checking for library containing gethostbyname_r... none required
checking for library containing shmget... none required
checking for library containing readline... -lreadline -ltermcap
checking for inflate in -lz... yes
checking atomic.h usability... yes
checking atomic.h presence... yes
checking for atomic.h... yes
checking crypt.h usability... yes
checking crypt.h presence... yes
checking for crypt.h... yes
checking dld.h usability... no
checking dld.h presence... no
checking for dld.h... no

Re: Installing PostgreSQL on Oracle Solaris

2019-03-23 Thread Igor Korot
Hi,

On Sat, Mar 23, 2019 at 10:42 AM Tom Lane  wrote:
>
> Igor Korot  writes:
> >>>> igor@solaris:/usr$ sudo bunzip2 < postgresql-9.6.1-S11.i386-64.tar.bz2
> >>>> | tar xpf -
>
> > On Sat, Mar 23, 2019 at 9:25 AM Ron  wrote:
> >> Off hand I would say the user you are running as does not have the 
> >> permissions to unpack the tarball in the location you have selected.
>
> > You mean even running as "sudo"?
>
> In the above, the "sudo" raises the permissions of the bunzip2 program
> (quite uselessly, AFAICS), while doing nothing for the tar program in
> the other pipe step.  Put the "sudo" in the other pipe step.
>
> (This is assuming that unpacking straight into /usr is actually what
> you want to do.  I share the doubts of the other responders about
> that being a wise procedure.)

Well, as I said I don't need a server - I just need libpq.
So I didn't really care where to install it. ;-)

Thank you.

>
> regards, tom lane



Re: Installing PostgreSQL on Oracle Solaris

2019-03-23 Thread Igor Korot
Hi,

On Sat, Mar 23, 2019 at 10:36 AM Hans Schou  wrote:
>
>
>
> On Sat, Mar 23, 2019 at 3:48 PM Igor Korot  wrote:
>>
>>
>> You mean even running as "sudo"?
>
>
> igor@solaris:/usr$ sudo bunzip2 < postgresql-9.6.1-S11.i386-64.tar.bz2
> | tar xpf -
>
> No, you are not running tar as sudo.
>
> I would at least suggest:
>   cd /usr/local/src
>   sudo tar --bzip2 xf /usr/postgresql-9.6.1-S11.i386-64.tar.bz2
>
> a bit dependent on which tar you have.
>
> otherwise:
>   cd /usr/local/src
>   sudo tar xf <( bzcat /usr/postgresql-9.6.1-S11.i386-64.tar.bz2 )

Thank you.

>



Re: Installing PostgreSQL on Oracle Solaris

2019-03-23 Thread Igor Korot
Hi,

On Sat, Mar 23, 2019 at 10:17 AM Ron  wrote:
>
> On 3/22/19 9:50 PM, Igor Korot wrote:
>
> Hi
>
> On Sat, Mar 23, 2019 at 9:25 AM Ron  wrote:
>
> On 3/23/19 8:58 AM, Adrian Klaver wrote:
>
> On 3/22/19 11:59 PM, Igor Korot wrote:
>
> Hi, ALL,
> I tried to follow an instructions at
> https://www.postgresql.org/ftp/binary/v9.6.1/solaris/solaris11/i386/
> in the README but I received following:
>
>
> Off hand I would say the user you are running as does not have the 
> permissions to unpack the tarball in the location you have selected.
>
>
> [code]
> igor@solaris:/usr$ sudo bunzip2 < postgresql-9.6.1-S11.i386-64.tar.bz2
>
> You mean even running as "sudo"?
>
>
> Is that a binary or source tarball?

Binary.

>
> If binary, is it designed specifically for Solaris?

Yes, the link I posted says so.

>
> Does the tarball's README tell you to untar it under /usr?

Not specifically. But it mentioned /usr.

Thank you.

>
> Thank you.
>
> | tar xpf -
> tar: postgres: Permission denied
> tar: postgres: Permission denied
> tar: cannot open postgres: No such file or directory
> tar: postgres: Permission denied
> tar: cannot open postgres/9.6-pgdg: No such file or directory
>
>
> Yeah. Unpacking it directly into /usr doesn't seem a particularly wise idea.
>
> tar: postgres: Permission denied
> tar: cannot open postgres/9.6-pgdg/doc: No such file or directory
>
> [snip]
>
> 4/server: No such file or directory
> tar: postgres: Permission denied
> tar: cannot open postgres/9.6-pgdg/include/64/server: No such file or 
> directory
> tar: postgres: Permission denied
> tar: cannot open postgres/9.6-pgdg/include/64/server/utils: No such
> file or directory
> tar: postgres: Permission denied
> tar: cannot open postgres/9.6-pgdg/include/64/server/utils: No such
> file or directory
> tar: postgres: Permission denied
> [/code]
>
> And many others like those.
>
> Are the instructions wrong?
>
> I'm trying 9.6.1 because I'm using it on Windows/OSX for libpq.
>
> Thank you.
>
>
>
>
>
> --
> Angular momentum makes the world go 'round.
>
>
> --
> Angular momentum makes the world go 'round.



Re: Installing PostgreSQL on Oracle Solaris

2019-03-23 Thread Igor Korot
Hi,

On Sat, Mar 23, 2019 at 9:55 AM Michael Loftis  wrote:
>
>
>
> On Sat, Mar 23, 2019 at 08:48 Igor Korot  wrote:
>>
>>
>> > [code]
>> > igor@solaris:/usr$ sudo bunzip2 < postgresql-9.6.1-S11.i386-64.tar.bz2
>>
>> You mean even running as "sudo"?
>>
>> Thank you.
>
>
>
> You ran the bunzip2 as sudo. No reason to do that. I didn't look at the 
> instructions you did but you'd need the tar command under sudo.

Running the command below, it looks like everything went well.

igor@solaris:/usr$ sudo bunzip2 < postgresql-9.6.1-S11.i386-64.tar.bz2
| sudo tar xpf -

Thank you.

>
>
>>
>>
>> > | tar xpf -
>> > tar: postgres: Permission denied
>> > tar: postgres: Permission denied
>> > tar: cannot open postgres: No such file or directory
>> > tar: postgres: Permission denied
>> > tar: cannot open postgres/9.6-pgdg: No such file or directory
>> >
>> >
>> > Yeah. Unpacking it directly into /usr doesn't seem a particularly wise 
>> > idea.
>> >
>> > tar: postgres: Permission denied
>> > tar: cannot open postgres/9.6-pgdg/doc: No such file or directory
>> >
>> > [snip]
>> >
>> > 4/server: No such file or directory
>> > tar: postgres: Permission denied
>> > tar: cannot open postgres/9.6-pgdg/include/64/server: No such file or 
>> > directory
>> > tar: postgres: Permission denied
>> > tar: cannot open postgres/9.6-pgdg/include/64/server/utils: No such
>> > file or directory
>> > tar: postgres: Permission denied
>> > tar: cannot open postgres/9.6-pgdg/include/64/server/utils: No such
>> > file or directory
>> > tar: postgres: Permission denied
>> > [/code]
>> >
>> > And many others like those.
>> >
>> > Are the instructions wrong?
>> >
>> > I'm trying 9.6.1 because I'm using it on Windows/OSX for libpq.
>> >
>> > Thank you.
>> >
>> >
>> >
>> >
>> >
>> > --
>> > Angular momentum makes the world go 'round.
>>
> --
>
> "Genius might be described as a supreme capacity for getting its possessors
> into trouble of all kinds."
> -- Samuel Butler



Re: Installing PostgreSQL on Oracle Solaris

2019-03-23 Thread Igor Korot
Hi

On Sat, Mar 23, 2019 at 9:25 AM Ron  wrote:
>
> On 3/23/19 8:58 AM, Adrian Klaver wrote:
>
> On 3/22/19 11:59 PM, Igor Korot wrote:
>
> Hi, ALL,
> I tried to follow the instructions at
> https://www.postgresql.org/ftp/binary/v9.6.1/solaris/solaris11/i386/
> in the README, but I received the following:
>
>
> Off hand I would say the user you are running as does not have the 
> permissions to unpack the tarball in the location you have selected.
>
>
> [code]
> igor@solaris:/usr$ sudo bunzip2 < postgresql-9.6.1-S11.i386-64.tar.bz2

You mean even running as "sudo"?

Thank you.

> | tar xpf -
> tar: postgres: Permission denied
> tar: postgres: Permission denied
> tar: cannot open postgres: No such file or directory
> tar: postgres: Permission denied
> tar: cannot open postgres/9.6-pgdg: No such file or directory
>
>
> Yeah. Unpacking it directly into /usr doesn't seem a particularly wise idea.
>
> tar: postgres: Permission denied
> tar: cannot open postgres/9.6-pgdg/doc: No such file or directory
>
> [snip]
>
> 4/server: No such file or directory
> tar: postgres: Permission denied
> tar: cannot open postgres/9.6-pgdg/include/64/server: No such file or 
> directory
> tar: postgres: Permission denied
> tar: cannot open postgres/9.6-pgdg/include/64/server/utils: No such
> file or directory
> tar: postgres: Permission denied
> tar: cannot open postgres/9.6-pgdg/include/64/server/utils: No such
> file or directory
> tar: postgres: Permission denied
> [/code]
>
> And many others like those.
>
> Are the instructions wrong?
>
> I'm trying 9.6.1 because I'm using it on Windows/OSX for libpq.
>
> Thank you.
>
>
>
>
>
> --
> Angular momentum makes the world go 'round.



Installing PostgreSQL on Oracle Solaris

2019-03-22 Thread Igor Korot
Hi, ALL,
I tried to follow the instructions at
https://www.postgresql.org/ftp/binary/v9.6.1/solaris/solaris11/i386/
in the README, but I received the following:

[code]
igor@solaris:/usr$ sudo bunzip2 < postgresql-9.6.1-S11.i386-64.tar.bz2
| tar xpf -
tar: postgres: Permission denied
tar: postgres: Permission denied
tar: cannot open postgres: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/doc: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/doc/extension: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/doc/extension: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/doc/extension: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/doc/extension: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/doc/extension: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include/64: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include/64: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include/64: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include/64: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include/64: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include/64: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include/64: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include/64: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include/64/server: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include/64/server: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include/64/server/fe_utils: No such
file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include/64/server/fe_utils: No such
file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include/64/server/fe_utils: No such
file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include/64/server/fe_utils: No such
file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include/64/server/fe_utils: No such
file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include/64/server/fe_utils: No such
file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include/64/server: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include/64/server: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include/64/server: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include/64/server: No such file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include/64/server/utils: No such
file or directory
tar: postgres: Permission denied
tar: cannot open postgres/9.6-pgdg/include/64/server/utils: No such
file or directory
tar: postgres: Permission denied
[/code]

And many others like those.

Are the instructions wrong?

I'm trying 9.6.1 because I'm using it on Windows/OSX for libpq.

Thank you.



Re: Comparing dates in DDL

2019-01-04 Thread Igor Korot
Hi, Rich,

On Fri, Jan 4, 2019 at 10:53 AM Rich Shepard  wrote:
>
>I have a projects table that includes these two columns:
>
> start_date date DEFAULT CURRENT_DATE,
> end_date date
> CONSTRAINT valid_start_date
> CHECK (start_date <= end_date),
>
>1. Do I need a DEFAULT value for the end_date?
>2. If so, please suggest a value for it.

start_date.day() + 1?

Thank you.

>
> TIA,
>
> Rich
>



Re: libpq bug?

2018-12-31 Thread Igor Korot
Hi,
So, does anybody have an idea?

I do have following code at the beginning of the cpp file:

#ifdef WIN32
#include 
#pragma execution_character_set("utf-8")
#endif

but even running it on OSX, I am getting this same error.

Thank you.

On Fri, Dec 28, 2018 at 11:30 PM Igor Korot  wrote:
>
> Hi,
>
> On Sat, Dec 29, 2018 at 1:37 AM patrick keshishian  wrote:
> 
>  On Fri, Dec 28, 2018 at 5:40 PM Igor Korot  wrote:
> 
>  Hi, Patrick,
> 
>  Here is my new code:
> 
>  int PostgresDatabase::GetTableOwner (const std::wstring
> &schemaName,
>  const std::wstring &tableName, std::wstring &owner,
>  std::vector<std::wstring> &errorMsg)
>  {
>  int result = 0;
>  std::wstring query = L"SELECT u.usename FROM pg_class c, pg_user
>  u, pg_namespace n WHERE n.oid = c.relnamespace AND u.usesysid =
>  c.relowner AND n.nspname = $1 AND relname = $2";
>  char *values[2];
>  values[0] = NULL, values[1] = NULL;
>  int charlength1 = schemaName.length() * sizeof( wchar_t ),
>  charlength2 = tableName.length() * sizeof( wchar_t );
>  values[0] = new char[schemaName.length() * sizeof( wchar_t ) + 
> 1];
>  values[1] = new char[tableName.length() * sizeof( wchar_t ) + 1];
>  memset( values[0], '\0', schemaName.length()  * sizeof(
> wchar_t ) + 1 );
>  memset( values[1], '\0', tableName.length() * sizeof(
> wchar_t ) + 1 );
>  strcpy( values[0], m_pimpl->m_myconv.to_bytes(
> schemaName.c_str()
>  ).c_str() );
>  strcpy( values[1], m_pimpl->m_myconv.to_bytes(
> tableName.c_str()
>  ).c_str() );
>  int len1 = (int) schemaName.length() * sizeof( wchar_t );
>  int len2 = (int) tableName.length() * sizeof( wchar_t );
>  int length[2] = { len1, len2 };
>  int formats[2] = { 1, 1 };
>  PGresult *res = PQexecParams( m_db,
> m_pimpl->m_myconv.to_bytes(
>  query.c_str() ).c_str(), 2, NULL, values, length, formats, 1 );
>  ExecStatusType status = PQresultStatus( res );
>  if( status != PGRES_COMMAND_OK && status !=
> PGRES_TUPLES_OK )
>  {
>  result = 1;
>  std::wstring err = m_pimpl->m_myconv.from_bytes(
>  PQerrorMessage( m_db ) );
>  errorMsg.push_back( L"Error executing query: " + err );
>  PQclear( res );
>  }
>  else
>  {
>  owner = m_pimpl->m_myconv.from_bytes( PQgetvalue(
> res, 0, 0 ) );
>  }
>  return result;
>  }
> 
>  The charlength2 variable contains the value of 8 and I'm still 
> getting
>  the same error.
> 
> 
>  I was hoping someone more versed in C++ would jump in to answer
> your question. I haven't used C++ in at least a decade.
>  You need to convert the wchar_t  data that wstring stores into
> UTF-8. Personally, I would use iconv (common enough).
>
> But that is very weird.
> When I check what is stored in the values[1] array, I see the same
> byte sequence as what I got from the database
> information_schema.tables..
> Maybe I should just upgrade the libpq and try the latest release?
>
> 
>  I assume the PostgresDatabase class is your own (?) I would add a
> helper function to do the conversion. Here is a very rough template
> for you to adapt if you think it helps you.
>
> Yes, PostgresDatabase is my class.
> I will look at that later today, but it would definitely be
> interesting to get someone with the current C++
> experience (especially with C++11), because I believe that I am doing
> a conversion into UTF8.
>
> The m_convert variable is declared as:
>
> std::wstring_convert<std::codecvt_utf8<wchar_t>> m_myconv;
>
> and so I think it is converting to the UTF8.
>
> Thank you.
>
> >
> >
> > #include <err.h>
> >
> > #include <iconv.h>
> >
> > #include <locale.h>
> >
> > #include <stdio.h>
> >
> > #include <stdlib.h>
> >
> >
> > #include <string>
> >
> >
> > class PGDB {
> >
> > public:
> >
> > // your stuff ...
> >
> > iconv_t ic;
> >
> >
> > PGDB(void) {
> >
> > setlocale(LC_CTYPE, "");
> >
> > ic = iconv_open("UTF-8", "wchar_t");
> >
> > if ((iconv_t)-1 == ic)
> >
> > errx(1, "iconv_open");
> >
> > }
> >
> > ~PGDB() {
> >
> > iconv_close(ic);
> >
> > }
> >
> > // caller should free()
> >
> > char *wchar2utf8(std::wstring const &ws) {
> >
> > char *in, *buf, *out;
> >
> > size_t  bufsz, inbytes, outbytes;
> >
> >
&g

Re: libpq bug?

2018-12-29 Thread Igor Korot
Hi,

On Sat, Dec 29, 2018 at 1:37 AM patrick keshishian  wrote:

 On Fri, Dec 28, 2018 at 5:40 PM Igor Korot  wrote:

 Hi, Patrick,

 Here is my new code:

 int PostgresDatabase::GetTableOwner (const std::wstring
&schemaName,
 const std::wstring &tableName, std::wstring &owner,
 std::vector<std::wstring> &errorMsg)
 {
 int result = 0;
 std::wstring query = L"SELECT u.usename FROM pg_class c, pg_user
 u, pg_namespace n WHERE n.oid = c.relnamespace AND u.usesysid =
 c.relowner AND n.nspname = $1 AND relname = $2";
 char *values[2];
 values[0] = NULL, values[1] = NULL;
 int charlength1 = schemaName.length() * sizeof( wchar_t ),
 charlength2 = tableName.length() * sizeof( wchar_t );
 values[0] = new char[schemaName.length() * sizeof( wchar_t ) + 1];
 values[1] = new char[tableName.length() * sizeof( wchar_t ) + 1];
 memset( values[0], '\0', schemaName.length()  * sizeof(
wchar_t ) + 1 );
 memset( values[1], '\0', tableName.length() * sizeof(
wchar_t ) + 1 );
 strcpy( values[0], m_pimpl->m_myconv.to_bytes(
schemaName.c_str()
 ).c_str() );
 strcpy( values[1], m_pimpl->m_myconv.to_bytes( tableName.c_str()
 ).c_str() );
 int len1 = (int) schemaName.length() * sizeof( wchar_t );
 int len2 = (int) tableName.length() * sizeof( wchar_t );
 int length[2] = { len1, len2 };
 int formats[2] = { 1, 1 };
 PGresult *res = PQexecParams( m_db, m_pimpl->m_myconv.to_bytes(
 query.c_str() ).c_str(), 2, NULL, values, length, formats, 1 );
 ExecStatusType status = PQresultStatus( res );
 if( status != PGRES_COMMAND_OK && status !=
PGRES_TUPLES_OK )
 {
 result = 1;
 std::wstring err = m_pimpl->m_myconv.from_bytes(
 PQerrorMessage( m_db ) );
 errorMsg.push_back( L"Error executing query: " + err );
 PQclear( res );
 }
 else
 {
 owner = m_pimpl->m_myconv.from_bytes( PQgetvalue(
res, 0, 0 ) );
 }
 return result;
 }

 The charlength2 variable contains the value of 8 and I'm still getting
 the same error.


 I was hoping someone more versed in C++ would jump in to answer
your question. I haven't used C++ in at least a decade.
 You need to convert the wchar_t  data that wstring stores into
UTF-8. Personally, I would use iconv (common enough).

But that is very weird.
When I check what is stored in the values[1] array, I see the same
byte sequence as what I got from the database
information_schema.tables..
Maybe I should just upgrade the libpq and try the latest release?


 I assume the PostgresDatabase class is your own (?) I would add a
helper function to do the conversion. Here is a very rough template
for you to adapt if you think it helps you.

Yes, PostgresDatabase is my class.
I will look at that later today, but it would definitely be
interesting to get someone with the current C++
experience (especially with C++11), because I believe that I am doing
a conversion into UTF8.

The m_convert variable is declared as:

std::wstring_convert<std::codecvt_utf8<wchar_t>> m_myconv;

and so I think it is converting to the UTF8.

Thank you.

>
>
> #include <err.h>
>
> #include <iconv.h>
>
> #include <locale.h>
>
> #include <stdio.h>
>
> #include <stdlib.h>
>
>
> #include <string>
>
>
> class PGDB {
>
> public:
>
> // your stuff ...
>
> iconv_t ic;
>
>
> PGDB(void) {
>
> setlocale(LC_CTYPE, "");
>
> ic = iconv_open("UTF-8", "wchar_t");
>
> if ((iconv_t)-1 == ic)
>
> errx(1, "iconv_open");
>
> }
>
> ~PGDB() {
>
> iconv_close(ic);
>
> }
>
> // caller should free()
>
> char *wchar2utf8(std::wstring const &ws) {
>
> char *in, *buf, *out;
>
> size_t  bufsz, inbytes, outbytes;
>
>
> in = (char *)ws.data();
>
> inbytes = ws.length() * sizeof(wchar_t);
>
> outbytes = inbytes;
>
> bufsz = inbytes + 1; // XXX check for overflow
>
>
> buf = (char *)calloc(bufsz, 1);
>
> if (NULL == buf)
>
> err(1, NULL); // or throw something
>
>
> out = buf;
>
> if ((size_t)-1 == iconv(ic, &in, &inbytes, &out, &outbytes))
>
> errx(1, "iconv"); // or throw ...
>
>
> // TODO ensure inbytes is 0 (meaning all input consumed)
>
> return buf;
>
> }
>
> };
>
>
> // demo using above PGDB class/code
>
> int main(int argc, char *argv[])
>
> {
>
> char*str;
>
> size_t  i, n;
>
> std::wstringtab;
>
> PGDBpg;
>
>
> tab = L

Re: libpq bug?

2018-12-28 Thread Igor Korot
Hi, Patrick,

Here is my new code:

int PostgresDatabase::GetTableOwner (const std::wstring &schemaName,
const std::wstring &tableName, std::wstring &owner,
std::vector<std::wstring> &errorMsg)
{
int result = 0;
std::wstring query = L"SELECT u.usename FROM pg_class c, pg_user
u, pg_namespace n WHERE n.oid = c.relnamespace AND u.usesysid =
c.relowner AND n.nspname = $1 AND relname = $2";
char *values[2];
values[0] = NULL, values[1] = NULL;
int charlength1 = schemaName.length() * sizeof( wchar_t ),
charlength2 = tableName.length() * sizeof( wchar_t );
values[0] = new char[schemaName.length() * sizeof( wchar_t ) + 1];
values[1] = new char[tableName.length() * sizeof( wchar_t ) + 1];
memset( values[0], '\0', schemaName.length()  * sizeof( wchar_t ) + 1 );
memset( values[1], '\0', tableName.length() * sizeof( wchar_t ) + 1 );
strcpy( values[0], m_pimpl->m_myconv.to_bytes( schemaName.c_str()
).c_str() );
strcpy( values[1], m_pimpl->m_myconv.to_bytes( tableName.c_str()
).c_str() );
int len1 = (int) schemaName.length() * sizeof( wchar_t );
int len2 = (int) tableName.length() * sizeof( wchar_t );
int length[2] = { len1, len2 };
int formats[2] = { 1, 1 };
PGresult *res = PQexecParams( m_db, m_pimpl->m_myconv.to_bytes(
query.c_str() ).c_str(), 2, NULL, values, length, formats, 1 );
ExecStatusType status = PQresultStatus( res );
if( status != PGRES_COMMAND_OK && status != PGRES_TUPLES_OK )
{
result = 1;
std::wstring err = m_pimpl->m_myconv.from_bytes(
PQerrorMessage( m_db ) );
errorMsg.push_back( L"Error executing query: " + err );
PQclear( res );
}
else
{
owner = m_pimpl->m_myconv.from_bytes( PQgetvalue( res, 0, 0 ) );
}
return result;
}

The charlength2 variable contains the value of 8 and I'm still getting
the same error.

Any idea?

Thank you.

On Fri, Dec 28, 2018 at 5:40 PM patrick keshishian  wrote:
>
> On Fri, Dec 28, 2018 at 3:07 PM Igor Korot  wrote:
>>
>> Hi,
>>
>> On Fri, Dec 28, 2018 at 4:51 PM patrick keshishian  
>> wrote:
>> >
>> >
>> > On Fri, Dec 28, 2018 at 2:00 PM Igor Korot  wrote:
>> >>
>> >> Hi, ALL,
>> >> Following code:
>> >>
>> >> int PostgresDatabase::GetTableOwner (const std::wstring &schemaName,
>> >> const std::wstring &tableName, std::wstring &owner,
>> >> std::vector<std::wstring> &errorMsg)
>> >> {
>> >>int result = 0;
>> >> std::wstring query = L"SELECT u.usename FROM pg_class c, pg_user
>> >> u, pg_namespace n WHERE n.oid = c.relnamespace AND u.usesysid =
>> >> c.relowner AND n.nspname = $1 AND relname = $2";
>> >> char *values[2];
>> >> values[0] = NULL, values[1] = NULL;
>> >> values[0] = new char[schemaName.length() + 1];
>> >> values[1] = new char[tableName.length() + 1];
>> >> memset( values[0], '\0', schemaName.length() + 1 );
>> >> memset( values[1], '\0', tableName.length() + 1 );
>> >> strcpy( values[0], m_pimpl->m_myconv.to_bytes( schemaName.c_str()
>> >> ).c_str() );
>> >> strcpy( values[1], m_pimpl->m_myconv.to_bytes( tableName.c_str()
>> >> ).c_str() );
>> >> int len1 = (int) schemaName.length();
>> >> int len2 = (int) tableName.length();
>> >> int length[2] = { len1, len2 };
>> >> int formats[2] = { 1, 1 };
>> >> PGresult *res = PQexecParams( m_db, m_pimpl->m_myconv.to_bytes(
>> >> query.c_str() ).c_str(), 2, NULL, values, length, formats, 1 );
>> >> ExecStatusType status = PQresultStatus( res );
>> >> if( status != PGRES_COMMAND_OK && status != PGRES_TUPLES_OK )
>> >> {
>> >> result = 1;
>> >> std::wstring err = m_pimpl->m_myconv.from_bytes(
>> >> PQerrorMessage( m_db ) );
>> >> errorMsg.push_back( L"Error executing query: " + err );
>> >> PQclear( res );
>> >> }
>> >> else
>> >> {
>> >> owner = m_pimpl->m_myconv.from_bytes( PQgetvalue( res, 0, 0 ) );
>> >> }
>> >> return result;
>> >> }
>> >>
>> >> when ran with the call of
>> >>
>> >> GetTableOwner( "public", "abcß", owner, errorMsg );
>> >>
>> >> returns:
>> >>
>> >> ERROR: Invalid byte sequence for encoding UTF8.
>> >>
>> >> Does this mean I found the bug in the library?
>> >
>> >
>> > The bug is in your C++ code. "abcß" a

Re: libpq bug?

2018-12-28 Thread Igor Korot
Hi,

On Fri, Dec 28, 2018 at 5:07 PM Igor Korot  wrote:
>
> Hi,
>
> On Fri, Dec 28, 2018 at 4:51 PM patrick keshishian  wrote:
> >
> >
> > On Fri, Dec 28, 2018 at 2:00 PM Igor Korot  wrote:
> >>
> >> Hi, ALL,
> >> Following code:
> >>
> >> int PostgresDatabase::GetTableOwner (const std::wstring &schemaName,
> >> const std::wstring &tableName, std::wstring &owner,
> >> std::vector<std::wstring> &errorMsg)
> >> {
> >>int result = 0;
> >> std::wstring query = L"SELECT u.usename FROM pg_class c, pg_user
> >> u, pg_namespace n WHERE n.oid = c.relnamespace AND u.usesysid =
> >> c.relowner AND n.nspname = $1 AND relname = $2";
> >> char *values[2];
> >> values[0] = NULL, values[1] = NULL;
> >> values[0] = new char[schemaName.length() + 1];
> >> values[1] = new char[tableName.length() + 1];
> >> memset( values[0], '\0', schemaName.length() + 1 );
> >> memset( values[1], '\0', tableName.length() + 1 );
> >> strcpy( values[0], m_pimpl->m_myconv.to_bytes( schemaName.c_str()
> >> ).c_str() );
> >> strcpy( values[1], m_pimpl->m_myconv.to_bytes( tableName.c_str()
> >> ).c_str() );
> >> int len1 = (int) schemaName.length();
> >> int len2 = (int) tableName.length();
> >> int length[2] = { len1, len2 };
> >> int formats[2] = { 1, 1 };
> >> PGresult *res = PQexecParams( m_db, m_pimpl->m_myconv.to_bytes(
> >> query.c_str() ).c_str(), 2, NULL, values, length, formats, 1 );
> >> ExecStatusType status = PQresultStatus( res );
> >> if( status != PGRES_COMMAND_OK && status != PGRES_TUPLES_OK )
> >> {
> >> result = 1;
> >> std::wstring err = m_pimpl->m_myconv.from_bytes(
> >> PQerrorMessage( m_db ) );
> >> errorMsg.push_back( L"Error executing query: " + err );
> >> PQclear( res );
> >> }
> >> else
> >> {
> >> owner = m_pimpl->m_myconv.from_bytes( PQgetvalue( res, 0, 0 ) );
> >> }
> >> return result;
> >> }
> >>
> >> when ran with the call of
> >>
> >> GetTableOwner( "public", "abcß", owner, errorMsg );
> >>
> >> returns:
> >>
> >> ERROR: Invalid byte sequence for encoding UTF8.
> >>
> >> Does this mean I found the bug in the library?
> >
> >
> > The bug is in your C++ code. For "abcß", tableName.length() (wstring) returns 
> > 4 (as in four characters), not the number of bytes required to represent the 
> > intended string: 61 62 63 c3 9f,
> > since the last character is 2 bytes long. Therefore, your call to 
> > PQexecParams() specifies a shorter length and hence an invalid UTF-8 
> > sequence.
> >
> > Furthermore, your value[] array allocation is in error, since 
> > wstring::length returns the number of characters, not the number of bytes, 
> > so you will end up with buffer overflows.
>
> So I should use 
> https://stackoverflow.com/questions/9278723/how-can-i-get-the-byte-size-of-stdwstring
>  in both places?

And this solution will work cross-platform, right?

Thank you.

>
> Thank you.
>
> >
> > HTH,
> > --patrick
> >
> >
> >>
> >> Any idea what I can do?
> >>
> >> Thank you.
> >>



Re: libpq bug?

2018-12-28 Thread Igor Korot
Hi,

On Fri, Dec 28, 2018 at 4:51 PM patrick keshishian  wrote:
>
>
> On Fri, Dec 28, 2018 at 2:00 PM Igor Korot  wrote:
>>
>> Hi, ALL,
>> Following code:
>>
>> int PostgresDatabase::GetTableOwner (const std::wstring &schemaName,
>> const std::wstring &tableName, std::wstring &owner,
>> std::vector<std::wstring> &errorMsg)
>> {
>>int result = 0;
>> std::wstring query = L"SELECT u.usename FROM pg_class c, pg_user
>> u, pg_namespace n WHERE n.oid = c.relnamespace AND u.usesysid =
>> c.relowner AND n.nspname = $1 AND relname = $2";
>> char *values[2];
>> values[0] = NULL, values[1] = NULL;
>> values[0] = new char[schemaName.length() + 1];
>> values[1] = new char[tableName.length() + 1];
>> memset( values[0], '\0', schemaName.length() + 1 );
>> memset( values[1], '\0', tableName.length() + 1 );
>> strcpy( values[0], m_pimpl->m_myconv.to_bytes( schemaName.c_str()
>> ).c_str() );
>> strcpy( values[1], m_pimpl->m_myconv.to_bytes( tableName.c_str()
>> ).c_str() );
>> int len1 = (int) schemaName.length();
>> int len2 = (int) tableName.length();
>> int length[2] = { len1, len2 };
>> int formats[2] = { 1, 1 };
>> PGresult *res = PQexecParams( m_db, m_pimpl->m_myconv.to_bytes(
>> query.c_str() ).c_str(), 2, NULL, values, length, formats, 1 );
>> ExecStatusType status = PQresultStatus( res );
>> if( status != PGRES_COMMAND_OK && status != PGRES_TUPLES_OK )
>> {
>> result = 1;
>> std::wstring err = m_pimpl->m_myconv.from_bytes(
>> PQerrorMessage( m_db ) );
>> errorMsg.push_back( L"Error executing query: " + err );
>> PQclear( res );
>> }
>> else
>> {
>> owner = m_pimpl->m_myconv.from_bytes( PQgetvalue( res, 0, 0 ) );
>> }
>> return result;
>> }
>>
>> when ran with the call of
>>
>> GetTableOwner( "public", "abcß", owner, errorMsg );
>>
>> returns:
>>
>> ERROR: Invalid byte sequence for encoding UTF8.
>>
>> Does this mean I found the bug in the library?
>
>
> The bug is in your C++ code. For "abcß", tableName.length() (wstring) returns 4 
> (as in four characters), not the number of bytes required to represent the 
> intended string: 61 62 63 c3 9f,
> since the last character is 2 bytes long. Therefore, your call to 
> PQexecParams() specifies a shorter length and hence an invalid UTF-8 sequence.
>
> Furthermore, your value[] array allocation is in error, since wstring::length 
> returns the number of characters, not the number of bytes, so you will end up 
> with buffer overflows.

So I should use
https://stackoverflow.com/questions/9278723/how-can-i-get-the-byte-size-of-stdwstring
in both places?

Thank you.

>
> HTH,
> --patrick
>
>
>>
>> Any idea what I can do?
>>
>> Thank you.
>>



libpq bug?

2018-12-28 Thread Igor Korot
Hi, ALL,
Following code:

int PostgresDatabase::GetTableOwner (const std::wstring &schemaName,
const std::wstring &tableName, std::wstring &owner,
std::vector<std::wstring> &errorMsg)
{
   int result = 0;
std::wstring query = L"SELECT u.usename FROM pg_class c, pg_user
u, pg_namespace n WHERE n.oid = c.relnamespace AND u.usesysid =
c.relowner AND n.nspname = $1 AND relname = $2";
char *values[2];
values[0] = NULL, values[1] = NULL;
values[0] = new char[schemaName.length() + 1];
values[1] = new char[tableName.length() + 1];
memset( values[0], '\0', schemaName.length() + 1 );
memset( values[1], '\0', tableName.length() + 1 );
strcpy( values[0], m_pimpl->m_myconv.to_bytes( schemaName.c_str()
).c_str() );
strcpy( values[1], m_pimpl->m_myconv.to_bytes( tableName.c_str()
).c_str() );
int len1 = (int) schemaName.length();
int len2 = (int) tableName.length();
int length[2] = { len1, len2 };
int formats[2] = { 1, 1 };
PGresult *res = PQexecParams( m_db, m_pimpl->m_myconv.to_bytes(
query.c_str() ).c_str(), 2, NULL, values, length, formats, 1 );
ExecStatusType status = PQresultStatus( res );
if( status != PGRES_COMMAND_OK && status != PGRES_TUPLES_OK )
{
result = 1;
std::wstring err = m_pimpl->m_myconv.from_bytes(
PQerrorMessage( m_db ) );
errorMsg.push_back( L"Error executing query: " + err );
PQclear( res );
}
else
{
owner = m_pimpl->m_myconv.from_bytes( PQgetvalue( res, 0, 0 ) );
}
return result;
}

when ran with the call of

GetTableOwner( "public", "abcß", owner, errorMsg );

returns:

ERROR: Invalid byte sequence for encoding UTF8.

Does this mean I found the bug in the library?

Any idea what I can do?

Thank you.



Error on Windows

2018-12-26 Thread Igor Korot
Hi, ALL,
The following code compiles and executes but returns an error:

[quote]
Invalid byte sequence for encoding UTF8
[/quote]

[code]
char *values[2];
values[0] = NULL, values[1] = NULL;
values[0] = new char[schemaName.length() + 1];
values[1] = new char[tableName.length() + 1];
memset( values[0], '\0', schemaName.length() + 1 );
memset( values[1], '\0', tableName.length() + 1 );
strcpy( values[0], m_pimpl->m_myconv.to_bytes( schemaName.c_str()
).c_str() );
strcpy( values[1], m_pimpl->m_myconv.to_bytes( tableName.c_str()
).c_str() );
int len1 = (int) schemaName.length();
int len2 = (int) tableName.length();
int length[2] = { len1, len2 };
int formats[2] = { 1, 1 };
PGresult *res = PQexecParams( m_db, m_pimpl->m_myconv.to_bytes(
query.c_str() ).c_str(), 2, NULL, values, length, formats, 1 );
ExecStatusType status = PQresultStatus( res );
if( status != PGRES_COMMAND_OK && status != PGRES_TUPLES_OK )
{
result = 1;
std::wstring err = m_pimpl->m_myconv.from_bytes(
PQerrorMessage( m_db ) );
errorMsg.push_back( L"Error executing query: " + err );
PQclear( res );
}
[/code]

in the "err" variable.

Looking under the MSVC debugger, I see the following sequence for tableName:

97 98 99 223

What can I do to eliminate the error?

Thank you.



Re: NOTIFY/LISTEN with ODBC interface

2018-12-24 Thread Igor Korot
Hi, Clemens, et al,

On Tue, Dec 25, 2018 at 12:23 AM Clemens Ladisch  wrote:
>
> Igor Korot wrote:
> >Is there a way to write [LISTEN/NOTIFY] with the ODBC interface?
>
> At the moment, no.
>
> ODBC is a standard interface, so there are no functions for this defined.
>
> It might be possible to add PG-specific extensions to the ODBC driver,
> but this has not (yet) been done.

Thank you for the reply.
I guess I will just have to implement a minimal schema-watching algorithm.


>
>
> Regards,
> Clemens



Re: NOTIFY/LISTEN with ODBC interface

2018-12-24 Thread Igor Korot
Hi,

On Mon, Dec 24, 2018 at 5:25 AM Danny Severns  wrote:
>
> Is your question "can it be done" or "is someone willing to do it" or both?

Pretty much both.

The sample on that page retrieves the connection socket.
And I know that there is no ODBC function that can retrieve this.

So can this be done with ODBC interface, and if yes - how?

Thank you.

>
> -----Original Message-----
> From: Igor Korot 
> Sent: Sunday, December 23, 2018 9:37 PM
> To: PostgreSQL ODBC list ; pgsql-general 
> 
> Subject: NOTIFY/LISTEN with ODBC interface
>
> Hi,
> Is there a way to write the code found at
> https://www.postgresql.org/docs/9.1/libpq-example.html
> Example 2 with the ODBC interface?
>
> Thank you.
>



NOTIFY/LISTEN with ODBC interface

2018-12-23 Thread Igor Korot
Hi,
Is there a way to write the code found at
https://www.postgresql.org/docs/9.1/libpq-example.html
Example 2 with the ODBC interface?

Thank you.
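For comparison, the libpq version of that example reduces to LISTEN plus a select() loop on the connection's socket, and the socket is exactly what ODBC does not expose: there is no standard ODBC counterpart to PQsocket() or PQnotifies(). A rough sketch, assuming a reachable server in the connection string and an illustrative channel name schema_changed:

```cpp
#include <cstdio>
#include <sys/select.h>
#include <libpq-fe.h>

int main()
{
    // "dbname=postgres" is a placeholder connection string.
    PGconn *conn = PQconnectdb("dbname=postgres");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }
    PQclear(PQexec(conn, "LISTEN schema_changed"));

    int sock = PQsocket(conn);
    for (;;) {
        fd_set input;
        FD_ZERO(&input);
        FD_SET(sock, &input);
        // Block until the server sends something on the socket.
        if (select(sock + 1, &input, NULL, NULL, NULL) < 0)
            break;
        PQconsumeInput(conn);           // pull pending data off the socket
        PGnotify *n;
        while ((n = PQnotifies(conn)) != NULL) {
            printf("NOTIFY \"%s\" from backend pid %d\n",
                   n->relname, n->be_pid);
            PQfreemem(n);
        }
    }
    PQfinish(conn);
    return 0;
}
```

An ODBC application can open a second, libpq-only connection just for this loop, which matches the approach Igor Neyman describes in the "How to watch for schema changes" thread.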



IF NOT EXIST

2018-12-17 Thread Igor Korot
Hi, ALL,
I have a following statement:

IF NOT EXIST( SELECT 1 SELECT 1 FROM pg_proc AS proc, pg_namespace AS
ns ) CREATE FUNCTION();

Unfortunately trying to execute it thru the ODBC interface with:

ret = SQLExecDirect( m_hstmt, query, SQL_NTS );

gives syntax error near IF.

What is the proper way to do that?

Thank you.



Re: How to watch for schema changes

2018-12-04 Thread Igor Korot
Igor,

On Tue, Dec 4, 2018 at 8:20 AM Igor Neyman  wrote:
>
>
> -Original Message-
> From: Igor Korot [mailto:ikoro...@gmail.com]
> Sent: Monday, December 03, 2018 8:29 AM
> To: Adrian Klaver 
> Cc: pgsql-general 
> Subject: Re: How to watch for schema changes
>
> ...
>
> And executing LISTEN will also work for an ODBC connection, right?
>
> Thank you.
>
> ___
>
> It's been years since we dealt with this problem, so the details are fuzzy.
>
> All applications in the package we develop connect to PG using ODBC, but one 
> app that's using LISTEN is connecting to PG through the native interface 
> libpq.dll; ODBC didn't work for that purpose, at least at the time.

I will try it and report back.
Out of curiosity - when was the last time you tried?

Thank you.

>
> Regards,
> Igor Neyman



Re: How to watch for schema changes

2018-12-03 Thread Igor Korot
Hi, Adrian,

On Mon, Dec 3, 2018 at 5:15 PM Adrian Klaver  wrote:
>
> On 12/3/18 3:00 PM, Igor Korot wrote:
> > Hi, Adrian,
> >
> > On Mon, Dec 3, 2018 at 4:10 PM Adrian Klaver  
> > wrote:
> >>
> >> On 12/3/18 1:53 PM, Igor Korot wrote:
> >>> Hi, Adrian,
> >>
> >>>> Why? Just create the trigger once in a script. Event triggers are an
> >>>> attribute of the database and stay with it until they are dropped. If
> >>>> you want to turn them on and off use the ALTER EVENT TRIGGER
> >>>> ENABLE/DISABLE. If you insist on recreating them on each connection then:
> >>>>
> >>>> https://www.postgresql.org/docs/10/sql-dropeventtrigger.html
> >>>> DROP EVENT TRIGGER [ IF EXISTS ] name [ CASCADE | RESTRICT ]
> >>>> CREATE EVENT TRIGGER ...
> >>>
> >>> I was hoping to create software that would be just "install-and-use".
> >>> No additional scripts or database changes required.
> >>
> >> After I hit reply my subconscious kicked in and pointed out something:)
> >>
> >> If there are no database changes why do you need to track schema changes?
> >
> > That was a bad word selection. ;-)
> >
> > What I meant to say was "no schema changes/server changes that come
> > independently of the program install". Or something to that extent.
>
> Which circles back around to the same question:
>
> If there are to be no schema changes after the install why track them on
> subsequent connections?
>
> Or maybe an explanation of what you are trying to achieve would make
> things clearer?

Ok, it probably will make things clearer.
So I install my program, perform some queries, and exit.
At the same time, if the user creates or deletes a table, the program should
pick up those changes and act accordingly.

I was hoping to do the watching initialization dynamically, but it looks like
it is more pain than it is worth, so I can probably execute the script during
the installation instead.

So basically I will create the function in C, compile it, and then, during the
program installation, create a trigger with that function.
Then in my program I will execute "LISTEN" command and act accordingly.

Am I right?

And executing LISTEN will also work for an ODBC connection, right?

Thank you.
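As a side note, the trigger function does not have to be written in C; a plpgsql event trigger that forwards DDL command tags over NOTIFY would cover the scenario described above. A sketch (the names notify_ddl, watch_ddl, and ddl_events are illustrative):

```sql
-- Fires at the end of every DDL command and pushes the command tag
-- (e.g. 'CREATE TABLE', 'DROP TABLE') onto a notification channel.
CREATE OR REPLACE FUNCTION notify_ddl() RETURNS event_trigger AS $$
BEGIN
    PERFORM pg_notify('ddl_events', tg_tag);
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER watch_ddl
    ON ddl_command_end
    EXECUTE PROCEDURE notify_ddl();

-- The watching client then subscribes with:
LISTEN ddl_events;
```

Whether the resulting notifications can actually be consumed through ODBC is the open question of this thread; libpq clients receive them via PQnotifies().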

>
>
>
> >
> > Sorry, ESL person here.
> >
> > Thank you.
> >
> >>
> >>>
> >>> But I will probably create it on every connection and delete it on
> >>> disconnect (see above).
> >>>
> >>
> >> --
> >> Adrian Klaver
> >> adrian.kla...@aklaver.com
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com



Re: How to watch for schema changes

2018-12-03 Thread Igor Korot
Hi, Adrian,

On Mon, Dec 3, 2018 at 4:10 PM Adrian Klaver  wrote:
>
> On 12/3/18 1:53 PM, Igor Korot wrote:
> > Hi, Adrian,
>
> >> Why? Just create the trigger once in a script. Event triggers are an
> >> attribute of the database and stay with it until they are dropped. If
> >> you want to turn them on and off use the ALTER EVENT TRIGGER
> >> ENABLE/DISABLE. If you insist on recreating them on each connection then:
> >>
> >> https://www.postgresql.org/docs/10/sql-dropeventtrigger.html
> >> DROP EVENT TRIGGER [ IF EXISTS ] name [ CASCADE | RESTRICT ]
> >> CREATE EVENT TRIGGER ...
> >
> > I was hoping to create software that would be just "install-and-use".
> > No additional scripts or database changes required.
>
> After I hit reply my subconscious kicked in and pointed out something:)
>
> If there are no database changes why do you need to track schema changes?

That was a bad word selection. ;-)

What I meant to say was "no schema changes/server changes that come
independently of the program install". Or something to that extent.

Sorry, ESL person here.

Thank you.

>
> >
> > But I will probably create it on every connection and delete it on
> > disconnect (see above).
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com



Re: How to watch for schema changes

2018-12-03 Thread Igor Korot
Hi, Adrian,

On Mon, Dec 3, 2018 at 11:59 AM Adrian Klaver  wrote:
>
> On 12/3/18 9:53 AM, Igor Korot wrote:
> >> So event triggers are associated with
> >> events(www.postgresql.org/docs/9.6/event-trigger-matrix.html) in a
> >> particular database. A rough description is that they are triggers on
> >> changes to the system catalogs.
> >> You could, I guess, create and drop them for each connection. To me it
> >> would seem more efficient to create them once. You then have the choice
> >> of leaving them running or using the ALTER command I posted previously
> >> to ENABLE/DISABLE them.
> >
> > OK, so how do I do it?
> > There is no "CREATE EVENT TRIGGER IF NOT EXISTS".
> >
> > As I say - I'm trying to make it work from both ODBC and libpq
> > connection (one at a time)
>
> Why? Just create the trigger once in a script. Event triggers are an
> attribute of the database and stay with it until they are dropped. If
> you want to turn then on and off use the ALTER  EVENT TRIGGER
> ENABLE/DISABLE. If you insist on recreating them on each connection then:
>
> https://www.postgresql.org/docs/10/sql-dropeventtrigger.html
> DROP EVENT TRIGGER [ IF EXISTS ] name [ CASCADE | RESTRICT ]
> CREATE EVENT TRIGGER ...

I was hoping to create software that would be just "install-and-use".
No additional scripts or database changes required.

But I will probably create it on every connection and delete it on
disconnect (see above).

>
> >
> >>
> >>>
> >>> Now the other question is - how to make it work?
> > >>> I can write the function code, compile it, and place it in the folder
> > >>> where my executable is (or should it be in some PostgreSQL dir?), and
> > >>> then executing the above code
> > >>> will be enough. Is this correct?
> >
> > Also - what about this?
> >
> > And why did you say that saving the SQL command is not a good idea?
> >
> > What's better?
>
> See above.
>
> >
> > Thank you.
> >
> >>>
> >>>>
> >>>>>
> >>>>> And then in my C++ code I will continuously query this temporary table.
> >>>>
> >>>> Why a temporary table? They are session specific and if the session
> >>>> aborts prematurely you will lose the information.
> >>>
> >>> Is there a better alternative?
> >>>
> >>> Thank you.
> >>>
> >>>>
> >>>>>
> >>>>> Or there is a better alternative on the algorithm?
> >>>>>
> >>>>> Thank you.
> >>>>>
> >>>>>>>>
> >>>>>>> Thank you.
> >>>>>>>
> >>>>>>
> >>>>>>
> >>>>>> --
> >>>>>> Adrian Klaver
> >>>>>> adrian.kla...@aklaver.com
> >>>>>
> >>>>
> >>>>
> >>>> --
> >>>> Adrian Klaver
> >>>> adrian.kla...@aklaver.com
> >>>
> >>
> >>
> >> --
> >> Adrian Klaver
> >> adrian.kla...@aklaver.com
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
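Since there is no CREATE EVENT TRIGGER IF NOT EXISTS, the existence check raised above can be made against the pg_event_trigger catalog instead (a sketch; the trigger name watch_ddl and function notify_fn are assumed for illustration):

```sql
DO $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM pg_event_trigger
                   WHERE evtname = 'watch_ddl') THEN
        EXECUTE 'CREATE EVENT TRIGGER watch_ddl '
             || 'ON ddl_command_end EXECUTE PROCEDURE notify_fn()';
    END IF;
END
$$;
```

Being a single DO statement, this can also be submitted through ODBC with one SQLExecDirect call.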


