Are advisory locks guaranteed to be First Come First Serve? And can the behavior be relied upon?

2020-08-26 Thread Hemil Ruparel
I was playing with advisory locks. I opened three connections to the
database and obtained a lock with the same key on each. I noticed that the
locks were granted in the order the requests were made. I was wondering
whether the locks are guaranteed to be first come, first served, and
whether this behaviour can be relied upon.
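
Roughly, the experiment looked like this (a minimal libpq sketch; the
connection string and the lock key 42 are placeholders):

```
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    const char *conninfo = "dbname=postgres";   /* placeholder */
    PGconn *holder  = PQconnectdb(conninfo);
    PGconn *waiter1 = PQconnectdb(conninfo);
    PGconn *waiter2 = PQconnectdb(conninfo);

    if (PQstatus(holder) != CONNECTION_OK ||
        PQstatus(waiter1) != CONNECTION_OK ||
        PQstatus(waiter2) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed\n");
        return 1;
    }

    /* The first connection takes the lock and holds it. */
    PQclear(PQexec(holder, "SELECT pg_advisory_lock(42)"));

    /* The waiters queue behind it; sent asynchronously so the client
     * does not block here.  (A short pause between the two sends would
     * make the arrival order at the server unambiguous.) */
    PQsendQuery(waiter1, "SELECT pg_advisory_lock(42)");
    PQsendQuery(waiter2, "SELECT pg_advisory_lock(42)");

    /* Release the lock; whichever waiter is granted it completes first. */
    PQclear(PQexec(holder, "SELECT pg_advisory_unlock(42)"));

    /* Blocks until waiter1's request finishes.  In my runs this always
     * completed before waiter2's, i.e. grants followed request order. */
    PQclear(PQgetResult(waiter1));
    printf("waiter1 acquired the lock\n");

    PQfinish(holder);
    PQfinish(waiter1);
    PQfinish(waiter2);
    return 0;
}
```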

I read the docs, but they didn't mention anything about ordering.

Thanks a lot for your time and efforts


Re: Calling a SQL function inside a C function

2020-08-26 Thread Pavel Stehule
Hi

On Thu, Aug 27, 2020 at 0:43, Eric Zhu wrote:

> How do I call a function defined using CREATE FUNCTION in SQL inside a C
> function in an extension? I feel this should be possible, as the query
> parser is able to resolve function names and arguments in a raw query
> string. I want to know if there is a standard way to look up
> user-defined functions in the backend.
>
> For example, I have a function defined in SQL:
>
> ```
> CREATE FUNCTION times_two(x integer)
> RETURNS integer AS $$
> SELECT x*2
> $$ LANGUAGE SQL;
> ```
>
> Now I wish to call `times_two()` in a C extension similar to:
>
> ```
> // Look up the user-defined function times_two()
> // ...
>
> // Use the function.
> Datum ret = DirectFunctionCall(times_two, Int32GetDatum(13));
> ```
>

No, that is not possible directly. You can use the SPI interface, which is
the simplest way (https://www.postgresql.org/docs/current/spi.html):
instead of calling the function directly, you execute "SELECT
times_two(x)". PL/pgSQL works like that. SPI ensures the necessary checks,
and you don't need to manually handle NULL values.
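
For example, a minimal sketch of such an extension function (the wrapper
name call_times_two is made up here; error handling trimmed):

```
#include "postgres.h"
#include "fmgr.h"
#include "executor/spi.h"
#include "catalog/pg_type.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(call_times_two);

Datum
call_times_two(PG_FUNCTION_ARGS)
{
    Oid    argtypes[1] = {INT4OID};
    Datum  values[1]   = {Int32GetDatum(PG_GETARG_INT32(0))};
    int32  result = 0;
    bool   isnull = true;

    if (SPI_connect() != SPI_OK_CONNECT)
        elog(ERROR, "SPI_connect failed");

    /* The query goes through the regular parser and planner, so the
     * function name is resolved exactly as in a plain SELECT. */
    if (SPI_execute_with_args("SELECT times_two($1)", 1,
                              argtypes, values, NULL,
                              true /* read only */, 1) == SPI_OK_SELECT &&
        SPI_processed > 0)
    {
        Datum d = SPI_getbinval(SPI_tuptable->vals[0],
                                SPI_tuptable->tupdesc, 1, &isnull);

        if (!isnull)
            result = DatumGetInt32(d);
    }

    SPI_finish();

    if (isnull)
        PG_RETURN_NULL();
    PG_RETURN_INT32(result);
}
```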

For a direct function call of a SQL function, you can use OidFunctionCall,
something like:

Oid funcargs[1] = {INT4OID};
List *funcname;
Oid funcoid;

funcname = stringToQualifiedNameList("times_two");
funcoid = LookupFuncName(funcname, 1, funcargs, false);

Datum ret = OidFunctionCall1(funcoid, Int32GetDatum(13));

Attention: with direct function calls, the function must not take any
NULL argument and must not return NULL.
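
If NULLs are possible, a sketch of the longer form using the fmgr API
(PG 12 style, reusing the funcoid looked up above) that carries null
flags explicitly:

```
FmgrInfo flinfo;
LOCAL_FCINFO(fcinfo, 1);
Datum result;

fmgr_info(funcoid, &flinfo);
InitFunctionCallInfoData(*fcinfo, &flinfo, 1, InvalidOid, NULL, NULL);

/* Pass the argument together with its null flag. */
fcinfo->args[0].value  = Int32GetDatum(13);
fcinfo->args[0].isnull = false;

/* Note: this bypasses strictness handling; for a strict function,
 * skip the call yourself when any argument is null. */
result = FunctionCallInvoke(fcinfo);

if (fcinfo->isnull)
    elog(NOTICE, "times_two returned NULL");
```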

Regards

Pavel




> Best,
> Eric
>


Re: Database cluster binary compatibility across CPU platforms

2020-08-26 Thread Tom Lane
Aleš Zelený  writes:
> is it supported to restore binary backup (pg_basebackup) taken on an amd64
> platform on arm64 while using the same PostgreSQL version?

I wouldn't say it's "supported", but you might get away with it,
if the endianness and datatype alignment rules are the same.
The database does do some basic checks on that using known values
stored in the pg_control file, so it's likely you'd find out
immediately at startup if any of those things don't match.

What's significantly more risky, and which we do *not* have good
checks for, is that unless these are different builds of the same
version of the same operating system, the two platforms may not sort
strings the same way; resulting in all your text indexes being
corrupt.  You can recover from that by reindexing, but it's hardly
a painless transition.

regards, tom lane




Calling a SQL function inside a C function

2020-08-26 Thread Eric Zhu
How do I call a function defined using CREATE FUNCTION in SQL inside a C
function in an extension? I feel this should be possible, as the query
parser is able to resolve function names and arguments in a raw query
string. I want to know if there is a standard way to look up
user-defined functions in the backend.

For example, I have a function defined in SQL:

```
CREATE FUNCTION times_two(x integer)
RETURNS integer AS $$
SELECT x*2
$$ LANGUAGE SQL;
```

Now I wish to call `times_two()` in a C extension similar to:

```
// Look up the user-defined function times_two()
// ...

// Use the function.
Datum ret = DirectFunctionCall(times_two, Int32GetDatum(13));
```

Best,
Eric


Database cluster binary compatibility across CPU platforms

2020-08-26 Thread Aleš Zelený
Hello,

is it supported to restore a binary backup (pg_basebackup) taken on an
amd64 platform onto an arm64 platform while using the same PostgreSQL
version?

I'd like to know whether there is binary file compatibility, in order to
save the time needed for a dump & restore migration scenario, or to avoid
the complexity of using logical replication to migrate databases that
already utilize logical replication.

Kind regards Ales Zeleny


Re: Finding description pg_description

2020-08-26 Thread Tom Lane
Susan Hurst  writes:
> How do I find the source of an objoid from pg_catalog.pg_description?

The classoid column holds the OID of the system catalog that contains
the object's defining row.  The objoid column is the OID of the object,
ie the "oid" column of that row.  (If you are working with a pre-v12
PG release you might be confused by the fact that the oid column is
hidden by default.  But it's there and you can select it.)

Depending on what you're doing, you might prefer to use the
pg_describe_object() function to decipher those columns:

regression=# select classoid::regclass, pg_describe_object(classoid, objoid, objsubid), description from pg_description limit 5;
 classoid |        pg_describe_object         |                     description
----------+-----------------------------------+-------------------------------------------------------
 pg_proc  | function ts_debug(regconfig,text) | debug function for text search configuration
 pg_proc  | function ts_debug(text)           | debug function for current text search configuration
 pg_proc  | function boolin(cstring)          | I/O
 pg_proc  | function boolout(boolean)         | I/O
 pg_proc  | function byteain(cstring)         | I/O
(5 rows)

Alternately, locutions like "where classoid = 'pg_proc'::regclass"
might be helpful.

regards, tom lane




Re: export to parquet

2020-08-26 Thread George Woodring
I don't know how many hoops you want to jump through; we use AWS and
Athena to create them:

   - Export table as JSON
   - Put on AWS S3
   - Create JSON table in Athena
   - Use the JSON table to create a parquet table

The parquet files will be in S3 as well after the parquet table is
created.  If you are interested I can share the AWS CLI commands we use.

George Woodring
iGLASS Networks
www.iglass.net


On Wed, Aug 26, 2020 at 3:00 PM Scott Ribe 
wrote:

> I have no Hadoop, no HDFS. Just looking for the easiest way to export some
> PG tables into Parquet format for testing--need to determine what kind of
> space reduction we can get before deciding whether to look into it more.
>
> Any suggestions on particular tools? (PG 12, Linux)
>
>
> --
> Scott Ribe
> scott_r...@elevated-dev.com
> https://www.linkedin.com/in/scottribe/
>


Re: export to parquet

2020-08-26 Thread Scott Ribe
> On Aug 26, 2020, at 1:11 PM, Chris Travers  wrote:
> 
> For simple exporting, the simplest thing is a single-node instance of Spark.

Thanks.

> You can read parquet files in Postgres using 
> https://github.com/adjust/parquet_fdw if you so desire but it does not 
> support writing as parquet files are basically immutable.

Yep, that's the next step. Well, really it is what I am interested in testing, 
but first I need my data in parquet format (and confirmation that it gets 
decently compressed).



Re: export to parquet

2020-08-26 Thread Chris Travers
On Wed, Aug 26, 2020 at 9:00 PM Scott Ribe 
wrote:

> I have no Hadoop, no HDFS. Just looking for the easiest way to export some
> PG tables into Parquet format for testing--need to determine what kind of
> space reduction we can get before deciding whether to look into it more.
>
> Any suggestions on particular tools? (PG 12, Linux)
>
For simple exporting, the simplest thing is a single-node instance of
Spark.

You can read parquet files in Postgres using
https://github.com/adjust/parquet_fdw if you so desire but it does not
support writing as parquet files are basically immutable.


>
> --
> Scott Ribe
> scott_r...@elevated-dev.com
> https://www.linkedin.com/in/scottribe/
>

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


export to parquet

2020-08-26 Thread Scott Ribe
I have no Hadoop, no HDFS. Just looking for the easiest way to export some PG 
tables into Parquet format for testing--need to determine what kind of space 
reduction we can get before deciding whether to look into it more.

Any suggestions on particular tools? (PG 12, Linux)


--
Scott Ribe
scott_r...@elevated-dev.com
https://www.linkedin.com/in/scottribe/







Finding description pg_description

2020-08-26 Thread Susan Hurst
How do I find the source of an objoid from pg_catalog.pg_description?  I 
comment everything in my databases and can find most of the comments in 
pg_catalog.pg_description, which only gives me objoid and classoid for 
the source of a comment.  If I could find the oid sources, I could make 
this work.  I can find what I need for tables, columns, functions and a 
few other things, but I cannot complete the loops for foreign data 
wrappers, schemas, triggers and foreign keys.


For example, I created a foreign_data_wrapper comment and can find it 
with this query:

select * from pg_catalog.pg_description where description like '%FDW%';
102432;1259;0;"Alias: FDW - Foreign data wrapper that acquires data from 
a remote database as specified in the column: 
devops.stp2_foreign_data_wrappers.remote_db_connection."


...but I have no idea where the objoid is coming from, so I can't join it 
to anything programmatically.


Here is the DDL for schemas that I'm trying to finish:

-- drop view devops.${DBNAME}_schemas;
create view devops.${DBNAME}_schemas
  (schema_name
  ,object_type
  ,schema_description
  )
as
select s.schema_name
  ,'Schema'::text -- for validation log file
  ,pd.description
  from information_schema.schemata s
   left join pg_description  pd
  on (pd.objoid = ??.oid )   --- what do I join to?
 where s.schema_name not in 
('dba','information_schema','pg_catalog','public')

;
comment on view devops.${DBNAME}_schemas is 'Alias: SCH - Master view of 
all ${DBNAME} schemas. Each schema has a purpose and provides a safe 
habitat for its business data and functionality.';


In contrast, the following view works perfectly as planned since I know 
how to find the oid:

-- drop view devops.${DBNAME}_functions;
create view devops.${DBNAME}_functions (
   schema
  ,function_name
  ,function_arguments
  ,function_description
  ) as
select pn.nspname
  ,pp.proname
  ,pp.proargnames
  ,pd.description
  from pg_proc pp
   left join pg_description  pd
  on (pd.objoid = pp.oid )
  ,pg_namespace pn
 where pn.oid = pp.pronamespace
   and pn.nspname not in 
('dba','pg_catalog','information_schema','public')

 order by pn.nspname
 ,pp.proname
;
comment on view devops.${DBNAME}_functions is 'Alias: FN - Master view 
of all ${DBNAME} functions and their arguments from all ${DBNAME} 
schemas.';


Thanks for your help!

Sue
--

Susan E Hurst
Principal Consultant
Brookhurst Data LLC
Email: susan.hu...@brookhurstdata.com
Mobile: 314-486-3261




aarch64 support is available on yum.postgresql.org

2020-08-26 Thread Devrim Gündüz


The yum.postgresql.org repository now includes support for the
aarch64 (arm64) architecture.

HUAWEI Cloud Services donated two arm64 build hosts with enough
resources to build the arm64 packages at the same speed as the existing
x86_64, i686, and ppc64le architectures.

These two new repos include the same set of packages that the x86_64
architecture has.

We support RHEL/CentOS 7 and 8, with the latest point releases. Older
point releases may cause issues.

To use the repo, install the repository RPM first:

RHEL/CentOS 8:
dnf -y install 
https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-aarch64/pgdg-redhat-repo-latest.noarch.rpm

RHEL/CentOS 7:
yum -y install 
https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-aarch64/pgdg-redhat-repo-latest.noarch.rpm

and install packages as usual.

If you have any questions, please either email to 
pgsql-pkg-...@postgresql.org, or create a ticket at our redmine:

https://redmine.postgresql.org/projects/pgrpms/issues/new (requires
community username)

Regards,
-- 
Devrim Gündüz
Open Source Solution Architect, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR

