[GENERAL] Invalid client charset when using TDS_FDW

2017-11-12 Thread Igal @ Lucee.org

Hi,

I am trying to connect to an MSSQL database via the tds_fdw extension.  
I've installed the extension version 1.0.8 from BIGSQL on a Windows machine.


I get a "client charset invalid" error though, and I'm not sure where 
the charset is set or what to set it to:


config.c:886:Setting tds version to 7.3 (0x703).
config.c:788:TDS version set to 7.3 from $TDSVER.
config.c:765:Setting 'dump_file' to 'C:\temp\freetds.log' from $TDSDUMP.
dblib.c:1237:tdsdbopen: Calling tds_connect_and_login(003DF9D0, 
003BE2B0)
iconv.c:328:tds_iconv_open(003DF9D0, 
CP1252;LC_CTYPE=English_United 
States.1252;LC_MONETARY=C;LC_NUMERIC=C;LC_TIME=C)

iconv.c:187:local name for ISO-8859-1 is ISO-8859-1
iconv.c:187:local name for UTF-8 is UTF-8
iconv.c:187:local name for UCS-2LE is UCS-2LE
iconv.c:187:local name for UCS-2BE is UCS-2BE
iconv.c:346:setting up conversions for client charset 
"CP1252;LC_CTYPE=English_United 
States.1252;LC_MONETARY=C;LC_NUMERIC=C;LC_TIME=C"
iconv.c:348:preparing iconv for "CP1252;LC_CTYPE=English_United 
States.1252;LC_MONETARY=C;LC_NUMERIC=C;LC_TIME=C" <-> "UCS-2LE" conversion

iconv.c:423:tds_iconv_info_init: *client charset name "-1" invalid*

It looks like the error comes from 
https://github.com/wankdanker/freetds/blob/master/src/tds/iconv.c#L424


I believe that I can set the "client charset" in a .freetds.conf file, 
but I'm not sure what to set it to, since CP1252 seems perfectly valid.  
Or perhaps it should be set to "CP1252" only, without the locale 
information afterwards?
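
For reference, the kind of setting I have in mind would look something 
like this in freetds.conf (a minimal sketch; UTF-8 is only an assumed 
example value here, not something prescribed for this case):

    [global]
        tds version = 7.3
        client charset = UTF-8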


Any ideas?

Thanks,

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] Migrating money column from MS SQL Server to Postgres

2017-11-09 Thread Igal @ Lucee.org

On 11/9/2017 8:19 AM, Merlin Moncure wrote:

On Thu, Nov 9, 2017 at 8:22 AM, Adam Brusselback
 wrote:

Since you are migrating data into a staging table in PostgreSQL, you may set
the field data type as TEXT for each field where you have noticed or
anticipate issues.
Then after population perform the datatype transformation query on the given
fields to determine the actual field value that could not be gracefully
transformed.

This is the approach I have come to as the most successful for data migrations.

I will use tools like Kettle / Talend to get data into a staging table
with every column as text, then use SQL to migrate that to a properly
typed table.  Works much better than trying to work within the
constraints of these tools.

YES

I call the approach 'ELT' (Extract, Load, Transform).  You are much
better off writing transformations in SQL than inside of an ETL tool.
This is a perfect example of why.


All sound advice.  Thanks.
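
For example, a minimal sketch of that staging-then-transform step 
(hypothetical table and column names):

    -- load everything into an all-text staging table first
    CREATE TABLE staging.orders (discount text, total_charged text);

    -- then transform into the properly typed table with plain SQL
    INSERT INTO public.orders (discount, total_charged)
    SELECT discount::numeric, total_charged::numeric
    FROM   staging.orders;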


Igal Sapir

Lucee Core Developer
Lucee.org 



Re: [GENERAL] Postgresql and github

2017-11-09 Thread Igal @ Lucee.org

On 11/9/2017 3:27 AM, Poul Kristensen wrote:

How come that Postgresql is not present in a github with latest release?


What do you mean?  Is this not what you're referring to:
    https://github.com/postgres/postgres/releases
?

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] Migrating money column from MS SQL Server to Postgres

2017-11-08 Thread Igal @ Lucee.org

On 11/8/2017 6:25 PM, Igal @ Lucee.org wrote:

On 11/8/2017 5:27 PM, Allan Kamau wrote:
Maybe using NUMERIC without explicitly stating the precision is 
recommended. This would allow for values with many decimal places to 
be accepted without truncation. Your field may need to capture very 
small values such as those in bitcoin trading or some banking fee or 
interest.


That's a very good idea.  For some reason I thought that I tried that 
earlier and it didn't work as expected, but I just tested it (again?) 
and it seems to work well, so that's what I'll do.


Another weird thing that I noticed:

On another column, "total_charged", which was migrated properly as a 
`money` type, when I run `sum(total_charged::money)` I get `null`, but 
if I cast it to numeric, i.e. `sum(total_charged::numeric)`, I get the 
expected sum result.


Is there a logical explanation for that?


Igal




Re: [GENERAL] Migrating money column from MS SQL Server to Postgres

2017-11-08 Thread Igal @ Lucee.org

On 11/8/2017 5:27 PM, Allan Kamau wrote:
Maybe using NUMERIC without explicitly stating the precision is 
recommended. This would allow for values with many decimal places to 
be accepted without truncation. Your field may need to capture very 
small values such as those in bitcoin trading or some banking fee or 
interest.


That's a very good idea.  For some reason I thought that I tried that 
earlier and it didn't work as expected, but I just tested it (again?) 
and it seems to work well, so that's what I'll do.


Thank you,

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] Migrating money column from MS SQL Server to Postgres

2017-11-08 Thread Igal @ Lucee.org

Thank you all for your help:

On 11/8/2017 4:45 PM, Tom Lane wrote:

"Igal @ Lucee.org" <i...@lucee.org> writes:

The value in the offending insert is:  0.0

Why does Postgres decide that 0.0 is "double precision" (which is a
weird name in my opinion -- why can't it just be double) and not money?

Kettle must be telling it that --- on its own, PG would think '0.0'
is numeric, which it does have a cast to money for.


Looks like you are correct.  Kettle shows me the INSERT statement and 
when I execute it outside of Kettle (in a regular SQL client), the 
INSERT succeeds.
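
A minimal illustration of that difference at the SQL level, as far as I 
understand it:

    SELECT '0.0'::money;   -- ok: quoted literal, parsed by the money input
    SELECT 0.0::money;     -- ok: numeric has a cast to money
    -- there is no direct cast from double precision to money,
    -- so a float8 value has to go through numeric first:
    SELECT 0.0::float8::numeric::money;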


On 11/8/2017 4:45 PM, David G. Johnston wrote:

The lack of quotes surrounding the value is significant.  Money input 
requires a string literal.  Only (more or less) integer and double 
literal values can be written without the single quotes.


That didn't work.  I CAST'ed the value in the SELECT to VARCHAR(16) but 
all it did was change the error message to say that it expected `money` 
but received `character varying`.


On 11/8/2017 4:52 PM, Allan Kamau wrote:


 On Nov 9, 2017 03:46, "Tom Lane" <t...@sss.pgh.pa.us> wrote:

 Well, it's imprecise.  Most people don't like that when it comes to
 monetary amounts.


 Could try using NUMERIC datatype for such a field. 


That worked.  I have set the column type to NUMERIC(10, 2) and it seemed 
to have worked fine.  I am not dealing with large amounts here, so 10 
digits is plenty.
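
For the record, a minimal sketch of that change (hypothetical table name):

    ALTER TABLE staging.orders
        ALTER COLUMN discount TYPE NUMERIC(10, 2) USING discount::numeric;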


This is a "staging" phase where I first import the data into Postgres 
and then I will move it into the permanent tables in the next phase, so 
even taking it as VARHCAR would have been OK.  I just worried about 
using FLOAT/DOUBLE, and Tom confirmed that that was the wrong way to go.


Thanks again,

Igal Sapir
Lucee Core Developer
Lucee.org <http://lucee.org/>



[GENERAL] Migrating money column from MS SQL Server to Postgres

2017-11-08 Thread Igal @ Lucee.org

Hello,

I am migrating a database from MS SQL Server to Postgres.

I have a column named "discount" of type money in SQL Server.  I created 
the table in Postgres with the same name and type, since Postgres has a 
type named money, and am transferring the data by using PDI (Pentaho 
Data Integration) Kettle/Spoon.


Kettle throws an error though:  column "discount" is of type money but 
expression is of type double precision.


The value in the offending insert is:  0.0

Why does Postgres decide that 0.0 is "double precision" (which is a 
weird name in my opinion -- why can't it just be double) and not money?


I have control over the SELECT but not over the INSERT.  Is there any 
way to cast the value on the SELECT side in MS SQL Server so that it 
maps to the Postgres money column type?


The only solution I found is to set the column in Postgres to DOUBLE 
PRECISION instead of MONEY, but I'm not sure whether there are negative 
side effects to that.


Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] Building tds_fdw Extension for Windows 64bit

2017-11-02 Thread Igal @ Lucee.org

On 11/2/2017 8:40 AM, Tom Lane wrote:

It looks to me like MSVC is complaining about the PGDLLEXPORT
markings that tds_fdw.c has on the function definitions
(not the extern declarations).  In the core code we only put
PGDLLEXPORT in extern declarations ... so try keeping it in
the externs and removing it from the .c file.

You may still get warnings that way, which I think you probably
have to live with, unless you want to make a .DEF file or use
--export-all-symbols as a substitute for PGDLLEXPORT'ing specific
functions.  For some sorry history around this see

https://www.postgresql.org/message-id/flat/A737B7A37273E048B164557ADEF4A58B539300BD%40ntex2010a.host.magwien.gv.at

regards, tom lane


Tom, thank you for chiming in on this.  I usually write Java code so am 
not very familiar with C/C++ or the inner workings of DLLs.  I would 
love to be able to compile this extension though, as it will help me and 
others migrate databases from MS SQL to Postgres.


I removed the PGDLLEXPORT declarations from _tds_fdw.c_ and added a 
_deffile_ (I tried to rename it to _tds_fdw.def_ but VS2017 complained 
that it couldn't find _deffile_, so I renamed it back).  It looks like a 
concept similar to a MANIFEST.MF file in OSGi in Java, where the 
manifest declares which packages are exported and which are imported.


So now I am getting the errors below.  Do I need to add an entry in 
_deffile_ for each error message?  What does the entry look like?


1>-- Rebuild All started: Project: tds_fdw, Configuration: Release 
x64 --

1>deparse.c
1>options.c
1>tds_fdw.c
1>Generating Code...
1>   Creating library E:\Workspace\git\tds_fdw\x64\Release\tds_fdw.lib 
and object E:\Workspace\git\tds_fdw\x64\Release\tds_fdw.exp
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbbind 
referenced in function tdsGetRowCountShowPlanAll
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbclose 
referenced in function estimate_path_cost_size
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbcmd 
referenced in function tdsExecuteQuery
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbcolname 
referenced in function tdsGetColumnMetadata
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbcoltype 
referenced in function tdsGetColumnMetadata
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbconvert 
referenced in function tdsConvertToCString
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbiscount 
referenced in function tdsGetRowCountExecute
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbcount 
referenced in function tdsGetRowCountExecute
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbdata 
referenced in function tdsIterateForeignScan
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbdatecrack 
referenced in function tdsDatetimeToDatum
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbdatlen 
referenced in function tdsIterateForeignScan
1>tds_fdw.obj : error LNK2019: unresolved external symbol dberrhandle 
referenced in function estimate_path_cost_size
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbexit 
referenced in function estimate_path_cost_size
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbinit 
referenced in function estimate_path_cost_size
1>tds_fdw.obj : error LNK2019: unresolved external symbol dblogin 
referenced in function estimate_path_cost_size
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbloginfree 
referenced in function estimate_path_cost_size
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbmsghandle 
referenced in function estimate_path_cost_size
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbnextrow 
referenced in function tdsGetRowCountExecute
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbnumcols 
referenced in function tdsGetRowCountShowPlanAll
1>tds_fdw.obj : error LNK2019: unresolved external symbol tdsdbopen 
referenced in function tdsSetupConnection
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbresults 
referenced in function tdsExecuteQuery
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbsqlexec 
referenced in function tdsExecuteQuery
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbuse 
referenced in function tdsSetupConnection
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbwillconvert 
referenced in function tdsConvertToCString
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbsetlname 
referenced in function tdsSetupConnection
1>tds_fdw.obj : error LNK2019: unresolved external symbol dbsetlversion 
referenced in function tdsSetupConnection
1>E:\Workspace\git\tds_fdw\x64\Release\tds_fdw.dll : fatal error 
LNK1120: 26 unresolved externals

1>Done building project "tds_fdw.vcxproj" -- FAILED.
== Rebuild All: 0 succeeded, 1 failed, 0 skipped ==

Thanks again,

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] Building tds_fdw Extension for Windows 64bit

2017-11-02 Thread Igal @ Lucee.org

On 11/2/2017 1:05 AM, Laurenz Albe wrote:

Igal @ Lucee.org wrote:

After reading Craig's excellent blog post at 
https://blog.2ndquadrant.com/compiling-postgresql-extensions-visual-studio-windows/
 I decided to try and build a real extension - tds_fdw.
I've set it up in Visual Studio Community 2017, but am getting the following 
errors:
Error    C2375    'tds_fdw_handler': redefinition; different linkage    tds_fdw 
   E:\Workspace\git\tds_fdw\src\tds_fdw.c    118
Error    C2375    'tds_fdw_validator': redefinition; different linkage    
tds_fdw    E:\Workspace\git\tds_fdw\src\tds_fdw.c    156
Full output:
1>-- Rebuild All started: Project: tds_fdw, Configuration: Release x64 
--
1>deparse.c
1>options.c
1>tds_fdw.c
1>src\tds_fdw.c(118): error C2375: 'tds_fdw_handler': redefinition; different 
linkage
1>E:\Workspace\git\tds_fdw\include\tds_fdw.h(130): note: see declaration of 
'tds_fdw_handler'
1>src\tds_fdw.c(156): error C2375: 'tds_fdw_validator': redefinition; different 
linkage
1>E:\Workspace\git\tds_fdw\include\tds_fdw.h(131): note: see declaration of 
'tds_fdw_validator'
1>Generating Code...
1>Done building project "tds_fdw.vcxproj" -- FAILED.
== Rebuild All: 0 succeeded, 1 failed, 0 skipped ==
I'm not sure if it's related, but the FreeTDS version that I found for Windows 
is marked as 32bit and I am trying to build for 64bit.
I believe that extensions like tds_fdw and pgloader will help users on TDS 
databases to migrate to Postgres and will improve adoption of the project.  I 
will be more than happy to publish my binaries of the extension if I ever get 
that far.
Can anyone help?

Based on my experience with oracle_fdw, I would change the function declaration
in tds_fdw.h to

extern PGDLLEXPORT Datum tds_fdw_handler(PG_FUNCTION_ARGS);


Thanks, Laurenz, but I am still getting the same error after prefixing 
the PGDLLEXPORT statements with `extern` in the `.c` files.  The `.h` 
files already had `extern` for those declarations.


Any other ideas?

Thank you,


Igal





[GENERAL] Building tds_fdw Extension for Windows 64bit

2017-11-02 Thread Igal @ Lucee.org

Hello,

After reading Craig's excellent blog post at 
https://blog.2ndquadrant.com/compiling-postgresql-extensions-visual-studio-windows/ 
I decided to try and build a real extension - tds_fdw.


I've set it up in Visual Studio Community 2017, but am getting the 
following errors:


Error    C2375    'tds_fdw_handler': redefinition; different linkage    
tds_fdw    E:\Workspace\git\tds_fdw\src\tds_fdw.c 118
Error    C2375    'tds_fdw_validator': redefinition; different 
linkage    tds_fdw    E:\Workspace\git\tds_fdw\src\tds_fdw.c 156


Full output:

1>-- Rebuild All started: Project: tds_fdw, Configuration: Release 
x64 --

1>deparse.c
1>options.c
1>tds_fdw.c
1>src\tds_fdw.c(118): error C2375: 'tds_fdw_handler': redefinition; 
different linkage
1>E:\Workspace\git\tds_fdw\include\tds_fdw.h(130): note: see declaration 
of 'tds_fdw_handler'
1>src\tds_fdw.c(156): error C2375: 'tds_fdw_validator': redefinition; 
different linkage
1>E:\Workspace\git\tds_fdw\include\tds_fdw.h(131): note: see declaration 
of 'tds_fdw_validator'

1>Generating Code...
1>Done building project "tds_fdw.vcxproj" -- FAILED.
== Rebuild All: 0 succeeded, 1 failed, 0 skipped ==

I'm not sure if it's related, but the FreeTDS version that I found for 
Windows is marked as 32bit and I am trying to build for 64bit.


I believe that extensions like tds_fdw and pgloader will help users of 
TDS databases migrate to Postgres and will improve adoption of the 
project.  I will be more than happy to publish my binaries of the 
extension if I ever get that far.


Can anyone help?

Thanks,

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] tgrm index for word_similarity

2017-10-21 Thread Igal @ Lucee.org

On 10/21/2017 5:01 AM, Arthur Zakirov wrote:

PostgreSQL doesn't use index scan with functions within WHERE clause. So
you always need to use operators instead. You can try <% operator and
pg_trgm.word_similarity_threshold variable:

=# SET pg_trgm.word_similarity_threshold TO 0.1;
=# SELECT name, popularity
FROM temp.items3_v
,(values ('some phrase'::text)) consts(input)
WHERE input <% name
ORDER BY 2, input <<-> name;


Thank you; your solution does show that the index is used when I run 
`explain analyze`, and it makes the query finish in about 20ms, so it's 
about 1.5-2 times faster than without the index.  But that raises a few 
questions for me:


1) I thought that the whole idea behind indexes on expressions is that 
the index would be used in a WHERE clause?  See 
https://www.postgresql.org/docs/10/static/indexes-expressional.html - Am 
I missing something?


2) A query with `WHERE input <% name` utilizes the index, but a query 
without a WHERE clause at all does not?


3) What happens if I do not create an index at all?  Does the query that 
runs in 30-40ms, the one that does not utilize an index, create all of 
the trigrams on the fly each time it runs?  Would it be possible for me 
to create a TABLE or a VIEW with the trigrams so that there is no need 
to create them each time the query runs?


Thanks,

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] tgrm index for word_similarity

2017-10-20 Thread Igal @ Lucee.org

On 10/19/2017 4:54 PM, Igal @ Lucee.org wrote:
I want to use Postgres for a fuzzy auto-suggest search field.  As the 
user will be typing their search phrase, Postgres will show a list of 
items that fuzzy-matches what they typed so far, ordered by popularity 
(ntile(20)) and distance, i.e. 1 - word_similarity().


I created a Materialized View with two columns: name text, popularity int.

My query at the moment is:

    SELECT name, popularity
    FROM   temp.items3_v
          ,(values ('some phrase'::text)) consts(input)
    WHERE  true
        -- be lenient: some names are 75 characters long and we want to
        -- match even on a few characters of input
        and word_similarity(input, name) > 0.01

    ORDER BY 2, input <<-> name

I tried to add a GIN trgm index on `name`:

    CREATE INDEX temp_items3_v_tgrm_item_name ON temp.items3_v USING 
GIN(name gin_trgm_ops);


But it is not used

What index would be good for that kind of query?


I see that when I use LIKE or ILIKE the index is used, but I lose all of 
the "fuzzy" benefits by doing that.


Is there any type of INDEX, or even a way of building my own COLUMN of 
trigrams, that can help speed up my word_similarity() results?  When 
used for auto-suggest there are usually several queries per user in a 
relatively short period of time, so speed is important.


Thanks,


Igal





[GENERAL] tgrm index for word_similarity

2017-10-19 Thread Igal @ Lucee.org

Hello,

I want to use Postgres for a fuzzy auto-suggest search field.  As the 
user will be typing their search phrase, Postgres will show a list of 
items that fuzzy-matches what they typed so far, ordered by popularity 
(ntile(20)) and distance, i.e. 1 - word_similarity().


I created a Materialized View with two columns: name text, popularity int.

My query at the moment is:

    SELECT name, popularity
    FROM   temp.items3_v
          ,(values ('some phrase'::text)) consts(input)
    WHERE  true
        -- be lenient: some names are 75 characters long and we want to
        -- match even on a few characters of input
        and word_similarity(input, name) > 0.01

    ORDER BY 2, input <<-> name

Which seems to yield pretty good results, but takes over 40 ms on a 
table that's not that large.


So I tried to add a GIN trgm index on `name`:

    CREATE INDEX temp_items3_v_tgrm_item_name ON temp.items3_v USING 
GIN(name gin_trgm_ops);


But it is not used:

QUERY PLAN
----------
Sort  (cost=264.42..269.91 rows=2198 width=43) (actual time=41.060..41.117 rows=1044 loops=1)
  Sort Key: items3_v.popularity, (('kandels'::text <<-> items3_v.name))
  Sort Method: quicksort  Memory: 149kB
  ->  Seq Scan on items3_v  (cost=0.00..142.41 rows=2198 width=43) (actual time=0.217..40.471 rows=1044 loops=1)
        Filter: (word_similarity('kandels'::text, name) > '0.01'::double precision)
        Rows Removed by Filter: 5550
Planning time: 0.149 ms
Execution time: 41.308 ms

What index would be good for that kind of query?

Thanks,

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] Is it OK to create a directory in PGDATA dir

2017-10-19 Thread Igal @ Lucee.org

On 10/19/2017 1:25 PM, Tomas Vondra wrote:

On 10/19/2017 09:58 PM, rakeshkumar464 wrote:

In the container world, sometimes the only persistent storage path
(that is, storage outside the container world) is PGDATA.

I don't want to be the "You're doing it wrong!" guy, but you're doing it
wrong. If a container only gives you a single persistent storage path,
then don't use it as PGDATA directly, but create PGDATA as one of the
directories in that path. And create a directory for other stuff next to
it, not in the data directory.


+1

That will also pave the way for an upgrade in the future, where you 
might need another directory for data, e.g. PGDATA-10, PGDATA-11, etc.



Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] Using Variables in Queries

2017-10-19 Thread Igal @ Lucee.org

On 10/19/2017 12:14 PM, Tom Lane wrote:

"Igal @ Lucee.org" <i...@lucee.org> writes:

My real query is for similarity here, so I'm testing different functions
with the same value, e.g.

SELECT item_name
  , similarity('red widget', item_name)
  , similarity(item_name, 'red widget')
  , word_similarity('red widget', item_name)
  , word_similarity(item_name, 'red widget')
  , item_name <->> 'red widget'
  , item_name <<-> 'red widget'
  , 'red widget' <<-> item_name
FROM  products
WHERE similarity('red widget', item_name) > 0.25
ORDER BY 'red widget' <<-> item_name

So each time I want to change the phrase it's a slower process than what
I'm used to (think multiple-cursor in modern text editors, or a
server-side variable)

Well, this is simply not exploiting SQL very well.  You could use a
VALUES subquery to provide the string you're using elsewhere in the query.

SELECT item_name
  , similarity(target, item_name)
  , similarity(item_name, target)
  , word_similarity(target, item_name)
  , word_similarity(item_name, target)
  , item_name <->> target
  , item_name <<-> target
  , target <<-> item_name
FROM  products,
   (values ('red widget'::text)) consts(target)
WHERE similarity(target, item_name) > 0.25
ORDER BY target <<-> item_name

PG 9.5 and up will flatten out cases like this to be exactly what you
wrote out longhand.

regards, tom lane


This is beautiful, thank you!

Igal Sapir
Lucee Core Developer
Lucee.org <http://lucee.org/>



Re: [GENERAL] Using Variables in Queries

2017-10-19 Thread Igal @ Lucee.org

On 10/19/2017 8:44 AM, David G. Johnston wrote:

​PREPARE sqlquery AS​ SELECT * FROM products WHERE col1 LIKE $1 OR 
col2 LIKE $1;

 EXECUTE sqlquery('red widget');
This works, but from what I've seen it requires `DEALLOCATE sqlquery` 
when you want to update it, which is not very friendly.
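
For example, the full cycle ends up looking like this (a minimal sketch):

    PREPARE sqlquery AS
        SELECT * FROM products WHERE col1 LIKE $1 OR col2 LIKE $1;
    EXECUTE sqlquery('red widget');
    DEALLOCATE sqlquery;  -- required before PREPARE-ing the same name again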


 Alban's DO blocks are problematic since they are incapable of 
generating a result set.

Then that's a no-go for me because I definitely want a result set



 As Scott said people needing this functionality in PostgreSQL are 
content with using psql.
psql is very nice, but in the age of GUIs it lacks a lot of stuff.  I am 
actually using DBeaver, which I find to be an excellent tool that works 
with any DBMS and can generate export/import scripts for migrating from 
one DBMS to another, which is what I'm after ATM.


 Adding lots of new custom syntax to pure server-side parsed SQL is a 
non-trivial undertaking whose need is reduced by the alternatives so 
described (functions, DO block, PREPARE, psql).
I still think that using a server-side variable is a much easier and 
more intuitive way of doing this.  All of the alternatives have major flaws.


On 10/19/2017 8:40 AM, Pavel Stehule wrote:
There was lot of discussion about server side variables 
https://wiki.postgresql.org/wiki/Variable_Design, but nobody write 
final patch. There is not clean result if we want dynamic variables, 
static variables or both.
I'm not sure what the difference is between static and dynamic 
variables in this context.


p.s. Your query should be terribly slow. When I see it, I am less 
sure, so server side variables are good idea :)
My real query is for similarity here, so I'm testing different functions 
with the same value, e.g.


SELECT item_name
    , similarity('red widget', item_name)
    , similarity(item_name, 'red widget')
    , word_similarity('red widget', item_name)
    , word_similarity(item_name, 'red widget')
    , item_name <->> 'red widget'
    , item_name <<-> 'red widget'
    , 'red widget' <<-> item_name
FROM  products
WHERE similarity('red widget', item_name) > 0.25
ORDER BY 'red widget' <<-> item_name

So each time I want to change the phrase it's a slower process than what 
I'm used to (think multiple-cursor in modern text editors, or a 
server-side variable)



Igal Sapir
Lucee Core Developer
Lucee.org 





[GENERAL] Using Variables in Queries

2017-10-19 Thread Igal @ Lucee.org

Hello,

In other database servers, which I'm finally dropping in favor of 
Postgres, I can do the following (mind you that this is for illustration 
only, I do not actually write queries like that):


DECLARE @query varchar(64) = 'red widget';

SELECT *
FROM products
WHERE col1 LIKE @query
   OR col2 LIKE @query
   OR col3 LIKE @query
   OR col4 LIKE @query
   OR col5 LIKE @query

The point is, though, that I can change the @query variable in one place 
which is very convenient.


Is it still true (the posts I see on this subject are quite old) that I 
cannot do so in Postgres outside of a stored procedure/function?  And 
if so, what's the reason for not adding this feature?  It seems very 
useful to me.


Thanks,


Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] Finally upgrading to 9.6!

2017-10-18 Thread Igal @ Lucee.org

On 10/18/2017 7:45 AM, Ron Johnson wrote:

On 10/18/2017 09:34 AM, Igal @ Lucee.org wrote:
A bit off-topic here, but why upgrade to 9.6 when you can upgrade to 
10.0?


There's no way we're going to put an x.0.0 version into production.


Then think of it as 9.7.0 but with an easier name to pronounce ;)


Igal Sapir
Lucee Core Developer
Lucee.org <http://lucee.org/>



Re: [GENERAL] Finally upgrading to 9.6!

2017-10-18 Thread Igal @ Lucee.org

On 10/18/2017 6:24 AM, Ron Johnson wrote:

On 10/17/2017 11:17 AM, Tom Lane wrote:

Ron Johnson  writes:
Where can I look to see (roughly) how much more RAM/CPU/disk needed 
when moving from 8.4 and 9.2?

It's entirely possible you'll need *less*, as you'll be absorbing the
benefit of several years' worth of performance improvements. But this
is such a workload-dependent thing that there's no general answer.


XML stored in blobs (not sure whether text or bytea) and b-tree indexes.



A bit off-topic here, but why upgrade to 9.6 when you can upgrade to 10.0?

Obviously you're not one to upgrade often so shouldn't you take 
advantage of all of the new features and improvements when "finally" (to 
use your own word) upgrading?


Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] Delete Duplicates with Using

2017-10-16 Thread Igal @ Lucee.org

FYI,

On 10/16/2017 8:58 AM, Igal @ Lucee.org wrote:




test=*# with keep as (select max(ctid) as ctid from dubletten group 
by c1,c2,c3) delete from dubletten where ctid not in (select ctid 
from keep);




I like this solution, but would using a subquery be much slower than 
the implicit join of `using`?  My tables are not big in 
Postgres-standards, so it's probably not an issue, but I'm trying to 
learn as much as I can about Postgres now that I'm getting ready to 
move it to production.


I was able to compare the performance on a table with about 350k rows, 
with duplicates over 7 columns and no indices.


The GROUP BY solution with the subquery (though I was using a simpler 
version of it without a CTE) was taking over 30 seconds, so I killed the 
request.


I then ran the USING version which completed in 16 seconds and deleted 
39 rows.


Best,

Igal Sapir
Lucee Core Developer
Lucee.org <http://lucee.org/>



Re: [GENERAL] Delete Duplicates with Using

2017-10-16 Thread Igal @ Lucee.org

Andreas,

On 10/15/2017 11:53 PM, Andreas Kretschmer wrote:
other solution, using the CTID-column: (rows with (1,1,1) and (5,5,5) 
are identical)


test=*# select * from dubletten ;
 c1 | c2 | c3
----+----+----
  1 |  1 |  1
  1 |  1 |  1
  1 |  2 |  3
  2 |  3 |  4
  3 |  4 |  5
  4 |  5 |  5
  5 |  5 |  5
  5 |  5 |  5
(8 Zeilen)

test=*# with keep as (select max(ctid) as ctid from dubletten group by 
c1,c2,c3) delete from dubletten where ctid not in (select ctid from 
keep);;

DELETE 2
test=*# select * from dubletten ;
 c1 | c2 | c3
----+----+----
  1 |  1 |  1
  1 |  2 |  3
  2 |  3 |  4
  3 |  4 |  5
  4 |  5 |  5
  5 |  5 |  5
(6 Zeilen)

test=*#

Regards, Andreas



I like this solution, but would using a subquery be much slower than the 
implicit join of `using`?  My tables are not big in Postgres-standards, 
so it's probably not an issue, but I'm trying to learn as much as I can 
about Postgres now that I'm getting ready to move it to production.


Thanks,

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] Adding identity column to a non-empty table

2017-10-15 Thread Igal @ Lucee.org

On 10/15/2017 6:42 PM, Melvin Davidson wrote:
On Sun, Oct 15, 2017 at 9:09 PM, Igal @ Lucee.org <i...@lucee.org> wrote:


Melvin,

On 10/15/2017 5:56 PM, Melvin Davidson wrote:


On 10/15/2017 4:01 PM, Igal @ Lucee.org wrote:


Hello,

I'm trying to add an identity column to a table that has
records (previously had a bigserial column which I removed):


There is probably a better solution, but the one I came up
with is to add the column as BIGSERIAL and DROP the SEQUENCE
CASCADE, SELECT the max(rid) + 1, and then convert the column
to IDENTITY:


The correct way to make r_id the primary key would be:

ALTER TABLE  event_log
  ADD COLUMN r_id SERIAL;

ALTER TABLE  event_log
  ALTER COLUMN r_id TYPE BIGINT,
  ADD CONSTRAINT dummy_pk PRIMARY KEY (r_id);

That automatically generates the column as

r_id bigint NOT NULL DEFAULT nextval('dummy_r_id_seq'::regclass),
  CONSTRAINT dummy_pk PRIMARY KEY (r_id)

and creates the appropriate sequence for you.



Does that use the new IDENTITY construct that was added in
Postgres 10?  I do not really care for the PRIMARY KEY
constraint.  I just want the sequence with the benefits of the new
IDENTITY "type".


> Does that use the new IDENTITY construct that was added in Postgres 10?

I cannot say, as I do not yet have PostgreSQL 10 installed because it 
was very recently released.

However, the method I supplied works for all prior versions of PostgreSQL.


Understood.  But I already had an auto-increment column by way of 
BIGSERIAL.


I want specifically to use the new IDENTITY feature of Postgres 10.

Best,

Igal Sapir
Lucee Core Developer
Lucee.org <http://lucee.org/>



Re: [GENERAL] Adding identity column to a non-empty table

2017-10-15 Thread Igal @ Lucee.org

Melvin,

On 10/15/2017 5:56 PM, Melvin Davidson wrote:


On Sun, Oct 15, 2017 at 8:24 PM, Igal @ Lucee.org <i...@lucee.org> wrote:


On 10/15/2017 4:01 PM, Igal @ Lucee.org wrote:


Hello,

I'm trying to add an identity column to a table that has
records (previously had a bigserial column which I removed):


There is probably a better solution, but the one I came up with is
to add the column as BIGSERIAL and DROP the SEQUENCE CASCADE,
SELECT the max(rid) + 1, and then convert the column to IDENTITY:


The correct way to make r_id the primary key would be:

ALTER TABLE  event_log
  ADD COLUMN r_id SERIAL;

ALTER TABLE  event_log
  ALTER COLUMN r_id TYPE BIGINT,
  ADD CONSTRAINT dummy_pk PRIMARY KEY (r_id);

That automatically generates the column as

r_id bigint NOT NULL DEFAULT nextval('dummy_r_id_seq'::regclass),
  CONSTRAINT dummy_pk PRIMARY KEY (r_id)

and creates the appropriate sequence for you.



Does that use the new IDENTITY construct that was added in Postgres 10?  
I do not really care for the PRIMARY KEY constraint.  I just want the 
sequence with the benefits of the new IDENTITY "type".


Thanks,


Igal


Re: [GENERAL] Adding identity column to a non-empty table

2017-10-15 Thread Igal @ Lucee.org

On 10/15/2017 4:01 PM, Igal @ Lucee.org wrote:


Hello,

I'm trying to add an identity column to a table that has records 
(previously had a bigserial column which I removed):


  ALTER TABLE event_log
ADD COLUMN r_id BIGINT GENERATED BY DEFAULT AS IDENTITY;

But I'm getting an error `column r_id contains null values`.

How can I add the column and populate it for the existing rows?



There is probably a better solution, but the one I came up with is to 
add the column as BIGSERIAL, DROP the SEQUENCE with CASCADE, SELECT the 
max(r_id) + 1, and then convert the column to IDENTITY:


  ALTER TABLE transient.event_log ADD COLUMN r_id BIGSERIAL;

  -- find the sequence name and then
  DROP SEQUENCE <sequence_name> CASCADE;

  -- find the min value by executing SELECT max(r_id) + 1
  ALTER TABLE transient.event_log
      ALTER COLUMN r_id
          ADD GENERATED BY DEFAULT AS IDENTITY (MINVALUE <min_value>);

If anyone has a better suggestion then please let me know.

Thanks,


Igal





[GENERAL] Adding identity column to a non-empty table

2017-10-15 Thread Igal @ Lucee.org

Hello,

I'm trying to add an identity column to a table that has records 
(previously had a bigserial column which I removed):


  ALTER TABLE event_log
ADD COLUMN r_id BIGINT GENERATED BY DEFAULT AS IDENTITY;

But I'm getting an error `column r_id contains null values`.

How can I add the column and populate it for the existing rows?

Thanks,

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] Delete Duplicates with Using

2017-10-15 Thread Igal @ Lucee.org

On 10/14/2017 12:32 AM, legrand legrand wrote:

DELETE FROM table_with_duplicates AS T1 USING table_with_duplicates AS T2
WHERE
 T1.column_1 = T2.column_1
 AND T1.column_2 = T2.column_2
 AND T1.column_3 = T2.column_3
 AND T1.row_num < T2.row_num


Thank you, I actually thought about that at first but it seemed "too 
easy" and I was looking for some solution with JOIN.


Anyway, this seems to work great.

Thanks,


Igal




[GENERAL] Delete Duplicates with Using

2017-10-14 Thread Igal @ Lucee.org

Hello,

I run the SQL query below to delete duplicates from a table.  The 
subquery is used to identify the duplicated rows (row_num is a BIGSERIAL 
column).


/** delete older copies of duplicates */
DELETE FROM table_with_duplicates AS T
WHERE row_num IN (
    SELECT  T1.row_num
    FROM    table_with_duplicates  AS T1
        JOIN table_with_duplicates AS T2
            ON  T1.column_1 = T2.column_1
            AND T1.column_2 = T2.column_2
            AND T1.column_3 = T2.column_3
            AND T1.row_num  < T2.row_num
);

Can anyone tell me how to rewrite that query to use the USING clause and 
hopefully remove the subquery?


The documentation mentions USING but there is no example and the only 
examples I found online are very trivial.


Thanks,

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] Multiple Schemas vs. Multiple Databases

2017-10-13 Thread Igal @ Lucee.org

On 10/13/2017 12:47 PM, John R Pierce wrote:

On 10/13/2017 12:29 PM, Igal @ Lucee.org wrote:


The main things that I need to do is:

  a) Be able to backup/restore each "part" separately.  Looks like 
pg_dump allows that for schemas via the --schema=schema argument.


  b) Be able to query aggregates from the secondary "part" and store 
the results in the primary one, which also seems easier with multiple 
schemas than multiple databases.


Am I right to think that two schemas are better in this use case or 
am I missing something important?




generally, yeah, unless you eventually decide to split off the two 
databases onto separate servers for performance reasons.   Of course, 
to access the 'other' database, you'd need to use postgres_fdw or dblink.


Thank you both for confirming,


Igal Sapir
Lucee Core Developer
Lucee.org <http://lucee.org/>





[GENERAL] Multiple Schemas vs. Multiple Databases

2017-10-13 Thread Igal @ Lucee.org

Hello,

I have read quite a few articles about multiple schemas vs. multiple 
databases, but they are all very generic so I wanted to ask here for a 
specific use case:


I am migrating a Web Application from MS SQL Server to PostgreSQL.  For 
the sake of easier maintenance, on SQL Server I have two separate databases:


  1) Primary database containing the data for the application

  2) Secondary database containing "transient" data, e.g. logging of 
different activities on the website in order to generate statistics etc.


Both databases belong to the same application with the same roles and 
permissions.


The secondary database grows much faster, but the data in it is not 
mission-critical, and so the data is aggregated daily and the summaries 
are posted to the primary database, because only the aggregates are 
important here.


To keep the database sizes from growing too large, I periodically delete 
old data from the secondary database since the data becomes obsolete 
after a certain period of time.


At first I thought of doing the same in Postgres, but now it seems like 
the better way to go would be to keep one database with two schemas: 
primary and transient.


The main things that I need to do is:

  a) Be able to backup/restore each "part" separately.  Looks like 
pg_dump allows that for schemas via the --schema=schema argument.


  b) Be able to query aggregates from the secondary "part" and store 
the results in the primary one, which also seems easier with multiple 
schemas than multiple databases.
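
For example, with two schemas the aggregation in (b) could be as simple 
as this (a minimal sketch with hypothetical table names):

    INSERT INTO app.daily_stats (day, hits)
    SELECT date_trunc('day', logged_at)::date, count(*)
    FROM   transient.page_log
    GROUP  BY 1;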


Am I right to think that two schemas are better in this use case or am I 
missing something important?


Thanks,

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] Permissions for Web App

2017-10-13 Thread Igal @ Lucee.org

On 10/13/2017 11:21 AM, David G. Johnston wrote:
On Fri, Oct 13, 2017 at 11:03 AM, Igal @ Lucee.org <i...@lucee.org> wrote:


You mean that if I execute the ALTER DEFAULT command above as user
`postgres` then only tables created by user `postgres` will give
default privileges to role `webapp`?


​Yes. "​You can change default privileges only for objects that will 
be created by yourself or by roles that you are a member of."


You use the "FOR " modifier if you want the "or by roles 
that you are a member of" portion to be used.




Thank you for clarifying,


Igal Sapir
Lucee Core Developer
Lucee.org <http://lucee.org/>



Re: [GENERAL] Permissions for Web App

2017-10-13 Thread Igal @ Lucee.org

Stephen,

On 10/10/2017 6:14 AM, Stephen Frost wrote:



For future reference and for the benefit of others, the command that
I ran is:

   ALTER DEFAULT PRIVILEGES IN SCHEMA public
     GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO webapp;

Note that DEFAULT PRIVILEGES apply to a specific role (in the above
case, whatever CURRENT_USER is) and therefore will only be helpful if
you only have one user creating tables, in this case.


You mean that if I execute the ALTER DEFAULT command above as user 
`postgres` then only tables created by user `postgres` will give default 
privileges to role `webapp`?


I usually only create tables with user `postgres` but this is a bit 
confusing, or at least unclear from the syntax of the command.
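
If I understand it correctly, spelling out the role explicitly would 
look like this (a minimal sketch):

    -- default privileges recorded for tables that the role "postgres"
    -- will create in schema public
    ALTER DEFAULT PRIVILEGES FOR ROLE postgres IN SCHEMA public
        GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO webapp;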


Thanks,

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] Permissions for Web App

2017-10-09 Thread Igal @ Lucee.org

On 10/9/2017 10:51 AM, David G. Johnston wrote:
On Mon, Oct 9, 2017 at 9:44 AM, Igal @ Lucee.org <i...@lucee.org> wrote:


But I want to give that role permissions on future tables since I
add new tables and drop/recreate current ones.


​ALTER DEFAULT PRIVILEGES​

​https://www.postgresql.org/docs/9.6/static/sql-alterdefaultprivileges.html


It worked, thanks!

For future reference and for the benefit of others, the command that I 
ran is:


  ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO webapp;


Igal Sapir
Lucee Core Developer
Lucee.org <http://lucee.org/>



[GENERAL] Permissions for Web App

2017-10-09 Thread Igal @ Lucee.org

Hello,

I created a role named `webapp` as follows:

  CREATE ROLE webapp WITH LOGIN PASSWORD 'changeme';

While in development, I want to give that role permissions on all tables 
in schema public.  So far I've been using the following command, which 
works on existing tables:


  GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO webapp;

But I want to give that role permissions on future tables since I add 
new tables and drop/recreate current ones.


How can I do that?

Thanks,

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] Functions and Parentheses

2017-10-06 Thread Igal @ Lucee.org

On 10/6/2017 3:10 PM, Joshua D. Drake wrote:

On 10/06/2017 02:33 PM, Tom Lane wrote:

"Igal @ Lucee.org" <i...@lucee.org> writes:
How come `current_date` has no parenthesis but `clock_timestamp()` 
does?


Because the SQL standard says that CURRENT_DATE doesn't have 
parentheses.

It is a function by any other measure, though.

(AFAICT, the SQL committee is unacquainted with any principles of
programming language syntax design that emerged later than the COBOL
era.  Their capacity to invent new and non-orthogonal syntax for every
new feature seems boundless.)


This is the best statement I have read all week.


+1

I also want to say that I'm using many different open source projects, 
but Postgres has by far the best community support of all, not to 
mention a product that is far superior to the many commercial 
alternatives in the market.


It's kind of like -- I can use Postgres for free, or I can spend a lot 
of money and get an inferior product with inferior support.


If Postgres had been written in Java then I'd be much more involved, but 
my C/C++ skills are rather limited.


Anyway, great job guys!  And Thank you!

Igal Sapir
Lucee Core Developer
Lucee.org <http://lucee.org/>




[GENERAL] Functions and Parentheses

2017-10-06 Thread Igal @ Lucee.org

Hi,

Is current_date a function?  It's a bit puzzling to me since there are 
no parentheses after it, i.e.


  SELECT current_date;

And not

  SELECT current_date();  -- syntax error

How come `current_date` has no parenthesis but `clock_timestamp()` does?

Thanks,

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] Download 9.6.3 Binaries

2017-06-23 Thread Igal @ Lucee.org

Hi,

On 6/23/2017 11:47 AM, Joshua D. Drake wrote:

Looks like EnterpriseDB is behind. You can try BigSQL:
https://www.bigsql.org/postgresql/installers/
Thanks, but I prefer the archive version rather than the installer. 
Contrary to "rumors" on the internet, it is very easy to install 
Postgres on Windows even without the installer.  All you need to add is 
a couple of DLLs to the bin directory.


On 6/23/2017 11:58 AM, Andreas Kretschmer wrote:
Yeah, use the community version from postgresql.org ;-) 
The URL I posted is from https://www.postgresql.org/download/windows/ -- 
is there another "community version" somewhere?  Can you please post a link?


On 6/23/2017 12:12 PM, Adrian Klaver wrote:
The tags are wrong, but not the binaries. If you click on one of the 
buttons you will be taken to 9.6.3 binaries for the OS/Arch.
Thank you, you are correct.  I guess I should have tried to click on the 
link first, but I thought that it would start a download.


Thank you all for your help,


Igal





[GENERAL] Download 9.6.3 Binaries

2017-06-23 Thread Igal @ Lucee.org

Hello,

I expected to find binaries for 9.6.3 at 
https://www.enterprisedb.com/download-postgresql-binaries but I only see 
9.6.2.


Am I looking at the wrong place?

Thanks,

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] BST Time Zone Discrepancy

2017-02-06 Thread Igal @ Lucee.org

Thank you, Steve and Adrian, for clarifying.


Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] BST Time Zone Discrepancy

2017-02-06 Thread Igal @ Lucee.org

Tom,

Thank you for your reply:

On 2/6/2017 12:18 PM, Tom Lane wrote:

This is controlled by the timezone_abbreviations file, which if
you haven't changed it lists:

# CONFLICT! BST is not unique
# Other timezones:
#  - BST: Bougainville Standard Time (Papua New Guinea)
BST  3600 D  # British Summer Time
  # (Europe/London)
I haven't changed any of the config files.  I can not find that file on 
my system (maybe it's in the source code only).
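
For what it's worth, a quick check of which abbreviation set is active 
(a minimal sketch):

    SHOW timezone_abbreviations;  -- returns 'Default' unless it was changed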


I am using the Red Hat distribution:  PostgreSQL 9.6.1 on 
x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 
4.8.5-4), 64-bit



pg_timezone_names shows the *current* abbreviation for the zone in question
I'm not sure what you mean by "current".  If this is not an issue then 
that's fine, you can ignore this message.  It just seemed weird to me 
that pg_timezone_names and pg_timezone_abbrevs showed very different 
results for the same code.


Thanks,

Igal Sapir
Lucee Core Developer
Lucee.org 



[GENERAL] BST Time Zone Discrepancy

2017-02-06 Thread Igal @ Lucee.org
I've been having some issues with using BST as a timezone, so I checked 
the Postgres catalogs and found a discrepancy.


I expected it to be British Summer Time, which is a Daylight Saving Time 
in England and has a 1-hour UTC offset.


In pg_timezone_abbrevs I see a 1-hr offset and is_dst set to true as I 
expected:


intranet=> SELECT * FROM pg_timezone_abbrevs WHERE abbrev='BST';
 abbrev | utc_offset | is_dst
--------+------------+--------
 BST    | 01:00:00   | t
(1 row)

But in pg_timezone_names I get an 11-hr offset with a completely 
different time zone.


intranet=> SELECT * FROM pg_timezone_names WHERE abbrev='BST';
            name            | abbrev | utc_offset | is_dst
----------------------------+--------+------------+--------
 Pacific/Bougainville       | BST    | 11:00:00   | f
 posix/Pacific/Bougainville | BST    | 11:00:00   | f
(2 rows)

Surely, they can not both be right, so I'm thinking that this is a bug?

Thank you,

--

Igal Sapir
Lucee Core Developer
Lucee.org 



[GENERAL] RPM Package of PostgreSQL 9.6.1 for CentOS

2016-10-31 Thread Igal @ Lucee.org

Hi,

I see the package for 9.6.0 at 
https://yum.postgresql.org/9.6/redhat/rhel-7-x86_64/


pgdg-centos96-9.6-3.noarch.rpm

Should there be one for 9.6.1?

When I try to install postgresql96-9.6.1-1PGDG.rhel7.x86_64.rpm I get 
dependency errors, so I'm looking for a package like the one above for 
9.6.1.


Any ideas?  Is it just a matter of time until it's released?

Thanks,

--

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] recordings of pgconf us 2016

2016-05-31 Thread Igal @ Lucee.org
Funny.  I was just looking for that myself.  I would expect it to go on 
their channel at 
https://www.youtube.com/channel/UCsJkVvxwoM7R9oRbzvUhbPQ but so far 
nothing from this year.


PGCon has published their 2016 recordings on their channel: 
https://www.youtube.com/playlist?list=PLuJmmKtsV1dNE5y1gu1xpbIl3M2b7AW4D



Igal Sapir
Lucee Core Developer
Lucee.org 

On 5/28/2016 9:36 PM, Johannes wrote:

I guess I have seen all video recording from pgconf us 2015 at youtube.
Are there any recording from this year available?

Best regards Johannes





Re: [GENERAL] Cannot start the PostgreSQL service

2016-01-30 Thread Igal @ Lucee.org

On 1/30/2016 12:06 PM, Adrian Klaver wrote:

On 01/30/2016 11:01 AM, David Unsworth wrote:
Ccing list, so more eyes can see.



Assuming this is a Windows machine, what version of the OS?
Windows 7 Home Premium


Did you check the Windows Event Viewer?  When a service fails to start 
you should see the details there in the Application log errors.







Re: [GENERAL] Cannot start the PostgreSQL service

2016-01-30 Thread Igal @ Lucee.org

On 1/30/2016 12:10 PM, Igal @ Lucee.org wrote:


Did you check the Windows Event Viewer?  When a service fails to start 
you should see the details there in the Application log errors.

Sorry, I just realized that you had this:


2016-01-25 18:45:19 GMT FATAL:  the database system is starting up
2016-01-27 01:09:47 GMT FATAL:  role "SYSTEM" does not exist
pg_ctl: PID file "C:/Program Files (x86)/PostgreSQL/8.4/data/postmaster.pid" does not exist


So you are using PostgreSQL 8.4?  Not sure why you would do that on a 
development machine (assuming that you're not running production servers 
on Windows 7 Home)


Looks to me like the issue is that the service is running under the 
Local System account, which is not recognized by PostgreSQL, and that's 
why the log shows [role "SYSTEM" does not exist].  If you have installed 
using the installer then you should probably have a "postgres" account 
on the machine?  If so, set the service to Log On as that account.




Re: [GENERAL] Tutorial on How to Compile PostgreSQL 9.5 for Windows 64bit

2016-01-25 Thread Igal @ Lucee.org

On 1/25/2016 10:51 PM, Thomas Kellerer wrote:

Very helpful, thanks.

Great, thank you for the feedback.

Is there any tutorial on how to compile extensions for Windows?

There are many interesting extensions where no Windows binaries are available, 
so it would be really helpful if I could compile them myself.
I'm actually trying to figure those things out, and once I do 
(hopefully) I plan to post similar tutorials.


My top two extensions of interest are plv8 and tds_fdw.


Igal






[GENERAL] Tutorial on How to Compile PostgreSQL 9.5 for Windows 64bit

2016-01-25 Thread Igal @ Lucee.org

Hi Everybody,

I have posted a video tutorial on How to Compile PostgreSQL 9.5 for 
Windows 64bit

https://www.youtube.com/watch?v=-BJmuZT5IPE

It was quite difficult for me to figure it out, so hopefully it will 
make life easier for the next guy (or gal).

--

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] plv8 binaries for PostgreSQL 9.5 on Windows

2016-01-19 Thread Igal @ Lucee.org

On 1/19/2016 2:56 PM, Regina Obe wrote:

On 01/11/2016 05:50 PM, Igal @ Lucee.org wrote:
Where can I find binaries for plv8 for PostgreSQL 9.5 for Windows?

I've posted PostgreSQL 9.5 Windows binaries for PLV8

http://www.postgresonline.com/journal/archives/360-PLV8-binaries-for-PostgreSQL-9.5-windows-both-32-bit-and-64-bit.html


Awesome!  Thank you :)





Re: [GENERAL] plv8 binaries for PostgreSQL 9.5 on Windows

2016-01-19 Thread Igal @ Lucee.org

On 1/13/2016 8:25 AM, Igal @ Lucee.org wrote:

On 1/12/2016 12:57 PM, Adrian Klaver wrote:

On 01/11/2016 05:50 PM, Igal @ Lucee.org wrote:

Where can I find binaries for plv8 for PostgreSQL 9.5 for Windows?


The only Windows binaries I could find were for 9.4, here:

http://www.postgresonline.com/journal/archives/341-PLV8-binaries-for-PostgreSQL-9.4-windows-both-32-bit-and-64-bit.html 



Maybe you could contact the authors and see if they are going to 
build 9.5 versions?

Yes, that's where I found the binaries for 9.4 which worked very nicely.

The post cites Leo Hsu and Regina Obe (I actually have a couple of 
their books) as the authors, and judging by the replies to comments it 
looks like Regina is the one who posted it.


I believe that I've seen some posts by her in the mailing list here 
recently so hopefully she will see this thread and comment on it.


Thanks!

Hi Regina,

Do you have any plans to build plv8 for Windows for PostgreSQL 9.5?

Thanks,


Igal





Re: [GENERAL] plv8 binaries for PostgreSQL 9.5 on Windows

2016-01-13 Thread Igal @ Lucee.org

On 1/12/2016 12:57 PM, Adrian Klaver wrote:

On 01/11/2016 05:50 PM, Igal @ Lucee.org wrote:

Where can I find binaries for plv8 for PostgreSQL 9.5 for Windows?


The only Windows binaries I could find were for 9.4, here:

http://www.postgresonline.com/journal/archives/341-PLV8-binaries-for-PostgreSQL-9.4-windows-both-32-bit-and-64-bit.html 



Maybe you could contact the authors and see if they are going to build 
9.5 versions?

Yes, that's where I found the binaries for 9.4 which worked very nicely.

The post cites Leo Hsu and Regina Obe (I actually have a couple of their 
books) as the authors, and judging by the replies to comments it looks 
like Regina is the one who posted it.


I believe that I've seen some posts by her in the mailing list here 
recently so hopefully she will see this thread and comment on it.


Thanks!





[GENERAL] Building PostgreSQL in Eclipse on Windows

2016-01-11 Thread Igal @ Lucee.org
Can anyone help me figure out how to run/debug PostgreSQL in Eclipse on 
Windows?  This is a cross post with 
http://stackoverflow.com/questions/34733991/building-postgresql-in-eclipse-on-windows 
:


I'm trying to build PostgreSQL in Eclipse CDT Mars on Windows using 
MinGW-64 gcc 5.3.0.


I installed MSYS2 and used pacman to update all the software that came 
with it and to install gcc etc.


I downloaded the PostgreSQL source code and tried to follow the docs, 
but I can't figure out where to download the dependencies -- 
1. libreadline5-dev, 2. zlib1g-dev, 3. bison (YACC), 4. flex -- or 
where to save them to on disk.


I ran ./configure --prefix=$HOME/project --enable-depend 
--enable-cassert --enable-debug and it seems to have completed with no 
errors, but I do not see anything at $HOME/project.


I installed Eclipse CDT and added the include paths to the Environment 
(screenshot of the Eclipse Environment settings omitted here).


I imported the project as Existing Code as Makefile Project.

When I try to build the project, Eclipse says "Info: Nothing to build 
for postgres".


When I try to run it, a dialog box says "Unable to Launch. The selection 
can not be launched and there are no recent launches".


How can I run/debug PostgreSQL in Eclipse on Windows?

Thanks!



--

Igal Sapir
Lucee Core Developer
Lucee.org 



[GENERAL] plv8 binaries for PostgreSQL 9.5 on Windows

2016-01-11 Thread Igal @ Lucee.org

Where can I find binaries for plv8 for PostgreSQL 9.5 for Windows?

Thanks,

--

Igal Sapir
Lucee Core Developer
Lucee.org 



Re: [GENERAL] Building PostgreSQL in Eclipse on Windows

2016-01-11 Thread Igal @ Lucee.org

On 1/11/2016 6:21 PM, John R Pierce wrote:
wait, are you trying to USE postgres on Eclipse on Windows, or BUILD 
it?  I would use the enterprisedb build of postgres-for-windows, 
and then you just need to install the appropriate postgresql jdbc jar 
file into Eclipse to be able to use it from your Eclipse-based 
development.

I'm trying to build from source, and hopefully be able to run in debug mode.



I ran ./configure --prefix=$HOME/project --enable-depend 
--enable-cassert --enable-debug and it seems to have completed with 
no errors, but I do not see anything at $HOME/project.


that just sets up the makefiles... you now need to `make` and `make 
install` to build postgres, then put its binaries in your 'prefix' path.
Thank you, I will try this as well as Michael's suggestions and let you 
guys know how it went.




[GENERAL] pg_upgrade on Windows

2016-01-07 Thread Igal @ Lucee.org

hi guys,

I am having issues with pg_upgrade on Windows.  I have posted a question 
on StackOverflow -- at 
http://stackoverflow.com/questions/34664236/pg-upgrade-on-windows-cannot-write-to-log-file-pg-upgrade-internal-log 
-- copied below for convenience:


I'm trying to run pg_upgrade on Windows, but I'm getting the error:

   cannot write to log file pg_upgrade_internal.log Failure, exiting

I saw a similar question for Linux (question 23216734) which 
explains that the issue is with permissions, but it doesn't help with 
Windows as I do not have a user named postgres.


The same goes for the pg_upgrade docs, which 
mention a postgres user:


   RUNAS /USER:postgres "CMD.EXE"

But again, I do not have such a user, and I am trying to run this 
command as Administrator, so I don't understand why I would have no 
permission.  I even tried


   RUNAS /USER:Administrator "CMD.EXE"

and ran pg_upgrade in the new command prompt, but I am getting the same 
error.

Also, I am not sure which directory needs permissions.  Where is the 
process trying to write pg_upgrade_internal.log to?


Any ideas?  TIA!

--

Igal Sapir
Lucee Core Developer
Lucee.org