CREATE COLLATION to match pg_collation data

2019-01-11 Thread rihad
Hi, since pg_import_system_collations(), which would solve all this,
doesn't exist in 9.6.x, is there a way I can properly map it to a CREATE
COLLATION call? Specifically, I need to have this collation (from 10.x) in
9.6.x so it works on FreeBSD:



 collname | collnamespace | collowner | collprovider | collencoding | collcollate | collctype | collversion
----------+---------------+-----------+--------------+--------------+-------------+-----------+-------------
 az-x-icu |            11 |        10 | i            |           -1 | az          | az        | 153.88.34




CREATE COLLATION name (
    [ LOCALE = locale, ]
    [ LC_COLLATE = lc_collate, ]
    [ LC_CTYPE = lc_ctype ]
)
CREATE COLLATION name FROM existing_collation
https://www.postgresql.org/docs/9.6/sql-createcollation.html

Can I just not use "create collation", and simply insert the new row as
is? Thanks for any tips.
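
For what it's worth, a minimal 9.6 sketch: 9.6 has no ICU support (the 'i'
provider and collversion columns only exist as of 10), so the closest
mapping goes through a libc locale. The locale name below is an
assumption -- check `locale -a` on the FreeBSD host first.

-- Hypothetical libc-based approximation of the 10.x az-x-icu collation;
-- 9.6 cannot reproduce the ICU provider or its versioning.
CREATE COLLATION "az-x-icu" (
    LC_COLLATE = 'az_AZ.UTF-8',
    LC_CTYPE   = 'az_AZ.UTF-8'
);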




Re: insert into: NULL in date column

2019-01-11 Thread Ricardo Martin Gomez
Hi, one question.
Do you put an explicit NULL in the column value?
Another option is to omit the column "next_contact" from your INSERT
clause entirely. Then, if the column has a default value, that value will
be inserted; otherwise NULL will be inserted.
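
A sketch of both options, with hypothetical table and column names built
around the thread's next_contact column:

-- Option 1: pass NULL explicitly.
INSERT INTO activities (description, next_contact)
VALUES ('Some text', NULL);
-- Option 2: omit the column, so its DEFAULT (or NULL) applies.
INSERT INTO activities (description)
VALUES ('Some text');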

Regards


From: Adrian Klaver
Sent: Friday, January 11, 22:09
Subject: Re: insert into: NULL in date column
To: Rich Shepard, pgsql-general@lists.postgresql.org





Re: insert into: NULL in date column

2019-01-11 Thread Adrian Klaver

On 1/11/19 4:00 PM, Rich Shepard wrote:

On Fri, 11 Jan 2019, Ken Tanzer wrote:


I think the problem is actually that you're trying to represent your NULL
dates with '', which PG doesn't like.


Ken,

   That's certainly how I saw the error message.


cat test.csv

my_text,my_date,my_int
'Some Text','1/1/18',3
'More Text,,2
'Enough','',1

CREATE TEMP TABLE my_test (my_text TEXT, my_date DATE, my_int INT);

\copy my_test FROM test.csv WITH CSV HEADER

ERROR:  invalid input syntax for type date: "''"
CONTEXT:  COPY my_test, line 4, column my_date: "''"


You'll note that it breaks on the last line, which is like yours, not
the one before it.


   Huh! I'll leave off the quote marks and see if that makes a
difference ... tomorrow morning. Since dates are treated as strings I
thought their absence also needed the quotes. Stay tuned to this mail
list for test results.


Dates are not treated as strings; they are treated as dates. There is
built-in casting for strings that are valid dates, though:


create table date_test(dt_fld date);

insert into date_test values('01/11/19');
INSERT 0 1

--Trying MySQL 'null':)
insert into date_test values('00/00/');
ERROR:  date/time field value out of range: "00/00/"
LINE 1: insert into date_test values('00/00/');

insert into date_test values('');
ERROR:  invalid input syntax for type date: ""
LINE 1: insert into date_test values('');

As pointed out you are being told '' is not a valid date.
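
For completeness, a sketch of the working form, continuing the date_test
example above: pass an unquoted NULL rather than '':

insert into date_test values(NULL);
INSERT 0 1
-- dt_fld is now NULL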




Thanks,

Rich





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: insert into: NULL in date column

2019-01-11 Thread David G. Johnston
On Fri, Jan 11, 2019 at 4:25 PM Rob Sargent  wrote:
> We don't have more context in "activities.sql" but if your OP was
> verbatim, it's keeling over on the comma ending the long text string.
> Something syntactically askew I think.

If the problem were where you described, the parser would never have
gotten to the point of trying to pass an empty string to a date
constructor, resulting in a runtime error.  It would have failed at
compile time with a very different error, probably relating to a
"malformed statement" or "unexpected identifier".

The OP provided sufficient detail (though an actual complete failing
command would have been nice) to pinpoint the misunderstanding: the
empty string and null are not the same thing, and converting the
empty string to a date is not possible (i.e., it doesn't just
silently return null for invalid input; one must pass in null
explicitly if one wishes to construct a date-typed null).

David J.



Re: insert into: NULL in date column

2019-01-11 Thread David G. Johnston
On Fri, Jan 11, 2019 at 3:56 PM Rich Shepard  wrote:
>
> A table has this column definition:
>
> next_contact date DEFAULT '2020-11-06'
> CONSTRAINT valid_next_date
> CHECK (next_contact >= CURRENT_DATE),
>
> (and I don't know that it needs a default).

The default does seem a bit arbitrary and pointless...

David J.
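
If it really is unwanted, a sketch of dropping it (the table name
"activities" is a guess, taken from the activities.sql file mentioned in
the thread):

ALTER TABLE activities ALTER COLUMN next_contact DROP DEFAULT;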



Re: insert into: NULL in date column

2019-01-11 Thread David G. Johnston
On Fri, Jan 11, 2019 at 5:01 PM Rich Shepard  wrote:
> On Fri, 11 Jan 2019, Ken Tanzer wrote:
> > \copy my_test FROM test.csv WITH CSV HEADER
> >
> > ERROR:  invalid input syntax for type date: "''"
> > CONTEXT:  COPY my_test, line 4, column my_date: "''"

Right problem, wrong solution, since it appears that the OP is using
INSERT/VALUES instead of COPY, and you cannot just leave an empty field
in a VALUES expression.

> > You'll note that it breaks on the last line, which is like yours, not
> > the one before it.
>
>Huh! I'll leave off the quote marks and see if that makes a difference ...
> tomorrow morning. Since dates are treated as strings I thought their absence
> also needed the quotes. Stay tuned to this mail list for test results.

Using whatever syntax your chosen method requires, you need to express
the fact that you wish to pass "null" into the input function for
date.  The empty string is not "null".  For COPY you can simply use
nothing or, as the COPY command says is required in text mode, \N.
For VALUES you need to provide an actual expression that resolves to
null - the null literal is usually the expression of choice.

VALUES (1, null, 3) is valid, VALUES (1,,3) is not.
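
Both routes, sketched against the my_test table from earlier in the
thread (note that in CSV mode an unquoted empty field is read as NULL by
default, while a quoted '' is not):

-- VALUES route: an explicit null literal.
INSERT INTO my_test VALUES ('Some Text', NULL, 3);
-- COPY route: leave the field empty and unquoted; a CSV line like
--   More Text,,2
-- loads my_date as NULL.
\copy my_test FROM test.csv WITH (FORMAT csv, HEADER)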

David J.



Re: insert into: NULL in date column

2019-01-11 Thread Rich Shepard

On Fri, 11 Jan 2019, Ken Tanzer wrote:


I think the problem is actually that you're trying to represent your NULL
dates with '', which PG doesn't like.


Ken,

  That's certainly how I saw the error message.


cat test.csv

my_text,my_date,my_int
'Some Text','1/1/18',3
'More Text,,2
'Enough','',1

CREATE TEMP TABLE my_test (my_text TEXT, my_date DATE, my_int INT);

\copy my_test FROM test.csv WITH CSV HEADER

ERROR:  invalid input syntax for type date: "''"
CONTEXT:  COPY my_test, line 4, column my_date: "''"


You'll note that it breaks on the last line, which is like yours, not
the one before it.


  Huh! I'll leave off the quote marks and see if that makes a difference ...
tomorrow morning. Since dates are treated as strings I thought their absence
also needed the quotes. Stay tuned to this mail list for test results.

Thanks,

Rich



Re: insert into: NULL in date column

2019-01-11 Thread Rich Shepard

On Fri, 11 Jan 2019, Rob Sargent wrote:


Something syntactically askew I think.


Rob,

  I agree that's the problem. Why there's a problem is what I need to learn.

Thanks,

Rich





Re: insert into: NULL in date column

2019-01-11 Thread Ken Tanzer
On Fri, Jan 11, 2019 at 3:25 PM Rob Sargent  wrote:

>
> On 1/11/19 4:21 PM, Rich Shepard wrote:
> > On Fri, 11 Jan 2019, Rob Sargent wrote:
> >
> >>> psql:activities.sql:2: ERROR:  invalid input syntax for type date: ""
> >>> LINE 2: ...reaction they''ve experienced environmental issues.','','');
> >  ^
> >> Mismatched single quotes in activities.sql?
> >
> > Rob,
> >
> >   I don't see the mismatch. The preceding text column escapes the
> > internal single quotes by doubling them while the entire string is
> > single quoted.
> >
> > Regards,
> >
> > Rich
> >
> We don't have more context in "activities.sql" but if your OP was
> verbatim, it's keeling over on the comma ending the long text string.
> Something syntactically askew I think.
>
> I think the problem is actually that you're trying to represent your NULL
dates with '', which PG doesn't like.

cat test.csv

my_text,my_date,my_int
'Some Text','1/1/18',3
'More Text,,2
'Enough','',1

CREATE TEMP TABLE my_test (my_text TEXT, my_date DATE, my_int INT);

\copy my_test FROM test.csv WITH CSV HEADER

ERROR:  invalid input syntax for type date: "''"
CONTEXT:  COPY my_test, line 4, column my_date: "''"


You'll note that it breaks on the last line, which is like yours, not
the one before it.

I think there may be some other ways to specify the NULL value, but
I'm not really sure about that part.

Cheers,

Ken

-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://demo.agency-software.org/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the
discussion.


Re: insert into: NULL in date column

2019-01-11 Thread Rob Sargent



On 1/11/19 4:21 PM, Rich Shepard wrote:

On Fri, 11 Jan 2019, Rob Sargent wrote:


psql:activities.sql:2: ERROR:  invalid input syntax for type date: ""
LINE 2: ...reaction they''ve experienced environmental issues.','','');

 ^

Mismatched single quotes in activities.sql?


Rob,

  I don't see the mismatch. The preceding text column escapes the internal
single quotes by doubling them while the entire string is single quoted.

Regards,

Rich

We don't have more context in "activities.sql" but if your OP was 
verbatim, it's keeling over on the comma ending the long text string. 
Something syntactically askew I think.




Re: insert into: NULL in date column

2019-01-11 Thread Rich Shepard

On Fri, 11 Jan 2019, Rob Sargent wrote:


psql:activities.sql:2: ERROR:  invalid input syntax for type date: ""
LINE 2: ...reaction they''ve experienced environmental issues.','','');

     ^

Mismatched single quotes in activities.sql?


Rob,

  I don't see the mismatch. The preceding text column escapes the internal
single quotes by doubling them while the entire string is single quoted.

Regards,

Rich



Re: insert into: NULL in date column

2019-01-11 Thread Rob Sargent



On 1/11/19 3:56 PM, Rich Shepard wrote:

A table has this column definition:

next_contact date DEFAULT '2020-11-06'
   CONSTRAINT valid_next_date
   CHECK (next_contact >= CURRENT_DATE),

(and I don't know that it needs a default).

In an input statement that column is left empty ('') when there's no
defined date. When I try to insert into that table I get a format error:

psql:activities.sql:2: ERROR:  invalid input syntax for type date: ""
LINE 2: ...reaction they''ve experienced environmental issues.','','');
    ^
and my web research has not shown me my error. Removing the check
constraint does not eliminate that error.

Please show me what I've done incorrectly.

TIA,

Rich



Mismatched single quotes in activities.sql?




insert into: NULL in date column

2019-01-11 Thread Rich Shepard

A table has this column definition:

next_contact date DEFAULT '2020-11-06'
   CONSTRAINT valid_next_date
   CHECK (next_contact >= CURRENT_DATE),

(and I don't know that it needs a default).

In an input statement that column is left empty ('') when there's no defined
date. When I try to insert into that table I get a format error:

psql:activities.sql:2: ERROR:  invalid input syntax for type date: ""
LINE 2: ...reaction they''ve experienced environmental issues.','','');
^
and my web research has not shown me my error. Removing the check constraint
does not eliminate that error.

Please show me what I've done incorrectly.

TIA,

Rich



Re: Benchmark of using JSON to transport query results in node.js

2019-01-11 Thread Michel Pelletier
On Fri, Jan 11, 2019 at 10:31 AM Mitar  wrote:

> Hi!
>
> > Another option is to query directly from node.js and get JSON or native
> query from the database (extensive use of functions / stored procedures).
>
> For web applications, I was even thinking about this crazy approach:
> get PostgreSQL to encode all results in JSON, and then in node.js do
> not parse JSON, but send it as string directly to the browser.


Not crazy at all, that's basically how PostgREST works:


http://postgrest.org/en/v5.2/

-Michel



Re: Benchmark of using JSON to transport query results in node.js

2019-01-11 Thread Mitar
Hi!

On Fri, Jan 11, 2019 at 3:06 AM Tony Shelver  wrote:
> I'm fairly new to Postgres, but one question is how node.js implements the 
> native driver when fetching the data: fetchall, fetchmany or fetch.single?  
> Also which native driver is it using?

The package I used is here [1]. It is using libpq. Is there some
better native driver to use? It seems it is mostly using the PQexec call;
I am not sure how that call internally fetches the data [2]. I think the
reason for the slowness is that parsing of the results is still done in
JavaScript [3].

[1] https://github.com/brianc/node-pg-native
[2] https://www.postgresql.org/docs/devel/libpq-exec.html
[3] https://github.com/brianc/node-pg-native/blob/master/lib/build-result.js

> Does the native driver do a round trip for each record fetched, or can it 
> batch them into multiples?

I think it waits for all results to first arrive via the native driver
and then starts processing them in JavaScript.

> Another option is to query directly from node.js and get JSON or native query 
> from the database (extensive use of functions / stored procedures).

For web applications, I was even thinking about this crazy approach:
get PostgreSQL to encode all results in JSON, and then in node.js do
not parse the JSON, but send it as a string directly to the browser. I
have seen this many times in my other projects: having REST and other
layers in between requires you to read from the database, serialize it
into something, then deserialize it back on the server side, maybe
even wrap it into ORM objects (Django, I am looking at you), just so
that you can send it to your REST code, which then converts it to JSON
and sends it over. In my older benchmarks using Python/Django this
quickly added a few hundred milliseconds when, for example, time-series
GeoJSON data (a few thousand points) was read and sent to the client
through a REST interface. I hacked it so that I passed GeoJSON fields
through directly as strings, by injecting the GeoJSON into a
placeholder in the otherwise-serialized JSON (luckily JSON is
recursive) through pure string manipulation, and it improved things
greatly. And I just disabled all other REST content types except for
JSON. We have not used XML or YAML over our API anyway.

So to me it seems encoding all results in JSON is always a win in
node.js. If you consume it in node.js, great. If you are planning to
pass it on, maybe just pass it on as-is.
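
To make the wrapping concrete, here is a sketch following the
SELECT to_json(t) FROM (... original query ...) AS t pattern from the
original post (the table and column names are assumptions for
illustration):

-- Wrap each result row in a JSON document on the server, so the client
-- can forward the text to the browser without parsing it.
SELECT to_json(t)
FROM (SELECT uid, created, ip FROM users) AS t;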

> Our application is data-intensive, involving a lot of geotracking data across 
> hundreds of devices at its core, and then quite a bit of geo/mapping/ 
> analytics around that..

It seems maybe very similar to what I had in Python/Django/REST in the past. :-)


Mitar

-- 
http://mitar.tnode.com/
https://twitter.com/mitar_m



Re: How to always run UPDATE FROM despite missing records in the source table?

2019-01-11 Thread Andrew Gierth
> "Alexander" == Alexander Farber  writes:

 Alexander> However the following query does not work as intended and
 Alexander> does not update any fields, because there is no matching
 Alexander> block in the geoip table found:

 Alexander> UPDATE users u SET
 Alexander> visited = now(),  -- HOW TO ALWAYS UPDATE THIS FIELD?
 Alexander> ip  = '20.20.20.20'::inet, -- HOW TO ALWAYS UPDATE THIS FIELD?
 Alexander> lat = i.lat,
 Alexander> lng = i.lng
 Alexander> FROM geoip i
 Alexander> WHERE u.uid = 2 AND '20.20.20.20'::inet <<= i.block;

 Alexander> The field visited and ip however should be always updated -
 Alexander> regardless if the block was found or not.

 Alexander> Kind of LEFT JOIN, but for UPDATE - how to achieve this please?

It can be done like this (this assumes you want to preserve the previous
values of u.lat/u.lng if the block was not found; if you want to set
them to null instead, then remove the coalesce() calls):

UPDATE users u
   SET visited = now(),
   ip = v.ip,
   lat = coalesce(i.lat, u.lat),
   lng = coalesce(i.lng, u.lng)
  FROM (VALUES ('20.20.20.20'::inet)) v(ip)
   LEFT JOIN geoip i ON (v.ip <<= i.block)
 WHERE u.uid = 2;

 Alexander> But that would run the same subquery twice (correct?) and my
 Alexander> geoip table is already slow with 3073410 records

Slow even with a gist or spgist index? What does the explain analyze
look like?

(You could also try using the ip4r module; I've not done any serious
benchmarking to see if it's faster than the built-in index types, though
it has some theoretical advantages due to not being restricted to CIDR
ranges. In pg versions before the built-in inet type got a gist index
method, ip4r was _the_ way to do ip block lookups for geoip etc.)

-- 
Andrew (irc:RhodiumToad)



Re: How to always run UPDATE FROM despite missing records in the source table?

2019-01-11 Thread Adrian Klaver

On 1/11/19 4:50 AM, Alexander Farber wrote:

Good afternoon

I have prepared a simplified test case for my question: 
https://www.db-fiddle.com/f/22jfWnsvqD8hVeFPXsyLbV/0


In PostgreSQL 10.6 there are 2 tables:

CREATE TABLE users (
    uid     SERIAL PRIMARY KEY,
    created timestamptz NOT NULL,
    visited timestamptz NOT NULL,
    ip      inet        NOT NULL,
    lat     double precision,
    lng     double precision
);

CREATE TABLE geoip (
    block   inet    PRIMARY KEY,
    lat     double precision,
    lng     double precision
);

CREATE INDEX ON geoip USING SPGIST (block);

which are filled with the following test data:

INSERT INTO users (created, visited, ip) VALUES
   (now(), now(), '1.2.3.4'::inet),
   (now(), now(), '1.2.3.5'::inet),
   (now(), now(), '1.2.3.6'::inet);

INSERT INTO geoip (block, lat, lng) VALUES
  ('1.2.3.0/24', -33.4940, 143.2104),
  ('10.0.0.0/8', 34.6617, 133.9350);

Then in a stored function I run the following UPDATE command -

UPDATE users u SET
     visited = now(),
     ip      = '10.10.10.10'::inet,
     lat     = i.lat,
     lng     = i.lng
FROM geoip i
WHERE u.uid = 1 AND '10.10.10.10'::inet <<= i.block;

(the 1 and the ip address are actually in_uid and in_ip parameters in my 
stored function).


The above query works well and updates all 4 fields in the users table.

However the following query does not work as intended and does not
update any fields, because no matching block is found in the geoip table:


UPDATE users u SET
     visited = now(),              -- HOW TO ALWAYS UPDATE THIS FIELD?
     ip      = '20.20.20.20'::inet, -- HOW TO ALWAYS UPDATE THIS FIELD?
     lat     = i.lat,
     lng     = i.lng
FROM geoip i
WHERE u.uid = 2 AND '20.20.20.20'::inet <<= i.block;

The visited and ip fields, however, should always be updated - regardless
of whether the block was found or not.


Kind of LEFT JOIN, but for UPDATE - how to achieve this please?

The only workaround that I could think of is -

UPDATE users SET
     visited = now(),
     ip      = '20.20.20.20'::inet,
     lat     = (SELECT lat FROM geoip WHERE '20.20.20.20'::inet <<= block),
     lng     = (SELECT lng FROM geoip WHERE '20.20.20.20'::inet <<= block)
WHERE uid = 2;

But that would run the same subquery twice (correct?) and my geoip table 
is already slow with 3073410 records (and that is why I am trying to 
cache its lat and lng values in the users table on each user login event)


Why not put a test for the block in the function and then use different
UPDATEs depending on the result?


Pseudo code:

IF ip IN block THEN
UPDATE users u SET
visited = now(),
ip  = '10.10.10.10'::inet,
lat = i.lat,
lng = i.lng
FROM geoip i
WHERE u.uid = 1;

ELSE

UPDATE users u SET
visited = now(),
ip  = '20.20.20.20'::inet
FROM geoip i
WHERE u.uid = 2 ;

END IF;
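
A sketch of that pseudo code as an actual plpgsql fragment, assuming the
in_uid/in_ip parameters mentioned upthread (note that the first UPDATE
still needs the block condition in its WHERE clause, and the second needs
no join at all):

IF EXISTS (SELECT 1 FROM geoip WHERE in_ip <<= block) THEN
    UPDATE users u SET
        visited = now(),
        ip      = in_ip,
        lat     = i.lat,
        lng     = i.lng
    FROM geoip i
    WHERE u.uid = in_uid AND in_ip <<= i.block;
ELSE
    UPDATE users SET
        visited = now(),
        ip      = in_ip
    WHERE uid = in_uid;
END IF;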



Regards
Alex



--
Adrian Klaver
adrian.kla...@aklaver.com



RE: How to always run UPDATE FROM despite missing records in the source table?

2019-01-11 Thread Kevin Brannen
From: Alexander Farber 


The only workaround that I could think of is -

UPDATE users SET
visited = now(),
ip  = '20.20.20.20'::inet,
lat = (SELECT lat FROM geoip WHERE '20.20.20.20'::inet <<= block),
lng = (SELECT lng FROM geoip WHERE '20.20.20.20'::inet <<= block)
WHERE uid = 2;

But that would run the same subquery twice (correct?) and my geoip table is 
already slow with 3073410 records (and that is why I am trying to cache its lat 
and lng values in the users table on each user login event)

Have you considered using a WITH clause to get the data so the query is only 
run once?
See section 7.8.2 at https://www.postgresql.org/docs/9.6/queries-with.html
Kevin
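
A sketch of that suggestion against the thread's tables (assuming at most
one geoip block matches a given address): the CTE runs the lookup once,
and visited/ip are still updated when it comes back empty.

WITH g AS (
    SELECT lat, lng FROM geoip WHERE '20.20.20.20'::inet <<= block
)
UPDATE users SET
    visited = now(),
    ip      = '20.20.20.20'::inet,
    lat     = (SELECT lat FROM g),
    lng     = (SELECT lng FROM g)
WHERE uid = 2;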


Re: log level of "drop cascade" lists

2019-01-11 Thread Willy-Bas Loos
On Thu, Jan 10, 2019 at 4:44 PM Adrian Klaver 
wrote:

>
> 1) BEGIN;
> DROP schema myschema CASCADE;
> ROLLBACK/COMMIT;
>
> 2) \d myschema.*


On Thu, Jan 10, 2019 at 5:04 PM Tom Lane  wrote:

> I think that would be met with more complaints than kudos.
> "WARNING" is supposed to mean "there's probably something wrong here",
> and a report of a cascaded drop is not that.
>

OK, both are good points.
Since the list is truncated and possibly affects objects in other schemas,
I would recommend setting
SET log_min_messages = notice;
for that session (for anyone else reading this: no need to set it in the
settings file, the above is an SQL command).
Then it is possible to view the full list in the log (e.g. after
rolling back the transaction with the drop query).
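
Putting the pieces together, a sketch of the whole session (with the
assumption that you are a superuser, since log_min_messages can only be
changed per session by one):

SET log_min_messages = notice;
BEGIN;
DROP SCHEMA myschema CASCADE;
ROLLBACK;
-- the untruncated cascade list is now in the server log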

Cheers,
-- 
Willy-Bas Loos


How to always run UPDATE FROM despite missing records in the source table?

2019-01-11 Thread Alexander Farber
Good afternoon

I have prepared a simplified test case for my question:
https://www.db-fiddle.com/f/22jfWnsvqD8hVeFPXsyLbV/0

In PostgreSQL 10.6 there are 2 tables:

CREATE TABLE users (
    uid     SERIAL PRIMARY KEY,
    created timestamptz NOT NULL,
    visited timestamptz NOT NULL,
    ip      inet        NOT NULL,
    lat     double precision,
    lng     double precision
);

CREATE TABLE geoip (
    block   inet    PRIMARY KEY,
    lat     double precision,
    lng     double precision
);

CREATE INDEX ON geoip USING SPGIST (block);

which are filled with the following test data:

INSERT INTO users (created, visited, ip) VALUES
  (now(), now(), '1.2.3.4'::inet),
  (now(), now(), '1.2.3.5'::inet),
  (now(), now(), '1.2.3.6'::inet);

INSERT INTO geoip (block, lat, lng) VALUES
 ('1.2.3.0/24', -33.4940, 143.2104),
 ('10.0.0.0/8', 34.6617, 133.9350);

Then in a stored function I run the following UPDATE command -

UPDATE users u SET
visited = now(),
ip  = '10.10.10.10'::inet,
lat = i.lat,
lng = i.lng
FROM geoip i
WHERE u.uid = 1 AND '10.10.10.10'::inet <<= i.block;

(the 1 and the ip address are actually in_uid and in_ip parameters in my
stored function).

The above query works well and updates all 4 fields in the users table.

However the following query does not work as intended and does not update
any fields, because no matching block is found in the geoip table:

UPDATE users u SET
visited = now(),  -- HOW TO ALWAYS UPDATE THIS FIELD?
ip  = '20.20.20.20'::inet, -- HOW TO ALWAYS UPDATE THIS FIELD?
lat = i.lat,
lng = i.lng
FROM geoip i
WHERE u.uid = 2 AND '20.20.20.20'::inet <<= i.block;

The visited and ip fields, however, should always be updated - regardless
of whether the block was found or not.

Kind of LEFT JOIN, but for UPDATE - how to achieve this please?

The only workaround that I could think of is -

UPDATE users SET
visited = now(),
ip  = '20.20.20.20'::inet,
lat = (SELECT lat FROM geoip WHERE '20.20.20.20'::inet <<= block),
lng = (SELECT lng FROM geoip WHERE '20.20.20.20'::inet <<= block)
WHERE uid = 2;

But that would run the same subquery twice (correct?) and my geoip table is
already slow with 3073410 records (and that is why I am trying to cache its
lat and lng values in the users table on each user login event)

Regards
Alex


Re: Benchmark of using JSON to transport query results in node.js

2019-01-11 Thread Tony Shelver
I'm fairly new to Postgres, but one question is how node.js implements the
native driver when fetching the data: fetchall, fetchmany or fetch.single?
Also which native driver is it using?
Does the native driver do a round trip for each record fetched, or can it
batch them into multiples?

For example, in the Oracle native driver (for Python, in my case), setting
the cursor arraysize makes a huge performance difference when pulling back
large datasets.
Pulling back 800k+ records through a cursor on a remote machine with the
default arraysize was way too long (3 hours before I canceled it).
Upping the arraysize to 800 dropped that to around 40 minutes, including
loading each record into a local Postgres via a function call (more complex
database structure to be handled).
This is on low-level test equipment.

This is a relevant issue for us, as we will be developing a new front end
to our application, and we still haven't finalized the architecture.
The backend build to date uses Python / Postgres.  Python/Flask is one
option, possibly serving the data to Android / web via JSON / REST.
Another option is to query directly from node.js and get JSON or native
query from the database (extensive use of functions / stored procedures).

Our application is data-intensive, involving a lot of geotracking data
across hundreds of devices at its core, and then quite a bit of
geo/mapping/ analytics around that..



On Thu, 10 Jan 2019 at 23:52, Mitar  wrote:

> Hi!
>
> I made some benchmarks of using JSON to transport results to node.js
> and it seems it really makes a difference over using native or
> standard PostgreSQL. So the idea is that you simply wrap all results
> into JSON like SELECT to_json(t) FROM (... original query ...) AS t. I
> am guessing this is because node.js/JavaScript has a really fast JSON
> parser but
> for everything else there is overhead. See my blog post for more
> details [1]. Any feedback welcome.
>
> This makes me wonder. If serialization/deserialization makes such a big
> impact, were there efforts to improve how results are serialized for
> over-the-wire transmission? For example, to use something like
> Capnproto [2] to serialize into structure which can be directly used
> without any real deserialization?
>
> [1]
> https://mitar.tnode.com/post/181893159351/in-nodejs-always-query-in-json-from-postgresql
> [2] https://capnproto.org/
>
>
> Mitar
>
> --
> http://mitar.tnode.com/
> https://twitter.com/mitar_m
>
>