Re: [GENERAL] What limits Postgres performance when the whole database lives in cache?

2016-09-10 Thread Pavel Stehule
2016-09-11 7:20 GMT+02:00 dandl :

> > From: Jim Nasby [mailto:jim.na...@bluetreble.com]
> > My guess is this is a test scenario that completely favors VoltDB
> > while hamstringing Postgres, such as using no transaction durability
> > at all in VoltDB while using maximum durability in Postgres. Comparing
> > the cost of every COMMIT doing an fsync vs not could certainly produce
> > a 25x difference. There could be other cases where you'd get a 25x
> > difference.
>
> I guess my question then is: how much do you pay for that durability? If
> you benchmark Postgres configured for pure in-memory usage with absolutely
> no writes to disk (or SSD or network), where is it spending its time? Is
> there a lot of overhead in getting data in and out of cache buffers and
> conversions and in concurrency control?
>

It is not only about durability.

Postgres holds data in a format equal or similar to the format saved on persistent
storage, so there is repeated serialization and deserialization. Some
structures (the B-tree, for example) are designed primarily to be saved simply;
performance is a secondary goal.

I believe the new in-memory databases can be 10-100x faster, depending on the use
case, because they hold data primarily in memory and use different data
structures. The performance of these databases is great when all data fits in
memory all the time, but it is pretty bad when that rule does not hold. There is
another issue: when you increase the speed of database write operations, you will
probably hit file system limits and spinlock issues - that is one reason why big
systems are more and more based on distributed architectures.

Regards

Pavel



>
> As a case study, assume an RDBMS is required to monitor and record Internet
> (or phone or VHF) traffic. If the power goes off the traffic continues, and
> it really doesn’t matter whether you lose 60 seconds of down time or 63
> seconds; in any case another instance in another data centre will pick up
> the slack. So the requirement is atomicity yes, but not durability. Should
> you bid Postgres for the job, or look elsewhere? How much slower would
> Postgres be than a competitor? Do we care?


> > You need to be careful of benchmarks from commercial companies. MySQL
> > used to tout how fast it was compared to Postgres, using a benchmark
> > it created specifically for that purpose that had very little to do
> > with the real world. People eventually discovered that as soon as you
> > had a concurrent workload Postgres was actually faster.
>
> Of course; but at the same time insisting on including durability favours
> Postgres when I'm actually asking about alternatives.
>
> Regards
> David M Bennett FACS
>
> Andl - A New Database Language - andl.org
>
>
>
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] What limits Postgres performance when the whole database lives in cache?

2016-09-10 Thread dandl
> From: Jim Nasby [mailto:jim.na...@bluetreble.com]
> My guess is this is a test scenario that completely favors VoltDB
> while hamstringing Postgres, such as using no transaction durability
> at all in VoltDB while using maximum durability in Postgres. Comparing
> the cost of every COMMIT doing an fsync vs not could certainly produce
> a 25x difference. There could be other cases where you'd get a 25x
> difference.

I guess my question then is: how much do you pay for that durability? If you 
benchmark Postgres configured for pure in-memory usage with absolutely no 
writes to disk (or SSD or network), where is it spending its time? Is there a 
lot of overhead in getting data in and out of cache buffers and conversions and 
in concurrency control?

As a case study, assume an RDBMS is required to monitor and record Internet (or 
phone or VHF) traffic. If the power goes off the traffic continues, and it 
really doesn’t matter whether you lose 60 seconds of down time or 63 seconds; 
in any case another instance in another data centre will pick up the slack. So 
the requirement is atomicity yes, but not durability. Should you bid Postgres 
for the job, or look elsewhere? How much slower would Postgres be than a 
competitor? Do we care?
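The trade-off described in this case study can actually be dialed in on stock Postgres before looking elsewhere; a minimal sketch, assuming a throwaway database and a hypothetical traffic_log table:

```sql
-- Relax durability per session (or per role/database): COMMIT returns
-- before the WAL is fsync'd, so a crash can lose the last few seconds of
-- transactions but cannot corrupt the database.
SET synchronous_commit = off;

-- For data that may be lost entirely on a crash (like the traffic feed in
-- the case study), an UNLOGGED table skips WAL writes altogether.
CREATE UNLOGGED TABLE traffic_log (
    captured_at timestamptz NOT NULL,
    payload     bytea
);
```

This does not remove the buffer-manager and concurrency-control overhead the question asks about, but it isolates the fsync cost, which is usually the dominant term in such comparisons.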

> You need to be careful of benchmarks from commercial companies. MySQL
> used to tout how fast it was compared to Postgres, using a benchmark
> it created specifically for that purpose that had very little to do
> with the real world. People eventually discovered that as soon as you
> had a concurrent workload Postgres was actually faster.

Of course; but at the same time insisting on including durability favours 
Postgres when I'm actually asking about alternatives.

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org









Re: [GENERAL] Duplicate data despite unique constraint

2016-09-10 Thread Daniel Caldeweyher
I had a similar issue once and was able to recover from it. If this affects
only some rows and you are able to identify them, this is fixable:

-- Force a sequential scan so the corrupt index is not used
SET enable_seqscan = on;
SET enable_indexscan = off;
SET enable_bitmapscan = off;

SELECT email, count(*)
FROM users
GROUP BY email
HAVING count(*) > 1;

Then, if the rows are simply duplicates and have no other changes, add
a new serial column (or to prevent blocking, add a bigint column and update
with sequential values), then using the emails from above, delete the ones
with the higher/lower sequence number. Ensure you are still skipping
indexes.

Once the table is clean, drop the sequence column again and re-index.

Hope this helps,
Daniel
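A minimal sketch of that clean-up, assuming a users table whose duplicate rows differ in nothing but their physical location (ctid, the row's physical address, is distinct even for byte-identical rows, so it can stand in for the temporary sequence column):

```sql
BEGIN;
-- Still skip the corrupt index so both duplicates stay visible.
SET LOCAL enable_indexscan = off;
SET LOCAL enable_bitmapscan = off;

-- For each email, keep the row with the lowest ctid and delete the rest.
DELETE FROM users a
USING users b
WHERE a.email = b.email
  AND a.ctid > b.ctid;

-- Verify the duplicate count is now zero before committing.
COMMIT;
```

Take a backup first, as advised elsewhere in this thread, since deleting by physical address is unrecoverable if you pick the wrong rows.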





On Fri, Sep 2, 2016 at 11:06 PM, Albe Laurenz 
wrote:

> Jonas Tehler wrote:
> > We’re running Postgresql 9.4.5 on Amazon RDS. One of our tables looks
> something like this:
> >
> >
> > CREATE TABLE users
> > (
> >   ...
> >   email character varying(128) NOT NULL,
> >   ...
> >   CONSTRAINT users_email_key UNIQUE (email)
> > )
> >
> > Despite this we have rows with very similar email values. I discovered
> the problem when I tried to add
> > a column and got the following error:
> >
> > ActiveRecord::RecordNotUnique: PG::UniqueViolation: ERROR:  could not
> create unique index
> > "users_email_key"
> > DETAIL:  Key (email)=(x...@yyy.com) is duplicated.
> > : ALTER TABLE "users" ADD COLUMN "client_branch" character varying(255)
> DEFAULT ‘beta'
> >
> >
> > Now look at this:
> >
> > => select email from users where email = 'x...@yyy.com';
> >email
> > ---
> >  x...@yyy.com
> > (1 row)
> >
> >
> > => select email from users where email LIKE 'x...@yyy.com';
> >email
> > ---
> >  x...@yyy.com
> >  x...@yyy.com
> > (2 rows)
> >
> >
> > I have tried to compare the binary data in various ways, email::bytes,
> md5(email),
> > encode(email::bytea, 'hex’), char_length(email) and it all looks the
> same for both rows.
> >
> > Any suggestions how I can discover the difference between the values and
> how they could have been
> > added without triggering the constraint? I know that the values were
> added after the constraint was
> > added.
> >
> > The data was added from a Ruby on Rails app that also has unique
> constraints on the email field and
> > validation on the email format.
>
> That looks very much like data corruption.
>
> I guess there is an index on "users" that is used for one query but not
> the other.
> Can you verify with EXPLAIN?
>
> Assuming that it is a 'text_ops' or 'varchar_ops' index, I'd say it gets
> used for the first
> query, but not for the second.  That would mean that there is an extra
> entry in the table that
> is not in the index.
>
> Did you have any crashes, standby promotion, restore with PITR or other
> unusual occurrences recently?
>
> Make sure you have a physical backup; there may be other things corrupted.
>
> This is a possible path to proceed:
>
> Once you have made sure that you have a physical backup, try to add the
> "ctid" column to both queries.
>
> Then delete the extra row found by the second query with "DELETE FROM users
> WHERE ctid = ...".
>
> Then, to make sure there is no other corruption lurking, make a logical
> backup with pg_dumpall, create a new database cluster with "initdb", and
> restore the data.
>
> Yours,
> Laurenz Albe
>
>


Re: [GENERAL] Duplicate data despite unique constraint

2016-09-10 Thread Jim Nasby

On 9/2/16 8:02 AM, Adrian Klaver wrote:

Best guess is the INDEX on the column is corrupted and needs to be
reindexed:


You should contact AWS support about this; they'd want to know.
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] What limits Postgres performance when the whole database lives in cache?

2016-09-10 Thread Jim Nasby

On 9/8/16 3:15 AM, Nicolas Grilly wrote:

So my question is not to challenge the Postgres way. It's simply to
ask whether there are any known figures that would directly support
or refute his claims. Does Postgres really spend 96% of its time in
thumb-twiddling once the entire database resides in memory?


Alas, I've been unable to find any relevant benchmark. I'm not motivated
enough to install PostgreSQL and VoltDB and try it for myself :-)


My guess is this is a test scenario that completely favors VoltDB while 
hamstringing Postgres, such as using no transaction durability at all in 
VoltDB while using maximum durability in Postgres. Comparing the cost of 
every COMMIT doing an fsync vs not could certainly produce a 25x 
difference. There could be other cases where you'd get a 25x difference.


You need to be careful of benchmarks from commercial companies. MySQL 
used to tout how fast it was compared to Postgres, using a benchmark it 
created specifically for that purpose that had very little to do with 
the real world. People eventually discovered that as soon as you had a 
concurrent workload Postgres was actually faster.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] Trigger is not working for Inserts from the application

2016-09-10 Thread Jim Nasby

On 9/10/16 6:03 AM, Kiran wrote:

If I insert a record from my application using following code :
db.myschema.cf_question.insert({
cf_question_type_id:request.payload.type_id,
cf_question_category_id:request.payload.cat_id,
lang:request.payload.lang,
body:request.payload.body
}
The above app code inserts the record in the DB, but the corresponding trigger
is not fired, hence the "weighted_tsv" column is empty for this record.

But if I insert another record from psql, the insert works and
the trigger fires perfectly.

What could be the problem? Why is the trigger not firing when I insert from the
application? Am I doing anything wrong?
Any help would be really appreciated.


Without knowing what that app code is doing it's impossible to know. Try 
turning on logging of all statements (log_statement = 'all') and see what 
queries the app is actually running.
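For 9.4 and later this can be done without editing postgresql.conf by hand; a sketch (superuser required; on older versions edit postgresql.conf and reload instead):

```sql
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();  -- log_statement only needs a reload, not a restart

-- Once done debugging, put it back:
ALTER SYSTEM RESET log_statement;
SELECT pg_reload_conf();
```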

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] 2.5TB Migration from SATA to SSD disks - PostgreSQL 9.2

2016-09-10 Thread Jim Nasby

On 9/8/16 3:29 PM, David Gibbons wrote:


Isn't this heading in the wrong direction?   We need to be more
precise than 0 (since 0 is computed off of rounded/truncated time
stamps), not less precise than 0.

Cheers,

Jeff



Hmm, you may be right; reading it 4 more times for comprehension, it
looks like it should be set to -1, not 1.


Not according to my man page:

   --modify-window
          When comparing two timestamps, rsync treats the timestamps as
          being equal if they differ by no more than the modify-window
          value. This is normally 0 (for an exact match), but you may
          find it useful to set this to a larger value in some
          situations. In particular, when transferring to or from an MS
          Windows FAT filesystem (which represents times with a 2-second
          resolution), --modify-window=1 is useful (allowing times to
          differ by up to 1 second).


--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com
855-TREBLE2 (855-873-2532)   mobile: 512-569-9461




Re: [GENERAL] Trigger is not working for Inserts from the application

2016-09-10 Thread Kiran
Dear Tom, Adrian,

Excellent catch!
Thank you very very much to both. It is resolved now. I can sleep now.
Good night from Sweden.

regards
Kiran


On Sat, Sep 10, 2016 at 11:18 PM, Adrian Klaver 
wrote:

> On 09/10/2016 02:09 PM, Kiran wrote:
>
>> Hi Adrian,
>>
>> The JSON value is inserted into a column in the database which I can see.
>> But the trigger that has to convert this JSON value into the tsvector
>> column is not updating that column.
>>
>
> As Tom explained, in your trigger function you have:
>
> to_tsvector('swedish',coalesce(New.body->>'qtext','')::text);
>
> In the body JSON you are passing in:
>
> $4 = '{"name": "Do you like Pizza ?", "type": "cat", "store":
> [{"labeltext": "Yes", "labelvalue": 1}, {"labeltext": "No", "labelvalue":
> 2}, {"labeltext": "Other", "labelvalue": 3}, {"labeltext": "Don''t know",
> "labelvalue": 4}], "target": {"place": "Sweden"}, "askfreq": "once",
> "whydesc": "Because I like it :)"}'
>
> there is no 'qtext' key, so New.body->>'qtext' is getting you NULL, which the
> COALESCE is turning into '', which is making weighted_tsv look empty.
>
>
>> regards
>>
>> On Sat, Sep 10, 2016 at 10:27 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>>
>> Kiran <bangalore.ki...@gmail.com>
>> writes:
>> > LOG:  execute : INSERT INTO "myschema"."cf_question"
>> > ("cf_question_type_id", "cf_question_category_id", "lang", "body")
>> VALUES
>> > ($1, $2, $3, $4) RETURNING *
>> > DETAIL:  parameters: $1 = '1', $2 = '9', $3 = 'swedish', $4 =
>> '{"name": "Do
>> > you like Pizza ?", "type": "cat", "store": [{"labeltext": "Yes",
>> > "labelvalue": 1}, {"labeltext": "No", "labelvalue": 2},
>> {"labeltext":
>> > "Other", "labelvalue": 3}, {"labeltext": "Don''t know",
>> "labelvalue": 4}],
>> > "target": {"place": "Sweden"}, "askfreq": "once", "whydesc":
>> "Because I
>> > like it :)"}'
>>
>> Well, the obvious comment on that is "that JSON value hasn't got any
>> qtext
>> field".  So the ->> operator is returning null, the coalesce() is
>> replacing that with an empty string, and you end up with an empty
>> tsvector column.
>>
>> regards, tom lane
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] Trigger is not working for Inserts from the application

2016-09-10 Thread Adrian Klaver

On 09/10/2016 02:09 PM, Kiran wrote:

Hi Adrian,

The JSON value is inserted into a column in the database which I can see.
But the trigger that has to convert this JSON value into the tsvector
column is not updating that column.


As Tom explained, in your trigger function you have:

to_tsvector('swedish',coalesce(New.body->>'qtext','')::text);

In the body JSON you are passing in:

$4 = '{"name": "Do you like Pizza ?", "type": "cat", "store": 
[{"labeltext": "Yes", "labelvalue": 1}, {"labeltext": "No", 
"labelvalue": 2}, {"labeltext": "Other", "labelvalue": 3}, {"labeltext": 
"Don''t know", "labelvalue": 4}], "target": {"place": "Sweden"}, 
"askfreq": "once", "whydesc": "Because I like it :)"}'


there is no 'qtext' key, so New.body->>'qtext' is getting you NULL, which 
the COALESCE is turning into '', which is making weighted_tsv look empty.




regards

On Sat, Sep 10, 2016 at 10:27 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

Kiran <bangalore.ki...@gmail.com>
writes:
> LOG:  execute : INSERT INTO "myschema"."cf_question"
> ("cf_question_type_id", "cf_question_category_id", "lang", "body") VALUES
> ($1, $2, $3, $4) RETURNING *
> DETAIL:  parameters: $1 = '1', $2 = '9', $3 = 'swedish', $4 = '{"name": 
"Do
> you like Pizza ?", "type": "cat", "store": [{"labeltext": "Yes",
> "labelvalue": 1}, {"labeltext": "No", "labelvalue": 2}, {"labeltext":
> "Other", "labelvalue": 3}, {"labeltext": "Don''t know", "labelvalue": 4}],
> "target": {"place": "Sweden"}, "askfreq": "once", "whydesc": "Because I
> like it :)"}'

Well, the obvious comment on that is "that JSON value hasn't got any
qtext
field".  So the ->> operator is returning null, the coalesce() is
replacing that with an empty string, and you end up with an empty
tsvector column.

regards, tom lane





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Trigger is not working for Inserts from the application

2016-09-10 Thread Kiran
Hi Adrian,

The JSON value is inserted into a column in the database which I can see.
But the trigger that has to convert this JSON value into the tsvector column
is not updating that column.

regards

On Sat, Sep 10, 2016 at 10:27 PM, Tom Lane  wrote:

> Kiran  writes:
> > LOG:  execute : INSERT INTO "myschema"."cf_question"
> > ("cf_question_type_id", "cf_question_category_id", "lang", "body") VALUES
> > ($1, $2, $3, $4) RETURNING *
> > DETAIL:  parameters: $1 = '1', $2 = '9', $3 = 'swedish', $4 = '{"name":
> "Do
> > you like Pizza ?", "type": "cat", "store": [{"labeltext": "Yes",
> > "labelvalue": 1}, {"labeltext": "No", "labelvalue": 2}, {"labeltext":
> > "Other", "labelvalue": 3}, {"labeltext": "Don''t know", "labelvalue":
> 4}],
> > "target": {"place": "Sweden"}, "askfreq": "once", "whydesc": "Because I
> > like it :)"}'
>
> Well, the obvious comment on that is "that JSON value hasn't got any qtext
> field".  So the ->> operator is returning null, the coalesce() is
> replacing that with an empty string, and you end up with an empty
> tsvector column.
>
> regards, tom lane
>


Re: [GENERAL] Question about locking and pg_locks

2016-09-10 Thread Jeff Janes
On Thu, Sep 8, 2016 at 4:30 AM, Moreno Andreo 
wrote:

> Hi folks! :-)
>
> This morning I was woken up by a call of a coworker screaming "Help, our
> Postgres server is throwing strange errors!"
> Not the best way to start your day...
>
> OK, to the serious part.
>
> "Strange errors" were (in postgresql-9.1-main.log)
> WARNING: out of shared memory
> ERROR: out of shared memory
> HINT: you may need to increase max_locks_per_transaction
>
> Restarting Postgresql solved the issue (for now), but that's what I'm
> wondering:
> - the greatest part of this locks are used by rubyrep (that we use to
> replicate users' databases), no new users since 3 weeks, first time error
> show up in almost 2 years
> - I read this: https://wiki.postgresql.org/wiki/Lock_Monitoring but still
> I can't figure out what to do if I need to know if I have to be worried or
> not :-)
> - I have
> OS: Ubuntu 12.04 (won't upgrade because we are leaving this server to a
> new one with Debian Jessie)
> PG: 9.1.6 (same as above, in new server ve have 9.5.4)
>

You have a problem now.  Upgrading PG now might help you solve the problem
you have now.  Why would you choose to work with one hand tied behind your
back now, just because you were already planning on upgrading later?  Also,
moving both the OS and the PG version at the same time is great if
everything goes well.  But if everything doesn't go well, you have greatly
increased the scope of the problem-solving by making two changes at once.
If it were me (and my employer gave me the freedom to do my job
effectively), I'd rather spend my time bringing forward the date on which I
upgrade PG, rather than spend that time tracking down problems that occur,
or at least are hard to track down, because I am running an obsolete PG.
Just double max_locks_per_transaction (with 32GB of ram, that should not be
a problem) and call it good until after the upgrade.


RAM: 32 GB
> shared_buffers = 2GB
> max_connections=800
> max_locks_per_transaction=64 (default value)
> max_prepared_transactions = 0
>
> so, I should be able to manage 800*64 = 51,200 locks, right?
>
> Now my pg_locks table has more than 6200 rows, but if I reorder them by
> pid I see that one of them has 5800 of them, and it keeps on eating locks.
> If I dig more and get pid info, its state is " in transaction"
>

On PG9.2 or above, you would be able to see the most recently run
statement, in addition to the state.  That could help a lot in figuring out
how it doing this lock-fest (of course you can already get the
client_hostname and the application_name, which could also be a big help).

What is the distribution of locks by type?

select locktype, count(*) from pg_locks group by 1;



>
> ATM there are no locks that have granted = false.
>
> Now, question time:
> - Is there a number of pg_locks rows to be worried about? At more than
> 6000 I'm still not facing out of shared memory again
>

I don't think that the exact number of locks that will fit in shared memory
is predictable.  For example, if different processes co-hold sharable locks
on the same list of tables, it seems to take up slightly more memory than
if each process were locking a different list of tables. And different lock
types also take different amounts of memory.  And memory fragmentation
might also cause changes in capacity that are hard to predict--I can run
the same parallel program repeatedly, and have it sometimes run out of memory
and sometimes not.


> - Is there a way to release locks of that pid without pg_terminate() it?
>

I'm afraid not.  I believe your two options are pg_terminate_backend or
things which are functionally equivalent to that; or to identify who is
doing this (see application_name and client_hostname) and haranguing them
until they stop doing it.


Cheers,

Jeff
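A sketch of the inspection queries discussed above (the pg_stat_activity column names are the 9.2+ ones; on 9.1 use procpid and current_query instead):

```sql
-- Capacity of the main lock table is roughly
-- max_locks_per_transaction * (max_connections + max_prepared_transactions).
SHOW max_locks_per_transaction;

-- Which backends hold the most lock entries?
SELECT pid, count(*) AS locks
FROM pg_locks
GROUP BY pid
ORDER BY count(*) DESC
LIMIT 5;

-- What is the worst offender doing? (12345 is a placeholder pid.)
SELECT pid, state, application_name, client_hostname, query
FROM pg_stat_activity
WHERE pid = 12345;
```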


Re: [GENERAL] Trigger is not working for Inserts from the application

2016-09-10 Thread Adrian Klaver

On 09/10/2016 02:02 PM, Kiran wrote:

Hi Tom,

I have checked and the trigger is not disabled.

and \drds results
Role=blank
database = mydatabase
settings = default_text_search_config=pg_catalog.swedish

Any other tips or suggestions please.


Did you see this post?:

https://www.postgresql.org/message-id/10840.1473539270%40sss.pgh.pa.us



regards
Kiran



On Sat, Sep 10, 2016 at 9:26 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

Kiran <bangalore.ki...@gmail.com>
writes:
> But I do not know how to check whether the application is subject to
> different ALTER USER  SET parameters.

psql's \drds ("display role/database SETs") would help.

BTW, have you checked that the trigger is not disabled, and that there
isn't another trigger undoing its work?  (psql's \d command on the table
should show these things.)

regards, tom lane





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Trigger is not working for Inserts from the application

2016-09-10 Thread Kiran
Hi Tom,

I have checked and the trigger is not disabled.

and \drds results
Role=blank
database = mydatabase
settings = default_text_search_config=pg_catalog.swedish

Any other tips or suggestions please.

regards
Kiran



On Sat, Sep 10, 2016 at 9:26 PM, Tom Lane  wrote:

> Kiran  writes:
> > But I do not know how to check whether the application is subject to
> > different ALTER USER  SET parameters.
>
> psql's \drds ("display role/database SETs") would help.
>
> BTW, have you checked that the trigger is not disabled, and that there
> isn't another trigger undoing its work?  (psql's \d command on the table
> should show these things.)
>
> regards, tom lane
>


Re: [GENERAL] Trigger is not working for Inserts from the application

2016-09-10 Thread Tom Lane
Kiran  writes:
> LOG:  execute : INSERT INTO "myschema"."cf_question"
> ("cf_question_type_id", "cf_question_category_id", "lang", "body") VALUES
> ($1, $2, $3, $4) RETURNING *
> DETAIL:  parameters: $1 = '1', $2 = '9', $3 = 'swedish', $4 = '{"name": "Do
> you like Pizza ?", "type": "cat", "store": [{"labeltext": "Yes",
> "labelvalue": 1}, {"labeltext": "No", "labelvalue": 2}, {"labeltext":
> "Other", "labelvalue": 3}, {"labeltext": "Don''t know", "labelvalue": 4}],
> "target": {"place": "Sweden"}, "askfreq": "once", "whydesc": "Because I
> like it :)"}'

Well, the obvious comment on that is "that JSON value hasn't got any qtext
field".  So the ->> operator is returning null, the coalesce() is
replacing that with an empty string, and you end up with an empty
tsvector column.

regards, tom lane
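The failure mode is easy to reproduce directly in psql; a minimal sketch using the JSON from the log:

```sql
-- The payload has no 'qtext' key, so ->> yields NULL and coalesce()
-- turns it into '', producing an empty tsvector:
SELECT to_tsvector('swedish',
                   coalesce('{"name": "Do you like Pizza ?"}'::json ->> 'qtext',
                            '')) AS weighted_tsv;

-- With the key present, the column is populated as expected:
SELECT to_tsvector('swedish',
                   coalesce('{"qtext": "Do you like Pizza ?"}'::json ->> 'qtext',
                            '')) AS weighted_tsv;
```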




Re: [GENERAL] Trigger is not working for Inserts from the application

2016-09-10 Thread Kiran
Hi Adrian,

This is the exact log in the file as it appears:

DETAIL:  parameters: $1 = '', $2 = ''
LOG:  connection received: host=localhost port=53284
LOG:  connection authorized: user=deva database=mydatabase
LOG:  connection received: host=localhost port=53285
LOG:  connection authorized: user=deva database=mydatabase
LOG:  execute : INSERT INTO "myschema"."cf_question"
("cf_question_type_id", "cf_question_category_id", "lang", "body") VALUES
($1, $2, $3, $4) RETURNING *
DETAIL:  parameters: $1 = '1', $2 = '9', $3 = 'swedish', $4 = '{"name": "Do
you like Pizza ?", "type": "cat", "store": [{"labeltext": "Yes",
"labelvalue": 1}, {"labeltext": "No", "labelvalue": 2}, {"labeltext":
"Other", "labelvalue": 3}, {"labeltext": "Don''t know", "labelvalue": 4}],
"target": {"place": "Sweden"}, "askfreq": "once", "whydesc": "Because I
like it :)"}'
LOG:  execute : select * from "myschema"."cf_user" where
cf_user_id=$1
DETAIL:  parameters: $1 = '$2a$13$g8VXS3Bt3489I'
*LOG:  LOG for TRIGER called on cf_question*
STATEMENT:  INSERT INTO "monolith"."cf_question" ("cf_question_type_id",
"cf_question_category_id", "lang", "body") VALUES
($1, $2, $3, $4) RETURNING *


As you can see from the above, there is a LOG line which says the trigger was
called. That message comes from the RAISE NOTICE inside the function, which
means the trigger is firing; but I can't understand why the column is still
not updated.

Also, I am using the log_statement='all' setting. Do you see anything wrong
in the log statements that I am not recognising?

regards
Kiran

On Sat, Sep 10, 2016 at 9:08 PM, Adrian Klaver 
wrote:

> On 09/10/2016 11:39 AM, Kiran wrote:
>
>> Hi Adrian,
>>
>> Thanks for your response.
>> I tried with logging. The application is inserting the record that I am
>> passing into the database. But the trigger is not firing.
>>
>
> What is the text of the complete statement as it appears in the logs?
>
> When you do the INSERT the other fields are the same in the database as in
> the row, after the INSERT?
>
> Related to the second question, is the INSERT being done in an explicit
> transaction eg. BEGIN; INSERT something; and if so is there a COMMIT at the
> end?
>
> Have you tried with log_statement = 'all' to see if there are non-mod
> statements running at the same time?
>
>
> I have been looking into this issue since morning with out any positive
>> outcome :(.
>> If you have any other tips, it will be really helpful.
>>
>> regards
>> Kiran
>>
>> On Sat, Sep 10, 2016 at 3:22 PM, Adrian Klaver
>> <adrian.kla...@aklaver.com> wrote:
>>
>> On 09/10/2016 03:59 AM, Kiran wrote:
>>
>> Hi,
>>
>> *Problem background :*
>> I have a *function in the DB* as follows
>> CREATE FUNCTION question_tsv_trigger() RETURNS trigger AS
>> $BODY$
>> begin
>> New.weighted_tsv :=
>> to_tsvector('swedish',coalesce(New.body->>'qtext','')::text);
>> RAISE NOTICE 'TRIGER called on %', TG_TABLE_NAME;
>> return New;
>> end
>> $BODY$
>> LANGUAGE plpgsql VOLATILE
>> COST 100;
>>
>> **Trigger in the DB:**
>> CREATE TRIGGER insert_upd_question_tsvector BEFORE INSERT OR
>> UPDATE
>> ON myschema.cf_question
>> FOR EACH ROW EXECUTE PROCEDURE question_tsv_trigger();
>>
>> If I insert a record from my application using following code :
>> db.myschema.cf_question.insert({
>> cf_question_type_id:request.payload.type_id,
>> cf_question_category_id:request.payload.cat_id,
>> lang:request.payload.lang,
>> body:request.payload.body
>> }
>>
>> The above app code inserts the record in the DB, but the
>> corresponding trigger is not fired, hence the "weighted_tsv" column is
>> empty for this record.
>>
>> But if I insert a record from psql, the insert works and
>> the trigger fires perfectly.
>>
>> What could be the problem? Why is the trigger not firing when I
>> insert from the application? Am I doing anything wrong?
>> Any help would be really appreciated.
>>
>>
>> If you have not, turn on log_statement:
>>
>> https://www.postgresql.org/docs/9.5/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT
>>
>> Then check your log to see what the application is sending to the
>> database.
>>
>>
>> Thanks
>> Kiran
>>
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com 
>>
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] Trigger is not working for Inserts from the application

2016-09-10 Thread Tom Lane
Kiran  writes:
> But I do not know how to check whether the application is subject to
> different ALTER USER  SET parameters.

psql's \drds ("display role/database SETs") would help.

BTW, have you checked that the trigger is not disabled, and that there
isn't another trigger undoing its work?  (psql's \d command on the table
should show these things.)

regards, tom lane




Re: [GENERAL] Trigger is not working for Inserts from the application

2016-09-10 Thread Adrian Klaver

On 09/10/2016 11:39 AM, Kiran wrote:

Hi Adrian,

Thanks for your response.
I tried with logging. The application is inserting the record that I am
passing into the database. But the trigger is not firing.


What is the text of the complete statement as it appears in the logs?

When you do the INSERT the other fields are the same in the database as 
in the row, after the INSERT?


Related to the second question, is the INSERT being done in an explicit 
transaction, e.g. BEGIN; INSERT something; and if so is there a COMMIT at 
the end?


Have you tried with log_statement = 'all' to see if there are non-mod 
statements running at the same time?




I have been looking into this issue since morning without any positive
outcome :(
If you have any other tips, it will be really helpful.

regards
Kiran

On Sat, Sep 10, 2016 at 3:22 PM, Adrian Klaver
<adrian.kla...@aklaver.com> wrote:

On 09/10/2016 03:59 AM, Kiran wrote:

Hi,

*Problem background :*
I have a *function in the DB* as follows
CREATE FUNCTION question_tsv_trigger() RETURNS trigger AS
$BODY$
begin
New.weighted_tsv :=
to_tsvector('swedish',coalesce(New.body->>'qtext','')::text);
RAISE NOTICE 'TRIGGER called on %', TG_TABLE_NAME;
return New;
end
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;

**Trigger in the DB:**
CREATE TRIGGER insert_upd_question_tsvector BEFORE INSERT OR UPDATE
ON myschema.cf_question
FOR EACH ROW EXECUTE PROCEDURE question_tsv_trigger();

If I insert a record from my application using following code :
db.myschema.cf_question.insert({
cf_question_type_id:request.payload.type_id,
cf_question_category_id:request.payload.cat_id,
lang:request.payload.lang,
body:request.payload.body
})

The above app code inserts the record in the DB, but the respective
trigger in the database does not fire, hence the "weighted_tsv"
column is empty for this record.

But if I insert a record from psql, the insert succeeds and the
respective trigger works perfectly.

What could be the problem? Why is the trigger not firing when I
insert from the application? Am I doing anything wrong?
Any help would be greatly appreciated.


If you have not, turn on log_statement:


https://www.postgresql.org/docs/9.5/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT



Then check your log to see what the application is sending to the
database.


Thanks
Kiran



--
Adrian Klaver
adrian.kla...@aklaver.com 





--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Trigger is not working for Inserts from the application

2016-09-10 Thread Kiran
Dear Tom,

Thanks for your detailed reply.

(1) Application is connecting to the same database.
(2) Application is inserting to the same table without much luck with the
trigger.
(3) Application is issuing the right insert command.
(4) I am not able to check this; is there any way I can check it?

I have enabled the logging of all the statements in the conf file and can
see the logs for connections and statements. I have raised the LOG and I can
see the logs in the file without any errors.

But I do not know how to check whether the application is subject to
different ALTER USER  SET parameters. Could you please give me some
reference to which I can refer for this type of error checking or any other
tips to solve this issue. I have been looking into this since morning
without any positive outcome :(

Thanks once again.

regards
Kiran







On Sat, Sep 10, 2016 at 5:21 PM, Tom Lane  wrote:

> Adrian Klaver  writes:
> > On 09/10/2016 03:59 AM, Kiran wrote:
> >> What could be the problem ? Why trigger is not working if I insert from
> the
> >> application ? Am I doing anything wrong ?
>
> > If you have not, turn on log_statement:
> > https://www.postgresql.org/docs/9.5/static/runtime-
> config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT
> > Then check your log to see what the application is sending to the
> database.
>
> At a slightly higher level: everything you've shown us looks fine,
> therefore the problem is in something you didn't show us.  I've seen
> people have problems like this for all sorts of reasons, eg
>
> 1. Application isn't connecting to the same database as your manual
> sessions.  Maybe not even the same server.
>
> 2. Application is using a different search_path setting and therefore
> touching a different table (same name but other schema).
>
> 3. Application isn't issuing the command you think it is, or is failing
> to commit it.
>
> 4. Trigger is firing as expected but something else is overriding its
> change to the data.
>
> Adrian's suggestion of watching log_statement output would help debug
> some of these cases; possibly log_connections would help with others.
> I'd also suggest checking to see if the application is subject to
> different ALTER USER ... SET parameters than your manual session.
> Lastly, RAISE NOTICE is a tried-and-true method of checking whether
> a trigger is firing, but it's not too helpful for debugging queries
> from applications because they invariably drop notices on the floor.
> I'd try RAISE LOG instead, and again watch the server log to see what
> the application is really doing.
>
> regards, tom lane
>


Re: [GENERAL] Trigger is not working for Inserts from the application

2016-09-10 Thread Kiran
Hi Adrian,

Thanks for your response.
I tried with logging. The application is inserting the record that I am
passing into the database. But the trigger is not firing.
I have been looking into this issue since morning without any positive
outcome :(
If you have any other tips, it will be really helpful.

regards
Kiran

On Sat, Sep 10, 2016 at 3:22 PM, Adrian Klaver 
wrote:

> On 09/10/2016 03:59 AM, Kiran wrote:
>
>> Hi,
>>
>> *Problem background :*
>> I have a *function in the DB* as follows
>> CREATE FUNCTION question_tsv_trigger() RETURNS trigger AS
>> $BODY$
>> begin
>> New.weighted_tsv :=
>> to_tsvector('swedish',coalesce(New.body->>'qtext','')::text);
>> RAISE NOTICE 'TRIGGER called on %', TG_TABLE_NAME;
>> return New;
>> end
>> $BODY$
>> LANGUAGE plpgsql VOLATILE
>> COST 100;
>>
>> **Trigger in the DB:**
>> CREATE TRIGGER insert_upd_question_tsvector BEFORE INSERT OR UPDATE
>> ON myschema.cf_question
>> FOR EACH ROW EXECUTE PROCEDURE question_tsv_trigger();
>>
>> If I insert a record from my application using following code :
>> db.myschema.cf_question.insert({
>> cf_question_type_id:request.payload.type_id,
>> cf_question_category_id:request.payload.cat_id,
>> lang:request.payload.lang,
>> body:request.payload.body
>> })
>>
>> The above app code inserts the record in the DB, but the respective
>> trigger
>> in the database is not triggered, hence the "weighted_tsv" column is empty
>> for this record.
>>
>> But if I insert a record from the postgres psql, it will insert and
>> the respective trigger is working perfectly.
>>
>> What could be the problem ? Why trigger is not working if I insert from
>> the
>> application ? Am I doing anything wrong ?
>> Any help would be really really appreciated.
>>
>
> If you have not, turn on log_statement:
>
> https://www.postgresql.org/docs/9.5/static/runtime-config-
> logging.html#RUNTIME-CONFIG-LOGGING-WHAT
>
> Then check your log to see what the application is sending to the database.
>
>
>> Thanks
>> Kiran
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: [GENERAL] Trigger is not working for Inserts from the application

2016-09-10 Thread Tom Lane
Adrian Klaver  writes:
> On 09/10/2016 03:59 AM, Kiran wrote:
>> What could be the problem ? Why trigger is not working if I insert from the
>> application ? Am I doing anything wrong ?

> If you have not, turn on log_statement:
> https://www.postgresql.org/docs/9.5/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT
> Then check your log to see what the application is sending to the database.

At a slightly higher level: everything you've shown us looks fine,
therefore the problem is in something you didn't show us.  I've seen
people have problems like this for all sorts of reasons, eg

1. Application isn't connecting to the same database as your manual
sessions.  Maybe not even the same server.

2. Application is using a different search_path setting and therefore
touching a different table (same name but other schema).

3. Application isn't issuing the command you think it is, or is failing
to commit it.

4. Trigger is firing as expected but something else is overriding its
change to the data.

Adrian's suggestion of watching log_statement output would help debug
some of these cases; possibly log_connections would help with others.
I'd also suggest checking to see if the application is subject to
different ALTER USER ... SET parameters than your manual session.
Lastly, RAISE NOTICE is a tried-and-true method of checking whether
a trigger is firing, but it's not too helpful for debugging queries
from applications because they invariably drop notices on the floor.
I'd try RAISE LOG instead, and again watch the server log to see what
the application is really doing.
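
[Archive editor's note: a sketch of the OP's function rewritten per this
advice, with RAISE LOG in place of RAISE NOTICE so firings reach the server
log even when the client drops notices:]

```sql
CREATE OR REPLACE FUNCTION question_tsv_trigger() RETURNS trigger AS
$BODY$
BEGIN
    NEW.weighted_tsv :=
        to_tsvector('swedish', coalesce(NEW.body->>'qtext', '')::text);
    -- RAISE LOG always goes to the server log, regardless of the
    -- client's handling of notices or its client_min_messages setting.
    RAISE LOG 'trigger fired on %', TG_TABLE_NAME;
    RETURN NEW;
END
$BODY$
LANGUAGE plpgsql;
```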

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Trigger is not working for Inserts from the application

2016-09-10 Thread Adrian Klaver

On 09/10/2016 03:59 AM, Kiran wrote:

Hi,

*Problem background :*
I have a *function in the DB* as follows
CREATE FUNCTION question_tsv_trigger() RETURNS trigger AS
$BODY$
begin
New.weighted_tsv :=
to_tsvector('swedish',coalesce(New.body->>'qtext','')::text);
RAISE NOTICE 'TRIGGER called on %', TG_TABLE_NAME;
return New;
end
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;

**Trigger in the DB:**
CREATE TRIGGER insert_upd_question_tsvector BEFORE INSERT OR UPDATE
ON myschema.cf_question
FOR EACH ROW EXECUTE PROCEDURE question_tsv_trigger();

If I insert a record from my application using following code :
db.myschema.cf_question.insert({
cf_question_type_id:request.payload.type_id,
cf_question_category_id:request.payload.cat_id,
lang:request.payload.lang,
body:request.payload.body
})

The above app code inserts the record in the DB, but the respective trigger
in the database does not fire, hence the "weighted_tsv" column is empty
for this record.

But if I insert a record from psql, the insert succeeds and the
respective trigger works perfectly.

What could be the problem? Why is the trigger not firing when I insert from
the application? Am I doing anything wrong?
Any help would be really appreciated.


If you have not, turn on log_statement:

https://www.postgresql.org/docs/9.5/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT

Then check your log to see what the application is sending to the database.
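
[Archive editor's note: one way to enable this without editing
postgresql.conf by hand, using ALTER SYSTEM (available since 9.4):]

```sql
-- Log every statement the server receives, then reload the configuration.
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();
-- Remember to set it back (e.g. to 'none' or 'ddl') when finished,
-- as 'all' is noisy on a busy server.
```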



Thanks
Kiran



--
Adrian Klaver
adrian.kla...@aklaver.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Comibining UPDATE ... SET ... FROM (SELECT ...) with a JOIN

2016-09-10 Thread Alexander Farber
Thank you Brian and others, but -

On Fri, Sep 9, 2016 at 5:22 PM, Brian Dunavant  wrote:

> I'm making the assumption that you only have one ip/user in words_users.
>
> with lockrow as (
>SELECT g.gid, u.ip
>FROM   words_games g join words_users u
> ON (g.player1 = u.uid)
>   WHERE  g.finished IS NULL
> ANDg.player1 <> in_uid
> ANDg.played1 IS NOT NULL
> ANDg.player2 IS NULL
>LIMIT  1
>FOR UPDATE SKIP LOCKED
> ), do_the_update as (
>UPDATE words_games g1
>SETplayer2 = in_uid
>FROM lockrow g2
>WHERE g1.gid = g2.gid
>RETURNING g1.gid, g1.player2
> )
> select m.gid, u.ip into out_gid, out_uip
> from do_the_update m
>   join lockrow u using (gid);
>
> The general idea being lock the row in the first CTE, update it in the
> second, returning your values, and then query against those in the
> final select to get the ip.  If it didn't update anything, you'll get
> no results.
>

unfortunately, the above query does not seem to ensure that players with the
same IP cannot join the same game, which is actually my question...

But thanks for showing the CTE for UPDATE ... RETURNING - that is probably
the way to go for me

Regards
Alex
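
[Archive editor's note: one way to fold the same-IP restriction into the
locking CTE; a sketch only, assuming a hypothetical `in_ip` parameter that
carries the joining player's IP, alongside the `in_uid` from the thread:]

```sql
-- Lock a candidate game whose creator does NOT share the caller's IP,
-- then claim the player2 seat. IS DISTINCT FROM also handles NULL IPs.
WITH lockrow AS (
    SELECT g.gid, u.ip
    FROM   words_games g
    JOIN   words_users u ON u.uid = g.player1
    WHERE  g.finished IS NULL
      AND  g.player1  <> in_uid
      AND  g.played1  IS NOT NULL
      AND  g.player2  IS NULL
      AND  u.ip IS DISTINCT FROM in_ip   -- reject same-IP opponents
    LIMIT  1
    FOR UPDATE SKIP LOCKED
)
UPDATE words_games g
SET    player2 = in_uid
FROM   lockrow l
WHERE  g.gid = l.gid
RETURNING g.gid, l.ip;
```

If no row survives the same-IP filter, the UPDATE simply matches nothing and
the RETURNING set is empty, which the caller can treat as "no game joined".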


[GENERAL] Trigger is not working for Inserts from the application

2016-09-10 Thread Kiran
Hi,

*Problem background :*
I have a *function in the DB* as follows
CREATE FUNCTION question_tsv_trigger() RETURNS trigger AS
$BODY$
begin
New.weighted_tsv :=
to_tsvector('swedish',coalesce(New.body->>'qtext','')::text);
RAISE NOTICE 'TRIGGER called on %', TG_TABLE_NAME;
return New;
end
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;

**Trigger in the DB:**
CREATE TRIGGER insert_upd_question_tsvector BEFORE INSERT OR UPDATE
ON myschema.cf_question
FOR EACH ROW EXECUTE PROCEDURE question_tsv_trigger();

If I insert a record from my application using following code :
db.myschema.cf_question.insert({
cf_question_type_id:request.payload.type_id,
cf_question_category_id:request.payload.cat_id,
lang:request.payload.lang,
body:request.payload.body
})
The above app code inserts the record in the DB, but the respective trigger
in the database does not fire, hence the "weighted_tsv" column is empty
for this record.

But if I insert another record from Postgres' psql, the insert succeeds and
the respective trigger works perfectly.

What could be the problem? Why is the trigger not firing when I insert from
the application? Am I doing anything wrong?
Any help would be really appreciated.

Thanks
Kiran


[GENERAL] Trigger is not working for Inserts from the application

2016-09-10 Thread Kiran
Hi,

*Problem background :*
I have a *function in the DB* as follows
CREATE FUNCTION question_tsv_trigger() RETURNS trigger AS
$BODY$
begin
New.weighted_tsv :=
to_tsvector('swedish',coalesce(New.body->>'qtext','')::text);
RAISE NOTICE 'TRIGGER called on %', TG_TABLE_NAME;
return New;
end
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;

**Trigger in the DB:**
CREATE TRIGGER insert_upd_question_tsvector BEFORE INSERT OR UPDATE
ON myschema.cf_question
FOR EACH ROW EXECUTE PROCEDURE question_tsv_trigger();

If I insert a record from my application using following code :
db.myschema.cf_question.insert({
cf_question_type_id:request.payload.type_id,
cf_question_category_id:request.payload.cat_id,
lang:request.payload.lang,
body:request.payload.body
})

The above app code inserts the record in the DB, but the respective trigger
in the database does not fire, hence the "weighted_tsv" column is empty
for this record.

But if I insert a record from psql, the insert succeeds and the respective
trigger works perfectly.

What could be the problem? Why is the trigger not firing when I insert from
the application? Am I doing anything wrong?
Any help would be really appreciated.

Thanks
Kiran
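
[Archive editor's note: two quick sanity checks for this situation, runnable
in psql; table and schema names are taken from the post, the primary-key
column in the second query is a hypothetical placeholder:]

```sql
-- Is the trigger present and enabled on the table the app writes to?
-- tgenabled = 'O' means it fires in the default (origin) replication role.
SELECT tgname, tgenabled
FROM   pg_trigger
WHERE  tgrelid = 'myschema.cf_question'::regclass
  AND  NOT tgisinternal;

-- Is weighted_tsv actually NULL for the most recent row?
SELECT weighted_tsv IS NULL AS tsv_missing
FROM   myschema.cf_question
ORDER  BY cf_question_id DESC   -- hypothetical PK name; adjust to schema
LIMIT  1;
```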