Re: [GENERAL] NULL concatenation

2016-05-12 Thread Sridhar N Bamandlapally
Hi Adam

we need simple concatenation of all the variables (whose values may be NULL
or valid, depending on the functional process);

coalesce is a different piece of functionality

Thanks
Sridhar
OpenText


On Thu, May 12, 2016 at 4:56 PM, Adam Pearson <
adam.pear...@realisticgames.co.uk> wrote:

> Hello Sridhar,
>
>   Have you tried the 'coalesce' function to handle the nulls?
>
>
> Kind Regards,
>
> Adam Pearson
> --
> *From:* pgsql-general-ow...@postgresql.org <
> pgsql-general-ow...@postgresql.org> on behalf of Sridhar N Bamandlapally <
> sridhar@gmail.com>
> *Sent:* 12 May 2016 09:47
> *To:* PG-General Mailing List; PostgreSQL-hackers
> *Subject:* [GENERAL] NULL concatenation
>
> Hi
>
> In migration, I am facing an issue with NULL concatenation in plpgsql:
> concatenating NULL at any position into a text/varchar value sets the
> entire string result to NULL
>
>
> *In Oracle:*
>
> declare
> txt1 VARCHAR2(100) := 'ABCD';
> txt2 VARCHAR2(100) := NULL;
> txt3 VARCHAR2(100) := 'EFGH';
> txt VARCHAR2(100) := NULL;
> begin
>   txt:= txt1 || txt2 || txt3;
>   dbms_output.put_line (txt);
> end;
> /
>
> ABCDEFGH   *===> return value*
>
>
>
> *In Postgres*
>
> do $$
> declare
> txt1 text := 'ABCD';
> txt2 text := NULL;
> txt3 text := 'EFGH';
> txt text := NULL;
> begin
> txt:= txt1 || txt2 || txt3;
> raise notice '%', txt;
> end$$ language plpgsql;
>
> NOTICE:  <NULL>   *===> return value*
>
>
> SQL-Server also does same like Oracle
>
> Is there any alternative way to get the same behavior in PostgreSQL?
>
> Please
>
> Thanks
> Sridhar
> OpenText
>
>
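The two suggestions in this thread (coalesce(), and concat() a bit further down the digest) can be put side by side in one plpgsql block. A minimal sketch, relying only on documented behavior: || propagates NULL, while concat() treats NULL arguments as empty strings.

```sql
do $$
declare
  txt1 text := 'ABCD';
  txt2 text := NULL;
  txt3 text := 'EFGH';
begin
  -- || returns NULL as soon as any operand is NULL:
  raise notice '%', txt1 || txt2 || txt3;

  -- concat() (available since 9.1) skips NULL arguments:
  raise notice '%', concat(txt1, txt2, txt3);

  -- coalesce() achieves the same result with the || operator:
  raise notice '%', coalesce(txt1,'') || coalesce(txt2,'') || coalesce(txt3,'');
end$$;
```

The first notice prints <NULL>; the other two print ABCDEFGH.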


Re: [GENERAL] Thoughts on "Love Your Database"

2016-05-12 Thread Marc Mamin
>What might I cover that I haven't mentioned? 
>What are the usual objections to server-side code and how can they be met? 
>When *are* they justified and what should the criteria be to put code in 
>Postgres? Any other thoughts?

Hi,
From my point of view, scalability considerations and possible performance gains 
are the key arguments when deciding where to put things. 
If a project may end up with tens of application servers connected to a single 
database, then you should leave most of the work on the application side. 
But to unburden the database, you also have to keep transactions as short as 
possible. So if some logical process requires many round trips within a single 
transaction, then it may make sense to implement it on the DB side; most 
developers I'm working with are resistant to this aspect. 
ORM layers are useful, but nasty in that they turn the database into a 
black box. I guess this can be a major reason why developers don't care about the 
SQL side. Testing should include some monitoring of the DB activity, which 
often happens way too late.

For years I have been trying to convince my colleagues that application code 
doesn't matter ;-) What counts is the data and how it is arranged within the DB. 
Therefore they should put more focus on that, rather than thinking in OO terms.

That book is a great idea. A collection of bad code examples and their SQL 
equivalents, strengthened with runtime figures, may help move some minds

best regards,
Marc Mamin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] downloaded 9.1 pg driver but odbcad32 doesnt see it

2016-05-12 Thread Adrian Klaver

On 05/12/2016 09:30 AM, db042190 wrote:

Hi.  I downloaded a driver for 9.1 from
https://jdbc.postgresql.org/download.html and see a jar file that was
downloaded.  I went to odbcad32 to add a DSN, but odbcad32 doesn't see a
driver for pg.  What steps am I missing?


I am not seeing any steps that will make a JDBC driver work with an ODBC 
tool (odbcad32). JDBC drivers are for Java applications; odbcad32 only 
lists ODBC drivers.





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] index on ILIKE/LIKE - PostgreSQL 9.2

2016-05-12 Thread Lucas Possamai
>
> With those sizes, the gin index will probably be naturally kept mostly
> in the file-system cache, if it is used regularly.  So the original
> slowness of your first query is likely just a cold-cache problem.  Can
> you generate a stream of realistic queries and see what it stabilizes
> at?
>
>
> > I just wanted to understand why the GIN index is not working, but it
> works
> > here:
> https://www.depesz.com/2011/02/19/waiting-for-9-1-faster-likeilike/
>
> In your first email, the gin index did "work", according to the
> execution plan.  It just wasn't as fast as you wanted.  In general,
> the longer the query string is between the %%, the worse it will
> perform (until version 9.6, and to a smaller degree even with 9.6).
> But it still seems oddly slow to me, unless you have a cold-cache and
> really bad (or overloaded) IO.
>
> >
> >>
> >>
> >> It would be interesting to see the output of explain (analyze,
> >> buffers) with track_io_timing turned on.
> >
> >
> > explain analyze buffer with track_io_timing turned on:
> ...
>
> That is the wrong query.  The CTE (i.e. the WITH part) is an
> optimization fence, so it can't use the gin index, simply because of
> the way your query is written.  (I think Melvin suggested it because he
> noticed that using the gin index actually slowed down the query, so he
> wanted to force it to not be used.)
>


Oh ok.

- Here is the explain (analyze, buffers) output for the original query I
posted here, with the gin index:

Query:

>  explain (analyze, buffers)
>  SELECT title
> FROM ja_jobs WHERE title ILIKE '%RYAN WER%'
> and clientid = 31239  AND time_job > 1457826264
> order BY title
> limit 10


Explain analyze:

> Limit  (cost=390.07..390.08 rows=1 width=20) (actual time=3945.263..3945.280 rows=4 loops=1)
>   Buffers: shared hit=5956 read=10
>   I/O Timings: read=60.323
>   ->  Sort  (cost=390.07..390.08 rows=1 width=20) (actual time=3945.256..3945.260 rows=4 loops=1)
>         Sort Key: "title"
>         Sort Method: quicksort  Memory: 25kB
>         Buffers: shared hit=5956 read=10
>         I/O Timings: read=60.323
>         ->  Bitmap Heap Scan on "ja_jobs"  (cost=386.05..390.06 rows=1 width=20) (actual time=3944.857..3945.127 rows=4 loops=1)
>               Recheck Cond: (("clientid" = 31239) AND ("time_job" > 1457826264) AND (("title")::"text" ~~* '%RYAN WER%'::"text"))
>               Buffers: shared hit=5951 read=10
>               I/O Timings: read=60.323
>               ->  BitmapAnd  (cost=386.05..386.05 rows=1 width=0) (actual time=3929.540..3929.540 rows=0 loops=1)
>                     Buffers: shared hit=5950 read=7
>                     I/O Timings: read=45.021
>                     ->  Bitmap Index Scan on "ix_jobs_client_times"  (cost=0.00..50.16 rows=1660 width=0) (actual time=45.536..45.536 rows=795 loops=1)
>                           Index Cond: (("clientid" = 31239) AND ("time_job" > 1457826264))
>                           Buffers: shared hit=5 read=7
>                           I/O Timings: read=45.021
>                     ->  Bitmap Index Scan on "ix_jobs_trgm_gin"  (cost=0.00..335.64 rows=485 width=0) (actual time=3883.886..3883.886 rows=32 loops=1)
>                           Index Cond: (("title")::"text" ~~* '%RYAN WER%'::"text")
>                           Buffers: shared hit=5945
> Total runtime: 3945.554 ms
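For reference, the "ix_jobs_trgm_gin" index appearing in the plan above would have been created along these lines. Its actual definition is not shown anywhere in the thread, so this is a reconstruction, not the poster's exact DDL:

```sql
-- gin_trgm_ops comes from the pg_trgm contrib module.
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- A trigram GIN index lets ILIKE/LIKE '%...%' predicates use an index scan.
CREATE INDEX ix_jobs_trgm_gin
    ON ja_jobs USING gin (title gin_trgm_ops);
```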


[GENERAL] ON CONFLICT DO for UPDATE statements

2016-05-12 Thread Klaus P. Pieper - ibeq GmbH
We run two separate databases which are synchronized through our own 
replication system (two master databases, so we use our own trigger based 
methods). Let's call the two databases "main" and "remote".

From time to time we get constraint violations on sorting fields declared 
UNIQUE when users add records belonging to the same data entity:

Create table orderpos (
id int not null primary key,
"order" int not null, -- the entity ("order" is a reserved word, hence the quoting)
sort int not null, -- sorting criteria for reporting purposes
data text,
UNIQUE ("order", sort) );

I know that with PostgreSQL 9.5 I can use UPSERT to modify the field "sort" 
when adding it to the remote database. I simply change the content of the field 
sort to MAX(sort)+1 for this order. This will resolve the issue on the remote 
side.

However, the modified value will be replicated back to the main database where 
the record was generated. My concern is that in the meantime someone might have 
added another order position to the master database and the value coming back 
from the remote database is no longer UNIQUE.

Is there a similar concept like UPSERT for UPDATE statements? I.e.

UPDATE orderpos SET sort = 2
WHERE id = 4
ON CONFLICT sort DO UPDATE SET sort = GetSafeSortValue(2)

If not, what is the best way of avoiding a constraint violation in UPDATE 
statements?


-  A BEFORE UPDATE trigger checking the UNIQUE condition and calling 
GetSafeSort() in case of a conflict?

-  A rule?

-  Can triggers be used to catch constraint violations?

My least preferred solution would be the replication software handling an 
exception.
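For what it's worth, the BEFORE trigger option could be sketched as below. This is a hedged illustration only: names follow the example table above, "order" is quoted because it is a reserved word, and the check is not race-free under concurrent writers (two simultaneous transactions can still pick the same sort value).

```sql
CREATE OR REPLACE FUNCTION orderpos_fix_sort() RETURNS trigger AS $$
BEGIN
    -- If the (order, sort) pair is already taken by another row,
    -- move this row to the end instead of violating the constraint.
    IF EXISTS (SELECT 1 FROM orderpos
               WHERE "order" = NEW."order" AND sort = NEW.sort
                 AND id <> NEW.id) THEN
        SELECT coalesce(max(sort), 0) + 1 INTO NEW.sort
        FROM orderpos WHERE "order" = NEW."order";
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orderpos_sort_guard
    BEFORE INSERT OR UPDATE OF sort ON orderpos
    FOR EACH ROW EXECUTE PROCEDURE orderpos_fix_sort();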

Thanks
Klaus


Re: [GENERAL] Update or Delete causes canceling of long running slave queries

2016-05-12 Thread Jeff Janes
On Thu, May 12, 2016 at 11:14 AM, Viswanath  wrote:
> Hi Jeff,
>  Yes, I am turning off autovacuum for experimental purposes only. I was
> curious to know what is causing the queries to be killed when autovacuum
> is not running. I guess it must be the pruning of HOT update chains, as
> you mentioned.
> Also, I have already tried changing the parameters hot_standby_feedback and
> vacuum_defer_cleanup_age, and it works fine. But if HOT updates are the problem,
> then changing these settings will hinder them, right? If so, is it safe to
> use these parameters? Or will there be any notable problems or performance
> issues?


It will hinder it some, but not by more than it would be hindered if
the long-running query were running directly on the master rather than
on the slave.

It is possible that the amount of hindering it would cause problems,
but it is not very likely.  Do you have things like work-queue tables
which turn over extremely quickly?  And how long-running are the long
running queries?  Did you have serious bloat problems before you split
the workload between the master and the slaves?


> Could this pruning of HOT update chains happen for update queries only, or
> for delete queries too?

The delete query should not create new HOT update chains, but I think
it will still clean up HOT update chains left behind by previous
updates, as it touches blocks which happen to contain them.

Cheers,

Jeff






Re: [GENERAL] Update or Delete causes canceling of long running slave queries

2016-05-12 Thread Viswanath
Hi,
I am using Postgres 9.5.
Yes, I restarted the server after changing autovacuum to off. I also
verified that the autovacuum process is not running.

standby db configurations: (mostly default conf only)
hot_standby = on
max_standby_archive_delay = 30s
max_standby_streaming_delay = 30s
wal_receiver_status_interval = 10s
hot_standby_feedback = off  
wal_receiver_timeout = 60s
wal_retrieve_retry_interval = 5s



--
View this message in context: 
http://postgresql.nabble.com/Update-or-Delete-causes-canceling-of-long-running-slave-queries-tp5903250p5903334.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Update or Delete causes canceling of long running slave queries

2016-05-12 Thread Viswanath
Hi Jeff,
 Yes, I am turning off autovacuum for experimental purposes only. I was
curious to know what is causing the queries to be killed when autovacuum
is not running. I guess it must be the pruning of HOT update chains, as
you mentioned.
Also, I have already tried changing the parameters hot_standby_feedback and
vacuum_defer_cleanup_age, and it works fine. But if HOT updates are the problem,
then changing these settings will hinder them, right? If so, is it safe to
use these parameters? Or will there be any notable problems or performance
issues? Could this pruning of HOT update chains happen for update queries
only, or for delete queries too?



--
View this message in context: 
http://postgresql.nabble.com/Update-or-Delete-causes-canceling-of-long-running-slave-queries-tp5903250p5903344.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] downloaded 32 bit pg 9.1 but driver isnt seeing some rows

2016-05-12 Thread db042190
I had the wrong IP address.  I'm OK.



--
View this message in context: 
http://postgresql.nabble.com/downloaded-32-bit-pg-9-1-but-driver-isnt-seeing-some-rows-tp5903332p5903338.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] downloaded 32 bit pg 9.1 but driver isnt seeing some rows

2016-05-12 Thread db042190
I now see that the IP address may have been wrong.  I'll post back here.



--
View this message in context: 
http://postgresql.nabble.com/downloaded-32-bit-pg-9-1-but-driver-isnt-seeing-some-rows-tp5903332p5903336.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




[GENERAL] downloaded 32 bit pg 9.1 but driver isnt seeing some rows

2016-05-12 Thread db042190
Hi, I downloaded 32-bit pg 9.1.  I noticed the list of drivers the install
referenced didn't look like 9.1, but I selected all and continued anyway.
When I run a SELECT DISTINCT query against one of my existing tables on a
pre-existing 9.1 pg server, some values that I know are there don't show.
Specifically, this is a SELECT DISTINCT on a date column.  When I run the
same query in pgAdmin III on the pre-existing server, those values do show.



--
View this message in context: 
http://postgresql.nabble.com/downloaded-32-bit-pg-9-1-but-driver-isnt-seeing-some-rows-tp5903332.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Update or Delete causes canceling of long running slave queries

2016-05-12 Thread Jeff Janes
On Thu, May 12, 2016 at 6:37 AM, Viswanath  wrote:

> Hi,
> I have the following configurations.
>
> On master:
> autovacuum=off
> vacuum_defer_cleanup_age=0
>
> On slave:
> hot_standby_feedback = off
> max_standby_streaming_delay = 30s
>
> Now when I try to run a long query on slave and do some update or delete on
> the table on master I am getting the following error.
>
> ERROR:  canceling statement due to conflict with recovery
> DETAIL:  User query might have needed to see row versions that must be
> removed.
>
> The error code was : 40001
>
> I am not running vacuum manually and autovacuum is 'off'

I hope you are doing that for experimental purposes only.

> What could possibly be causing the conflict?

Probably pruning of HOT update chains.

Try turning hot_standby_feedback on, or setting
vacuum_defer_cleanup_age to something more than zero.
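As a sketch, the two settings Jeff mentions look like this; the values shown are illustrative, not recommendations:

```
# On the standby, in postgresql.conf:
hot_standby_feedback = on          # standby reports which rows its queries still need

# Or, alternatively, on the master:
vacuum_defer_cleanup_age = 100000  # defer cleanup by this many transactions
                                   # (the right value is workload-dependent)
```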

Cheers,

Jeff




Re: [GENERAL] Update or Delete causes canceling of long running slave queries

2016-05-12 Thread Sameer Kumar
On Thu, 12 May 2016 21:56 Viswanath,  wrote:

> Hi,
> I have the following configurations.
>
> On master:
> autovacuum=off
>

Have you changed it recently, or has it always been off? If you changed
it recently, a restart is needed for this setting to take effect.

vacuum_defer_cleanup_age=0
>
> On slave:
> hot_standby_feedback = off
> max_standby_streaming_delay = 30s
>
> Now when I try to run a long query on slave and do some update or delete on
> the table on master I am getting the following error.
>
> ERROR:  canceling statement due to conflict with recovery
> DETAIL:  User query might have needed to see row versions that must be
> removed.
>
> The error code was : 40001
>
> I am not running vacuum manually and autovacuum is 'off'
> What could possibly be causing the conflict?
>

Please share more details, like the version and the standby DB conf, etc.


>
>
> --
> View this message in context:
> http://postgresql.nabble.com/Update-or-Delete-causes-canceling-of-long-running-slave-queries-tp5903250.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
>
-- 
--
Best Regards
Sameer Kumar | DB Solution Architect
*ASHNIK PTE. LTD.*

101 Cecil Street, #11-11 Tong Eng Building, Singapore 069 533

T: +65 6438 3504 | M: +65 8110 0350 | www.ashnik.com


[GENERAL] downloaded 9.1 pg driver but odbcad32 doesnt see it

2016-05-12 Thread db042190
Hi.  I downloaded a driver for 9.1 from
https://jdbc.postgresql.org/download.html and see a jar file that was
downloaded.  I went to odbcad32 to add a DSN, but odbcad32 doesn't see a
driver for pg.  What steps am I missing?



--
View this message in context: 
http://postgresql.nabble.com/downloaded-9-1-pg-driver-but-odbcad32-doesnt-see-it-tp5903319.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




[GENERAL] Update or Delete causes canceling of long running slave queries

2016-05-12 Thread Viswanath
Hi,
I have the following configurations.

On master:
autovacuum=off
vacuum_defer_cleanup_age=0

On slave:
hot_standby_feedback = off
max_standby_streaming_delay = 30s

Now when I try to run a long query on slave and do some update or delete on
the table on master I am getting the following error.

ERROR:  canceling statement due to conflict with recovery
DETAIL:  User query might have needed to see row versions that must be
removed.

The error code was : 40001

I am not running vacuum manually and autovacuum is 'off'.
What could possibly be causing the conflict?



--
View this message in context: 
http://postgresql.nabble.com/Update-or-Delete-causes-canceling-of-long-running-slave-queries-tp5903250.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] index on ILIKE/LIKE - PostgreSQL 9.2

2016-05-12 Thread Jeff Janes
On Wed, May 11, 2016 at 11:59 PM, Lucas Possamai  wrote:
>
>>
>> How big is the table?  The gin index?  shared_buffers?  RAM?  What
>> kind of IO system do you have, and how many other things were going on
>> with it?
>
>
> - Just a reminder that I'm not running these tests on my prod server.. I'm
> running on my test server. So the confs will be different
>
>> The table is 9GB big
>> The gin index is 400MB big
>> shared_buffers = 1536MB
>> RAM = 8 GB

With those sizes, the gin index will probably be naturally kept mostly
in the file-system cache, if it is used regularly.  So the original
slowness of your first query is likely just a cold-cache problem.  Can
you generate a stream of realistic queries and see what it stabilizes
at?


> I just wanted to understand why the GIN index is not working, but it works
> here: https://www.depesz.com/2011/02/19/waiting-for-9-1-faster-likeilike/

In your first email, the gin index did "work", according to the
execution plan.  It just wasn't as fast as you wanted.  In general,
the longer the query string is between the %%, the worse it will
perform (until version 9.6, and to a smaller degree even with 9.6).
But it still seems oddly slow to me, unless you have a cold-cache and
really bad (or overloaded) IO.

>
>>
>>
>> It would be interesting to see the output of explain (analyze,
>> buffers) with track_io_timing turned on.
>
>
> explain analyze buffer with track_io_timing turned on:
...

That is the wrong query.  The CTE (i.e. the WITH part) is an
optimization fence, so it can't use the gin index, simply because of
the way your query is written.  (I think Melvin suggested it because he
noticed that using the gin index actually slowed down the query, so he
wanted to force it to not be used.)

Cheers,

Jeff




Re: [GENERAL] Release Notes Link is broken on the website

2016-05-12 Thread Adrian Klaver

On 05/12/2016 06:48 AM, Daniel Westermann wrote:

just to let you know:

This link is broken:
http://www.postgresql.org/docs/9./static/release-9-6.html

The description on the website is wrong:

PostgreSQL 9.6 Beta 1, 9.5.2, 9.4.7, 9.3.12, 9.2.16 and 9.1.21 Released!

Should be 9.5.3, shouldn't it?


I just checked back and both issues are fixed.



Cheers,
Daniel




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Release Notes Link is broken on the website

2016-05-12 Thread Raymond O'Donnell
On 12/05/2016 15:01, Daniel Westermann wrote:
>>>
>>>Provide a link to the source document where you found the link you
> have posted
> 
> its the homepage
> 
> http://www.postgresql.org
> 

Looks like the link URL is missing a "6":

http://www.postgresql.org/docs/9./static/release-9-6.html
   ^^^

Ray.


-- 
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie




Re: [GENERAL] Release Notes Link is broken on the website

2016-05-12 Thread Igor Neyman

From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Allan Kamau
Sent: Thursday, May 12, 2016 9:59 AM
To: Daniel Westermann 
Cc: Postgres General Postgres General 
Subject: Re: [GENERAL] Release Notes Link is broken on the website

Provide a link to the source document where you found the link you have posted.
Allan.

On Thu, May 12, 2016 at 4:48 PM, Daniel Westermann 
> 
wrote:
just to let you know:

This link is broken:
http://www.postgresql.org/docs/9./static/release-9-6.html

The description on the website is wrong:

PostgreSQL 9.6 Beta 1, 9.5.2, 9.4.7, 9.3.12, 9.2.16 and 9.1.21 Released!

Should be 9.5.3, shouldn't it?

Cheers,
Daniel


It’s on the main page: http://www.postgresql.org/

Regards,
Igor


Re: [GENERAL] Release Notes Link is broken on the website

2016-05-12 Thread Adrian Klaver

On 05/12/2016 06:58 AM, Allan Kamau wrote:

Provide a link to the source document where you found the link you have
posted.


Following up on Daniel's post I went to:

http://www.postgresql.org/

which is where the release announcement is.



Allan.

On Thu, May 12, 2016 at 4:48 PM, Daniel Westermann
> wrote:

just to let you know:

This link is broken:
http://www.postgresql.org/docs/9./static/release-9-6.html

The description on the website is wrong:

PostgreSQL 9.6 Beta 1, 9.5.2, 9.4.7, 9.3.12, 9.2.16 and 9.1.21
Released!

Should be 9.5.3, shouldn't it?

Cheers,
Daniel





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Release Notes Link is broken on the website

2016-05-12 Thread Daniel Westermann
>> 
>>Provide a link to the source document where you found the link you have 
>>posted 

its the homepage 

http://www.postgresql.org 



Re: [GENERAL] Release Notes Link is broken on the website

2016-05-12 Thread Allan Kamau
Provide a link to the source document where you found the link you have
posted.

Allan.

On Thu, May 12, 2016 at 4:48 PM, Daniel Westermann <
daniel.westerm...@dbi-services.com> wrote:

> just to let you know:
>
> This link is broken:
> http://www.postgresql.org/docs/9./static/release-9-6.html
>
> The description on the website is wrong:
>
> PostgreSQL 9.6 Beta 1, 9.5.2, 9.4.7, 9.3.12, 9.2.16 and 9.1.21 Released!
>
> Should be 9.5.3, shouldn't it?
>
> Cheers,
> Daniel
>
>


[GENERAL] Release Notes Link is broken on the website

2016-05-12 Thread Daniel Westermann
just to let you know: 

This link is broken: 
http://www.postgresql.org/docs/9./static/release-9-6.html 

The description on the website is wrong: 

PostgreSQL 9.6 Beta 1, 9.5.2, 9.4.7, 9.3.12, 9.2.16 and 9.1.21 Released! 

Should be 9.5.3, shouldn't it? 

Cheers, 
Daniel 



Re: [GENERAL] Alternate or Optimization for with hold cursor

2016-05-12 Thread Mike Sofen
From: Sangeetha; Sent: Thursday, May 12, 2016 1:58 AM

Currently, I am using a WITH HOLD cursor. In our case, the WITH HOLD
cursor is used to fetch the next record after the given primary key. The
performance is very slow for large data sets. Can you provide me some
alternative ways, like having my own copy of the table, or an optimization
for the WITH HOLD cursor?

Thanks and Regards,
S.Sangeetha
==

Cursors are the last tool I would ever grab out of my sql toolbox (aka, I
never use one) - it converts the enormous power of a relational database
engine into "RBAR" (row by agonizing row).  For a large dataset in
particular, you are killing the server since the entire resultset must be
retained in working memory for the duration of the query as it peels off one
row at a time from that resultset OR if it's larger than your ram, you'll be
paging to disk constantly.  And since you're working on a single row at
time, it will take forever.

Convert the cursor into a normal query and you should see BIG (10-1000x)
gains in speed.  A cursor can always be converted to normal
sql...always...it's not always easy but it's always worth the effort. 
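As one illustration of such a conversion, the "next record after a given primary key" pattern from the original question can usually be expressed as a single keyset query. Table and column names here are hypothetical:

```sql
-- Instead of scrolling a WITH HOLD cursor to the row following a key,
-- ask the database for exactly that row; with an index on id this is
-- a single index probe rather than a full materialized resultset.
SELECT *
FROM   my_table
WHERE  id > 12345        -- the given primary key
ORDER  BY id
LIMIT  1;
```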
 
Mike Sofen





Re: [GENERAL] Alternate or Optimization for with hold cursor

2016-05-12 Thread amulsul
Not sure what you are trying to achieve; you could give materialized
views [1][2] a try and see whether they help.

1. http://www.postgresql.org/docs/9.3/static/rules-materializedviews.html
2. http://www.postgresql.org/docs/9.3/static/sql-creatematerializedview.html
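For illustration, the basic shape of the feature linked above is as follows; the object and column names are hypothetical:

```sql
-- Compute an expensive result once and store it...
CREATE MATERIALIZED VIEW mv_report AS
    SELECT customer_id, count(*) AS n_orders
    FROM   orders
    GROUP  BY customer_id;

-- ...then re-run the underlying query only when a refresh is requested.
REFRESH MATERIALIZED VIEW mv_report;
```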


Regards,
Amul Sul



--
View this message in context: 
http://postgresql.nabble.com/Alternate-or-Optimization-for-with-hold-cursor-tp5903211p5903233.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] NULL concatenation

2016-05-12 Thread Adam Pearson
Hello Sridhar,

  Have you tried the 'coalesce' function to handle the nulls?


Kind Regards,

Adam Pearson


From: pgsql-general-ow...@postgresql.org  
on behalf of Sridhar N Bamandlapally 
Sent: 12 May 2016 09:47
To: PG-General Mailing List; PostgreSQL-hackers
Subject: [GENERAL] NULL concatenation

Hi

In migration, I am facing an issue with NULL concatenation in plpgsql:
concatenating NULL at any position into a text/varchar value sets the entire 
string result to NULL


In Oracle:

declare
txt1 VARCHAR2(100) := 'ABCD';
txt2 VARCHAR2(100) := NULL;
txt3 VARCHAR2(100) := 'EFGH';
txt VARCHAR2(100) := NULL;
begin
  txt:= txt1 || txt2 || txt3;
  dbms_output.put_line (txt);
end;
/

ABCDEFGH   ===> return value



In Postgres

do $$
declare
txt1 text := 'ABCD';
txt2 text := NULL;
txt3 text := 'EFGH';
txt text := NULL;
begin
txt:= txt1 || txt2 || txt3;
raise notice '%', txt;
end$$ language plpgsql;

NOTICE:  <NULL>   ===> return value


SQL-Server also does same like Oracle

Is there any alternative way to get the same behavior in PostgreSQL?

Please

Thanks
Sridhar
OpenText



Re: [GENERAL] Alternate or Optimization for with hold cursor

2016-05-12 Thread Rakesh Kumar

On May 12, 2016, at 4:57 AM, sangeetha  wrote:

Currently, I am using a WITH HOLD cursor. In our case, the WITH HOLD cursor
is used to fetch the next record after the given primary key. 

Can you explain your use case? If I understand WITH HOLD correctly, it is 
typically used to keep a cursor usable even after a commit, so as to get a 
consistent view of the data. 


The performance
is very slow for large data sets. Can you provide me some alternative ways,
like having my own copy of the table, or an optimization for the WITH HOLD cursor?

Thanks and Regards,
S.Sangeetha



--
View this message in context: 
http://postgresql.nabble.com/Alternate-or-Optimization-for-with-hold-cursor-tp5903211.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




[GENERAL] Alternate or Optimization for with hold cursor

2016-05-12 Thread sangeetha
Currently, I am using a WITH HOLD cursor. In our case, the WITH HOLD cursor
is used to fetch the next record after the given primary key. The performance
is very slow for large data sets. Can you provide me some alternative ways,
like having my own copy of the table, or an optimization for the WITH HOLD cursor?

Thanks and Regards,
S.Sangeetha







Re: [GENERAL] [HACKERS] NULL concatenation

2016-05-12 Thread Sridhar N Bamandlapally
Thanks Pavel

Great !!

I was thinking || and CONCAT do the same

Thanks again

-
Sridhar
OpenText


On Thu, May 12, 2016 at 2:22 PM, Pavel Stehule 
wrote:

> Hi
>
> 2016-05-12 10:47 GMT+02:00 Sridhar N Bamandlapally 
> :
>
>> Hi
>>
>> During a migration, I am facing an issue with NULL concatenation in plpgsql:
>> concatenating NULL at any position into a text/varchar value sets the
>> entire string result to NULL
>>
>>
>> *In Oracle:*
>>
>> declare
>> txt1 VARCHAR2(100) := 'ABCD';
>> txt2 VARCHAR2(100) := NULL;
>> txt3 VARCHAR2(100) := 'EFGH';
>> txt VARCHAR2(100) := NULL;
>> begin
>>   txt:= txt1 || txt2 || txt3;
>>   dbms_output.put_line (txt);
>> end;
>> /
>>
>> abcdefgh   *===>return value*
>>
>>
>>
>> *In Postgres*
>>
>> do $$
>> declare
>> txt1 text := 'ABCD';
>> txt2 text := NULL;
>> txt3 text := 'EFGH';
>> txt text := NULL;
>> begin
>> txt:= txt1 || txt2 || txt3;
>> raise notice '%', txt;
>> end$$ language plpgsql;
>>
>> NOTICE:  <NULL>   *===> return value*
>>
>>
>> SQL Server also behaves like Oracle here
>>
>> Is there any alternative way to get the same behavior in PostgreSQL?
>>
>
> use function concat
> http://www.postgresql.org/docs/9.5/static/functions-string.html
>
>  postgres=# select concat('AHOJ', NULL,'XXX');
>  concat
> ---------
>  AHOJXXX
> (1 row)
>
> Regards
>
> Pavel
>
>
>> Please
>>
>> Thanks
>> Sridhar
>> OpenText
>>
>>
>


Re: [GENERAL] NULL concatenation

2016-05-12 Thread Tim Clarke
Wrap the source variables in your line:

txt:= txt1 || txt2 || txt3;

 in coalesce() calls

Tim Clarke

On 12/05/16 09:47, Sridhar N Bamandlapally wrote:
> txt:= txt1 || txt2 || txt3;
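For example, a sketch of the do-block from earlier in the thread with each operand wrapped in coalesce(), so a NULL operand becomes an empty string instead of nulling the whole result:

```sql
do $$
declare
    txt1 text := 'ABCD';
    txt2 text := NULL;
    txt3 text := 'EFGH';
    txt  text;
begin
    -- coalesce(x, '') turns a NULL operand into '' before concatenation
    txt := coalesce(txt1, '') || coalesce(txt2, '') || coalesce(txt3, '');
    raise notice '%', txt;   -- NOTICE:  ABCDEFGH
end
$$ language plpgsql;
```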






Re: [GENERAL] [HACKERS] NULL concatenation

2016-05-12 Thread Pavel Stehule
Hi

2016-05-12 10:47 GMT+02:00 Sridhar N Bamandlapally :

> Hi
>
> During a migration, I am facing an issue with NULL concatenation in plpgsql:
> concatenating NULL at any position into a text/varchar value sets the
> entire string result to NULL
>
>
> *In Oracle:*
>
> declare
> txt1 VARCHAR2(100) := 'ABCD';
> txt2 VARCHAR2(100) := NULL;
> txt3 VARCHAR2(100) := 'EFGH';
> txt VARCHAR2(100) := NULL;
> begin
>   txt:= txt1 || txt2 || txt3;
>   dbms_output.put_line (txt);
> end;
> /
>
> abcdefgh   *===>return value*
>
>
>
> *In Postgres*
>
> do $$
> declare
> txt1 text := 'ABCD';
> txt2 text := NULL;
> txt3 text := 'EFGH';
> txt text := NULL;
> begin
> txt:= txt1 || txt2 || txt3;
> raise notice '%', txt;
> end$$ language plpgsql;
>
> NOTICE:  <NULL>   *===> return value*
>
>
> SQL Server also behaves like Oracle here
>
> Is there any alternative way to get the same behavior in PostgreSQL?
>

use function concat
http://www.postgresql.org/docs/9.5/static/functions-string.html

 postgres=# select concat('AHOJ', NULL,'XXX');
 concat
---------
 AHOJXXX
(1 row)
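For contrast, the || operator with the same inputs (a quick sketch; concat() is available since 9.1):

```sql
SELECT 'AHOJ' || NULL || 'XXX';      -- result is NULL: || is NULL-strict
SELECT concat('AHOJ', NULL, 'XXX');  -- result is 'AHOJXXX': concat() skips NULLs
```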

Regards

Pavel


> Please
>
> Thanks
> Sridhar
> OpenText
>
>


[GENERAL] NULL concatenation

2016-05-12 Thread Sridhar N Bamandlapally
Hi

During a migration, I am facing an issue with NULL concatenation in plpgsql:
concatenating NULL at any position into a text/varchar value sets the
entire string result to NULL


*In Oracle:*

declare
txt1 VARCHAR2(100) := 'ABCD';
txt2 VARCHAR2(100) := NULL;
txt3 VARCHAR2(100) := 'EFGH';
txt VARCHAR2(100) := NULL;
begin
  txt:= txt1 || txt2 || txt3;
  dbms_output.put_line (txt);
end;
/

abcdefgh   *===>return value*



*In Postgres*

do $$
declare
txt1 text := 'ABCD';
txt2 text := NULL;
txt3 text := 'EFGH';
txt text := NULL;
begin
txt:= txt1 || txt2 || txt3;
raise notice '%', txt;
end$$ language plpgsql;

NOTICE:  <NULL>   *===> return value*


SQL Server also behaves like Oracle here

Is there any alternative way to get the same behavior in PostgreSQL?

Please

Thanks
Sridhar
OpenText


Re: [GENERAL] index on ILIKE/LIKE - PostgreSQL 9.2

2016-05-12 Thread Andreas Joseph Krogh
On Thursday 12 May 2016 at 10:05:01, Andreas Joseph Krogh <andr...@visena.com> wrote:
[snp] I created this test:
 
create table ja_jobs(id bigserial primary key, title varchar not null, 
clientid bigint not null, time_job bigint not null);
CREATE INDEX ix_ja_jobs_trgm_clientid_gin ON public.ja_jobs USING gin (title 
gin_trgm_ops, clientid);

 --- insert some test-data
 
As you see, this uses the index (when casting clientid to bigint):


 
andreak=# explain analyze SELECT DISTINCT title  
 FROM ja_jobs WHERE title ILIKE '%ras du%'
 and clientid = 12::bigint AND time_job > 257826264
 order BY title
 limit 10;
   QUERY 
PLAN   
 
---
  Limit  (cost=8.43..8.44 rows=1 width=32) (actual time=0.033..0.034 rows=1 
loops=1)
    ->  Unique  (cost=8.43..8.44 rows=1 width=32) (actual time=0.032..0.032 
rows=1 loops=1)
  ->  Sort  (cost=8.43..8.43 rows=1 width=32) (actual 
time=0.032..0.032 rows=1 loops=1)
    Sort Key: title
    Sort Method: quicksort  Memory: 25kB
    ->  Bitmap Heap Scan on ja_jobs  (cost=7.20..8.42 rows=1 
width=32) (actual time=0.025..0.025 rows=1 loops=1)
  Recheck Cond: (((title)::text ~~* '%ras du%'::text) AND 
(clientid = '12'::bigint))
  Filter: (time_job > 257826264)
  Heap Blocks: exact=1
  ->  Bitmap Index Scan on ix_ja_jobs_trgm_clientid_gin  
(cost=0.00..7.20 rows=1 width=0) (actual time=0.016..0.016 rows=1 loops=1)
    Index Cond: (((title)::text ~~* '%ras du%'::text) 
AND (clientid = '12'::bigint))
  Planning time: 0.169 ms
  Execution time: 0.061 ms
 (13 rows)

 
Forgot to say, this is in PG-9.6 (master), but should work on previous 
versions.
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


 


Re: [GENERAL] index on ILIKE/LIKE - PostgreSQL 9.2

2016-05-12 Thread Andreas Joseph Krogh
On Thursday 12 May 2016 at 09:57:58, Andreas Joseph Krogh <andr...@visena.com> wrote:
On Thursday 12 May 2016 at 02:30:33, Lucas Possamai <drum.lu...@gmail.com> wrote:
Hi there!  
I've got a simple but slow query:
 
 SELECT DISTINCT title  
 FROM ja_jobs WHERE title ILIKE '%RYAN WER%'
 and clientid = 31239  AND time_job > 1457826264
 order BY title
 limit 10 
 
Explain analyze: 
 
Limit  (cost=5946.40..5946.41 rows=1 width=19) (actual time=2746.759..2746.772 
rows=1 loops=1)
   ->  Unique  (cost=5946.40..5946.41 rows=1 width=19) (actual 
time=2746.753..2746.763 rows=1 loops=1)
         ->  Sort  (cost=5946.40..5946.41 rows=1 width=19) (actual 
time=2746.750..2746.754 rows=4 loops=1)
               Sort Key: "title"
               Sort Method: quicksort  Memory: 25kB
               ->  Bitmap Heap Scan on "ja_jobs"  (cost=49.02..5946.39 rows=1 
width=19) (actual time=576.275..2746.609 rows=4 loops=1)
                     Recheck Cond: (("clientid" = 31239) AND ("time_job" > 
1457826264))
                     Filter: (("title")::"text" ~~* '%RYAN WER%'::"text")
                     Rows Removed by Filter: 791
                     ->  Bitmap Index Scan on "ix_jobs_client_times" 
 (cost=0.00..49.02 rows=1546 width=0) (actual time=100.870..100.870 rows=795 
loops=1)
                           Index Cond: (("clientid" = 31239) AND ("time_job" > 
1457826264))
 Total runtime: 2746.879 ms  
Then, I created a trgm index:
 
CREATE INDEX ix_ja_jobs_trgm_gin ON public.ja_jobs USING gin (title 
gin_trgm_ops);
 
Explain analyze after the index: (Yes, I ran the analyze)
 
Limit  (cost=389.91..389.91 rows=1 width=20) (actual time=3720.511..3720.511 
rows=0 loops=1)
   ->  Unique  (cost=389.91..389.91 rows=1 width=20) (actual 
time=3720.507..3720.507 rows=0 loops=1)
         ->  Sort  (cost=389.91..389.91 rows=1 width=20) (actual 
time=3720.505..3720.505 rows=0 loops=1)
               Sort Key: "title"
               Sort Method: quicksort  Memory: 25kB
               ->  Bitmap Heap Scan on "ja_jobs"  (cost=385.88..389.90 rows=1 
width=20) (actual time=3720.497..3720.497 rows=0 loops=1)
                     Recheck Cond: (("clientid" = 31239) AND ("time_job" > 
1457826264) AND (("title")::"text" ~~ '%RYAN WER%'::"text"))
                     Rows Removed by Index Recheck: 4
                     ->  BitmapAnd  (cost=385.88..385.88 rows=1 width=0) 
(actual time=3720.469..3720.469 rows=0 loops=1)
                           ->  Bitmap Index Scan on "ix_jobs_client_times" 
 (cost=0.00..50.00 rows=1644 width=0) (actual time=0.142..0.142 rows=795 
loops=1)
                                 Index Cond: (("clientid" = 31239) AND 
("time_job" > 1457826264))
                           ->  Bitmap Index Scan on "ix_ja_jobs_trgm_gin" 
 (cost=0.00..335.63 rows=484 width=0) (actual time=3720.213..3720.213 rows=32 
loops=1)
                                 Index Cond: (("title")::"text" ~~ '%RYAN WER
%'::"text")
 Total runtime: 3720.653 ms   
 
So, the query is still slow.
Do you know what else can be done, related to the ILIKE?

 
cheers
Lucas

 
It uses available indexes, but that isn't good enough.
 
Try including clientid in the index, using the btree_gin extension:
 
CREATE INDEX ix_ja_jobs_trgm_clientid_gin ON public.ja_jobs USING gin (title 
gin_trgm_ops, clientid);
 
Note that if clientid is a bigint you have to cast the value to bigint for 
btree_gin to use it (note that this isn't necessary if you use a prepared 
statement):
 
SELECT DISTINCT title  
         FROM ja_jobs WHERE title ILIKE '%RYAN WER%'
         and clientid = 31239::bigint  AND time_job > 1457826264
         order BY title
         limit 10

 
Also note that the index cannot and won't be used for sorting. A bitmap-AND is 
also inevitable because GIN indexes cannot be used for the '>' operator, so PG 
uses the ix_jobs_client_times btree index and bitmap-ANDs the result.
 
Can you post your complete schema?
 
I created this test:
 
create table ja_jobs(id bigserial primary key, title varchar not null, 
clientid bigint not null, time_job bigint not null);
CREATE INDEX ix_ja_jobs_trgm_clientid_gin ON public.ja_jobs USING gin (title 
gin_trgm_ops, clientid);

 --- insert some test-data
 
As you see, this uses the index (when casting clientid to bigint):


 
andreak=# explain analyze SELECT DISTINCT title  
 FROM ja_jobs WHERE title ILIKE '%ras du%'
 and clientid = 12::bigint AND time_job > 257826264
 order BY title
 limit 10;
   QUERY 
PLAN   
 
---
  Limit  (cost=8.43..8.44 rows=1 width=32) (actual time=0.033..0.034 rows=1 
loops=1)
    ->  Unique  (cost=8.43..8.44 rows=1 width=32) (actual time=0.032..0.032 
rows=1 loops=1)

Re: [GENERAL] index on ILIKE/LIKE - PostgreSQL 9.2

2016-05-12 Thread Andreas Joseph Krogh
On Thursday 12 May 2016 at 02:30:33, Lucas Possamai wrote:
Hi there!  
I've got a simple but slow query:
 
 SELECT DISTINCT title  
 FROM ja_jobs WHERE title ILIKE '%RYAN WER%'
 and clientid = 31239  AND time_job > 1457826264
 order BY title
 limit 10 
 
Explain analyze: 
 
Limit  (cost=5946.40..5946.41 rows=1 width=19) (actual time=2746.759..2746.772 
rows=1 loops=1)
   ->  Unique  (cost=5946.40..5946.41 rows=1 width=19) (actual 
time=2746.753..2746.763 rows=1 loops=1)
         ->  Sort  (cost=5946.40..5946.41 rows=1 width=19) (actual 
time=2746.750..2746.754 rows=4 loops=1)
               Sort Key: "title"
               Sort Method: quicksort  Memory: 25kB
               ->  Bitmap Heap Scan on "ja_jobs"  (cost=49.02..5946.39 rows=1 
width=19) (actual time=576.275..2746.609 rows=4 loops=1)
                     Recheck Cond: (("clientid" = 31239) AND ("time_job" > 
1457826264))
                     Filter: (("title")::"text" ~~* '%RYAN WER%'::"text")
                     Rows Removed by Filter: 791
                     ->  Bitmap Index Scan on "ix_jobs_client_times" 
 (cost=0.00..49.02 rows=1546 width=0) (actual time=100.870..100.870 rows=795 
loops=1)
                           Index Cond: (("clientid" = 31239) AND ("time_job" > 
1457826264))
 Total runtime: 2746.879 ms  
Then, I created a trgm index:
 
CREATE INDEX ix_ja_jobs_trgm_gin ON public.ja_jobs USING gin (title 
gin_trgm_ops);
 
Explain analyze after the index: (Yes, I ran the analyze)
 
Limit  (cost=389.91..389.91 rows=1 width=20) (actual time=3720.511..3720.511 
rows=0 loops=1)
   ->  Unique  (cost=389.91..389.91 rows=1 width=20) (actual 
time=3720.507..3720.507 rows=0 loops=1)
         ->  Sort  (cost=389.91..389.91 rows=1 width=20) (actual 
time=3720.505..3720.505 rows=0 loops=1)
               Sort Key: "title"
               Sort Method: quicksort  Memory: 25kB
               ->  Bitmap Heap Scan on "ja_jobs"  (cost=385.88..389.90 rows=1 
width=20) (actual time=3720.497..3720.497 rows=0 loops=1)
                     Recheck Cond: (("clientid" = 31239) AND ("time_job" > 
1457826264) AND (("title")::"text" ~~ '%RYAN WER%'::"text"))
                     Rows Removed by Index Recheck: 4
                     ->  BitmapAnd  (cost=385.88..385.88 rows=1 width=0) 
(actual time=3720.469..3720.469 rows=0 loops=1)
                           ->  Bitmap Index Scan on "ix_jobs_client_times" 
 (cost=0.00..50.00 rows=1644 width=0) (actual time=0.142..0.142 rows=795 
loops=1)
                                 Index Cond: (("clientid" = 31239) AND 
("time_job" > 1457826264))
                           ->  Bitmap Index Scan on "ix_ja_jobs_trgm_gin" 
 (cost=0.00..335.63 rows=484 width=0) (actual time=3720.213..3720.213 rows=32 
loops=1)
                                 Index Cond: (("title")::"text" ~~ '%RYAN WER
%'::"text")
 Total runtime: 3720.653 ms   
 
So, the query is still slow.
Do you know what else can be done, related to the ILIKE?

 
cheers
Lucas

 
It uses available indexes, but that isn't good enough.
 
Try including clientid in the index, using the btree_gin extension:
 
CREATE INDEX ix_ja_jobs_trgm_clientid_gin ON public.ja_jobs USING gin (title 
gin_trgm_ops, clientid);
 
Note that if clientid is a bigint you have to cast the value to bigint for 
btree_gin to use it (note that this isn't necessary if you use a prepared 
statement):
 
SELECT DISTINCT title  
         FROM ja_jobs WHERE title ILIKE '%RYAN WER%'
         and clientid = 31239::bigint  AND time_job > 1457826264
         order BY title
         limit 10

 
Also note that the index cannot and won't be used for sorting. A bitmap-AND is 
also inevitable because GIN indexes cannot be used for the '>' operator, so PG 
uses the ix_jobs_client_times btree index and bitmap-ANDs the result.
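A sketch of the prerequisites for that combined index (assuming sufficient privileges; pg_trgm supplies the gin_trgm_ops operator class and btree_gin lets GIN index a plain bigint column):

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;     -- provides gin_trgm_ops for title
CREATE EXTENSION IF NOT EXISTS btree_gin;   -- lets GIN index scalar types like bigint

CREATE INDEX ix_ja_jobs_trgm_clientid_gin
    ON public.ja_jobs USING gin (title gin_trgm_ops, clientid);
```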
 
Can you post your complete schema?
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


 


Re: [GENERAL] index on ILIKE/LIKE - PostgreSQL 9.2

2016-05-12 Thread Lucas Possamai
> How big is the table?  The gin index?  shared_buffers?  RAM?  What
> kind of IO system do you have, and how many other things were going on
> with it?
>

- Just a reminder that I'm not running these tests on my prod server but on my
test server, so the configurations will be different.

- The table is 9 GB
- The GIN index is 400 MB
- shared_buffers = 1536MB
- RAM = 8 GB


I just wanted to understand why the GIN index is not working, but it works
here: https://www.depesz.com/2011/02/19/waiting-for-9-1-faster-likeilike/


>
> It would be interesting to see the output of explain (analyze,
> buffers) with track_io_timing turned on.
>
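For reference, a sketch of how that can be enabled for a single session (changing track_io_timing may require superuser; the query is the one from earlier in the thread):

```sql
SET track_io_timing = on;   -- per-session; may require superuser privileges

EXPLAIN (ANALYZE, BUFFERS)
SELECT DISTINCT title
FROM ja_jobs
WHERE title ILIKE '%RYAN WER%'
  AND clientid = 31239
  AND time_job > 1457826264
ORDER BY title
LIMIT 10;
```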

EXPLAIN (ANALYZE, BUFFERS) with track_io_timing turned on:


Limit  (cost=93466.83..93466.83 rows=1 width=218) (actual
> time=24025.463..24025.478 rows=5 loops=1)
>   Buffers: shared hit=8 read=42285
>   I/O Timings: read=23599.672
>   CTE ja_jobs
> ->  HashAggregate  (cost=93451.05..93455.90 rows=485 width=20) (actual
> time=23946.801..23967.660 rows=16320 loops=1)
>   Buffers: shared hit=3 read=42285
>   I/O Timings: read=23599.672
>   ->  Bitmap Heap Scan on "ja_jobs"  (cost=877.70..93374.92
> rows=30453 width=20) (actual time=161.372..23835.632 rows=48472 loops=1)
> Recheck Cond: (("clientid" = 14635) AND ("time_job" >
> 1436731799))
> Buffers: shared hit=3 read=42285
> I/O Timings: read=23599.672
> ->  Bitmap Index Scan on "ix_jobs_client_times"
>  (cost=0.00..870.09 rows=30453 width=0) (actual time=133.920..133.920
> rows=48472 loops=1)
>   Index Cond: (("clientid" = 14635) AND ("time_job" >
> 1436731799))
>   Buffers: shared hit=3 read=244
>   I/O Timings: read=120.137
>   ->  Sort  (cost=10.92..10.93 rows=1 width=218) (actual
> time=24025.457..24025.462 rows=5 loops=1)
> Sort Key: "ja_jobs"."title"
> Sort Method: quicksort  Memory: 25kB
> Buffers: shared hit=8 read=42285
> I/O Timings: read=23599.672
> ->  CTE Scan on "ja_jobs"  (cost=0.00..10.91 rows=1 width=218)
> (actual time=23977.095..24025.325 rows=5 loops=1)
>   Filter: (("title")::"text" ~~* '%To Electrical%'::"text")
>   Rows Removed by Filter: 16315
>   Buffers: shared hit=3 read=42285
>   I/O Timings: read=23599.672
> Total runtime: 24028.551 ms




>
> There have been improvements in this area since 9.2; you should
> consider upgrading to at least 9.4.
>
>
Yep I know. The upgrade will happen, but I don't know when.


Re: [GENERAL] Beta testers for database development tool wanted

2016-05-12 Thread Martijn Tonies (Upscene Productions)

Hello Steve,


I’ll just get at it right away --

We’re developing a database development tool called Database Workbench, 
it currently supports MySQL, InterBase, Firebird, Oracle, SQL Server, 
NexusDB and SQL Anywhere (see 
http://www.upscene.com/database_workbench/ )


Windows only, judging from the screenshots?


Native Windows, but it runs fine in Wine (Platinum status, says the AppDB of 
WineHQ)

https://appdb.winehq.org/objectManager.php?sClass=version=31080


With regards,

Martijn Tonies
Upscene Productions
http://www.upscene.com





Re: [GENERAL] Beta testers for database development tool wanted

2016-05-12 Thread Steve Atkins

> On May 11, 2016, at 11:24 PM, Martijn Tonies (Upscene Productions) 
>  wrote:
> 
> Hello everyone,
>  
> I’ll just get at it right away --
>  
> We’re developing a database development tool called Database Workbench, it 
> currently supports MySQL, InterBase, Firebird, Oracle, SQL Server, NexusDB 
> and SQL Anywhere (see http://www.upscene.com/database_workbench/ )

Windows only, judging from the screenshots?

Cheers,
  Steve

>  
> We’re adding PostgreSQL support and the first beta is ready for testing.
>  
> We would like to have people who:
> - would use this product on a daily basis, like they're using any other 
> PostgreSQL tool (eg PgAdmin) now
> - work with larger databases, both data volume and meta data object count
> - are able to report bugs in a (reasonable) detailed manner
> - are able to discuss new features or enhancements
> - are able to regularly download updates and use them
> - don’t mind being put on a private e-mail list to report issues
> 
> Limitations:
> - stored function overloading currently not supported
> - version 9.1 and up supported
>  
>  
> If any of you is interested in testing this tool, with a free license for 
> the PostgreSQL module and a second module of your choice, drop me an e-mail 
> at m.tonies @ upscene.com
>  
>  
> With regards,
> 
> Martijn Tonies
> Upscene Productions
> http://www.upscene.com





[GENERAL] Beta testers for database development tool wanted

2016-05-12 Thread Martijn Tonies (Upscene Productions)
Hello everyone,

I’ll just get at it right away --

We’re developing a database development tool called Database Workbench, it 
currently supports MySQL, InterBase, Firebird, Oracle, SQL Server, NexusDB and 
SQL Anywhere (see http://www.upscene.com/database_workbench/ )

We’re adding PostgreSQL support and the first beta is ready for testing. 

We would like to have people who:
- would use this product on a daily basis, like they're using any other 
PostgreSQL tool (eg PgAdmin) now
- work with larger databases, both data volume and meta data object count
- are able to report bugs in a (reasonable) detailed manner
- are able to discuss new features or enhancements
- are able to regularly download updates and use them
- don’t mind being put on a private e-mail list to report issues

Limitations:
- stored function overloading currently not supported
- version 9.1 and up supported


If any of you is interested in testing this tool, with a free license for 
the PostgreSQL module and a second module of your choice, drop me an e-mail at 
m.tonies @ upscene.com


With regards,

Martijn Tonies
Upscene Productions
http://www.upscene.com