Re: Semantics around INSERT INTO with SELECT and ORDER BY.

2018-06-12 Thread Ravi Krishna
>Or to put it another way, I want to select values from one table ordered by complex criteria and insert them into another table. I want to be able to retrieve the rows from the target table in the same order they were inserted, but I don't care about the specific ordering criteria. I only care

Re: Semantics around INSERT INTO with SELECT and ORDER BY.

2018-06-12 Thread Ravi Krishna
Why is it even important? Once you use an ORDER BY clause, you are guaranteed to get the rows in that order. Why do you need to know how they were inserted in the first place?
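
A hedged sketch of the workaround usually suggested for this requirement, with made-up table and column names: give the target table an identity column and order reads by it. In practice an INSERT ... SELECT ... ORDER BY feeds rows to the identity column in sorted order, though the SQL standard does not strictly guarantee it.

    -- illustrative names, not from the original post
    CREATE TABLE tgt (
        insert_seq bigint GENERATED ALWAYS AS IDENTITY,  -- records arrival order
        payload    text
    );

    INSERT INTO tgt (payload)
    SELECT payload
    FROM   src
    ORDER  BY some_complex_criteria;

    -- later readers don't need to know the original criteria
    SELECT payload FROM tgt ORDER BY insert_seq;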

PG on AWS RDS and IAM authentication

2018-06-12 Thread Ravi Krishna
As per https://forums.aws.amazon.com/thread.jspa?threadID=258822=0 there was no IAM authentication for PG on AWS RDS. (It is there for MySQL.) However, the link is a year old. Has it changed since then? Can we use IAM authentication for PG? Thanks

Re: PostgreSQL Volume Question

2018-06-14 Thread Ravi Krishna
> Hi, I'm new to the community. Recently, I've been involved in a project that develops a social network data analysis service (and my client's DBMS is based on PostgreSQL). I need to gather a huge volume of unstructured raw data for this project, and the problem is that with

Re: Load data from a csv file without using COPY

2018-06-19 Thread Ravi Krishna
Thanks all for replying. I see that I did not explain my requirement in detail, so let me explain it in detail. 1. Currently we have a legacy app running on DB2/LUW. The application writes to it either via a Java program or via custom ETL scripts using a vendor product. 2. We want to

Load data from a csv file without using COPY

2018-06-19 Thread Ravi Krishna
In order to test a real life scenario (and use it for benchmarking) I want to load a large amount of data from csv files. The requirement is that the load should happen like an application writing to the database (that is, no COPY command). Is there a tool which can do the job? Basically

Re: Load data from a csv file without using COPY

2018-06-19 Thread Ravi Krishna
> > If performance is relevant then your app should probably be using COPY > protocol, not line by line inserts. It's > supported by most postgresql access libraries. If your app does that then > using "\copy" from psql would be > an appropriate benchmark. Actually the reluctance to not use

Re: Load data from a csv file without using COPY

2018-06-19 Thread Ravi Krishna
> I think an easy approach would be to COPY the CSV files into a separate database using psql's \copy command and then pg_dump that as separate insert statements with pg_dump --inserts. This was my first thought too. However, as I understand, pg_dump --inserts basically runs INSERT
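
A rough sketch of the staging approach discussed above, assuming a throwaway database and illustrative names; pg_dump --inserts emits one INSERT per row, which matches the "application writes, no COPY" benchmark requirement.

    -- in a scratch database, via psql (names are illustrative):
    CREATE TABLE staging_t (LIKE target_t INCLUDING ALL);
    \copy staging_t from 'data.csv' with (format csv, header true)

    -- then from the shell, dump the staged rows as plain INSERT statements:
    -- pg_dump --data-only --inserts --table=staging_t scratchdb > inserts.sql
    -- psql -d benchdb -f inserts.sql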

Re: Convert Existing Table to a Partition Table in PG10

2018-06-23 Thread Ravi Krishna
Does this help: http://ashutoshpg.blogspot.com/2018/06/upgrade-your-partitioning-from.html

Two things bit baffling in RDS PG

2018-05-03 Thread Ravi Krishna
I am playing around with RDS PG and I am not able to understand the following: 1. The database name I created via the RDS console is in upper case with no quotes. From the remote machine via psql, if I try to use the lower case db name with the -d option it errors out with "database not found". Works
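
For reference, a hedged illustration of the case-folding behaviour behind this (database names are made up): unquoted identifiers in SQL are folded to lower case, while the name passed to psql -d goes through verbatim and must match the stored name exactly.

    CREATE DATABASE MYDB;      -- unquoted, so this actually creates "mydb"
    CREATE DATABASE "MYDB";    -- quoted, creates "MYDB" and keeps the upper case

    -- psql does no folding on -d, so the name must match exactly:
    --   psql -d mydb    (matches the first)
    --   psql -d MYDB    (matches the second)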

Re: using pg_basebackup for point in time recovery

2018-06-21 Thread Ravi Krishna
> You should avoid top-posting on the Postgres lists, this is not the usual style used by people around :) > Will do, but Yahoo Mail! does not seem to like that, so I am typing the > myself. Same here even though I use Mac mail. But it is not yahoo alone. Most of the web email

Re: PANIC: could not open critical system index 2662

2018-08-06 Thread Ravi Krishna
Just curious, why can't you restore the db from the backup ?

Re: What is the use case for UNLOGGED tables

2018-08-14 Thread Ravi Krishna
> I use them for "ELT" oriented processing where the final results get stored on permanently logged tables but I want to manipulate tables while transforming from the original input. Yeah, I see the use case. Basically raw data -> rolled-up data -> final results in normal tables.

What is the use case for UNLOGGED tables

2018-08-14 Thread Ravi Krishna
I am trying to understand the use case for UNLOGGED tables in PG. I am specifically talking about normal tables which need to be turned into UNLOGGED for a specific purpose like bulk loading because generating WAL logs during the load makes no sense, even when we take into consideration that
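
A minimal sketch of the bulk-load pattern this thread converges on (names illustrative); note that ALTER TABLE ... SET LOGGED rewrites the table and writes it to WAL at that point, so the WAL saving applies only to the load itself.

    CREATE UNLOGGED TABLE stage_big (id bigint, payload text);

    COPY stage_big FROM '/path/to/bulk.csv' WITH (FORMAT csv);  -- rows are not WAL-logged

    -- once the load is validated, make the table crash-safe:
    ALTER TABLE stage_big SET LOGGED;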

Re: [External] Multiple COPY on the same table

2018-08-20 Thread Ravi Krishna
> What is the goal you are trying to achieve here. > To make pgdump/restore faster? > To make replication faster? > To make backup faster ? None of the above.  We got csv files from external vendor which are 880GB in total size, in 44 files.  Some of the large tables had COPY running for

Re: [External] Multiple COPY on the same table

2018-08-20 Thread Ravi Krishna
1. The tables have no indexes at the time of load. 2. The create table and copy are in the same transaction. So I guess that's pretty much it. I understand the long time it takes, as some of the tables have 400+ million rows. Also, the env is a container and since this is currently a POC system,

COPY FROM - to avoid WAL generation

2018-08-21 Thread Ravi Krishna
In a recent thread of mine I learned something very interesting.  If a table is created and data is loaded via COPY FROM within the same transaction, then PG will be smart enough to not generate WAL logs because all it needs to do is to track the status of the transaction and let the data load
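
A hedged sketch of the optimization described here, with illustrative names; it only applies when wal_level = minimal (so no streaming replication or PITR), and the CREATE TABLE and COPY must be in the same transaction.

    -- requires wal_level = minimal in postgresql.conf
    BEGIN;
    CREATE TABLE bulk_t (id bigint, payload text);
    COPY bulk_t FROM '/path/to/bulk.csv' WITH (FORMAT csv);
    COMMIT;  -- only the commit record needs WAL; the data files are synced directly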

[no subject]

2018-08-20 Thread Ravi Krishna
Can I split a large file into multiple files and then run copy using each file? The table does not contain any serial or sequence column which may need serialization. Let us say I split a large file into 4 files. Will the performance boost be close to 4x?

Multiple COPY on the same table

2018-08-20 Thread Ravi Krishna
Can I split a large file into multiple files and then run copy using each file? The table does not contain any serial or sequence column which may need serialization. Let us say I split a large file into 4 files. Will the performance boost be close to 4x? ps: Pls ignore my previous post which
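
A sketch of the split-and-load idea, assuming GNU split and illustrative names; each psql session runs its own COPY against the same table, but the speedup is usually well short of 4x because of WAL, I/O and lock contention.

    $ split -n l/4 big.csv part_        # produces part_aa .. part_ad
    $ for f in part_a?; do
          psql -d mydb -c "\copy big_table from '$f' with (format csv)" &
      done; wait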

Re: pg_sample

2018-08-24 Thread Ravi Krishna
> > sir have taken pg_sample > Now i want to run pg_sample with credential but i'm getting this error > > Can't locate DBI.pm in @INC (@INC contains: /usr/local/lib64/perl5 > /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl > /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5

Re: CTE with JOIN of two tables is much faster than a regular query

2018-08-18 Thread Ravi Krishna
> What can I do to improve the performance of the regular query without using a CTE? Why do you care? When I find that I can write a SQL statement 3 different ways, I will go for the most efficient one. So why not accept the CTE version of this SQL? Just curious.

Re: Can Pg somehow recognize/honor linux groups to control user access ?

2018-08-22 Thread Ravi Krishna
AFAIK PG does not support it, as yet. IMO this should be implemented as a priority.

Re: COPY FROM - to avoid WAL generation

2018-08-21 Thread Ravi Krishna
>Please note this is only the case if wal_level = minimal. If replication >(or PITR) is supported, that mode can't be used, because the data has to >go into the WAL. >Were you using wal_level = minimal? Aha. No it was not minimal. For a second I thought PG is super smart. Oh well.  Thanks.  

Re: Do we need yet another IDE (SQL development assistant) for PostgreSQL?

2018-07-15 Thread Ravi Krishna
What would this new IDE offer which a product like dbeaver does not have. --Sent from phone.

Re: Do we need yet another IDE (SQL development assistant) for PostgreSQL?

2018-07-15 Thread Ravi Krishna
shop and we love it. On Sun, Jul 15, 2018 at 4:22 PM, Dmitry Igrishin wrote: > > > вс, 15 июл. 2018 г. в 23:05, Ravi Krishna : > >> What would this new IDE offer which a product like dbeaver does not have. >> > AFAIK, DBeaver: > - covers many DBMS (I want to fo

PG backup check

2018-07-16 Thread Ravi Krishna
Not sure I am following this. Did Google release this because PG backups are not 100% reliable or the data corruption can occur due to hardware failure.

Disabling/Enabling index before bulk loading

2018-07-10 Thread Ravi Krishna
We recently did a test on COPY and found that on large tables (47 million rows , 20GB of raw data) the difference in COPY with 16 indexes and COPY without any index is 1:14. That is, COPY is 14 times slower when data is ingested with all indexes as opposed to COPY first without index and

Re: Disabling/Enabling index before bulk loading

2018-07-10 Thread Ravi Krishna
> Did you include the time to CREATE INDEX after the COPY or is the 1:14 only for the COPY stage? Yes. Time taken to load 47 mil rows with all 16 indexes intact: 14+ hrs. Time taken to load the same after dropping the indexes, then loading, and finally creating the 16 indexes: 1 hr 40 min.
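
A rough outline of the drop/load/recreate sequence that produced the 1:14 ratio above, assuming the index definitions are captured first (names illustrative).

    -- capture the definitions before dropping anything:
    SELECT indexdef FROM pg_indexes WHERE tablename = 'big_table';

    DROP INDEX big_table_col1_idx;      -- repeat for each of the 16 indexes
    COPY big_table FROM '/path/to/big.csv' WITH (FORMAT csv);
    CREATE INDEX big_table_col1_idx ON big_table (col1);  -- re-run each saved definition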

Re: Disabling/Enabling index before bulk loading

2018-07-10 Thread Ravi Krishna
> > > https://fle.github.io/temporarily-disable-all-indexes-of-a-postgresql-table.html > > > This does not work in RDS. In order to update system catalog tables (pg_index), one needs privileges which is

Re: Disabling/Enabling index before bulk loading

2018-07-10 Thread Ravi Krishna
> I am very suspicious of why you need 16 indexes. Are you sure all those indexes are actually being utilized? Try executing the attached query. You may find some are really not needed. This is a DATAMART application and the indexes are to satisfy a large number of queries

Re: Extremely slow autovacuum:vacuum to prevent wraparound

2018-07-11 Thread Ravi Krishna
> Setting it that high and disabling autovacuum isn’t just silly - it borders > on sabotage! LOL. My thoughts too. Perhaps some disgruntled employee's parting shot before quitting :-)

Re: Using CTE vs temporary tables

2018-07-11 Thread Ravi Krishna
Do temp tables also suffer from the optimization fence we see in CTEs?

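A small sketch of the temp-table alternative on the versions discussed here (before PG 12, a CTE is always an optimization fence): unlike a CTE, the temp table can be ANALYZEd so the planner gets real statistics. Names are illustrative.

    CREATE TEMP TABLE recent_orders AS
    SELECT order_id, customer_id
    FROM   orders
    WHERE  created_at > now() - interval '7 days';
    ANALYZE recent_orders;                 -- planner now has row counts and stats

    SELECT c.name, count(*)
    FROM   recent_orders r
    JOIN   customers c USING (customer_id)
    GROUP  BY c.name;
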
Re: Open Source tool to deploy/promote PostgreSQL DDL

2018-07-11 Thread Ravi Krishna
> Where I work, the requirement to have rollback scripts is part of the ITIL requirement for Changes to have a backout procedure. Liquibase provides that ability, but IMO rollback for an RDBMS is always a bit tricky. Certain DDL operations can take a long time if they involve a table rewrite.

FDW with DB2

2018-04-06 Thread Ravi Krishna
Has anyone used PG with DB2(Linux) ?

Re: FDW with DB2

2018-04-06 Thread Ravi Krishna
> The CartoDB ODBC driver works quite well. I've used it to move a large amount of data from DB2 and Netezza databases. Hello Steven, will it be OK if I or my team reach out to you for any guidance/help?

Re: FDW with DB2

2018-04-06 Thread Ravi Krishna
Yes, of course I respect your time. Regards. > Please do not contact me personally, as I'm too busy to provide technical support on a private basis. Thanks for your understanding!

Re: Rationale for aversion to the central database?

2018-04-08 Thread Ravi Krishna
1. With a microservice based architecture these days, it is difficult to justify putting all logic in a central database, as you can only scale up in a database. Business logic in things like Spark can make a claim for a scale-out solution. 2. All RDBMS have a non-portable stored proc language,

Re: Rationale for aversion to the central database?

2018-04-08 Thread Ravi Krishna
> I am however very comfortable with using psql and PL/pgSQL and I am very opinionated. Nothing wrong with this approach and it may very well work 90% of the time. Until ... a day comes when you need to migrate out of PG to another RDBMS. Good luck at that time.

PG security alerts

2018-09-27 Thread Ravi Krishna
Hi. Is there a place to get all PG related security alerts? I saw this on an IBM site: https://www-01.ibm.com/support/docview.wss?uid=ibm10730491 which points to this: http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-10915 From the looks of it, it seems to be a generic

COPY from a remote machine in Datastage

2018-10-05 Thread Ravi Krishna
We are doing a POC of using Datastage with PG using ODBC. Problem to solve: how to load a large CSV file using the COPY command. The file is on the client machine. A typical SQL syntax of a copy coming from a remote machine: COPY TABLE FROM STDIN WITH CSV HEADER. Question is, how to make the
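
A hedged sketch of the two client-side routes mentioned later in the thread, with made-up names: psql's \copy reads the file on the client and streams it as COPY FROM STDIN, and a plain COPY ... FROM STDIN can be fed from a pipe.

    $ psql -h dbhost -d mydb \
          -c "\copy target_t from 'local.csv' with (format csv, header true)"

    # equivalent without \copy, feeding the server-side COPY from the client's stdin:
    $ cat local.csv | psql -h dbhost -d mydb \
          -c "COPY target_t FROM STDIN WITH (FORMAT csv, HEADER true)"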

Re: COPY from a remote machine in Datastage

2018-10-05 Thread Ravi Krishna
> > Hello, if you need to use COPY command from remote machine and you use some > libpq bindings (aka ruby pg gem for example), you can use functions > associated with COPY command > (https://www.postgresql.org/docs/10/static/libpq-copy.html >

Re: COPY from a remote machine in Datastage

2018-10-05 Thread Ravi Krishna
> > Can you install the postgres client software (psql) on the client machine and > then have Datastage spawn "psql -c 'COPY ...'"? That is already an option for us :-)

Re: COPY threads

2018-10-11 Thread Ravi Krishna
>> Thank you. Let me test it and see the benefit. We have a use case for this. Well, the result is not what I expected. This is the sql I used: copy TEST.TABLE1 from '/var/lib/postgresql/data/rkrishna/copytesting/xaa' with delimiter '|' NULL as '' CSV HEADER; From another session copy

Re: COPY threads

2018-10-11 Thread Ravi Krishna
> > Well the result is not what I expected. > > this is the sql I used > > copy TEST.TABLE1 from '/var/lib/postgresql/data/rkrishna/copytesting/xaa' > with delimiter '|' NULL as '' CSV HEADER; > > From another session > > copy TEST.TABLE1 from

Re: Slot issues

2018-10-14 Thread Ravi Krishna
The best part in Db2 HADR is that when the standby is catching up with the master after a long time, it will start from the last LSN and fetch it from the primary WAL directory (active logs ). If not found, it will look for it in the archived logs and start applying from there until the current

Re: Slot issues

2018-10-14 Thread Ravi Krishna
When I read all such posts related to replication I realize how backward PG's replication architecture is, especially when compared to DB2. This is how it is done in Db2 to set up replication. 1. Take a full backup on the primary. 2. Restore the backup on the other machine (aka standby). 3.

Re: Slot issues

2018-10-14 Thread Ravi Krishna
> > You're not forced to use slots. Their purpose is to allow to force the > primary to keep necessary resources around. Which also allows to get rid > of the archive in some setups. Thanks. Disclaimer: We don't use replication as we piggy back on AWS HA. The reason why I posted this is

Re: postgres server process crashes when using odbc_fdw

2018-10-17 Thread Ravi Krishna
> > Please note that odbc_fdw is not maintained by the postgresql developers, but > a separate project. Translation: You are on your own. We are hoping this will make our migration out of DB2 quicker. Oh well.

Re: postgres server process crashes when using odbc_fdw

2018-10-17 Thread Ravi Krishna
It turned out that enabling ODBC trace was causing PG to crash. Once disabled it started working, but I found another issue. All object names in DB2 are assumed to be upper case. odbc_fdw sends queries like this: select "fld1","fld2" from "schema_name"."table_name". So the foreign table in PG

Re: postgres server process crashes when using odbc_fdw

2018-10-17 Thread Ravi Krishna
> > Come on. We can't realistically support & debug random postgres extending > projects, nor do we have control over them. And you're not necessarily on > your own, you could report the issue to odbcfdw's authors/github tracker. Or > pay a company for support. > On a related note is fdw

postgres server process crashes when using odbc_fdw

2018-10-16 Thread Ravi Krishna
Version 10.5, AWS Linux. Foreign server: Db2 on Linux 10.5.9, carto_odbc, unix_odbc. I am able to build the odbc_fdw extension, register the extension and also create a foreign server. I also created a foreign table. When I run the sql 'select * from odbctest', postgres crashes and generates a core file.

Re: postgres server process crashes when using odbc_fdw

2018-10-16 Thread Ravi Krishna
I enabled ODBC trace. Attached is the trace log. I had to edit connection details and snipped a few repetitive lines as I ran the same sql multiple times. Connection Out [DSN=BLUE;UID=*;PWD=***;DATABASE=***;HOST...] [ODBC][4248][1539734369.611042][SQLGetInfo.c][554] Entry:

GIN Index for low cardinality

2018-10-17 Thread Ravi Krishna
In https://www.cybertec-postgresql.com/en/ideas-for-scaling-postgresql-to-multi-terabyte-and-beyond/ it is mentioned: "GIN, the most know non-default index type perhaps, has been actually around for ages (full-text search) and in short is perfect for indexing columns where there are lot of
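
For a plain scalar column like the low-cardinality case being asked about, GIN needs the btree_gin extension; a minimal, illustrative sketch of why it stays small: GIN stores each distinct key once with a compressed list of row pointers.

    CREATE EXTENSION IF NOT EXISTS btree_gin;

    -- "orders"/"status" are illustrative names
    CREATE INDEX orders_status_gin ON orders USING gin (status);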

Re: Oracle vs PG

2018-10-23 Thread Ravi Krishna
> > Since the article was almost content-free I not would use it on either side > of the argument. The only thing I pulled from it was Amazon changed databases > and hit the learning curve. That will happen in either direction. I agree but this is the key: "Savepoints are an important

Re: Oracle vs PG

2018-10-23 Thread Ravi Krishna
> > Is it so hard to accept commercial databases have advantages? > I find that not one bit surprising. > > I've used PG since 90's and it's no secret the "big guys" beat PG on certain > workloads. > In my previous workplace where they tested EDB to replace PG, they found all PL/SQL based

Re: Oracle vs PG

2018-10-23 Thread Ravi Krishna
e to handle the pressure, slowing down the overall database performance, the report said." > > Again, pretty much content-free. For all you know some application was > creating savepoints, needlessly: > > https://www.postgresql.org/docs/10/static/sql-savepoint.html > > and not cleaning up

Re: GIN Index for low cardinality

2018-10-25 Thread Ravi Krishna
>>Does it mean that GIN is a very good choice for low cardinality columns? >Not necessarily. There are other indexes which also don't keep the column value in every leaf. Hash, for instance. Well, I asked about GIN's usefulness for low cardinality. Good to know that Hash can also be considered.

Re: Compile and build portable postgresql for mac

2018-10-25 Thread Ravi Krishna
Your best bet on mac is to use docker. On Thursday, October 25, 2018 Pratik Parikh wrote: Thanks, I'll check them out. But what I am trying to produce is a zip distribution for Mac similar to the one available on the postgresql download site. Homebrew works but it ties the libpg to prefix

Re: Truncation of UNLOGGED tables upon restart.

2018-11-01 Thread Ravi Krishna
> There is no such thing as a "read only" table in PostgreSQL.  All tables are > read/write no matter that frequency of either event.  There is nothing > > inherently special about "no writes for 4 days" and "no writes for 10 > seconds" that would allow for a distinction to be made.  There

Truncation of UNLOGGED tables upon restart.

2018-11-01 Thread Ravi Krishna
Per the documentation, unlogged tables are not crash safe and PG will truncate them when it restarts after a crash. Does this apply even to read-only unlogged tables? For example: On Monday I load data into unlogged tables. Then from Tue onwards the table is only read by the application. On Fri morning

Copy data from DB2 (Linux) to PG

2018-11-01 Thread Ravi Krishna
I have a project to develop a script/tool to copy data from DB2 to PG. The approach I am thinking of is: 1. Export data from db2 into a text file, with, say, pipe as the delimiter. 2. Load the data from the text file into PG using the COPY command. In order to make it faster I can parallelize export and load
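
A rough sketch of the pipe-delimited round trip being proposed; the DB2 EXPORT syntax is from memory and should be checked against the DB2 LUW docs, and all paths and names are illustrative.

    # DB2 side: pipe-delimited text export
    $ db2 "EXPORT TO /stage/customers.del OF DEL MODIFIED BY COLDEL| SELECT * FROM myschema.customers"

    # PG side: client-side load of the same file
    $ psql -d mydb -c "\copy myschema.customers from '/stage/customers.del' with (format csv, delimiter '|')"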

Re: Copy data from DB2 (Linux) to PG

2018-11-01 Thread Ravi Krishna
> I've never used it, but there is this in case it's helpful: > https://github.com/dalibo/db2topg/ I looked into it. I thought it was a schema converter plus a data load. In other words, it is one of those one-time migration scripts. What I need is a constant refresh. We plan to use it daily

Re: Oracle vs PG

2018-10-23 Thread Ravi Krishna
> Again, pretty much content-free. For all you know some application was > creating savepoints, needlessly: > https://www.postgresql.org/docs/10/static/sql-savepoint.html I have hardly used savepoints in any application, but if I understand it correctly, isn't it something which is typically

Re: Oracle vs PG

2018-10-23 Thread Ravi Krishna
> > Amazon's web store may be a (mostly) stateless application, that doesn't mean > their back end applications are. > Oh yes. There is nothing in that article which suggests that the root cause of the outage was in the web based apps. As you indicated, their back end may be the source of

Oracle vs PG

2018-10-23 Thread Ravi Krishna
Well it is Aurora. https://www.cnbc.com/2018/10/23/amazon-move-off-oracle-caused-prime-day-outage-in-warehouse.html

Re: why select count(*) consumes wal logs

2018-11-06 Thread Ravi Krishna
>That represents setting the yes-this-row-is-committed hint bits on the >newly loaded rows.  The first access to any such row will set that bit, >whether it's a select or a VACUUM or whatever. yes now I recollect reading this in a blog. Thanks Tom.

why select count(*) consumes wal logs

2018-11-06 Thread Ravi Krishna
PG 10.5. I loaded 133 million rows into a wide table (more than 100 cols) via COPY. The table has no index at this time. Since I am the only user I don't see any other activity. Now when I run select count(*) on the table where I just loaded data, it runs forever, more than 10 min, and still

Re: why select count(*) consumes wal logs

2018-11-06 Thread Ravi Krishna
Must be something to do with Vacuum, as the second time I ran the SQL it did not consume WAL logs.
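
A hedged illustration of the hint-bit effect described in this thread: the first scan after a bulk load sets the "committed" hint bits (and, with checksums or wal_log_hints on, those page changes are WAL-logged), so vacuuming right after the load pays that cost once, deliberately.

    COPY wide_t FROM '/path/to/wide.csv' WITH (FORMAT csv);   -- illustrative names

    VACUUM (ANALYZE) wide_t;      -- sets hint bits and the visibility map now

    SELECT count(*) FROM wide_t;  -- later scans no longer dirty pages or generate WAL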

Re: Plpgsql search_path issue going from 9.3 to 9.6

2018-11-13 Thread Ravi Krishna
> I apologize for top posting, Google hid all of the other stuff. Is it only me who thinks that when it comes to destroying email as a communication tool, no one did a better job than effing gmail?

Re: BDR and PostgreSQL 12 and Windows support

2018-11-16 Thread Ravi Krishna
> > Andrew Smith schrieb am 16.11.2018 um 11:01: >> Are there any core features at the >> moment that are Linux only? > > JIT, introduced in Postgres 11, comes to mind > A better question should be, are there any production users of PG on Windows :-)

Re: why select count(*) consumes wal logs

2018-11-07 Thread Ravi Krishna
> As long as you don't have page checksums turned on, you can prevent this by turning off wal_log_hints. I did not run initdb. How do I find out which parameters were used with initdb? For page checksums to be on, it must have been run with the -k option. Our wal_log_hints is left at default

Re: why select count(*) consumes wal logs

2018-11-07 Thread Ravi Krishna
> select data_page_checksum_version from pg_control_init() returned 1. So we have page_checksum turned on, and wal_log_hints off.
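
Spelled out, the two checks used above (pg_control_init() is available from 9.6 onward):

    SELECT data_page_checksum_version FROM pg_control_init();  -- non-zero means checksums are on
    SHOW wal_log_hints;                                        -- 'off' is the default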

Re: Impact on PostgreSQL due to Redhat acquisition by IBM

2018-11-14 Thread Ravi Krishna
Well, your information needs some update. - On AIX, IBM had no issues selling Oracle, a rival to DB2. - IBM Global Services, a consulting unit, was the single biggest sales force for Oracle installations outside Oracle. In other words, they ended up using Oracle for projects done by IBM GS more

Re: Copy data from DB2 (Linux) to PG

2018-11-11 Thread Ravi Krishna
>Haven't tried it myself, but you may be able to connect the DB2 database >to your PostgreSQL cluster using this FDW module: >https://github.com/wolfgangbrandl/db2_fdw >Looks like db2_fdw is DB2 LUW only though, so you might be out of luck >if your DB2 is on IBM i (or z ;-) As the thread

Re: COPY threads

2018-10-10 Thread Ravi Krishna
> > No, why would that seem to be the case? If it did so, then you could > not run pg_dump to dump data while regular activity was going on. Not sure. In fact I am now confused. I am talking about pg_restore which is COPY FROM, which takes exclusive lock on the table while the data is loaded

Re: COPY threads

2018-10-10 Thread Ravi Krishna
> > You obviously can just copy the data into postgres over multiple > connections if you need to speed COPY up. But that requires splitting up > the data on the clientside. > You obviously are referring to multiple connections running COPY on different tables, right? Like what pg_restore

Re: COPY threads

2018-10-10 Thread Ravi Krishna
Thank you. Let me test it and see the benefit. We have a use case for this. > On Oct 10, 2018, at 17:18 , Andres Freund wrote: > > > > On October 10, 2018 2:15:19 PM PDT, Ravi Krishna wrote: >>> >>> pg_restore doesn't take locks on the table for the COPY

Re: COPY threads

2018-10-10 Thread Ravi Krishna
> > pg_restore doesn't take locks on the table for the COPY, it does so > because creating the table takes an exclusive lock. Interesting. I seem to recollect reading here that I can't have concurrent COPY on the same table because of the lock. To give an example: If I have a large file with

Re: WARNING: could not flush dirty data: Function not implemented

2018-09-02 Thread Ravi Krishna
> If this is on Ubuntu, I don't understand why you're talking about Windows. Because I am using Ubuntu Bash on Windows, which requires WSL (Windows Subsystem for Linux). I also have the necessary build version of Windows which supports Ubuntu Bash.

WARNING: could not flush dirty data: Function not implemented

2018-09-02 Thread Ravi Krishna
Ubuntu 18.04 as Windows bash. Distributor ID: Ubuntu, Description: Ubuntu 18.04.1 LTS, Release: 18.04, Codename: bionic. PG 10.5.1. postgres@ravi-lenovo:~$ psql -d postgres psql (10.5 (Ubuntu 10.5-1.pgdg16.04+1)) A CREATE DATABASE statement spewed out WARNING: could not flush

Re: WARNING: could not flush dirty data: Function not implemented

2018-09-03 Thread Ravi Krishna
> Whee ... so you get to cope with all the bugs/idiosyncrasies of three operating system layers, not just one. I concur that running Postgres in the underlying Windows O/S is probably a much better idea. Me too, but this is purely for learning and I am much more used to the Linux stack than

Re: WARNING: could not flush dirty data: Function not implemented

2018-09-03 Thread Ravi Krishna
>That means that the linux emulation by microsoft isn't good enough. You can work around it by setting checkpoint_flush_after=0 and wal_writer_flush_after=0. bgwriter_flush_after = 0  # measured in pages, 0 disables. backend_flush_after = 0  # measured in pages, 0
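
Collected in one place, the postgresql.conf settings mentioned in this thread as the workaround for the flush call that WSL does not implement:

    # postgresql.conf
    checkpoint_flush_after = 0
    wal_writer_flush_after = 0
    bgwriter_flush_after   = 0
    backend_flush_after    = 0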

Re: New behavior with JDBC 42.2.5

2018-09-21 Thread Ravi Krishna
> First, are you doing a socket connection or a host connection? Socket connections ignore sslmode. The URL template of JDBC used by dbeaver is jdbc:postgresql://{host}[:{port}]/[{database}] From the manual: "The host component is interpreted as described for the parameter host. In particular,

Re: New behavior with JDBC 42.2.5

2018-09-21 Thread Ravi Krishna
Just to clarify: With JDBC 42.1.4 attempt to connect from dbeaver without SSL results in the following error: "pg_hba.conf rejected the connection: SSL off" This is what we expect. With 42.2.5, the connection succeeds without SSL.

Re: *Regarding brin_index on required column of the table

2018-09-21 Thread Ravi Krishna
> i can see a lot of max(string-field) (for instance, LastName, > MiddleName, FirstName). > wild guess: completely broken design, but i don't know your application > and use-case for that. > again, as i said already, i think this is a case for an in-deep > consultation. My thoughts exactly.

New behavior with JDBC 42.2.5

2018-09-21 Thread Ravi Krishna
We recently upgraded our JDBC driver to 42.2.5 after seeing this https://www.postgresql.org/about/news/1883/ All of our PG databases mandates SSL connections. So the first line in pg_hba.conf is hostnossl all all all reject We use dbeaver and while setting up connection we check box SSL
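
A hedged sketch of the two pieces involved (host and database names are made up): the pg_hba.conf rule quoted above, plus a JDBC URL that states the SSL requirement explicitly instead of relying on the driver's default behaviour.

    # pg_hba.conf: reject anything that is not SSL
    hostnossl  all  all  all  reject
    hostssl    all  all  all  md5

    # JDBC URL; sslmode=require refuses to fall back to a non-SSL connection
    jdbc:postgresql://dbhost:5432/mydb?sslmode=require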

Re: Why my query not using index to sort?

2018-09-28 Thread Ravi Krishna
> Is there any way I can improve the sorting so that it can use the index? Are you asking why PG is not simply reading the data from the index (which is already in sorted order)?

Re: Question about unlogged to logged conversion

2018-12-27 Thread Ravi Krishna
On Thu, Dec 27, 2018, at 5:23 PM, Bhavin Gandhi wrote: > Hello, > I'm trying to understand for a given unlogged table of a specific size > and # of rows, if I do "alter table" on it to convert it to logged > table, is there a performance difference between 9.5, 9.6 and 10? in > other words are

Re: Read consistency when using synchronous_commit=off

2019-01-15 Thread Ravi Krishna
> I m not sure other clients are able to read from WAL buffer, therefore > i m not sure the data is available to other clients at that specific > point in time. No. On the standby the buffer cache has to be populated with the updates before other client sessions can read it. AFAIK other client

Re: the installation of pgadmin4 makes me weep in frustration

2019-01-14 Thread Ravi Krishna
> > pgadmin 4 is nothing to do with the PostgreSQL project itself, it's just a > third party client. > > There are many other third-party clients listed here - > https://wiki.postgresql.org/wiki/PostgreSQL_Clients - > most of them probably better than pgadmin4. Agreed. I use dbeaver and it

Re: Read consistency when using synchronous_commit=off

2019-01-15 Thread Ravi Krishna
Sorry I misunderstood. The term "read consistency" is generally used either in the context of isolation level or in the context of slaves. > We don't have standby instance, as I have mentioned we are using just > one instance of postgres serving local clients running on the same > machine, do you

explain analyze cost

2018-12-12 Thread Ravi Krishna
I am running explain analyze cost on a SQL which reads from two large tables (122 mil and 37 mil rows). The query is an UPDATE SQL where we use a derived table in the FROM clause and then join it back to the table being updated. The explain analyze cost itself is taking forever to run. It is running for
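
One caveat worth noting with this kind of measurement, shown as a hedged sketch with illustrative names: EXPLAIN ANALYZE of an UPDATE really executes the UPDATE, so it runs as long as the statement itself and is best wrapped in a transaction that rolls back.

    BEGIN;
    EXPLAIN (ANALYZE, BUFFERS)
    UPDATE big_t b
    SET    val = d.max_val
    FROM   (SELECT id, max(val) AS max_val FROM detail_t GROUP BY id) d
    WHERE  b.id = d.id;
    ROLLBACK;   -- the rows are not left modified, but the time was still spent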

Re: explain analyze cost

2018-12-12 Thread Ravi Krishna
> Please do not hijack other threads by replying to a message and changing the subject. Just send a new mail to pgsql-general@lists.postgresql.org, or whatever list you want to send an email to. I am truly sorry and this will not be repeated. I was just lazy. I guess this would break

Re: date_trunc not immutable

2018-12-15 Thread Ravi Krishna
Thanks all.  I forgot the TZ part.

date_trunc not immutable

2018-12-15 Thread Ravi Krishna
Version: PG 10.6 on AWS Linux. I am trying to create an index on the function date_trunc('month', timestamp). PG is complaining that the function must be marked as IMMUTABLE. So I assume that date_trunc is not marked as immutable. Definition of immutable from the PG documentation
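
A minimal illustration of the timezone point resolved in this thread: date_trunc over timestamp (without time zone) is immutable and can be indexed directly, while over timestamptz it is only stable because the result depends on the session TimeZone; pinning a zone makes the expression indexable. Table and column names are illustrative.

    -- works: created_ts is timestamp (no time zone)
    CREATE INDEX events_month_idx ON events (date_trunc('month', created_ts));

    -- for a timestamptz column, pin the zone so the expression is immutable
    CREATE INDEX events_month_utc_idx
        ON events (date_trunc('month', created_at AT TIME ZONE 'UTC'));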

Re: Amazon Aurora

2018-12-20 Thread Ravi Krishna
Glen, I think your question can be posted here for a better response: https://forums.aws.amazon.com/forum.jspa?forumID=227 Original Message On Thu, Dec 20, 2018, at 3:57 PM, Glenn Schultz wrote: > > I have a Postgres database of about 1.5 terabytes on amazon aurora.

Re: Limitting full join to one match

2018-12-06 Thread Ravi Krishna
> Yes, it is becoming increasingly difficult to persuade gmail etc. that> you > are not a spammer if you run your own mail server. If you > have any> interesting headers suggesting exactly what they disliked about my > message,> could you please forward them off-list? Thanks. > > It is for

Re: Decrease time needed to CREATE INDEX and FOREIGN KEY on new table column which has all values NULL

2019-01-22 Thread Ravi Krishna
> The table is huge and it takes a lot of time to add the INDEX and the FOREIGN KEY although all values are NULL. > Considering that the new DepartmentId column is NULL for all rows at this point, is there a way to make the INDEX and FOREIGN KEY creation run faster? In your script to
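
A hedged sketch of the usual low-lock sequence for adding an index and foreign key to a big table (whether it fits here depends on the script the poster was asked about): build the index concurrently, add the constraint as NOT VALID, then validate it separately.

    CREATE INDEX CONCURRENTLY employee_department_idx ON employee (departmentid);

    ALTER TABLE employee
        ADD CONSTRAINT employee_department_fk
        FOREIGN KEY (departmentid) REFERENCES department (id) NOT VALID;

    -- validation scans the table but takes only a weak lock
    ALTER TABLE employee VALIDATE CONSTRAINT employee_department_fk;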

Re: WSL (windows subsystem on linux) users will need to turn fsync off as of 11.2

2019-02-17 Thread Ravi Krishna
If this one appears in the list, then it means the problem is with AOL.

Re: WSL (windows subsystem on linux) users will need to turn fsync off as of 11.2

2019-02-18 Thread Ravi Krishna
Are there any plans to support PG on WSL? Just curious.
