Re: [GENERAL] mild modification to pg_dump
On 11/17/2017 12:19 PM, marcelo wrote:
> Sorry, I was not exact. I don't need nor want to change pg_dump. Rather, based on the pg_dump code, I need to develop a daemon which can receive a TCP message (from a privileged app) containing some elements: the database to dump, the user under which to do it, and his password. (My apps are using that same data, of course, encrypted from the common users.)

I would just fork pg_dump to do the actual dump rather than try to incorporate its source code into your app.

-- john r pierce, recycling bits in santa cruz

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
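A daemon that forks pg_dump might assemble the invocation like this — a minimal sketch, not the poster's actual code; database name, user, and output path are placeholders, and the password is passed via the PGPASSWORD environment variable so it never appears on the command line:

```python
import os
import subprocess

def build_dump_command(dbname, user, outfile):
    """Assemble a pg_dump invocation in custom format (-Fc)."""
    return ["pg_dump", "-Fc", "-U", user, "-f", outfile, dbname]

def run_dump(dbname, user, password, outfile):
    # Pass the password through the environment rather than argv,
    # so it does not show up in `ps` output.
    env = dict(os.environ, PGPASSWORD=password)
    return subprocess.run(build_dump_command(dbname, user, outfile),
                          env=env, check=True)
```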
Re: [GENERAL] Setting up replication slave on remote high latency host
On 11/15/2017 6:02 PM, Rory Falloon wrote:
> Right now I am trying to dump the database, gzip it, move it across, and import it into the new slave (which is configured as a master to perform the initial setup). Ideally I would do this dump, move, and import during a period of inactivity on the master, so the new server will come up and immediately be able to catch up on replication due to the lack of activity. However, I have been importing the current db as a test, and after 90 minutes it seems to have only got 2/3 of the way through. I am not confident this will work, but it seems like the most efficient way to start.

You can't use pg_dump to create a slave, as the result won't have the same timeline. I would use pg_basebackup; but in general, streaming replication over a high-latency, erratic link will never work very well.
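Creating the standby with pg_basebackup looks roughly like this — a sketch only; the host name, replication user, and data directory are placeholders, and the primary must already permit replication connections in pg_hba.conf:

```shell
# Run on the standby, as the postgres user, against an empty data directory.
# -X stream ships WAL during the base backup so the copy is self-consistent;
# -R writes a recovery.conf pointing back at the primary (pre-PG12 behavior).
pg_basebackup -h primary.example.com -U replicator \
    -D /var/lib/pgsql/9.6/data -X stream -R -P
```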
Re: [GENERAL] jsonb
On 11/14/2017 2:30 PM, hmidi slim wrote:
> I'm trying to use the to_jsonb function to create the name: to_jsonb('{"key1":"' || 'text1' || '","key2":"' || 'text2' || '"}'); But after that I used the Objection.js ORM to get data using the query: Product.query().where('id', 1).then(prod => {console.log(prod)}). I think the problem may be with the usage of the to_jsonb function; maybe I missed something. When I fetch the data with the ORM, I found that the type was a string and not jsonb.

Never heard of your ORM... does it even know what postgres jsonb is? Do you know what actual SQL query that piece of ORMism generates?
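The reported behavior is also consistent with how to_jsonb treats a text argument: it wraps the whole text as a single JSON string scalar rather than parsing it. Casting to jsonb, or building the object with jsonb_build_object, gives a real object:

```sql
-- Wraps the text as one JSON string scalar, e.g. "\"{\"key1\":...}\"":
SELECT to_jsonb('{"key1":"text1","key2":"text2"}'::text);

-- Parses the text into a jsonb object:
SELECT '{"key1":"text1","key2":"text2"}'::jsonb;

-- Builds the object without string concatenation at all:
SELECT jsonb_build_object('key1', 'text1', 'key2', 'text2');
```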
Re: [GENERAL] Migrating plattaform
On 11/8/2017 11:38 AM, Valdir Kageyama wrote:
> I need to migrate Postgres from Linux on IBM Power to Oracle Linux on SPARC. My doubt: is it possible to copy the datafiles to the new environment, or do I need to use other means of copying the data, for example pg_dump/pg_restore?

Pretty sure you can't copy binary database files between architectures, as various data structures have different binary representations. Sure, pg_dump -Fc | pg_restore — that works fine across architectures.
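The piped form can move a database in one step — host and database names here are placeholders; the custom-format (-Fc) archive is architecture-independent:

```shell
# On the old host: dump in custom format and restore directly into the
# new server over the network, without an intermediate file.
pg_dump -Fc mydb | pg_restore -h newhost.example.com -U postgres -d mydb
```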
Re: [GENERAL] OpenSSL - PostgreSQL
On 11/9/2017 1:59 PM, chiru r wrote:
> How do I configure PostgreSQL to allow specific cipher suites from different client applications?

See https://www.postgresql.org/docs/9.5/static/ssl-tcp.html
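The cipher restriction itself is a server-side setting in postgresql.conf — a minimal fragment; the cipher string shown is the 9.x default and is handed straight to OpenSSL's cipher-list parser:

```
# postgresql.conf
ssl = on
ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL'   # OpenSSL cipher list syntax
```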
Re: [GENERAL] database size changed after restoring using pg_restore
On 11/8/2017 2:59 PM, Dylan Luong wrote:
> Hi. I am upgrading some databases from a PostgreSQL 9.2 server to a new PostgreSQL 9.6 server. I used pg_dump and pg_restore for the upgrade. But when I compared the sizes of the databases (postgres=# \l+) between 9.2 and the upgraded 9.6, they were different: on 9.2 it was 3776 MB, on 9.6 it was 1570 MB. I also did a few more databases using the same steps, and they all came out smaller. Is that normal?

Yes. A freshly restored database will be all contiguous, with no embedded free tuples left over from earlier operations. Databases that get updates (or inserts/deletes) tend to bloat.
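The sizes can be compared directly with the size functions — 'mydb' is a placeholder:

```sql
-- Logical size of one database, human-readable:
SELECT pg_size_pretty(pg_database_size('mydb'));

-- The largest relations (table plus its indexes and TOAST), which is
-- where restore-time shrinkage from removed bloat usually shows up:
SELECT relname, pg_size_pretty(pg_total_relation_size(oid))
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```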
Re: [GENERAL] Why is PostgreSQL compiled on old versions of OS?
On 11/8/2017 11:28 AM, DrakoRod wrote:
> Which is the best OS version to compile against, with the goal of building binaries that are "standard" or "more compatible"?

That's very difficult, because library versions change between major distro releases. Stuff compiled for RHEL6/CentOS6 will run on RHEL7/CentOS7 if you install the corresponding -compat libraries, but that's about as good as it gets.
Re: [GENERAL] Incremental refresh - Materialized view
On 11/6/2017 11:34 PM, Krithika Venkatesh wrote:
> Materialized view log is one of the features in Oracle. It creates a log in which the changes made to the table are recorded. This log is required for an asynchronous materialized view that is refreshed incrementally. I read about incrementally refreshing materialized views in postgresql in the link below: https://medium.com/@hariprasathnallsamy/postgresql-materialized-view-incremental-refresh-44d1ca742599 Can someone tell me how to do incremental refresh using the Write Ahead Log?

I note that blogger's sample code on github no longer exists. I suspect it was half baked and ran into intractable problems.

To do what you want, you would need to implement logical decoding [1] of the WAL stream, and you would need to 'understand' the views completely so you can tell whether a given tuple update affects one of your views or not (relatively simple for a view which is just `select fields from table where simplecondition`, not so easy for a view which is an N-way join with complex filtering and/or aggregation, or whatever). You would then accumulate these updates somewhere so your incremental refresh could replay them and update the table underlying a given materialized view. I'm sure I'm not thinking of major aspects complicating this.

[1] https://www.postgresql.org/docs/current/static/logicaldecoding-explanation.html
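The logical-decoding plumbing itself can be exercised from SQL with the built-in test_decoding plugin — a sketch of the mechanism only, not an incremental-refresh implementation; the slot name is arbitrary, and wal_level must be set to logical first:

```sql
-- Create a logical replication slot using the test_decoding output plugin:
SELECT pg_create_logical_replication_slot('mv_refresh_slot', 'test_decoding');

-- ... run some DML on the tables underlying the view ...

-- Consume the decoded change stream; a real implementation would filter
-- these changes against the view definition and apply the deltas:
SELECT * FROM pg_logical_slot_get_changes('mv_refresh_slot', NULL, NULL);

-- Clean up (an unconsumed slot retains WAL indefinitely):
SELECT pg_drop_replication_slot('mv_refresh_slot');
```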
Re: [GENERAL] Incremental refresh - Materialized view
On 11/6/2017 10:38 PM, Krithika Venkatesh wrote:
> I need to implement incremental refresh of a materialized view. Please let me know how to do an incremental refresh of a materialized view in postgresql version 9.5.9. Is there anything similar to a materialized view log in postgresql?

You refresh a materialized view with REFRESH MATERIALIZED VIEW name; there are no 'incremental' methods, as views can be quite complex. I do not know what you mean by 'materialized view log' — is this a feature of some other database server?
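A full (non-incremental) refresh looks like this — table and view names are made up for illustration; the CONCURRENTLY variant, available since 9.4, avoids locking out readers but requires a unique index on the view:

```sql
CREATE MATERIALIZED VIEW order_totals AS
    SELECT customer_id, sum(amount) AS total
    FROM orders
    GROUP BY customer_id;

-- A plain refresh rewrites the view under an exclusive lock:
REFRESH MATERIALIZED VIEW order_totals;

-- CONCURRENTLY allows reads during the refresh, at the cost of
-- needing a unique index to diff against:
CREATE UNIQUE INDEX ON order_totals (customer_id);
REFRESH MATERIALIZED VIEW CONCURRENTLY order_totals;
```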
Re: [GENERAL] Postgresql 9.3 service doesn't start on RedHat Linux 6.8
On 11/6/2017 5:28 AM, Vikas Sharma wrote:
> I am having issues starting up the postgresql service on RHEL 6.8. It was all working fine yesterday, but now it doesn't start and throws an error - the same kind of error that is received when connecting to a remote postgres database.

What /exact/ error is it throwing?

> For the meantime I have started postgresql as below - /usr/pgsql-9.3/bin/postmaster -D /var/lib/pgsql/9.3/data & - and it is working fine. Can someone let me know where to look for why I can't start the service?

What's in /var/lib/pgsql/9.3/pgstartup.log and /var/lib/pgsql/9.3/data/(latest).log ?
Re: [GENERAL] [HACKERS] SSL and Encryption
On 11/2/2017 10:12 PM, Jeff Janes wrote:
> https://wiki.postgresql.org/wiki/List_of_drivers What is 'python native'? psycopg works as long as you update your libpq.

I thought pythonistas preferred using a native driver that didn't use libpq?
Re: [GENERAL] [HACKERS] SSL and Encryption
On 11/2/2017 9:39 PM, Michael Paquier wrote:
> The SCRAM discussion is spread across two threads mainly, with hundreds of emails, which may discourage even the bravest. Here are links to the important documentation: https://www.postgresql.org/docs/current/static/auth-methods.html#auth-password

So that says:

> ... [scram-sha-256] is the most secure of the currently provided methods, but it is not supported by older client libraries.

What's the state of the more popular bindings now? jdbc, python native, etc.?
Re: [GENERAL] How to access a second database
On 10/31/2017 12:41 AM, John R Pierce wrote:
> If you're doing a lot of this, why not use two schemas in the same database? Then it's just ...schema.table... Otherwise, you need to use FDW and foreign tables. See https://www.postgresql.org/docs/current/static/postgres-fdw.html

Oh, I should add... the advantage of using FDW is that the other database can be on another server. The disadvantage is that the remote data has to be queried and merged with the local query, and the optimizer may not be able to do as good a job as it might with tables in different schemas of the same database (which are treated exactly the same as tables in the same schema, other than naming).
Re: [GENERAL] How to access a second database
On 10/31/2017 12:15 AM, Sherman Willden wrote:
> I am trying to access a table from another database. I have the permissions to create under my own login. I have performed the following so far:
> sherman@sql-dev: createdb sandbox01
> sherman@sql-dev:~$ createdb sandbox02
> After logging into sandbox02 I performed the following:
> sandbox02=# CREATE TABLE last_names(last_name TEXT);
> sandbox02=# INSERT INTO last_names VALUES(willden);
> Now I think I want to use a foreign key in sandbox01. Is the following how it works after logging into sandbox01?
> sandbox01=# CREATE TABLE first_and_last(first_name TEXT, last_name FOREIGN KEY last_name REFERENCES sandbox02(last_names(last_name))
> and then
> sandbox01=# INSERT INTO first_and_last(sherman, willden);

If you're doing a lot of this, why not use two schemas in the same database? Then it's just ...schema.table... Otherwise, you need to use FDW and foreign tables. See https://www.postgresql.org/docs/current/static/postgres-fdw.html
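Wiring up postgres_fdw so sandbox01 can see the table in sandbox02 looks roughly like this — connection options and the user mapping are illustrative; note that a foreign key constraint still cannot reference a foreign table, so cross-database referential integrity has to be enforced by the application:

```sql
-- Run while connected to sandbox01:
CREATE EXTENSION postgres_fdw;

CREATE SERVER sandbox02_srv
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'sandbox02');

CREATE USER MAPPING FOR current_user
    SERVER sandbox02_srv
    OPTIONS (user 'sherman');

CREATE FOREIGN TABLE last_names (last_name text)
    SERVER sandbox02_srv;

-- sandbox02's table can now be queried (and joined) locally:
SELECT * FROM last_names;
```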
Re: [GENERAL] pg_audit to mask literal sql
On 10/30/2017 10:55 AM, rakeshkumar464 wrote:
> Is there a way in pgaudit to mask literal sqls like the below: insert into table (col1,col2) values(1,2); select * from table where col1 = 1. These sqls are typed by our QA folks using pgadmin. pgaudit records them verbatim, which runs afoul of our HIPAA requirements. Prepared statements are not an issue, since pgaudit provides a way to suppress values.

If you have a HIPAA requirement that says 'don't run manual sql statements', then, well, DON'T. Why are QA folks making changes on production databases anyway? That's not within their domain; QA should be working on development or staging databases.
Re: [GENERAL] Can't build ODBC -- odbc_config not found
On 10/27/2017 12:39 AM, Devrim Gündüz wrote:
>> On Thu, 2017-10-26 at 23:40 -0700, Chris Albertson wrote:
>> I just tried to build Postgres ODBC from source (psqlodbc-10.00..tar.gz). I type "./configure" and then get this message: configure: error: odbc_config not found (required for unixODBC build)
> You need to install the unixODBC package (or its equivalent in your distro) for this command.

And on a RHEL-style system, you probably also need unixODBC-devel to compile the psqlodbc driver.
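On RHEL/CentOS that amounts to installing both packages before re-running configure:

```shell
# Runtime library plus the headers and odbc_config binary that
# psqlodbc's ./configure looks for:
sudo yum install unixODBC unixODBC-devel
```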
Re: [GENERAL] using conda environment for plpython3u?
On 10/23/2017 2:05 PM, Celia McInnis wrote:
> Is it possible for users to use their conda environment for plpython, complete with the modules that they have loaded in that environment? If so, what do I do? I am running postgres 9.6.2 and would like to use a conda environment for python 3.6 which contains a fair number of modules that I want to use (e.g., regex, recordclass, pandas, ...).

plpython runs in the context of the server user, not the end user. As long as you can make this 'conda environment' available to that user, and it doesn't violate the single-threaded design of a postgres connection, I dunno why not.

That said, everything you do in a PL runs in the process context of the core database server. I'm very, very hesitant to drag in large complex external systems, and would generally prefer to do that sort of thing in an app server context outside the DB server.
Re: [GENERAL] Alternative to pgAdmin Postgres Manager that support pgagent (jobs)
On 10/19/2017 8:14 PM, Adam Brusselback wrote:
> No other tool I have used will manage pgAgent jobs.

They can be managed with SQL; the schema pgAgent uses really isn't that complicated. Each job is a row in a table, IIRC. There's also pg_cron; I've never used it, but it is standalone and managed by SQL statements.
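With pg_cron, scheduling is a function call — a sketch, assuming the extension is installed and loaded via shared_preload_libraries; the schedule and command are examples:

```sql
CREATE EXTENSION pg_cron;

-- Run VACUUM every night at 03:00; returns the new job's id:
SELECT cron.schedule('0 3 * * *', 'VACUUM');

-- Jobs are plain rows, so they can be inspected and removed with SQL:
SELECT jobid, schedule, command FROM cron.job;
SELECT cron.unschedule(1);   -- by job id
```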
Re: [GENERAL] Is it OK to create a directory in PGDATA dir
On 10/19/2017 1:25 PM, Tomas Vondra wrote:
>> Is it fine to create a subdir inside PGDATA and store our stuff there, or will PG freak out on seeing a foreign object?
> PostgreSQL certainly does not check whether there are unknown directories in the data directory, and it will not crash and burn. But it causes all sorts of problems, and it increases the probability of human error.

Most importantly, ONLY the postgres system user should have access to the pgdata directory; it should have permissions 700. Your apps should be running as a different user, and that user won't have access to said PGDATA.
Re: [GENERAL] Alternative to pgAdmin Postgres Manager that support pgagent (jobs)
On 10/19/2017 3:15 PM, Juliano wrote:
> OmniDB looks nice, but I guess it doesn't support pgAgent either. Any suggestions?

pgAgent isn't part of postgres; it's part of pgAdmin.
Re: [GENERAL] Multiple Schemas vs. Multiple Databases
On 10/13/2017 12:29 PM, Igal @ Lucee.org wrote:
> I have read quite a few articles about multiple schemas vs. multiple databases, but they are all very generic, so I wanted to ask here about a specific use case: I am migrating a web application from MS SQL Server to PostgreSQL. For the sake of easier maintenance, on SQL Server I have two separate databases: 1) a primary database containing the data for the application, and 2) a secondary database containing "transient" data, e.g. logging of different activities on the website in order to generate statistics etc.
>
> Both databases belong to the same application, with the same roles and permissions. The secondary database grows much faster, but the data in it is not mission-critical, and so the data is aggregated daily and the summaries are posted to the primary database, because only the aggregates are important here. To keep the database sizes from growing too large, I periodically delete old data from the secondary database, since it becomes obsolete after a certain period of time.
>
> At first I thought of doing the same in Postgres, but now it seems like the better way to go would be to keep one database with two schemas: primary and transient. The main things that I need to do are: a) be able to backup/restore each "part" separately - it looks like pg_dump allows that for schemas via the --schema=schema argument; and b) be able to query aggregates from the secondary "part" and store the results in the primary one, which also seems easier with multiple schemas than with multiple databases. Am I right to think that two schemas are better in this use case, or am I missing something important?

Generally, yeah - unless you eventually decide to split off the two databases onto separate servers for performance reasons. Of course, to access the 'other' database, you'd then need to use postgres_fdw or dblink.
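The per-schema backup from point (a) looks like this — schema names from the thread, database and file names illustrative:

```shell
# Back up each schema separately from the same database:
pg_dump --schema=primary   -Fc mydb -f primary.dump
pg_dump --schema=transient -Fc mydb -f transient.dump

# Restore just the transient schema later:
pg_restore -d mydb transient.dump
```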
Re: [GENERAL] core system is getting unresponsive because over 300 cpu load
On 10/10/2017 3:28 PM, pinker wrote:
> It was exactly my first guess. work_mem is set to ~350MB, and I see a lot of stored procedures with unnecessary WITH clauses (i.e. materialization), and right after them IN queries with the results of that (hash).

1000 connections all doing queries that need one work_mem each will consume 1000 * 350MB == 350GB of your RAM, and many queries use several work_mems. If the vast majority of your operations are OLTP and only access a few rows, then a large work_mem is NOT a good idea. If you're doing large aggregate operations like OLAP for reporting or whatever, then that's another story, but that sort of thing generally does NOT use 1000 connections.
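A common compromise is a small global default with per-session or per-transaction overrides for the few big queries — the values here are illustrative:

```sql
-- postgresql.conf: keep the global default small for OLTP traffic
-- work_mem = '4MB'

-- A reporting session can raise it just for itself:
SET work_mem = '256MB';

-- or for a single transaction only:
BEGIN;
SET LOCAL work_mem = '256MB';
-- ... big aggregate query ...
COMMIT;
```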
Re: [GENERAL] Adding Cyrillic support
On 9/23/2017 1:44 AM, Job wrote:
> How can I add more character support to PostgreSql 9.6? I need to also store some information in Cyrillic, for Russian users. Can more character sets coexist in the same database?

utf-8 should be able to store just about any character.
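If the database is created with UTF-8 encoding, Cyrillic and Latin text coexist in the same columns with no extra setup — a sketch; the database name and locale are examples:

```sql
CREATE DATABASE app_db
    ENCODING 'UTF8'
    LC_COLLATE 'en_US.UTF-8'
    LC_CTYPE 'en_US.UTF-8'
    TEMPLATE template0;

-- Latin and Cyrillic in the same text column:
CREATE TABLE greetings (msg text);
INSERT INTO greetings VALUES ('hello'), ('привет');
```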
Re: [GENERAL] Why can't the database owner create schemas and how can I enable that?
On 9/22/2017 10:29 PM, Tim Uckun wrote:
> I am setting up a database for an app, so I create a user without superuser but with login privileges, and then I create a database and set its owner to that user, like this:
>
> dropdb --if-exists api_development
> dropuser --if-exists api_user
> createuser api_user -P -d
> createdb api_development -O api_user
>
> The app can now connect to the database, but it can't create any tables, schemas or anything else unless I give it superuser privileges. Is there any way I can make this user a superuser for this database without making it a superuser on other databases?

That should have worked just fine:

[root@new ~]# useradd fred
[root@new ~]# su - postgres
$ createuser fred
$ createdb fred -O fred
$ logout
[root@new ~]# su - fred
[fred@new ~]$ psql
psql (9.3.19)
Type "help" for help.

fred=> create schema xyzzy;
CREATE SCHEMA
fred=> create table xyzzy.abc (id serial, dat text);
CREATE TABLE
fred=> \q
Re: [GENERAL] libpq confusion
On 9/20/2017 10:34 AM, Igor Korot wrote:
> From the documentation: https://www.postgresql.org/docs/9.1/static/libpq-exec.html [quote] PGRES_COMMAND_OK: Successful completion of a command returning no data. [/quote] No data = no rows, right?

From that same page, a bit farther down, clarifying the potentially confusing wording:

> If the result status is PGRES_TUPLES_OK, then the functions described below can be used to retrieve the rows returned by the query. Note that a SELECT command that happens to retrieve zero rows still shows PGRES_TUPLES_OK. PGRES_COMMAND_OK is for commands that can never return rows (INSERT, UPDATE, etc.). A response of PGRES_EMPTY_QUERY might indicate a bug in the client software.
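In libpq terms the distinction looks like this — a sketch assuming an already-established connection `conn`; error handling is abbreviated:

```c
#include <stdio.h>
#include <libpq-fe.h>

/* Returns the number of rows, or -1 on query failure.
 * A SELECT that matches nothing still yields PGRES_TUPLES_OK
 * with PQntuples() == 0; PGRES_COMMAND_OK only appears for
 * statements that can never return rows. */
static int count_rows(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);
    int n = -1;
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        n = PQntuples(res);            /* may legitimately be 0 */
    else if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "%s", PQerrorMessage(conn));
    PQclear(res);
    return n;
}
```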
Re: [GENERAL] [HACKERS] USER Profiles for PostgreSQL
On 9/20/2017 6:55 AM, Stephen Frost wrote:
> If AD is in the mix here, then there's no need to have things happening at the database level when it comes to passwords - configure PG to use Kerberos, create a princ in AD, put that on the database server, and then users can authenticate that way.

For web apps? How does a web browser do kerberos over http?
Re: [GENERAL] libpq confusion
On 9/20/2017 6:30 AM, Igor Korot wrote:
> Hi, guys. On Wed, Sep 20, 2017 at 12:20 AM, Allan Harvey <allan.har...@libertyonesteel.com> wrote:
>>> How do I properly check if the record exists from libpq?
>> Igor, I use PQntuples() to check the number of ... tuples, for > 0
> I was actually curious - isn't that what "PGRES_COMMAND_OK" is for? IIUC, this constant indicates a successful query run, but no records were generated. Or am I missing something, and do I have to check PQntuples()?

A query that returns zero rows is still successful.
Re: [GENERAL] [HACKERS] USER Profiles for PostgreSQL
On 9/19/2017 3:32 PM, chiru r wrote:
> How do those application accounts get recognized in the database? Let's say App_user1 authenticates through the application; after that, how does App_user1 get access to the DB? Can you please provide more information on how the app users access the database?

The user isn't accessing the database; the application itself is accessing the database. A web app might have 10s of 1000s of unique users, but a web app typically uses only a single application account to access the database.
Re: [GENERAL] [HACKERS] USER Profiles for PostgreSQL
On 9/19/2017 12:33 PM, chiru r wrote:
> Yes, LDAP will do. However, we need to sync the user accounts and groups between AD and the PG servers, and then AD profiles will apply to PG user accounts for authentication.

If you're using LDAP against the AD servers to authenticate, what's to sync? On my database servers, the only 'users' connecting directly are the database administrators... the applications connect with application accounts, and where more security is required, these use certificates, or they use unix 'ident' local connections.
Re: [GENERAL] advisory locks namespace?
On 9/17/2017 1:33 PM, Rob Nikander wrote:
> Am I right that if two applications use advisory locks in the same database, they need to know which lock numbers are used by the other application, to avoid conflicts?

Indeed. It also depends on whether they want to honor each other's locks.
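Advisory locks share one keyspace per database, so cooperating applications have to partition the numbers themselves — for example with the two-argument form, using a per-application high-order key (the key values here are arbitrary):

```sql
-- Application A claims key-space 1, application B key-space 2:
SELECT pg_advisory_lock(1, 42);       -- (app_id, resource_id)
SELECT pg_advisory_unlock(1, 42);

-- The non-blocking variant returns immediately with a boolean:
SELECT pg_try_advisory_lock(2, 42);
```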
Re: [GENERAL] New interface to PG from Chapel?
On 9/15/2017 12:56 PM, Thelonius Buddha wrote:
> I'm interested to know the level of effort to build a psycopg2-like library for Chapel: http://chapel.cray.com/ Not being much of a programmer myself, does someone have an educated opinion on this?

I don't see any standard database interface framework to hang a SQL library/driver on. The fact that it's a heavily concurrent/parallel language likely means there are many boobytraps en route to successfully using SQL, as you need to ensure that a given PG connection is only ever used by the thread that created it.
Re: [GENERAL] looking for a globally unique row ID
On 9/14/2017 12:45 AM, Rafal Pietrak wrote:
> Can anybody help me find a way to implement an ID which: 1. guarantees being unique across multiple tables; 2. guarantees its uniqueness not only during INSERT, but also during the lifetime of the database/application (i.e. during future UPDATEs); 3. guarantees persistence of value across database backup/restore/upgrade.

Isn't that the problem that GUIDs are supposed to answer?
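In PostgreSQL terms a GUID is the uuid type, generated server-side — a sketch using pgcrypto's gen_random_uuid() (uuid-ossp's uuid_generate_v4() is an alternative); the table is made up for illustration:

```sql
CREATE EXTENSION pgcrypto;

CREATE TABLE documents (
    id   uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    body text
);

-- The id stays unique across tables, updates, and dump/restore,
-- because it is ordinary column data, not a per-table sequence:
INSERT INTO documents (body) VALUES ('example') RETURNING id;
```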
Re: [GENERAL] Postgres DB is failed due to pg_Xlog is continues full.
On 9/13/2017 9:11 PM, Yogesh Sharma wrote:
>> What you could do is copy its contents to a large disk, and then allow it to recover from the crash.
> I will copy the PGDATA onto a large disk. After that, is it required to execute some specific command, or will recovery start automatically? If any command is required, please let me know.

You're going to need an experienced postgres admin who understands low-level disk recovery. There are a variety of postgres businesses who offer such services for hire.
Re: [GENERAL] Postgres DB is failed due to pg_Xlog is continues full.
On 9/13/2017 8:29 PM, Yogesh Sharma wrote:
> We are using Postgres version 8.1.18. In the Postgres log, we found the lines below:
> CONTEXT: writing block 0 of relation 1664/0/1260
> ERROR: could not write block 0 of relation 1664/0/1260: Bad address
> Due to this, the pg_xlog directory has grown continuously until it filled up and Postgres stopped. Please let me know how to recover from this issue.

PostgreSQL 8.1 has been unsupported for quite a long time; 8.1.18 was released in 2009, and 8.1.23 was the last update of 8.1, in late 2010. The oldest 'supported' postgres is 9.2, and that's at EOL.

Prior to that error, something else catastrophic must have happened to the system; that error is more of a side effect. Recovering a database server that far gone, running such an obsolete version, will likely be an expensive proposition. Before doing anything, you should make a complete backup of the $PGDATA directory (and other tablespace directories, if you use any).
Re: [GENERAL] SAP Application deployment on PostgreSQL
On 9/8/2017 12:34 PM, chiru r wrote:
> We have multiple SAP applications running with Oracle as the backend, and we are looking for an opportunity to migrate from Oracle to PostgreSQL. Has anyone ever deployed SAP on the PostgreSQL community edition? Is the PostgreSQL community involved in any future roadmap for SAP application deployment on PostgreSQL?

Does SAP support PostgreSQL?
Re: [GENERAL] column names query
On 9/7/2017 12:18 AM, haman...@t-online.de wrote:
> Is there a simple way to retrieve column names from a query - basically the way psql adds column headings when I do a select?

If you're using libpq to do your queries, PQfname(*result, column_number) returns the name of that column. There are equivalent functions in most other APIs.
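A minimal loop over the result descriptor — a sketch assuming a valid PGresult `res` from an already-executed query:

```c
#include <stdio.h>
#include <libpq-fe.h>

/* Print the column headings of a query result, psql-style. */
static void print_headings(const PGresult *res)
{
    int ncols = PQnfields(res);
    for (int i = 0; i < ncols; i++)
        printf("%s%s", PQfname(res, i), i < ncols - 1 ? " | " : "\n");
}
```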
Re: [GENERAL] Corrupt index
On 8/15/2017 1:07 PM, Andreas Kretschmer wrote:
> I only wanted to exclude it. Anyway, you should install the latest patches.

He can't; with RDS, he's at the mercy of Amazon for all sysadmin duties.
Re: [GENERAL] pglogical repo
On 8/10/2017 10:30 AM, armand pirvu wrote:
> Looking at the installation steps and the yum repositories: sudo yum install http://packages.2ndquadrant.com/pglogical/yum-repo-rpms/pglogical-rhel-1.0-2.noarch.rpm
> Available packages:
> pglogical-rhel-1.0-1.noarch.rpm
> pglogical-rhel-1.0-2.noarch.rpm
> pglogical-rhel-1.0-3.noarch.rpm
> Is 1.0-3 for Postgres 9.6, or can it be used on 9.5 as well? I am on 9.5.

That is clearly answered here: https://www.2ndquadrant.com/en/resources/pglogical/pglogical-installation-instructions/
Re: [GENERAL] Audit based on role
On 8/7/2017 4:33 PM, anand086 wrote:
> We are running Postgres version 9.6.3 and have a requirement to audit any DML action performed by a user who has the module_dml role granted. What would be the best way to do that? I was thinking of writing something like https://wiki.postgresql.org/wiki/Audit_trigger, but I am not sure how to integrate "user who has the module_dml role granted" into the function. Is there any better way than a trigger to achieve this?

DML as in select/insert/update/delete? Or did you mean DDL, as in CREATE/ALTER TABLE, etc.?
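For the role test itself, a trigger function can consult pg_has_role() — a sketch of just that check; the role name comes from the thread, while the audit_log table and its columns are hypothetical:

```sql
-- Inside an audit trigger function, act only for members of module_dml:
CREATE OR REPLACE FUNCTION audit_if_module_dml() RETURNS trigger AS $$
BEGIN
    IF pg_has_role(current_user, 'module_dml', 'member') THEN
        INSERT INTO audit_log (tbl, op, usr, at)
        VALUES (TG_TABLE_NAME, TG_OP, current_user, now());
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```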
Re: [GENERAL] Invalid byte sequence for encoding UTF-8 0xc3\n
On 7/30/2017 1:43 PM, Igor Korot wrote:
>> what encodings are default on your system? `\l+` in psql should show the encodings.
> Is this "backslash + pipe + plus-sign"? Trying it gives: "Invalid command".

Backslash + lower-case L + plus sign; that's the psql metacommand to list all databases with extra info:

postgres=# \l+
                                              List of databases
      Name      |   Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   |  Size   | Tablespace |            Description
----------------+-----------+----------+-------------+-------------+-----------------------+---------+------------+------------------------------------
 junk           | pierce    | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 6586 kB | pg_default |
 observers_test | observers | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 207 MB  | pg_default |
 pierce         | pierce    | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 6786 kB | pg_default |
 postgres       | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 6610 kB | pg_default | default administrative connection database
 scac           | scac      | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 75 MB   | pg_default |
 scac_save      | scac      | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 105 MB  | pg_default |
 smf            | smf       | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 34 MB   | pg_default |
 sympa          | sympa     | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 6898 kB | pg_default |
 template0      | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +| 6457 kB | pg_default | unmodifiable empty database
                |           |          |             |             | postgres=CTc/postgres |         |            |
 template1      | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +| 6465 kB | pg_default | default template for new databases
                |           |          |             |             | postgres=CTc/postgres |         |            |
 tendenci       | tendenci  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 15 MB   | pg_default |
 test           | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 6634 kB | pg_default |
(12 rows)

For instance, all my databases are UTF8 on that server.
Re: [GENERAL] Invalid byte sequence for encoding UTF-8 0xc3\n
On 7/30/2017 1:19 PM, Igor Korot wrote: I am using a database for my project that I created inside SQLite3. This database contains a table called "abc<ALT+225>" (it is "abc" + the symbol with code 225 - the greek letter "beta" or the German symbol for "ss"). in what encoding? in ISO 8859-1 and -15, beta aka sharp S is code 223 (U+00DF), not 225. in UTF-8, it's C3 9F. ... Both the database and the table are created with default encoding. what encodings are default on your system ? `\l+` in psql should show the encodings. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Question about paritioning
On 7/27/2017 12:43 AM, Alex Samad wrote: ... as long as the queries stay on a small number of partitions we should be okay. that's true as long as the planner can restrict the queries to the correct partition... but there are lots of queries that end up having to hit all partitions because the planner can't guess correctly. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Question about paritioning
On 7/26/2017 10:08 PM, Alex Samad wrote: I have a large table of about 3B rows that I would like to partition on a column called _received, which is of type timestamp. a good goal is to have no more than about 100 partitions, and ideally more like 25. when we partition on a timestamp, we typically do it by the week, as we're doing 6 month data retention. IIRC, we're using DATE_TRUNC('week', timestamp)::DATE for the partition label and key. -- john r pierce, recycling bits in santa cruz
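On 9.6 that means inheritance partitioning; a sketch of what one weekly child might look like (the events table name and the exact label scheme are illustrative, untested):

```sql
-- hypothetical parent table
CREATE TABLE events (_received timestamp NOT NULL /* , ... */);

-- one weekly partition; the CHECK constraint is what lets the planner prune
CREATE TABLE events_2017_07_24 (
    CHECK (_received >= '2017-07-24' AND _received < '2017-07-31')
) INHERITS (events);

-- computing the week label/key for a given timestamp:
SELECT DATE_TRUNC('week', TIMESTAMP '2017-07-26 10:08')::DATE;  -- 2017-07-24
```

DATE_TRUNC('week', ...) truncates to the Monday of the ISO week, so every timestamp in the same week maps to the same label.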
Re: [GENERAL] ODBC driver issue
On 7/26/2017 9:06 PM, Igor Korot wrote: With char(), is there a "Standard SQL" way to do trimming? trim(trailing from fieldname) but really, if you want a variable-length string without padding, don't use CHAR() as the data type. use VARCHAR or TEXT. -- john r pierce, recycling bits in santa cruz
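For example (the sample string is illustrative):

```sql
SELECT trim(trailing FROM 'My field text   ');  -- standard SQL; defaults to trimming spaces
SELECT rtrim('My field text   ');               -- PostgreSQL's shorter equivalent
```

Both return 'My field text' without the trailing padding.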
Re: [GENERAL] ODBC driver issue
On 7/26/2017 7:25 PM, Igor Korot wrote: When I tried to query the database table with the column "char(129)" I get: "My field text" (the text with a bunch of spaces at the end). The driver is not the current one - but I don't remember the version. Is this a known issue? Maybe it's already fixed in the latest ODBC driver? a char(129) field is 129 characters long, and will always be padded with spaces to that length. maybe you wanted a varchar (aka character varying) ? those are variable length and return just what you put in them. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Logging at schema level
On 7/20/2017 11:11 PM, Nikhil wrote: Schema = tenant. So basically tenant level logging. select ... from schema1.table1 join schema2.table2 on ... where ...; if you have per-schema logging, where should that get logged ? you could implement per DATABASE logging, if you A) add the database name to the log_line_prefix, and B) feed your logs to a program that understands this and splits them out into a log file per database. you could also do this on a per-user basis. but a schema is something very dynamic, it's a namespace within a database, and queries can touch multiple schemas. -- john r pierce, recycling bits in santa cruz
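A minimal postgresql.conf sketch of step (A) — the exact prefix layout is just one possibility:

```
# postgresql.conf -- include the database (%d) and user (%u) in every log line,
# so an external tool can split the server log per database or per user:
log_line_prefix = '%m [%p] db=%d user=%u '
```

%m is the millisecond timestamp and %p the backend PID; the splitter then only has to key on the db= field.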
Re: [GENERAL] Logging at schema level
On 7/20/2017 10:10 PM, Nikhil wrote: I am using postgresql schema feature for multi-tenancy. can we get postgresql logs at schema level. Currently it is for the whole database server (pg_log) if you /were/ able to split the logs by schema, and a query touched multiple schemas, then where would that get logged? -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Backward compatibility
On 7/20/2017 8:40 PM, Tom Lane wrote: Applications might use this function to determine the version of the database server they are connected to. The number is formed by converting the major, minor, and revision numbers into two-decimal-digit numbers and appending them together. For example, version 8.1.5 will be returned as 80105, and version 8.2 will be returned as 80200 (leading zeroes are not shown). Zero is returned if the connection is bad. Hm, we need to update that text for the new 2-part version numbering scheme, don't we? will 10 return like 100100 if it's 10.1, or 11 ? -- john r pierce, recycling bits in santa cruz
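For the record: under the two-part scheme the minor release goes in the last four digits, so 10.1 reports 100001 (not 100100), and the same decoding works for both schemes:

```sql
-- server_version_num holds the same integer PQserverVersion() returns;
-- the major version is everything above the last four digits:
SELECT current_setting('server_version_num')::int / 10000 AS major,
       current_setting('server_version_num')::int % 10000 AS rest;
-- pre-10:  80105 -> major 8,  rest 105 (minor 1, revision 5)
-- 10.x:   100001 -> major 10, rest 1   (minor release 1)
```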
Re: [GENERAL] Backward compatibility
On 7/20/2017 7:57 PM, David G. Johnston wrote: Actually, The docs do cover how to do this directly in libpq. odds are pretty good that... |PQserverVersion| Returns an integer representing the backend version. int PQserverVersion(const PGconn *conn); Applications might use this function to determine the version of the database server they are connected to. The number is formed by converting the major, minor, and revision numbers into two-decimal-digit numbers and appending them together. For example, version 8.1.5 will be returned as 80105, and version 8.2 will be returned as 80200 (leading zeroes are not shown). Zero is returned if the connection is bad. Actually invokes `show server_version_num;' -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Backward compatibility
On 7/20/2017 7:46 PM, Igor Korot wrote: ALso, I presume there is no special libpg function, right? libpq would only be able to return the libpq version, which might not be the same as the server version. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] What is exactly a schema?
On 7/14/2017 4:59 AM, marcelo wrote: Now I have a related question. Could I select a specific schema in the connection string? Say, for example, database=mydb.schemanumbertwo ? the default search_path is $user,public so if you connect with different SQL usernames for your different schemas, and have all your common tables in PUBLIC, it will just fall out. you'll need to be careful with permissions, of course. -- john r pierce, recycling bits in santa cruz
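Another option, if each tenant logs in as its own role, is pinning the path server-side (the role and schema names here are illustrative):

```sql
-- applies at every future login for this role:
ALTER ROLE tenant_two SET search_path = schemanumbertwo, public;
```

libpq connection strings can also set it directly via the options keyword, e.g. options=-csearch_path=schemanumbertwo.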
Re: [GENERAL] Postgresql - commercial version
On 7/7/2017 8:22 AM, Terry Schmitt wrote: You have the open source version installed. Now where it was installed from is a different story. if it's a redhat/centos/fedora type system, try... rpm -qa | grep ^postgres if it was installed from rpms, the full names of the rpms will clue you in. also, see what path postgres's tools are installed in; the RPM versions install everything in /usr/pgsql-9.x/bin while EDB typically installs to /opt or something. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Is PL-PGSQL interpreted or complied?
On 7/5/2017 5:10 PM, Tim Uckun wrote: I am curious about the stored proc languages inside of postgres. When I write a stored proc is it compiled to some internal representation or just interpreted? plpgsql is interpreted directly; I don't believe it's even pre-tokenized. How does this work with other languages? that varies with the language.. PLJava is compiled to java byte codes by the javac compiler even before it's loaded (you load the precompiled .jar file with the pljava loader), plpython uses .pyc files, same as if python is run from the command line, plperl is directly interpreted, same as perl normally. embedded C is precompiled to machine language, as you just load the DLL/SO files into postgres, etc etc. Also would it be possible to extract PL-PGSQL into a standalone (albeit crippled) language? Is the interpreter/compiler modular like that? the interpreter *IS* SQL, which is the whole database server. I don't think a standalone plpgsql without SQL would be of much use. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Function with limit and offset - PostgreSQL 9.3
On 6/8/2017 6:36 PM, marcinha rocha wrote: |UPDATE tablea a SET migrated = yes WHERE a.id = row.id;| On my original select, the row will have migrated = false. Maybe all I need to put is a limit 2000 and the query will do the rest? SELECT does not return data in any determinate order unless you use an ORDER BY, so LIMIT 2000 would return some 2000 elements, not necessarily the 'first' 2000 elements, unless you somehow order them by however you feel 'first' is defined. WITH ids AS (INSERT INTO tableb (id) SELECT id FROM tablea WHERE migrated = FALSE ORDER BY id LIMIT 2000 RETURNING id) UPDATE tablea a SET migrated = TRUE FROM ids WHERE a.id = ids.id; note you can't do UPDATE ... RETURNING COUNT(...) directly, as aggregates aren't allowed in RETURNING; if you need the count, wrap the UPDATE ... RETURNING in another CTE and SELECT COUNT(*) from that. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Function with limit and offset - PostgreSQL 9.3
On 6/8/2017 5:53 PM, marcinha rocha wrote: Hi guys! I have the following queries, which will basically select data, insert it onto a new table and update a column on the original table. I'm sure your example is a gross simplification of what you're really doing, but if that's really all you're doing, why not do it all at once, instead of a row at a time? BEGIN; insert into tableb (id) select id from tablea; update tablea set migrated = true; COMMIT; that's far more efficient than the row-at-a-time iterative solution you showed. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Db backup
On 6/8/2017 12:29 PM, John R Pierce wrote: On 6/8/2017 12:10 PM, Jimmy Lunceford wrote: A windows 7 computer went bad but the hdd is still operable. Is there a way to transfer the db data directories to another computer or does it require a dump and restore? the data directories should work as-is on another Windows computer with the same major version and 'build' of postgresql and same 32 vs 64 bitness. By 'build' I mean like enterpriseDB installer vs BigSQL vs custom-built. if custom built, the build options will need to be pretty much the same (visualC vs gcc/mingw vs cygwin, and the same general ./configure options, etc etc). to restore said file system level backup, I would...
A) install the compatible postgresql build (if the original was 9.3.5, it's fine to use 9.3.17)
B) stop the postgresql database service
C) replace the 'data' directory with a copy of the data directory from the disk of said dead box
D) restart the postgresql database service
E) verify your databases are intact and complete.
if you used custom tablespaces, be sure to restore all those at step C too, with the same paths as the originals. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Db backup
On 6/8/2017 12:10 PM, Jimmy Lunceford wrote: A windows 7 computer went bad but the hdd is still operable. Is there a way to transfer the db data directories to another computer or does it require a dump and restore? the data directories should work as-is on another Windows computer with the same major version and 'build' of postgresql and same 32 vs 64 bitness. By 'build' I mean like enterpriseDB installer vs BigSQL vs custom-built. if custom built, the build options will need to be pretty much the same (visualC vs gcc/mingw vs cygwin, and the same general ./configure options, etc etc). -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Help with restoring a dump in Tar format? (dependencies/ordering)
On 6/5/2017 5:49 PM, David G. Johnston wrote: On Mon, Jun 5, 2017 at 5:40 PM, John R Pierce <pie...@hogranch.com <mailto:pie...@hogranch.com>> wrote: indeed, any sort of constraint that invokes a function call which looks at other tables could later be invalidated if those other tables change, and postgres would be none the smarter. the same goes for trigger based checks. Yes. I could imagine a new kind of "multi-referential trigger" that would specify all relations it touches and the function to fire when each of them is updated. While you'd still have to write the functions correctly it would at least allow one to explicitly model the multi-table dynamic in pg_catalog. Lacking that CHECK is no worse than TRIGGER and we've decided to say "use triggers". at $job, the policy is, AVOID ALL TRIGGERS AND FANCY CONSTRAINTS :) they don't even like using foreign key references, and rely on code logic to do most joins in the performance-critical OLTP side of things. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Help with restoring a dump in Tar format? (dependencies/ordering)
On 6/5/2017 5:32 PM, David G. Johnston wrote: On Mon, Jun 5, 2017 at 5:15 PM, Ken Tanzer <ken.tan...@gmail.com <mailto:ken.tan...@gmail.com>>wrote: From the docs: https://www.postgresql.org/docs/9.6/static/sql-createtable.html <https://www.postgresql.org/docs/9.6/static/sql-createtable.html> "Currently, CHECK expressions cannot contain subqueries nor refer to variables other than columns of the current row. The system column tableoid may be referenced, but not any other system column. I wonder if that should say "should not," or be followed by something like this: Make it say "must not" and I'd agree to change the word "cannot" and leave the rest. Adding a note regarding functions seems appropriate. Aside from being a bit more verbose there is nothing useful that writing this as "CHECK function()" provides that you don't also get by writing "CREATE TRIGGER". In a green field we'd probably lock down CHECK a bit more but there is too much code that is technically wrong but correctly functioning that we don't want to break. IOW, we cannot mandate that the supplied function be immutable even though we should. And we don't even enforce immutable execution if a function is defined that way. indeed, any sort of constraint that invokes a function call which looks at other tables could later be invalidated if those other tables change, and postgres would be none the smarter. the same goes for trigger based checks. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Help with restoring a dump in Tar format? (dependencies/ordering)
On 6/5/2017 5:15 PM, Ken Tanzer wrote: I can't really make this an FK. I can (and probably will) put this into a trigger. Although it seems like an extra layer of wrapping just to call a function. I'm curious if there's any conceptual reason why constraints couldn't (as an option) be restored after all the data is loaded, and whether there would be any negative consequences of that? I could see if your data still didn't pass the CHECKs, it's already loaded. But the constraint could then be marked not valid? when you have constraints that rely on calling functions, how would it know what order to check things in ? -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Access Management question
On 5/30/2017 2:06 PM, chiru r wrote: Is there any reason why it is showing the role name as owner of the table instead of the user? 'user' is a synonym for 'role' with login permission. CREATE USER fred; is exactly the same as... CREATE ROLE fred WITH LOGIN; when you SET ROLE rolename; it's virtually the same as logging in as rolename. Why is it not allowed to GRANT SUPERUSER/CREATEDB etc roles directly to a user (using GRANT, not ALTER USER) ? GRANT is used to grant object-related permissions or role membership; SUPERUSER and CREATEDB aren't roles at all, they're role attributes, and roles don't inherit such attributes through membership. -- john r pierce, recycling bits in santa cruz
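A quick sketch of the split (role, group, and table names are illustrative):

```sql
-- attributes: set with ALTER ROLE (or at CREATE ROLE/USER time), never GRANT
ALTER ROLE fred CREATEDB;
ALTER ROLE fred SUPERUSER;

-- membership and object privileges: these are what GRANT is for
GRANT some_group TO fred;
GRANT SELECT, INSERT ON mytable TO fred;
```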
Re: [GENERAL] Installing module for 9.6, not 9.2, on Centos?
On 5/23/2017 11:39 PM, Ken Tanzer wrote: Can I also ask y'all a more general question about this, specifically related to how Postgres is packaged for RHEL/Centos? I've got both 9.6 and 9.2 installed. In this case though, it seems that the 9.2 version is privileged/selected by default. But psql defaults to the 9.6 version. Are there other similar things that will default to either 9.2 or 9.6? And if so, what controls that behavior, is it easily-changeable, and/or can you go back and forth? I've never tried running two versions at once before. Maybe this is an isolated incident, but I'm just trying to get my mind around the concept, and know what kind of pitfalls if any to expect or beware of. Thanks! when you run multiple versions, you need to keep the path *and* the port straight. each server running is on a separate port. I have one dev box at work that runs pg 9.3, 9.4, 9.5, and 9.6, all on separate ports. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Have just libpg installer
On 5/23/2017 7:45 PM, Igor Korot wrote: Because I can't find libpg.so or libpg.dylib or libpg.Framework anywhere. Is there a place to get them? maybe because that's not the correct name? it's libpq, as in PQ. I have no idea what a .Framework file is, but any of the postgres installers for Mac OSX should install libpq.dylib and the appropriate .h files etc. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Installing module for 9.6, not 9.2, on Centos?
On 5/23/2017 4:54 PM, Ken Tanzer wrote: But the install still goes to 9.2: PATH=/usr/local/pgsql96/bin/:$PATH make USE_PGXS=1 install | /bin/mkdir -p '/usr/pgsql-9.2/share/contrib' earlier you said something about /usr/pgsql-9.6/bin ... is it that, or is it /usr/local/pgsql96/bin ? does /usr/pgsql-9.6/bin/pg_config output a whole pile of directory assignments that make sense ? or does /usr/local/pgsql96/bin/pg_config do that? -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Weird periodical pg log
On 5/19/2017 1:25 AM, cen wrote: < 2017-05-15 17:00:41.861 CEST >LOG: parameter "archive_command" changed to ""/opt/omni/lbin/pgsqlbar.exe" -stage %p -backup" I believe /opt/omni is the default installation path for HP Data Protector, formerly known as OmniBack. That comprehensive backup system includes database backup capabilities; I'm guessing that's what you're seeing here. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Call for users to talk about table partitioning
On 5/18/2017 2:29 PM, Robert Eckhardt wrote: All the code for creating and managing partitions is part of the core Postgres code. What we are interested in looking into is what that work flow might look like and how that workflow can be supported with a GUI management tool. only thing I'd expect from a GUI management tool would be to allow me to create partitioned tables and its partitions, and display their attributes appropriately. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Error that shouldn't happen?
On 5/18/2017 1:40 PM, Andrew Kerber wrote: It appears to me you might be making this a lot more difficult than necessary. Why not just pre-create the required partitions daily or weekly or monthly? Or do you have a requirement that a new partition only be created the first time it is required? +1 we create new partitions in advance of their being needed as part of a maintenance process that's strictly single threaded. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] PostgrSQL server : CPU and Memory
On 5/17/2017 8:50 PM, kaustubh kelkar wrote: I am a developer and I want to monitor metrics related to CPU and Memory of the PostgreSQL server using only SQL queries. postgres itself doesn't track that level of OS-specific stuff. Do we have any system tables which can give us the data? I have found this one https://aaronparecki.com/2015/02/19/8/monitoring-cpu-memory-usage-from-postgres but don't know whether it is exactly what I need. that's reading the /proc/ stuff available on Linux systems via SQL; it's the same stuff OS level tools like ps, top, and free use... it won't work on non-Linux systems; even other Unix systems like BSD don't have the same /proc stuff, and other OS's like Windows don't have /proc at all. if you don't know whether that's what you need, I'm not sure how we could know what you need. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] column names and dollar sign
On 5/17/2017 4:51 PM, Gavin Flower wrote: Variables ending in '$' date back to at least the early days of BASIC - long before the spectre of Microsoft loomed large, let alone 'Visual Basic'! I note even INT fields have $ names there... IBM used to like to use $ in names for system stuff, SYS$BLAHBLAH or whatever. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] storing large files in database - performance
On 5/16/2017 5:25 AM, Eric Hill wrote: I do have the Sequelize ORM and the pg driver in between my code and the database. Can you try a similar test without the ORM, just going straight from node.js to sql ? -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] storing large files in database - performance
On 5/16/2017 7:35 AM, Thomas Kellerer wrote: When my (JDBC based) SQL client and the database server are on the same computer... node.js is Javascript, not java w/ jdbc -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] EnterpriseDB installed PostgreSQL 9.6 vs. REPMGR. Round 2 - compilation issues.
On 5/14/2017 2:20 PM, Joshua D. Drake wrote: A bit outdated but: http://raghavt.blogspot.com/2015/01/configuring-automatic-failover-using.html the advice on that page to unlink a library in /lib64 on an RPM managed system makes me very leery of the rest of the article. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] How to recover my postgres database ?
On 5/11/2017 9:53 AM, Pierre Couderc wrote: I have the pg_dumpall of last night, but many dbs have changed today... ? suggestion: in the future, instead of simply pg_dumpall, where all your databases end up in one opaque lump, try something like...

#!/bin/bash
#
d=`date +\%a`
dst=/home2/backups/pgsql
/usr/pgsql-9.3/bin/pg_dumpall --globals-only | gzip > $dst/pgdumpall.globals.$d.sql.gz
for i in $(psql -tc "select datname from pg_database where not datistemplate"); do
    pg_dump -Fc -f $dst/pgdump.$i.$d.dump $i
done

which uses pg_dumpall to dump the globals only into one file, then uses pg_dump -Fc to create compressed custom-format dumps of each individual database. these can be selectively restored with pg_restore (for instance, you could restore just one table, or schema only, or data only, etc). that script is set up to create a different set of files for each day of the week, so you have 7 days of backup history; change the parameter of the d=`date... line if you want a different backup rotation scheme, and of course dst is the destination. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] How to recover my postgres database ?
On 5/11/2017 9:53 AM, Pierre Couderc wrote: I have broken my postgres database by typing : psql How can I safely repair, knowing that I have the pg_dumpall of last night, but many dbs have changed today... ? Thanks in advance was there anything in the postgres database other than the default? psql template1 -c "drop database postgres" followed by psql template1 -c "create database postgres with template template0" should restore it to a virgin stock empty 'postgres' (the two statements have to be sent separately, since DROP DATABASE can't run inside the implicit transaction a multi-statement -c would use). -- john r pierce, recycling bits in santa cruz
Re: [GENERAL]
varying::information_schema.yes_or_no AS is_self_referencing,
    'NO'::character varying::information_schema.yes_or_no AS is_identity,
    NULL::character varying::information_schema.character_data AS identity_generation,
    NULL::character varying::information_schema.character_data AS identity_start,
    NULL::character varying::information_schema.character_data AS identity_increment,
    NULL::character varying::information_schema.character_data AS identity_maximum,
    NULL::character varying::information_schema.character_data AS identity_minimum,
    NULL::character varying::information_schema.yes_or_no AS identity_cycle,
    'NEVER'::character varying::information_schema.character_data AS is_generated,
    NULL::character varying::information_schema.character_data AS generation_expression,
    CASE WHEN c.relkind = 'r'::"char" OR (c.relkind = ANY (ARRAY['v'::"char", 'f'::"char"])) AND pg_column_is_updatable(c.oid::regclass, a.attnum, false) THEN 'YES'::text ELSE 'NO'::text END::information_schema.yes_or_no AS is_updatable
   FROM pg_attribute a
     LEFT JOIN pg_attrdef ad ON a.attrelid = ad.adrelid AND a.attnum = ad.adnum
     JOIN (pg_class c JOIN pg_namespace nc ON c.relnamespace = nc.oid) ON a.attrelid = c.oid
     JOIN (pg_type t JOIN pg_namespace nt ON t.typnamespace = nt.oid) ON a.atttypid = t.oid
     LEFT JOIN (pg_type bt JOIN pg_namespace nbt ON bt.typnamespace = nbt.oid) ON t.typtype = 'd'::"char" AND t.typbasetype = bt.oid
     LEFT JOIN (pg_collation co JOIN pg_namespace nco ON co.collnamespace = nco.oid) ON a.attcollation = co.oid AND (nco.nspname <> 'pg_catalog'::name OR co.collname <> 'default'::name)
  WHERE NOT pg_is_other_temp_schema(nc.oid) AND a.attnum > 0 AND NOT a.attisdropped AND (c.relkind = ANY (ARRAY['r'::"char", 'v'::"char", 'f'::"char"])) AND (pg_has_role(c.relowner, 'USAGE'::text) OR has_column_privilege(c.oid, a.attnum, 'SELECT, INSERT, UPDATE, REFERENCES'::text));
-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Python versus Other Languages using PostgreSQL
On 5/10/2017 2:43 PM, Adrian Klaver wrote: 1) Uber 2) Yahoo 3) Instagram and, each of those giant businesses has their own entirely custom 'platform', so it's not really fair to call them 'largest web platforms', as each of those custom platforms is in use at only one business. Sure, largest web SITES by traffic and usage. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Caching and Blobs in PG? Was: Can PG replace redis, amqp, s3 in the future?
On 5/5/2017 11:28 AM, Peter J. Holzer wrote: On 2017-05-04 23:08:25 +0200, Sven R. Kunze wrote: On 03.05.2017 12:57, Thomas Güttler wrote: Am 02.05.2017 um 05:43 schrieb Jeff Janes: No. You can certainly use PostgreSQL to store blobs. But then, you need to store the PostgreSQL data **someplace**. If you don't store it in S3, you have to store it somewhere else. I don't understand what you mean here. AFAIK storing blobs in PG is not recommended since it is not very efficient. Seems like several people here disagree with this conventional wisdom. I think it depends very much on what level of "efficiency" you need. On my home server (i5 processor, 32GB RAM, Samsung 850 SSD - not a piece of junk, but not super powerful either) I can retrieve a small blob from a 100GB table in about 0.1 ms, and for large blobs the speed approaches 200MB/s. For just about everything I'd do on that server (or even at work) this is easily fast enough. S3 is often used for terabyte to petabyte file collections. I would not want to burden my relational database with this. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Caching and Blobs in PG? Was: Can PG replace redis, amqp, s3 in the future?
On 5/4/2017 2:50 PM, John R Pierce wrote: But there's an extension - pg_cron: https://www.citusdata.com/blog/2016/09/09/pgcron-run-periodic-jobs-in-postgres/ there's also pg_agent which is a cron-like extension, usually bundled with pg_admin but also available standalone https://www.pgadmin.org/docs4/dev/pgagent.html oh, it should be made clear... both of these extensions require an OS level cron/scheduler job to be run every minute or whatever, this job invokes some SQL stuff which checks the pg_cron or pg_agent tables and runs whatever sql tasks are due. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Caching and Blobs in PG? Was: Can PG replace redis, amqp, s3 in the future?
On 5/4/2017 2:08 PM, Sven R. Kunze wrote: After searching the web, it seems to me that PostgreSQL doesn't offer a cron-like background job for cleanup tasks. http://stackoverflow.com/questions/18187490/postgresql-delete-old-rows-on-a-rolling-basis But there's an extension - pg_cron: https://www.citusdata.com/blog/2016/09/09/pgcron-run-periodic-jobs-in-postgres/ there's also pg_agent which is a cron-like extension, usually bundled with pg_admin but also available standalone https://www.pgadmin.org/docs4/dev/pgagent.html -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Caching and Blobs in PG? Was: Can PG replace redis, amqp, s3 in the future?
On 5/4/2017 2:28 PM, Alan Hodgson wrote: On Thursday 04 May 2017 14:21:00 John R Pierce wrote: or EBS, and I've heard from more than a few people that EBS can be something of a sand trap. Sorry for following up off-topic, but EBS has actually improved considerably in the last few years. You can get guaranteed (and very high) IOPS on SSD storage, and many instance types come with high-speed throughput to EBS. It's much much better for databases than it was 5 years ago. has it become more stable when Amazon has their occasional major hiccups? -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Caching and Blobs in PG? Was: Can PG replace redis, amqp, s3 in the future?
On 5/4/2017 2:08 PM, Sven R. Kunze wrote: No. You can certainly use PostgreSQL to store blobs. But then, you need to store the PostgreSQL data **someplace**. If you don't store it in S3, you have to store it somewhere else. I don't understand what you mean here. AFAIK storing blobs in PG is not recommended since it is not very efficient. Seems like several people here disagree with this conventional wisdom. I think what he was talking about was the data itself. You have to store the bits and bytes somewhere (e.g. on S3). afaik, S3 is not suitable for the $PGDATA directory, it's more of an archival block file store for sequential access. For the actual database storage in the AWS world, you'd either use EC2 local storage, or EBS, and I've heard from more than a few people that EBS can be something of a sand trap. re: storing blobs in postgres, I would be very hesitant to store LARGE amounts of bulk data directly in postgres -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Compatibility of libpg
On 5/4/2017 2:19 AM, Magnus Hagander wrote: On Wed, May 3, 2017 at 11:31 PM, John R Pierce <pie...@hogranch.com <mailto:pie...@hogranch.com>> wrote: On 5/3/2017 2:20 PM, Magnus Hagander wrote: Please note that this method of building libpq has been removed from Postgres 10, so it's considered to be deprecated for quite some time. this page https://www.postgresql.org/docs/current/static/install-windows-libpq.html <https://www.postgresql.org/docs/current/static/install-windows-libpq.html> probably should be updated then. That page is completely gone in version 10. If you look at https://www.postgresql.org/docs/devel/static/install-windows.html you will notice the entire 17.2 section has been removed, as well as the text on the root page referring to it. well, dropped in the latest not-yet-released version doesn't really make it 'deprecated for quite some time'. if this has been the long term intention, the docs should have reflected this some revisions back. I still think the Windows packagers (EnterpriseDB) should have a client-only package which has options to just install the libs, or the libs + client utils (psql, pg_dump/restore/dumpall and optionally pg_admin). I realize that this wouldn't be /that/ much smaller than the whole 9 yards, but it's a psychological thing for the end user; they think of the server as 'heavy'. It would also be good for automated client deployments in business environments. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Compatibility of libpg
On 5/3/2017 2:20 PM, Magnus Hagander wrote: Please note that this method of building libpq has been removed from Postgres 10, so it's considered to be deprecated for quite some time. this page https://www.postgresql.org/docs/current/static/install-windows-libpq.html probably should be updated then. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Language support of postgresql
On 5/2/2017 11:41 AM, Tom Lane wrote: John R Pierce<pie...@hogranch.com> writes: I thought Postgres supported client_encodings of BIG5, GB18030, and GBK, all of which can be stored in the server using either UTF8 or MULE_INTERNAL (MultiLingual EMACS) encodings for internal storage ? Hm, there's MULE<=>BIG5 converters but I don't see any for GBK or GB18030. Also, it looks like the MULE<=>BIG5 converters do some re-encoding, so it's not clear to me whether they're lossless, which I assume is the concern driving this request. I based my statement on misreading the tables on here, https://www.postgresql.org/docs/current/static/multibyte.html but, now I see, MULE only supports big5 and EUC_CN. My limited readings earlier about BIG5 suggested it's a mess of conflicting extensions, E-TEN and others, and the GB* stuff wasn't much better. Anyway, it seems to me like UTF8 is the correct server encoding for most all uses. -- john r pierce, recycling bits in santa cruz
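a minimal sketch of that split — UTF8 on the server, with each Chinese client picking its own client_encoding per session; the database name appdb is invented for illustration:

```shell
# SQL sketch: server stores UTF8, clients convert on the wire via
# client_encoding. Run via psql; "appdb" is a placeholder name.
SQL=$(cat <<'EOF'
CREATE DATABASE appdb ENCODING 'UTF8' TEMPLATE template0;
-- then, per session on each client:
SET client_encoding TO 'BIG5';   -- or 'GBK', or 'GB18030'
EOF
)
echo "$SQL"
```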
Re: [GENERAL] Language support of postgresql
On 4/28/2017 7:45 AM, Martel, Hong wrote: As I understand, currently Postgres doesn’t support Chinese encoding GBK and BIG5 on both server and client side, only UNICODE. Is it true? Are there any plans for postgresql team to implement GBK and BIG5 encoding anytime soon? Are there any alternative solutions for this besides switching our database to Oracle or others that support the encodings? One of our customers insists that we need to support all three encodings (BIG5, GB2312, and UNICODE). We would love to stick to Postgres if there is any alternative way to solve the problem without incurring big cost. I thought Postgres supported client_encodings of BIG5, GB18030, and GBK, all of which can be stored in the server using either UTF8 or MULE_INTERNAL (MultiLingual EMACS) encodings for internal storage ? -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Compatibility of libpg
On 5/1/2017 5:44 PM, Igor Korot wrote: But I want to build from MSVC. I already have a solution for it. All I need is to create a project inside that solution which will build the dll and lib files for me. Or I have to use nmake? pretty sure you need to run the top level config script to generate all the right stuff, then you probably can have MSVC run the makefile in the libpq directory. I find it easier to just build the whole server, then just use the libpq.dll rather than trying to build pieces separately, as it really doesn't take very long. I believe there are notes on building with MSVC on Windows, https://www.postgresql.org/docs/current/static/install-windows-full.html ... I see there are instructions for building libpq only, https://www.postgresql.org/docs/current/static/install-windows-libpq.html but I believe you still need most of the prerequisites as outlined in 17.1.1 ... 17.1.3 discusses a mkvcbuild.pl which makes a pgsql.sln project(?) for Visual Studio -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Compatibility of libpg
On 5/1/2017 4:28 PM, Igor Korot wrote: Like I said, I don't have dll, I downloaded a source files and would like to compile the code myself from the MSVC. And I will use libpg calls directly. build the postgres server, and it will generate the DLL, then link to that with your own apps. static linking to runtime libraries like this is strongly discouraged. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Compatibility of libpg
On 5/1/2017 3:08 PM, Igor Korot wrote: Also - I want to create a project inside my MSVC 2010 solution and compile. Is there anything I need besides src/include and src/interface/libpg? if you're using .NET stuff like adodb or oledb, you'll want the npgsql .net stuff for postgres. if you're directly calling libpq functions, libpq.dll and libpq-fe.h should be all you need, along with the couple .h files libpq-fe references; afaik, that's just pg_config_ext.h and postgres_ext.h -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] PostgreSQL Required Monitoring
On 4/28/2017 7:39 AM, Andrew Kerber wrote: I am a fairly experienced Oracle DBA, and we are starting to move in to the PostgreSQL world. I would expect the standard monitoring items are required for mission critical postgres apps, Ie, disk space, wal log space, log monitoring, process counts, software running, connection available on the correct port, CPU usage. the nagios project has a rather handy monitoring script, check_postgres, this is a perl script that can be invoked from most any configurable monitoring framework, and has options to do 100s of different sorts of things, returning simple terse text output that can be parsed by said monitoring framework. Are there additional PostgreSQL specific items that need to be monitored? if so, what items? it's always a good idea to watch for stale 'idle in transaction' connections, as they gum up the important VACUUM processing. you can make a simple query against pg_stat_activity to find the oldest 'idle in transaction', and if there are any more than, say, 1 hour old, it's worth tracking down why they are happening and hammering the developers to fix it. Oracle developers working in Java seem to generate a lot of these (speaking from experience) if they aren't careful to avoid it. Postgres JDBC starts a transaction on a simple SELECT, and if the app then just sits there doing nothing, that transaction stays open indefinitely. I had a lot of pushback from developers insisting that SELECTs should not need a commit. the one big thing I don't see mentioned in your list above is monitoring replication -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
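a sketch of that pg_stat_activity check as SQL, using the one-hour threshold suggested above — run it from psql or your monitoring framework:

```shell
# SQL sketch: list 'idle in transaction' sessions older than one hour.
SQL=$(cat <<'EOF'
SELECT pid, usename, now() - xact_start AS open_for
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND now() - xact_start > interval '1 hour'
ORDER BY xact_start;
EOF
)
echo "$SQL"
```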
Re: [GENERAL] Questions regarding JSON processing
On 4/25/2017 9:21 PM, Glen Huang wrote: For updating db using JSON requests from clients, that I'm not so sure. Should I directly pass the request JSON to PostgreSQL and ask it to parse this JSON and execute a transaction all by itself, or should I parse it in the server and generate the transaction SQL and execute that on PostgreSQL? The former sounds optimal, but I'm not sure if PostgreSQL is able to walk a JSON structure and run a transaction along the way? Should I do it with PL/pgSQL? It seems functions can't execute a transaction? what does "walk a JSON structure and run a transaction along the way" actually entail? Normally, the application starts a transaction, executes one or more SQL commands, then does a commit or rollback. any JSON processing would happen within this transaction; functions are called within a transaction. If your JSON includes instructions as well as data, I'd be rather cautious of letting a remote client send that directly to the database server unless you can ensure that nothing hostile can be done with it, or completely trust all clients. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
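a minimal sketch of server-side JSON "walking" in PL/pgSQL — the table (widgets) and JSON shape ({"updates": [{"id": ..., "qty": ...}]}) are invented for illustration; the whole function body runs inside the caller's transaction:

```shell
# SQL sketch: a plpgsql function that iterates a jsonb array of updates.
SQL=$(cat <<'EOF'
CREATE OR REPLACE FUNCTION apply_updates(req jsonb) RETURNS void AS $$
DECLARE
  item jsonb;
BEGIN
  -- everything here commits or rolls back with the caller's transaction
  FOR item IN SELECT * FROM jsonb_array_elements(req->'updates') LOOP
    UPDATE widgets SET qty = (item->>'qty')::int
    WHERE id = (item->>'id')::int;
  END LOOP;
END;
$$ LANGUAGE plpgsql;
EOF
)
echo "$SQL"
```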
Re: [GENERAL] Total ram size study
On 4/22/2017 8:27 AM, Melvin Davidson wrote: *Has anyone ever done a study on performance increase via ram increase? I have a client on AWS with 8GB total ram (2GB shared_buffers), and I am curious if doubling the ram to 16GB (4GB shared_buffers) will result in minimizing query response time.* entirely dependent on your data set and workload. If the working set fits in 2GB, then more memory likely won't do that much. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] pg_basebackup issue
On 4/22/2017 2:28 AM, chiru r wrote: I am using Postgresql 9.5 and I have created*backup_admin* user and created *dba_admin *ROLE with SUPERUSER and REPLICATION ,after that GRANT *dba_admin * role to backup_admin user and executed pg_basebackup utility with backup_admin user. role group membership only inherits object rights, like grant ... on table ... to role. it doesn't inherit role attributes like SUPERUSER or REPLICATION; you need to ALTER ROLE to add these to each role. -- john r pierce, recycling bits in santa cruz
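a sketch of that ALTER ROLE fix, using the role names from the thread — object privileges do flow through the grant, but role attributes must be set on the login role itself:

```shell
# SQL sketch: attributes like REPLICATION are not inherited via GRANT.
SQL=$(cat <<'EOF'
-- object privileges DO flow through the role grant:
GRANT dba_admin TO backup_admin;
-- role attributes do NOT, so set them on the login role directly:
ALTER ROLE backup_admin REPLICATION;
EOF
)
echo "$SQL"
```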
Re: [GENERAL] Recover PostgreSQL database folder data
On 4/21/2017 2:13 PM, Edson Lidorio wrote: Apr 21 18:11:41 localhost postgresql96-check-db-dir: "/var/lib/pgsql/9.6/data/" is missing or empty. whats there? ls -la /var/lib/pgsql/9.6/data -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Recover PostgreSQL database folder data
On 4/21/2017 1:14 PM, Edson Lidorio wrote: On 21-04-2017 16:00, Adrian Klaver wrote: Is that really the case? Yes, I have already given permission on this folder and it does not initialize. I'll try to make another copy in another vm. who owns the files IN the folder? try chown -R postgres:postgres /path/to/data -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Recover PostgreSQL database folder data
On 4/21/2017 12:00 PM, Adrian Klaver wrote: Apr 21 04:01:48 localhost postgresql96-check-db-dir: cat: /var/lib/pgsql/9.6/data//PG_VERSION: Permissão negada So you got a permissions error when the script was trying to read PG_VERSION. The suspicious part is this: /var/lib/pgsql/9.6/data//PG_VERSION in particular the // Is that really the case? unix/linux seems quite happy to ignore extra /'s ... # ls -l /var/lib/pgsql/9.3/data//PG_VERSION -rw-------. 1 postgres postgres 4 May 22 2014 /var/lib/pgsql/9.3/data//PG_VERSION -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Recover corrupted data
On 4/19/2017 9:24 AM, Alexandre wrote: 2) We dont use RAID. so just a direct attached single disk drive? is it perchance a 'desktop' type disk? those often have dodgy write buffering and lie about writes (saying they are complete when they are just in a volatile ram buffer). sometimes you can turn this behavior off, depending on the brand and model drive, but do note it will slow your system down a fair bit. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
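a sketch of checking and disabling a SATA drive's volatile write cache with hdparm, as hinted above — /dev/sda is a placeholder, and whether the setting survives a power cycle is drive-dependent:

```shell
# Commands are built as strings here rather than executed, since they
# need root and a real drive; the device path is illustrative.
DEV=/dev/sda
CHECK_CMD="hdparm -W $DEV"
OFF_CMD="hdparm -W 0 $DEV"
echo "$CHECK_CMD   # report the current write-cache setting"
echo "$OFF_CMD # turn the volatile write cache off (slower, but safer)"
```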
Re: [GENERAL] Large data and slow queries
On 4/19/2017 12:31 AM, vinny wrote: Given the number of records, my first thought was either partitioning or partial-indexes. The fewer rows are in the index, the quicker it will be to check, and it's not a lot of work to create separate indexes for lat/long ranges or dates. that only works if the planner can figure out which partitions to use in advance, otherwise it ends up having to scan all the partitions. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
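a sketch of the planner constraint just described — with old-style constraint exclusion, only a predicate the planner can evaluate at plan time lets it skip partitions; the events table and partitioning column are invented for illustration:

```shell
# SQL sketch: prunable vs non-prunable predicates on a partitioned table.
SQL=$(cat <<'EOF'
-- prunable: the literal can be compared against each partition's CHECK
SELECT count(*) FROM events WHERE created_at >= '2017-04-01';
-- not prunable: a volatile expression forces scanning every partition
SELECT count(*) FROM events
WHERE created_at >= now() - random() * interval '1 day';
EOF
)
echo "$SQL"
```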
Re: [GENERAL] Large data and slow queries
On 4/18/2017 9:01 PM, Samuel Williams wrote: We want the following kinds of query to be fast: SELECT ... AND (latitude > -37.03079375089291 AND latitude < -36.67086424910709 AND longitude > 174.6307139779924 AND longitude < 175.0805140220076); I wonder if GIST would work better if you use the native POINT type, and compared it like mypoint <@ BOX '((174.6307139779924,-37.03079375089291),(175.0805140220076,-36.67086424910709))' with a gist index on mypoint... but, it all hinges on which clauses in your query are most selective, that's where you want an index. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
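a sketch of that POINT + GiST approach — the locations table and column names are invented for illustration; the box is the same one as in the query above:

```shell
# SQL sketch: a point column with a GiST index, queried by box containment.
SQL=$(cat <<'EOF'
ALTER TABLE locations ADD COLUMN mypoint point;
UPDATE locations SET mypoint = point(longitude, latitude);
CREATE INDEX ON locations USING gist (mypoint);
SELECT * FROM locations
WHERE mypoint <@ box '((174.6307139779924,-37.03079375089291),(175.0805140220076,-36.67086424910709))';
EOF
)
echo "$SQL"
```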
Re: [GENERAL] Error During PostGIS Build From Source on Linux
On 4/15/2017 10:49 AM, Adrian Klaver wrote: Probably going to involve dealing with ldconfig. Look in /etc/ld.so.conf and see if the directory that contains libproj.so.12 is in there? the catch-22 is, /etc/ld.so.conf is a global thing, and this guy is building all his stuff to run under his /home/username/ (where, btw, selinux might get unhappy with you). instead, if its private stuff like that, use LD_LIBRARY_PATH in the environment of the processes that need the libraries. or link with the paths hard coded, eg -Wl,-rpath=/home/username/lib64 when linking the packages that refer to these libraries. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
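a sketch of those two alternatives — paths, the myapp name, and the -lproj flag are illustrative:

```shell
# Option 1: point the dynamic loader at the private lib dir at run time.
export LD_LIBRARY_PATH="$HOME/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
# Option 2: bake the search path in at link time instead (shown as a
# string, since no compiler or sources are assumed here).
LINK_CMD='gcc -o myapp myapp.o -L/home/username/lib64 -lproj -Wl,-rpath,/home/username/lib64'
echo "$LINK_CMD"
```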
Re: [GENERAL] Error During PostGIS Build From Source on Linux
On 4/12/2017 10:14 AM, Osahon Oduware wrote: Hi All, I am working on a Linux OS (Centos 6.5). I built GDAL successfully from source as below: ... I sure need the raster support to be enabled in PostGIS. Could anyone help me out with this. if you install those packages from the yum repository, doesn't postgis include raster support? assuming you want pg 9.5 (if another, replace the 95's below with the version you want...) # yum install https://download.postgresql.org/pub/repos/yum/9.5/redhat/rhel-6-x86_64/pgdg-centos95-9.5-3.noarch.rpm # yum install gdal{,-devel} postgresql95{,-server,-devel,-contrib} postgis2_95{,-devel,-utils} -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general