[GENERAL] ORDER with CASE and Random for each case
Hi,

I have a view that I would like to sort, with the results divided into 3 groups. Each of these 3 groups should then have a random sort order. Since I am paging with OFFSET and LIMIT, the random order must stay the same across pages. For example:

SELECT user_id, age
FROM view_users
ORDER BY CASE WHEN age < 20 THEN 1
              WHEN age < 50 THEN 2
              ELSE 3
         END
OFFSET 0 LIMIT 20;

If I have 30 users in each age group, I want the 3 groups ordered randomly, but with the order maintained while paging. The way I would do it now is to call setseed() and then UNION three SELECTs, one per condition, each using random(). Is there a better and more efficient way to do it in one query?

Thanks for any suggestions
A
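A possible single-query variant of the setseed() approach described in the question (a sketch, not a verified answer): seed the generator, then order by the CASE bucket first and by random() within each bucket. Because the sequence is seeded, re-running the query with a different OFFSET in the same session can repeat the same ordering, though this relies on the rows being visited in the same plan order each time, so it should be verified against the actual data.

```sql
SELECT setseed(0.42);  -- any value in [-1, 1]; must be re-run before each page's query

SELECT user_id, age
FROM view_users
ORDER BY CASE WHEN age < 20 THEN 1
              WHEN age < 50 THEN 2
              ELSE 3
         END,
         random()
OFFSET 0 LIMIT 20;
```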
[GENERAL] Searching array for multiple items
Hi,

I can search an array with

1 = ANY('{1,3,4,7}'::int[])

but I need to check for one or multiple items in the array, e.g.

'{1,7,3}' = ANY('{1,3,4,7}'::int[])

I need to check whether a) all items exist in the array, and b) at least one item exists in the array. Is there an operator that covers these two cases? Does the order of the left and right side matter? Right now I have a small function, but I guess there is a more efficient way.

Thanks for any help.
A
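For reference, PostgreSQL's built-in array operators appear to cover both cases directly (a sketch):

```sql
-- (a) all items exist in the array: <@ ("is contained by");
--     here the order of the sides matters (left must be contained in right)
SELECT ARRAY[1,7,3] <@ '{1,3,4,7}'::int[];  -- true

-- (b) at least one item exists: && ("overlap"); this operator is symmetric
SELECT ARRAY[1,7,9] && '{1,3,4,7}'::int[];  -- true (1 and 7 match)
```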
[GENERAL] Extract data from JSONB
Hi,

I need some help with extracting data from JSON. I have the following jsonb field "modules":

{
  "accounts":  {"status": true},
  "admin":     {"status": true},
  "calendar":  {"status": false},
  "chat":      {"status": true},
  "contacts":  {"status": true},
  "dashboard": {"status": false},
  "help":      {"status": true}
}

How can I convert that into one row per module, filtered on status? For example, if I only want the active (true) modules:

 module   | status
----------+--------
 accounts | true
 admin    | true
 contacts | true
 help     | true

Thanks for any advice
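One way to get per-module rows, sketched with jsonb_each() (the table name "users" and column name "modules" are assumptions based on the post):

```sql
SELECT key            AS module,
       value ->> 'status' AS status
FROM users, jsonb_each(modules)
WHERE (value ->> 'status')::boolean;  -- keep only the active modules
```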
[GENERAL] Retrieving comment of rules and triggers
Hi,

is there a way to retrieve the comments on rules and triggers? I worked it out for functions, tables, and views, but am kind of stuck with rules and triggers.

Any help is appreciated.

Thanks
Alex
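For what it's worth, a sketch of how the same obj_description() approach that works for tables and functions can be pointed at the rule and trigger catalogs:

```sql
-- comments on triggers (skipping internally generated ones)
SELECT tgname, obj_description(oid, 'pg_trigger') AS comment
FROM pg_trigger
WHERE NOT tgisinternal;

-- comments on rules
SELECT rulename, obj_description(oid, 'pg_rewrite') AS comment
FROM pg_rewrite;
```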
[GENERAL] Keeping top N records of a group
Hi,

I want to archive data and am looking for a query to replace a rather slow function I am currently using. The idea is that for every sym and doc_key I want to keep the records of the top 2 sources. E.g., in the table below I want to archive sym 1022 of source 3000 but keep the 4 records from sources 4045 and 4081. Any help on this would be appreciated.

 source | sym  | doc_id  | doc_key
--------+------+---------+----------
   3000 | 1022 |  551008 | 23834363  <-- Archive
   3000 | 1022 |  551008 | 23834363  <-- Archive
   4045 | 1022 | 3699290 | 23834363
   4045 | 1022 | 3699290 | 23834363
   4081 | 1022 | 3811196 | 23834363
   4081 | 1022 | 3811196 | 23834363
   3000 | 1028 |  550997 | 23834363
   3468 | 1085 | 1740526 | 23834363
   3000 | 1149 |  551017 | 23834363
   4045 | 1149 | 3699291 | 23834363
   3000 | 12   |  551015 | 23834363
   3000 | 12   |  551015 | 23834363
   3951 | 12   | 3147700 | 23834363
   3951 | 12   | 3147700 | 23834363
   3000 | 13   |  551000 | 23834363
   4045 | 13   | 3699283 | 23834363
   3000 | 1327 |  551010 | 23834363
   3971 | 1327 | 3394469 | 23834363
   3000 | 15   |  551001 | 23834363
   4045 | 15   | 3699284 | 23834363
   3000 | 16   |  551002 | 23834363
   4045 | 16   | 3699285 | 23834363
   3000 | 18   |  551013 | 23834363
   4045 | 18   | 3699286 | 23834363
   3000 | 257  |  551005 | 23834363
   3951 | 257  | 3147701 | 23834363
   3000 | 2795 |  551011 | 23834363  <-- Archive
   3459 | 2795 | 1710571 | 23834363  <-- Archive
   3905 | 2795 | 2994791 | 23834363
   4045 | 2795 | 3699292 | 23834363
   3060 | 2913 |  856199 | 23834363
   3000 | 2954 |  551012 | 23834363  <-- Archive
   3971 | 2954 | 3394470 | 23834363
   4212 | 2954 | 4650870 | 23834363
   3183 | 3427 | 1055492 | 23834363  <-- Archive
   3971 | 3427 | 3394471 | 23834363
   4248 | 3427 | 4763105 | 23834363
   3188 | 594  | 1062642 | 23834363  <-- Archive
   3188 | 594  | 1062642 | 23834363  <-- Archive
   3191 | 594  | 1067501 | 23834363  <-- Archive
   3191 | 594  | 1067501 | 23834363  <-- Archive
   3192 | 594  | 1068391 | 23834363  <-- Archive
   3192 | 594  | 1068391 | 23834363  <-- Archive
   3199 | 594  | 1096070 | 23834363  <-- Archive
   3199 | 594  | 1096070 | 23834363  <-- Archive
   3303 | 594  | 1305467 | 23834363
   3303 | 594  | 1305467 | 23834363
   4117 | 594  | 4000987 | 23834363

Thanks a lot for any ideas.
Alex
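A window-function sketch of the "top 2 sources" selection (the table name "docs" is made up, and it assumes "top" means the highest source numbers, which matches the archive arrows above):

```sql
WITH ranked AS (
  SELECT source, sym, doc_id, doc_key,
         dense_rank() OVER (PARTITION BY sym, doc_key
                            ORDER BY source DESC) AS rnk
  FROM docs
)
SELECT source, sym, doc_id, doc_key
FROM ranked
WHERE rnk > 2;  -- everything beyond the top 2 sources = archive candidates
```

dense_rank() (rather than row_number()) keeps duplicate records of the same source together in the same rank.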
[GENERAL] Log Monitoring with PG Admin
Hi,

I am running an RDS instance on AWS, but monitoring logs is a bit cumbersome. Is there a way to do this through pgAdmin3, like tailing the logfile? Also, does anyone know when 9.5 will be available on AWS?

Thanks
Alex
[GENERAL] Cannot install Extension plperl in 9.5
Hi,

I installed 9.5 as described in http://tecadmin.net/install-postgresql-9-5-on-centos/ on CentOS 7. Is there a problem with the repo, or am I missing something?

[local]:template1=# \dx
List of installed extensions
     Name      | Version |   Schema   | Description
---------------+---------+------------+--------------------------------------------------------------
 cube          | 1.0     | public     | data type for multidimensional cubes
 dblink        | 1.1     | public     | connect to other PostgreSQL databases from within a database
 earthdistance | 1.0     | public     | calculate great-circle distances on the surface of the Earth
 plpgsql       | 1.0     | pg_catalog | PL/pgSQL procedural language
(4 rows)

[local]:template1=# CREATE EXTENSION plperl;
ERROR:  could not open extension control file "/usr/pgsql-9.5/share/extension/plperl.control": No such file or directory

[root@localhost 06:14:37 extension]# find /usr/pgsql-9.5/ | grep perl
/usr/pgsql-9.5/lib/hstore_plperl.so
/usr/pgsql-9.5/include/server/lib/hyperloglog.h
/usr/pgsql-9.5/include/server/plperl.h
/usr/pgsql-9.5/share/extension/hstore_plperl.control
/usr/pgsql-9.5/share/extension/hstore_plperlu--1.0.sql
/usr/pgsql-9.5/share/extension/hstore_plperlu.control
/usr/pgsql-9.5/share/extension/hstore_plperl--1.0.sql

Thanks for any advice
Alex
[GENERAL] Cannot Create Objects
Hi,

I am having a few problems with access permissions. When I create a new role with NOCREATEUSER and then create a database for that role, I can connect to the DB, but when trying to create a db object I get ERROR: permission denied for schema public. Strangely though, if the role is created with CREATEUSER, I don't have any problems. Here is what I want to do:

1. Create a DBO role, e.g. dbo_xxx, with NOCREATEDB NOCREATEUSER
2. Create a db mydb WITH OWNER dbo_xxx
3. REVOKE all connection rights from public
4. GRANT connection rights only to dbo_xxx
5. GRANT all create rights on mydb TO dbo_xxx, allowing the user to load the db schema

This is what I tried:

REVOKE ALL ON SCHEMA public FROM PUBLIC;
CREATE USER dbo_xxx WITH PASSWORD 'mypass' NOCREATEDB NOCREATEUSER;
CREATE DATABASE my_db WITH OWNER dbo_xxx ENCODING 'UTF8';
REVOKE CONNECT ON DATABASE my_db FROM PUBLIC;
GRANT CONNECT ON DATABASE my_db TO dbo_xxx;
GRANT ALL PRIVILEGES ON DATABASE my_db TO dbo_xxx;

-- After schema is loaded
CREATE USER read_only WITH PASSWORD 'mypass' NOCREATEDB NOCREATEUSER;
REVOKE ALL ON ALL TABLES IN SCHEMA public FROM PUBLIC;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO read_only;

But I end up with permission denied errors. Does anyone have a suggestion for how to get this to work? Did I mess up permissions in the public schema? Any help and suggestions are greatly appreciated.

Alex
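A sketch of the grant that appears to be missing: REVOKE ALL ON SCHEMA public FROM PUBLIC also takes away CREATE and USAGE on the schema, so after that the owner role needs them back explicitly. This must be run while connected to my_db, since the public schema is per-database (role names taken from the post above):

```sql
GRANT USAGE, CREATE ON SCHEMA public TO dbo_xxx;  -- lets dbo_xxx create objects
GRANT USAGE ON SCHEMA public TO read_only;        -- lets read_only access (not create) objects
```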
[GENERAL] Creating Report for PieChart
Hello,

I need to process some statistics for a pie chart (JSON) where I only want to show a maximum of 8 slices. If I have more data points, like in the table below, I need to combine the rest into a slice called "Others". If there are 8 or fewer, I use them as is. I am currently doing this with a plperl function, which works well, but I was wondering out of curiosity if it could be done within a SQL query. Has anyone done something similar who could point me in the right direction?

SELECT count(*), country_name
FROM stats_archive
WHERE id = 400
GROUP BY country_name
ORDER BY count DESC;

 count | country_name
-------+-------------------
   302 | Malaysia
    65 | Singapore
    57 | Thailand
    26 | Indonesia
    15 | France
    14 | United States
    14 | India
    13 | Philippines
    12 | Vietnam
    10 | Republic of Korea
    10 | Canada
     7 | Australia
     6 | Brazil
     6 | Czech Republic
     5 | Switzerland
     4 | Saudi Arabia
     3 | Ireland
     3 | Japan
     3 | Sweden
     3 | South Africa
     3 | Belarus
     3 | Colombia
     3 | United Kingdom
     1 | Peru

Desired result:

 country_name  | count | perc
---------------+-------+------
 Malaysia      |   302 | 51.4
 Singapore     |    65 | 11.0
 Thailand      |    57 |  9.7
 Indonesia     |    26 |  4.4
 France        |    15 |  2.6
 United States |    14 |  2.4
 India         |    14 |  2.4
 Others        |    95 | 16.1
 Total         |   588 |  100

Thanks a lot for any suggestions
Alex
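A pure-SQL sketch of the top-7-plus-Others rollup (same table as above; percentages are computed against the grand total with a window function):

```sql
WITH counted AS (
  SELECT country_name,
         count(*) AS cnt,
         row_number() OVER (ORDER BY count(*) DESC) AS rn
  FROM stats_archive
  WHERE id = 400
  GROUP BY country_name
)
SELECT CASE WHEN rn <= 7 THEN country_name ELSE 'Others' END AS country_name,
       sum(cnt)                                           AS count,
       round(100.0 * sum(cnt) / sum(sum(cnt)) OVER (), 1) AS perc
FROM counted
GROUP BY 1
ORDER BY min(rn);  -- named slices first, 'Others' last
```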
[GENERAL] Left Join with Limit 1
Hello,

I am trying to extract IP addresses from GeoLite by joining two tables as posted below. Running the queries individually, with a LIMIT on the GeoLite IP db, the results come back almost immediately (1-2 ms), but the first query takes 2-3 seconds. Is there a way to use a limit in the join?

Thanks for any advice on this.
A

SELECT S.referrer_ip, I.geoname_id
FROM viewing_stats AS S
LEFT JOIN geolite_city_ip4 AS I ON (S.referrer_ip::inet <<= network)
WHERE viewing_id = 74;

  referrer_ip   | geoname_id
----------------+------------
 111.93.173.230 |    1269750
(1 row)
Time: 2609.125 ms

SELECT geoname_id, network
FROM geolite_city_ip4
WHERE '111.93.173.230'::inet <<= network LIMIT 1;

 geoname_id |     network
------------+-----------------
    1269750 | 111.93.168.0/21
(1 row)
Time: 1.926 ms

SELECT geoname_id, network
FROM geolite_city_ip4
WHERE '111.93.173.230'::inet <<= network;

 geoname_id |     network
------------+-----------------
    1269750 | 111.93.168.0/21
(1 row)
Time: 645.999 ms

EXPLAIN SELECT S.referrer_ip, I.geoname_id
FROM viewing_stats AS S
LEFT JOIN geolite_city_ip4 AS I ON (S.referrer_ip::inet <<= network)
WHERE viewing_id = 74;

                                     QUERY PLAN
------------------------------------------------------------------------------------------
 Nested Loop Left Join  (cost=0.00..242446.05 rows=3746698 width=36)
   Join Filter: ((s.referrer_ip)::inet <<= i.network)
   ->  Seq Scan on viewing_stats s  (cost=0.00..16.62 rows=3 width=32)
         Filter: (viewing_id = 74)
   ->  Materialize  (cost=0.00..74411.99 rows=2497799 width=11)
         ->  Seq Scan on geolite_city_ip4 i  (cost=0.00..49725.99 rows=2497799 width=11)
(6 rows)
Time: 1.326 ms

SELECT S.referrer_ip FROM viewing_stats AS S WHERE viewing_id = 74;
  referrer_ip
----------------
 111.93.173.230
(1 row)
Time: 1.268 ms
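One way to get a per-row LIMIT into the join is LEFT JOIN LATERAL (available since PostgreSQL 9.3); a sketch against the tables above:

```sql
SELECT S.referrer_ip, I.geoname_id
FROM viewing_stats AS S
LEFT JOIN LATERAL (
    SELECT geoname_id
    FROM geolite_city_ip4
    WHERE S.referrer_ip::inet <<= network
    LIMIT 1                      -- stop at the first matching network per outer row
) AS I ON true
WHERE viewing_id = 74;
```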
[GENERAL] Grant SELECT/Execute to View/Function but not underlying Table
Hi,

is it possible to grant SELECT on views and EXECUTE on functions without also having to grant the user SELECT privileges on the tables used in those views or functions? That way I could create read-only users on a website and limit their access to the bare minimum.

Thanks in advance for any advice on this
Alex
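As far as I can tell, views are checked against the view owner's table privileges rather than the caller's, and SECURITY DEFINER does the analogous thing for functions; a sketch with made-up table and role names:

```sql
-- The view owner can read secret_table; web_readonly only needs rights on the view.
CREATE VIEW report AS SELECT id, name FROM secret_table;
GRANT SELECT ON report TO web_readonly;   -- no grant on secret_table needed

-- For functions, SECURITY DEFINER runs the body with the owner's privileges.
CREATE FUNCTION report_count() RETURNS bigint
LANGUAGE sql SECURITY DEFINER
AS $$ SELECT count(*) FROM secret_table $$;
GRANT EXECUTE ON FUNCTION report_count() TO web_readonly;
```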
[GENERAL] Inject rows into query results
Hi,

I am having a problem where I need to inject rows into a query result. I have a list of ad records, let's say car listings, which I select from a table. The order of the results is given either by the user or by a default. I now want to inject featured car ads (coming from the same table) after every 5th record. Right now I am doing this on the PHP side by calling the query twice, but I am wondering if there is a way to run only one query. The main problems are that a) I am using paging, so the original listing should not be affected, and b) the injected records should not be counted in the results returned for the current page. Does anyone have an idea how to solve that?

Thanks
Alex
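A single-query sketch of the interleaving idea, using computed sort keys so that a featured row lands after every 5th normal row. All table and column names here are invented, and note this does not by itself solve the paging requirements (a) and (b), which would still need handling in an outer query or the application:

```sql
WITH normal AS (
  SELECT id, title, row_number() OVER (ORDER BY created_at DESC) AS rn
  FROM listings WHERE NOT featured
),
feat AS (
  SELECT id, title, row_number() OVER (ORDER BY random()) AS rn
  FROM listings WHERE featured
)
SELECT id, title
FROM (
  SELECT id, title, rn * 10     AS sort_key FROM normal  -- keys 10, 20, 30, ...
  UNION ALL
  SELECT id, title, rn * 50 + 5 AS sort_key FROM feat    -- keys 55, 105, ... (after every 5th row)
) AS merged
ORDER BY sort_key;
```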
[GENERAL] Rounding Float Array
Hello,

I have a float array holding geo location information:

              geoloc
-----------------------------------
 {5.3443133704554,100.29457569122}
 {5.3885574294704,100.29601335526}
 {3.1654978750403,101.60915851593}
 {5.3766154817748,100.31472444534}
 {3.1545014704258,101.70036971569}
(5 rows)

Is there an easy way to round all values to 4 decimals? I can round the individual values and return them separately, but I need to return them as an array.

   lat   |   long
---------+-----------
 5.34431 | 100.29458
 5.38856 | 100.29601
 3.16550 | 101.60916
 5.37662 | 100.31472
 3.15450 | 101.70037
(5 rows)

Any suggestion is highly appreciated.

Thanks
Alex
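A sketch of rounding in place: unnest the array, round each element through numeric (round(float8, int) does not exist, hence the cast), and re-aggregate in the original element order. It assumes a table t(geoloc float8[]):

```sql
SELECT (SELECT array_agg(round(x::numeric, 4) ORDER BY ord)
        FROM unnest(geoloc) WITH ORDINALITY AS u(x, ord)) AS geoloc_rounded
FROM t;
```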
[GENERAL] PLPerl Trigger to update text search
Hi,

I am trying to update a tsvector field through a plperl trigger:

$_TD->{new}{text_search} = to_tsvector('pg_catalog.english', 'text1 text2');

but plperl does not seem to like that:

ERROR: Undefined subroutine main::to_tsvector called

Has anyone done this who could help me out? Is it actually possible?

Thanks in advance
Alex
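plperl cannot call SQL-level functions like to_tsvector() directly, but it can reach them through SPI; a sketch (the column names text1/text2 are made up):

```sql
CREATE OR REPLACE FUNCTION update_text_search() RETURNS trigger AS $$
    my $doc = ($_TD->{new}{text1} // '') . ' ' . ($_TD->{new}{text2} // '');
    # run to_tsvector via SPI, quoting the document text safely
    my $rv = spi_exec_query(
        "SELECT to_tsvector('pg_catalog.english', " . quote_literal($doc) . ") AS ts");
    $_TD->{new}{text_search} = $rv->{rows}[0]{ts};
    return 'MODIFY';   -- tell PostgreSQL the NEW row was changed
$$ LANGUAGE plperl;
```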
[GENERAL] Sorting CSV string and removing Duplicates
Hello,

I have a csv string in a text field that is unsorted and contains duplicates. Is there a simple way to remove these and sort the string? E.g.

2,18,20,23,1,27,1,2,8,16,17,18,20,22,23,27

I tried string_to_array and unique, but that did not work... Any suggestions on how to do this without writing a function? Any help is appreciated.

Thanks
A
Re: [GENERAL] Sorting CSV string and removing Duplicates
Hi Dinesh,

yes, that's the one I was looking for. Thanks a lot!!!

On Tue, Jul 28, 2015 at 1:32 AM, dinesh kumar dineshkuma...@gmail.com wrote:

  On Mon, Jul 27, 2015 at 12:57 PM, dinesh kumar dineshkuma...@gmail.com wrote:

    On Mon, Jul 27, 2015 at 12:53 PM, Alex Magnum magnum11...@gmail.com wrote:

      Hello, I have a csv string in a text field that is unsorted and contains
      duplicates. Is there a simple way to remove these and sort the string?
      E.g. 2,18,20,23,1,27,1,2,8,16,17,18,20,22,23,27
      I tried string_to_array and unique but that did not work... Any suggestions
      on how to do this without writing a function? Any help is appreciated.

    Are you looking for this?

    postgres=# SELECT unnest(string_to_array(t, ',')) from test group by 1;
     unnest
    --------
     2
     18
     8
     20
     22
     16
     27
     17
     23
     1
    (10 rows)

    OR it might be something like this:

    postgres=# WITH sortedstring as
    postgres-# (
    postgres(#   SELECT unnest(string_to_array(t, ','))::int from test group by 1 ORDER BY 1
    postgres(# ) SELECT array_agg(unnest) FROM sortedstring;
              array_agg
    ------------------------------
     {1,2,8,16,17,18,20,22,23,27}
    (1 row)

    Regards,
    Dinesh
    manojadinesh.blogspot.com

Thanks
A
[GENERAL] Working with Array of Composite Type
Hello,

I am struggling to find the right way to deal with arrays of composite types. Below is an example of the general setup, where I defined an "image" type to describe the image properties. A user can have multiple images stored. What is the best way to:

a) search for an image within the array, e.g. WHERE image.id = 3 or WHERE is_private IS TRUE
b) update an image inside the array, e.g. SET is_private = FALSE WHERE image.id = 2
c) delete an image by its id, e.g. WHERE image.id = 2
d) create a listing like in the unnest below, but with the field names of the type, e.g.

 user_id | id |     caption     | is_primary | is_private
---------+----+-----------------+------------+------------
       1 |  1 | This is Image A | f          | f
       1 |  2 | This is Image B | f          | f

CREATE TYPE image AS (
  id         smallint,
  caption    text,
  is_primary boolean,
  is_private boolean
);

CREATE TABLE users (
  user_id    serial NOT NULL,
  curr_count smallint,  -- just an image identifier
  images     image[]
);

-- create the initial user record
INSERT INTO users VALUES (default, 0, null);

-- inserting new elements
UPDATE users SET curr_count = curr_count + 1,
  images = images || ARRAY[row(curr_count + 1, 'This is Image A', false, false)::image]
WHERE user_id = 1;
UPDATE users SET curr_count = curr_count + 1,
  images = images || ARRAY[row(curr_count + 1, 'This is Image B', false, false)::image]
WHERE user_id = 1;
UPDATE users SET curr_count = curr_count + 1,
  images = images || ARRAY[row(curr_count + 1, 'This is Image C', false, true)::image]
WHERE user_id = 1;
UPDATE users SET curr_count = curr_count + 1,
  images = images || ARRAY[row(curr_count + 1, 'This is Image D', true, false)::image]
WHERE user_id = 1;

-- list the images
SELECT user_id, curr_count, unnest(images) AS limages FROM users WHERE user_id = 1;

 user_id | curr_count |         limages
---------+------------+-------------------------
       1 |          4 | (1,This is Image A,f,f)
       1 |          4 | (2,This is Image B,f,f)
       1 |          4 | (3,This is Image C,f,t)
       1 |          4 | (4,This is Image D,t,f)

Any help or suggestion on this topic is highly appreciated.

Thanks
Alex
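For (a) and (d), a sketch using unnest() in the FROM clause plus composite field access; the (col).field syntax exposes the type's named fields:

```sql
-- (d) listing with the type's field names
SELECT u.user_id,
       (img).id, (img).caption, (img).is_primary, (img).is_private
FROM users AS u, unnest(u.images) AS img
WHERE u.user_id = 1;

-- (a) searching inside the array, e.g. for image id = 3
SELECT u.user_id, (img).*
FROM users AS u, unnest(u.images) AS img
WHERE (img).id = 3;
```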