Naked EXISTS vs SELECT EXISTS very different performance.
Hi everyone. Can someone tell me why these two equivalent queries, one using a "naked" EXISTS and the other an EXISTS inside a SELECT list, perform so differently? I can see that the slow one scans the entire child table, while the fast one only scans children that have the same parent_id as the parent. The tables have been vacuumed, and I have confirmed that both queries give the same results across the whole table. The setup script is attached.

Cheers and regards.

Slow: https://explain.depesz.com/s/DzcK
Fast: https://explain.depesz.com/s/EftS
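For readers without the attachment, the two query shapes being compared are roughly the following sketch (the parent/child table and column names are assumptions, since the actual setup script is not shown in the archive):

```sql
-- "Naked" EXISTS in the WHERE clause: the planner can turn this into a
-- semi-join and probe only child rows matching each parent.
SELECT p.id
FROM   parent p
WHERE  EXISTS (SELECT 1 FROM child c WHERE c.parent_id = p.id);

-- EXISTS in the SELECT list: evaluated as a correlated subplan per parent
-- row, which the planner may execute quite differently.
SELECT p.id,
       EXISTS (SELECT 1 FROM child c WHERE c.parent_id = p.id) AS has_children
FROM   parent p;
```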
Re: Unexpected behavior sorting strings
Many thanks! That clarifies things well.

Jimmy

On Wed, Apr 8, 2020, at 11:49 AM, Adrian Klaver wrote:
> On 4/8/20 7:35 AM, Jimmy Thrasher wrote:
> > Am I missing something about how sorting works?
>
> I believe you are looking for 'C' collation:
>
> test=# select unnest(array[('> N' collate "C"), ('< S' COLLATE "C")]) as s order by s;
>   s
> -----
>  < S
>  > N
> (2 rows)
>
> For more information see:
>
> https://www.postgresql.org/docs/12/collation.html
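If byte-order sorting is wanted permanently rather than per query, the collation can also be attached to the column itself. A minimal sketch (the table and column names are made up for illustration):

```sql
CREATE TABLE labels (
    name text COLLATE "C"   -- byte-wise (strcmp-like) ordering for this column
);

INSERT INTO labels VALUES ('> N'), ('< S');

-- ORDER BY now uses the column's "C" collation, so '< S' (0x3c) sorts
-- before '> N' (0x3e) without needing COLLATE in every query.
SELECT name FROM labels ORDER BY name;
```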
Unexpected behavior sorting strings
I'm seeing some unexpected behavior when sorting some strings, and it indicates I don't fully understand how postgresql string sorting works. As I understand it, postgresql sorts strings roughly like strcmp does: character by character, based on encoding value.

In particular, I'm seeing the following. I would expect "< S" to come first, because "<" (0x3c) is less than ">" (0x3e).

```
supercatdev=# select unnest(array['> N', '< S']) as s order by s;
  s
-----
 > N
 < S
(2 rows)
```

I've broken this down further:

```
supercatdev=# select '> N' < '< S';
 ?column?
----------
 t
(1 row)
```

Am I missing something about how sorting works?

Metadata:
- postgresql 9.5.19, running on Ubuntu 16.04 LTS
- encoding, collate, and ctype are all UTF8 or en_US.UTF-8, as appropriate

Thanks!
Jimmy
Re: PostgreSQL10.x client Vs. PostgreSQL 11.x server
Hi Deepti,

As Peter pointed out (and I should have clarified), the server-side features of each version work regardless of the client. So the pg11 client will support all of the features of a pg10 server; they should be fully compatible.

Best regards,
Jimmy

Jimmy Angelakos
Senior PostgreSQL Architect
2ndQuadrant - PostgreSQL Solutions for the Enterprise
https://www.2ndQuadrant.com/
Re: PostgreSQL10.x client Vs. PostgreSQL 11.x server
Hi Deepti,

By using an older client, you will be missing out on the additional features that Postgres 11 (and its corresponding client) supports. You can identify those in the release notes: https://www.postgresql.org/docs/release/11.0/

You can generally use an older client with a newer release, but for the above reasons you should try to match server and client versions.

Best regards,
Jimmy

Jimmy Angelakos
Senior PostgreSQL Architect
2ndQuadrant - PostgreSQL Solutions for the Enterprise
https://www.2ndQuadrant.com/

On Fri, 13 Mar 2020 at 08:19, Deepti Sharma S wrote:
>
> Hello Team,
>
> Can anyone help us to answer the query below.
>
> DEEPTI SHARMA
> Specialist
> ITIL 2011 Foundation Certified
> BDGS, R
>
> Ericsson
> 3rd Floor, ASF Insignia - Block B Kings Canyon,
> Gwal Pahari, Gurgaon, Haryana 122 003, India
> Phone 0124-6243000
> deepti.s.sha...@ericsson.com
> www.ericsson.com
>
> From: Deepti Sharma S
> Sent: Monday, March 2, 2020 3:20 PM
> To: 'postgres-disc...@mailman.lmera.ericsson.se'
> Subject: PostgreSQL10.x client Vs. PostgreSQL 11.x server
>
> Hello Team,
>
> Can you please confirm the compatibility of the PostgreSQL 10.x client with a PostgreSQL 11.x server.
Re: parsing xml with PG 9.2.4
Hi Mario,

First off, as you will be aware, 9.2 is quite an old Postgres version and is currently unsupported; for security reasons alone, you should upgrade ASAP.

Regardless, this query should work for you:

SELECT xpath('/ProgramInformation/BasicDescription/CreditsList',
             program_information.description, NAMESPACE_ARRAY)
FROM program_information
WHERE id = 8768787;

where NAMESPACE_ARRAY needs to contain your definition for the prefix mpeg7, otherwise you'll get "Namespace prefix is not defined" errors when parsing. You will find more XPath guidance here:
https://www.postgresql.org/docs/9.2/functions-xml.html#FUNCTIONS-XML-PROCESSING

Best regards,
Jimmy

On Mon, 17 Feb 2020 at 16:32, Mario Vlahovic wrote:
>
> Hello Developers,
> I hope you can help me. I'm having trouble parsing some data from my psql table, which I need for further manipulation.
> So my query:
>
> select program_information.description FROM program_information WHERE id = 8768787;
>
> gives me [XML markup stripped in the archive; only the text content survives]:
>
> Zla smrt
> Pet prijateljev, starih nekaj čez dvajset let, v samotni koči najde Knjigo mrtvih. S posnetka, ki so ga napravili arheologi, izvedo, da je bilo starodavno besedilo odkrito med kandarijskimi ruševinami sumerske civilizacije.
> PG
> Bruce Campbell
> Ellen Sandweiss
> Betsy Baker
> Sam Raimi
> 1981
> 2
>
> What I need is the parsed data from the credits: GivenName + FamilyName for all entries. I know it should be doable with xpath but I just can't get it to work :/.
>
> Please help.
>
> Thanks,
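As a concrete sketch of what the namespace array can look like (the MPEG-7 namespace URI below is an assumption; substitute whatever appears in the xmlns declaration of your actual document):

```sql
-- Pull GivenName/FamilyName text nodes for all credit entries.
-- ARRAY[ARRAY[prefix, uri]] maps the 'mpeg7' prefix used in the XPath
-- expression to the document's namespace URI.
SELECT xpath('//mpeg7:GivenName/text() | //mpeg7:FamilyName/text()',
             program_information.description,
             ARRAY[ARRAY['mpeg7', 'urn:mpeg:mpeg7:schema:2001']])
FROM program_information
WHERE id = 8768787;
```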
Re: pg full text search very slow for Chinese characters
It is all default values. I just checked maintenance_work_mem and indeed it is 64MB. At first I gave 2GB of RAM to Docker and later I increased it to 4GB; going from 2GB to 4GB did not make much difference. I will try increasing maintenance_work_mem and see if it helps.

From: Michael Lewis
Date: Wednesday, September 11, 2019, 12:11 AM
To: 黄少君
Cc: "pgsql-general@lists.postgresql.org"
Subject: Re: pg full text search very slow for Chinese characters

>My postgres instance is based on docker image postgres:11 and runs on my MacBook Pro i7 16GB.

How much RAM and such did you give to this VM?

>To my surprise, postgres 11 is extremely slow when creating a full text index. I added a column of tsvector type and tried to create an index on that column. Pg could not finish creating a GIN index for a long time and I had to cancel the execution. I then tried to create a partial full text index for 500 rows and it took postgres 2 to 3 minutes to create the index.

Did you customize any config? maintenance_work_mem specifically would be relevant to the time to create an index, and the default value is only 64MB. Especially if you are running on a spinning hard drive and not an SSD, this could be problematic.
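Raising the setting for just the index build can be done per session; a minimal sketch (the column and index names here are made up, and the 1GB value should be tuned to the RAM actually available):

```sql
-- Session-local override; the server default is only 64MB.
SET maintenance_work_mem = '1GB';

-- GIN builds benefit strongly from a larger maintenance_work_mem,
-- since more of the pending entries can be accumulated in memory.
CREATE INDEX books_text_fts_idx ON books USING gin (text_tsv);

RESET maintenance_work_mem;
```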
pg full text search very slow for Chinese characters
Hi Team,

Can anyone shed some light on why postgres 11 is extremely slow in my case?

I am making a mirror of zh.wikisource.org and I have downloaded 303049 pages and stored them in a postgres 11 database. My postgres instance is based on the docker image postgres:11 and runs on my MacBook Pro i7 16GB. The database schema is as follows:

Table pages(id, url, html, downloaded, inserted_at, updated_at) and books(id, name, info, preface, text, html, url, parent_id, inserted_at, updated_at, info_html, preface_html)

A wikisource web page is downloaded and its html text is inserted into table "pages", column "html". Later, books.{name, info, preface, text, html, info_html, preface_html} are extracted from pages.html. The text column of books is a txt version of the content of the html column of table pages. On average there are 7635 characters (each character is 3 bytes long because of utf-8 encoding) in the text column of table books, and I want to add full text search to books(text). I tried pg_trgm and my own customized token parser https://github.com/huangjimmy/pg_cjk_parser

To my surprise, postgres 11 is extremely slow when creating a full text index. I added a column of tsvector type and tried to create an index on that column. Pg could not finish creating a GIN index for a long time and I had to cancel the execution. I then tried to create a partial full text index for 500 rows, and it took postgres 2 to 3 minutes to create the index. Based on this estimate, pg will need at least one day to create a full GIN full text search index for all 303049 rows of data. I think this is ridiculously slow.

If I try to create an fts index on books(name) or books(info), it takes just 3 minutes; however, name and info are extremely short compared to books(text).

I switched to Elasticsearch and it turned out that Elasticsearch is extremely efficient for my case. It took Elasticsearch 3 hours to index all 303049 rows.

Jimmy Huang
jimmy_hu...@live.com
how to know that one query use the data in the os cache
I use PostgreSQL on Windows Server 2012 R2. I use

select pg_prewarm('tablename', 'read', 'main');

to load data into the OS cache. How can I know whether the database used the data in the OS cache when I run a query such as select * from tablename?

explain (analyze true, buffers true) select * from tablename

The statement above only shows that data in the database cache (shared buffers) was used by the query; it cannot show data coming from the OS cache. I am optimizing query speed by loading data into the OS cache, but the query is just as slow as before using pg_prewarm, so I suspect the database is not using the OS-cached data when querying. I want to speed up queries by using both the OS cache and the database cache.
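There is no direct SQL-level view of the OS cache, but one indirect hint (a sketch only; interpret with care) is to enable track_io_timing. Blocks read from the OS cache still count as "read" in the buffers output, but they complete with very low I/O time, whereas genuine disk reads are much slower:

```sql
SET track_io_timing = on;

EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM tablename;
-- A large "shared read=..." count combined with a tiny
-- "I/O Timings: read=..." value suggests the blocks came from the
-- OS cache rather than from physical disk.
```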
Re:Re: how to clean the cache from databases and operating system
I use Windows Server 2012 R2. How do I drop PostgreSQL's data from the system cache there? If I restart PostgreSQL by restarting the PostgreSQL service, will that drop Postgres' cache?

At 2018-09-26 22:52:08, "Maxence Ahlouche" wrote:

Hi,

On 26 September 2018 at 08:25, jimmy wrote:
> [snip]

To drop the system cache, as per [0]:

echo 3 > /proc/sys/vm/drop_caches

To drop postgres' cache, afaik the easiest is to restart postgres.

If you're like me and too lazy to do all that (and don't care about potentially losing data), you can also install an extension rjuju and I wrote that allows you to do that simply with `SELECT pg_drop_caches`. See [1] for the extension and [2] for how to use it.

[0] https://www.kernel.org/doc/Documentation/sysctl/vm.txt (search for "drop_caches" on that page)
[1] https://github.com/rjuju/pg_dropbuffers
[2] https://maahl.net/pg_dropbuffers
how to clean the cache from databases and operating system
1. When I execute the first SQL query, e.g. select * from tablename;, some data will be loaded into the database cache. How do I clean that data from the cache?
2. When I execute SELECT pg_prewarm('tablename', 'buffer');, data will be loaded into the database cache. How do I clean that data from the cache?
3. When I execute SELECT pg_prewarm('tablename', 'main');, data will be loaded into the OS cache. How do I clean that data from the cache?
Re:Re: how to know whether query data from memory after pg_prewarm
But I use Windows Server 2012 R2, and pgfincore cannot run on Windows. Is there some replacement on Windows?

At 2018-09-19 15:44:06, "Cédric Villemain" wrote:
>On 19/09/2018 at 05:29, Thomas Munro wrote:
>> On Wed, Sep 19, 2018 at 1:35 PM jimmy wrote:
>>> [snip]
>>
>> This is a quick and dirty hack, but it might do what you want:
>>
>> https://github.com/macdice/pgdata_mincore
>>
>> Tested on FreeBSD, not sure how well it'll travel.
>
>You can use the pgfincore extension for that purpose, and more.
>
>https://github.com/klando/pgfincore/blob/master/README.md
>
>--
>Cédric Villemain +33 (0)6 20 30 22 52
>http://2ndQuadrant.fr/
>PostgreSQL: Support 24x7 - Développement, Expertise et Formation
Why the sql is not executed in parallel mode
Why is this SQL not executed in parallel mode? Does the SQL have some problem?

with sql1 as (
  select a.* from snaps a
  where a.f_date between to_date('2018-03-05', 'yyyy-MM-dd')
                     and to_date('2018-03-11', 'yyyy-MM-dd')
),
sql2 as (
  select '1' as pId, PM_TO as pValue, type_code as typeCode, version_no as versionNo, bs as bs, l.order_rule as orderRule
  from sql1, qfpl l where PM_TO is not null and l.pid = 1
  union all
  select '2' as pId, PRTO as pValue, type_code as typeCode, version_no as versionNo, bs as bs, l.order_rule as orderRule
  from sql1, qfpl l where PRTO is not null and l.pid = 2
  union all
  select '3' as pId, PRATO as pValue, type_code as typeCode, version_no as versionNo, bs as bs, l.order_rule as orderRule
  from sql1, qfpl l where PRATO is not null and l.pid = 3
),
sql4 as (
  select typeCode, pId, orderRule, versionNo,
         row_number() over (partition by pId, typeCode order by pValue) as rnn
  from sql2
),
sql5 as (
  select sql4.typeCode as typeCode, sql4.pId as pId, sql4.orderRule as orderRule,
         t.pValue as pValue, sql4.versionNo as versionNo
  from sql4,
       (select sql2.typeCode, sql2.pId, sql2.orderRule,
               (case when sql2.orderRule = 1
                     then PERCENTILE_DISC(0.05) WITHIN GROUP (ORDER BY sql2.pValue)
                     else PERCENTILE_DISC(0.95) WITHIN GROUP (ORDER BY sql2.pValue) end) as pValue,
               (case when sql2.orderRule = 1
                     then (case when round(count(1) * 0.05) - 1 < 0 then 1 else round(count(1) * 0.05) end)
                     else (case when round(count(1) * 0.95) - 1 < 0 then 1 else round(count(1) * 0.95) end) end) as rnn
        from sql2
        group by sql2.typeCode, sql2.pId, sql2.orderRule) t
  where sql4.typeCode = t.typeCode and sql4.pId = t.pId
    and sql4.orderRule = t.orderRule and sql4.rnn = t.rnn
),
sql6 as (
  select sql2.pId, sql2.typeCode as typeCode, count(1) as fCount
  from sql2, sql5
  where sql2.pId = sql5.pId and sql2.typeCode = sql5.typeCode
    and ((sql2.orderRule = 2 and sql2.pValue >= sql5.pValue)
      or (sql2.orderRule = 1 and sql2.pValue <= sql5.pValue))
    and sql2.pId != '22'
  group by sql2.pId, sql2.typeCode
  union
  select sql5.pId, sql5.typeCode, 0 as fCount
  from sql5 where sql5.pId = '22'
  group by sql5.pId, sql5.typeCode
)
select sql5.pId, sql5.typeCode,
       (case when sql5.pId = '22'
             then (select p.d_chn from qlp p
                   where p.version_no = sql5.versionNo and p.cno = sql5.pValue
                     and (p.typeCode = sql5.typeCode or p.typeCode is null))
             else sql5.pValue || '' end) pValue,
       sql6.fCount,
       (case when d.delta = 'Y' then d.dy_val else d.y_val end) yVal,
       (case when d.is_delta = 'Y' then d.dr_val else d.r_val end) rVal,
       f.p_no pNo,
       f.p_name || (case when f.unit = '' then '' else '(' || f.unit || ')' end) pName,
       f.pe_name || (case when f.unit = '' then '' else '(' || f.unit || ')' end) peName,
       c.fp_name fpName,
       f.order_rule as orderRule,
       f.pflag pFlag,
       f.pdesc as pDesc
from sql5, sql6, qfpl f, qpa d, qfp c
where sql5.pId = sql6.pId and sql5.typeCode = sql6.typeCode
  and sql5.pId = f.pid || '' and f.deleted = 0 and f.pid = d.pid
  and sql5.typeCode = d.typeCode and f.fp_id = c.fp_id
order by f.t_sort, c.fp_id, f.p_no
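One likely factor, offered as a hedged note rather than a diagnosis of this exact query: before PostgreSQL 12, scans of common table expressions (CTEs) are parallel-restricted, so a query built entirely out of WITH clauses may get no parallel workers at all. A quick way to check whether parallelism is even available for the underlying scan is to try a simplified query with parallelism strongly encouraged:

```sql
-- Encourage parallel plans, then look for "Gather" nodes in the output.
SET max_parallel_workers_per_gather = 4;
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;

EXPLAIN
SELECT a.* FROM snaps a
WHERE a.f_date BETWEEN to_date('2018-03-05', 'yyyy-MM-dd')
                   AND to_date('2018-03-11', 'yyyy-MM-dd');
-- If this simple scan gets a parallel plan but the full WITH-query does
-- not, the CTE scans (parallel-restricted before PostgreSQL 12) are the
-- most likely cause.
```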
how to know whether query data from memory after pg_prewarm
I use select pg_prewarm('table1','read','main') to load the data of table1 into memory. When I use select count(1) from table1 group by aa to query the data, I find the query is not fast, and I wonder whether it reads the data from memory. It is also slower than Oracle; both Oracle and PostgreSQL have the same table and the same row count. When pg_prewarm uses 'read' mode, the data is put into the OS cache. How can I examine which tables have been pg_prewarmed into the OS cache? I know pg_buffercache, but it only examines tables in PostgreSQL's shared buffers, not tables in the OS cache.
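For the shared-buffers side at least, pg_buffercache can show how much of a table is currently cached; a minimal sketch (requires installing the extension, and for precision you would also filter on reldatabase):

```sql
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Count how many 8kB shared buffers currently hold blocks of table1.
SELECT c.relname, count(*) AS buffers
FROM   pg_buffercache b
JOIN   pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE  c.relname = 'table1'
GROUP  BY c.relname;
```

The OS cache itself has no equivalent SQL view; on Linux, tools like fincore or mincore-based utilities are the usual answer, which is why a Windows replacement is harder to find.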
Re:Re: How to install pgAgent on windows for postresql-bigsql-10.5
Is EnterpriseDB open source and free? Does that distribution have some limits and restrictions?

At 2018-09-06 15:57:30, "Dave Page" wrote:

Hi,

On Thu, Sep 6, 2018 at 6:23 AM, jimmy wrote:
> [snip]

If you use the EDB PostgreSQL installers, you can install pgAgent using StackBuilder. For BigSQL you'll probably have to build and install it manually from source (which isn't exactly easy on Windows, and is dependent on what is included with and how BigSQL is packaged - which I know nothing about).

--
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake
EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
How to install pgAgent on windows for postresql-bigsql-10.5
I used PostgreSQL-10.5-1-win64-bigsql.exe to install the PostgreSQL database. How do I install pgAgent on Windows for postresql-bigsql-10.5? I have been searching articles on installing pgAgent, but they do not work. I found that there is no Windows version of pgAgent on the website www.pgadmin.org, and the PostgreSQL-10.5-1-win64-bigsql.exe installer also does not include pgAgent. When I execute CREATE EXTENSION pgagent, it throws:

ERROR: could not open extension control file "H:/PostgreSQL/pg10/../pg10/share/postgresql/extension/pgagent.control": No such file or directory.

How can I resolve these problems? Thank you.
Re: PostgreSQL 9.6 Temporary files
Thanks all for your responses,

$ du -h $MY_DATA/base/$BASE_OID/ returns 162GB, but when I execute this query:

SELECT stats.relname AS table,
       pg_size_pretty(pg_relation_size(statsio.relid)) AS table_size,
       pg_size_pretty(pg_total_relation_size(statsio.relid) - pg_relation_size(statsio.relid)) AS related_objects_size,
       pg_size_pretty(pg_total_relation_size(statsio.relid)) AS total_table_size,
       stats.n_live_tup AS live_rows
FROM pg_catalog.pg_statio_user_tables AS statsio
JOIN pg_stat_user_tables AS stats USING (relname)
WHERE stats.schemaname = current_schema
UNION ALL
SELECT 'TOTAL' AS table,
       pg_size_pretty(sum(pg_relation_size(statsio.relid))) AS table_size,
       pg_size_pretty(sum(pg_total_relation_size(statsio.relid) - pg_relation_size(statsio.relid))) AS related_objects_size,
       pg_size_pretty(sum(pg_total_relation_size(statsio.relid))) AS total_table_size,
       sum(stats.n_live_tup) AS live_rows
FROM pg_catalog.pg_statio_user_tables AS statsio
JOIN pg_stat_user_tables AS stats USING (relname)
WHERE stats.schemaname = current_schema
ORDER BY live_rows ASC;

I obtain 80GB in total_table_size (half of my database). Where is the missing data?

2018-03-19 19:32 GMT+01:00 Adrian Klaver <adrian.kla...@aklaver.com>:
> On 03/19/2018 10:27 AM, Jimmy Augustine wrote:
>> I tried this query and my database size is equal to 162GB.
>
> Well you can always look in $DATA directly. The database will be under $DATA/base/<database oid>.
>
> You can find the oid like this:
>
> select oid, datname from pg_database where datname='<database name>';
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
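One hedged way to see where the rest of the space lives: the per-table query above only covers user tables in current_schema, so relations in other schemas, system catalogs, and anything the stats views exclude are missed. Summing the main-fork size of every relation, grouped by schema and kind, gives a fuller picture:

```sql
-- Whole-database size as PostgreSQL sees it:
SELECT pg_size_pretty(pg_database_size(current_database()));

-- Size of every relation (tables, indexes, TOAST, catalogs) by schema
-- and relation kind, regardless of which schema it lives in:
SELECT n.nspname,
       c.relkind,
       pg_size_pretty(sum(pg_relation_size(c.oid))) AS size
FROM   pg_class c
JOIN   pg_namespace n ON n.oid = c.relnamespace
GROUP  BY n.nspname, c.relkind
ORDER  BY sum(pg_relation_size(c.oid)) DESC;
```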
Re: PostgreSQL 9.6 Temporary files
2018-03-19 18:25 GMT+01:00 Andreas Kretschmer <andr...@a-kretschmer.de>:
> On 19 March 2018 18:21:42 CET, Jimmy Augustine <jimmy.august...@enyx.fr> wrote:
> > [earlier exchange about where the 68GB figure came from, snipped]
> >
> > I have found this but not sure:
> >
> > SELECT temp_files AS "Temporary files",
> >        temp_bytes AS "Size of temporary files"
> > FROM pg_stat_database db;
>
> That's aggregated. Not current values.

Ah, do you know of some documentation about that?

> Andreas
>
> --
> 2ndQuadrant - The PostgreSQL Support Company
Re: PostgreSQL 9.6 Temporary files
I tried this query and my database size is equal to 162GB.

2018-03-19 18:17 GMT+01:00 Melvin Davidson <melvin6...@gmail.com>:
> On Mon, Mar 19, 2018 at 1:12 PM, Jimmy Augustine <jimmy.august...@enyx.fr> wrote:
> > [earlier exchange snipped]
> >
> > I don't measure this value by my own. I was disappointed by the gap between the two queries, so I checked pgAdmin 4 and I saw this value.
>
> I think your problem is that SELECT pg_size_pretty(pg_total_relation_size('table_name')); only looks at the current database, but SELECT pg_size_pretty(pg_database_size('Database Name')); looks at ALL databases.
>
> Try this query instead to show individual database sizes:
>
> SELECT oid,
>        datname,
>        pg_size_pretty(pg_database_size(datname)) as size_pretty,
>        pg_database_size(datname) as size,
>        (SELECT pg_size_pretty(SUM(pg_database_size(datname))::bigint)
>           FROM pg_database) AS total,
>        ((pg_database_size(datname) / (SELECT SUM(pg_database_size(datname))
>                                         FROM pg_database)) * 100)::numeric(6,3) AS pct
> FROM pg_database
> ORDER BY datname;
>
> --
> Melvin Davidson
> Maj. Database & Exploration Specialist
> Universe Exploration Command – UXC
> Employment by invitation only!
Re: PostgreSQL 9.6 Temporary files
2018-03-19 18:15 GMT+01:00 Adrian Klaver <adrian.kla...@aklaver.com>:
> On 03/19/2018 10:12 AM, Jimmy Augustine wrote:
> > [earlier exchange about table and database sizes, snipped]
> >
> > So where did the 68GB number for temporary files come from?
> >
> > I don't measure this value by my own. I was disappointed by the gap between the two queries, so I checked pgAdmin 4 and I saw this value.
>
> In what section of pgAdmin4?

In the "Statistics" section when I click on my database.

> Or do you know what query it used?

I have found this but not sure:

SELECT temp_files AS "Temporary files",
       temp_bytes AS "Size of temporary files"
FROM pg_stat_database db;

> --
> Adrian Klaver
> adrian.kla...@aklaver.com
Re: PostgreSQL 9.6 Temporary files
Hi, I used this command and I found the same value in the total_size column.

2018-03-19 18:01 GMT+01:00 Melvin Davidson <melvin6...@gmail.com>:
> On Mon, Mar 19, 2018 at 12:45 PM, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
> > [earlier exchange snipped]
>
> I am not sure what your query was that determined table and index sizes, but try using this query instead.
> Note that total_size is the size of the table and all its indexes.
>
> SELECT n.nspname as schema,
>        c.relname as table,
>        a.rolname as owner,
>        c.relfilenode as filename,
>        c.reltuples::bigint,
>        pg_size_pretty(pg_relation_size(quote_ident(n.nspname) || '.' || quote_ident(c.relname))) as size,
>        pg_size_pretty(pg_total_relation_size(quote_ident(n.nspname) || '.' || quote_ident(c.relname))) as total_size,
>        pg_relation_size(quote_ident(n.nspname) || '.' || quote_ident(c.relname)) as size_bytes,
>        pg_total_relation_size(quote_ident(n.nspname) || '.' || quote_ident(c.relname)) as total_size_bytes,
>        CASE WHEN c.reltablespace = 0
>             THEN 'pg_default'
>             ELSE (SELECT t.spcname FROM pg_tablespace t WHERE t.oid = c.reltablespace)
>        END as tablespace
> FROM pg_class c
> JOIN pg_namespace n ON (n.oid = c.relnamespace)
> JOIN pg_authid a ON (a.oid = c.relowner)
> WHERE quote_ident(nspname) NOT LIKE 'pg_%'
>   AND quote_ident(relname) NOT LIKE 'pg_%'
>   AND quote_ident(relname) NOT LIKE 'information%'
>   AND quote_ident(relname) NOT LIKE 'sql_%'
>   AND quote_ident(relkind) IN ('r')
> ORDER BY total_size_bytes DESC, 1, 2;
>
> --
> Melvin Davidson
> Maj. Database & Exploration Specialist
> Universe Exploration Command – UXC
> Employment by invitation only!
Re: PostgreSQL 9.6 Temporary files
2018-03-19 18:09 GMT+01:00 Adrian Klaver <adrian.kla...@aklaver.com>:
> On 03/19/2018 10:04 AM, Jimmy Augustine wrote:
> > [earlier exchange snipped]
> >
> > I used this command and sum result for all database:
> > SELECT pg_size_pretty(pg_total_relation_size('table_name'));
> >
> > And this for complete database:
> > SELECT pg_size_pretty(pg_database_size('Database Name'));
>
> So where did the 68GB number for temporary files come from?

I don't measure this value by my own. I was disappointed by the gap between the two queries, so I checked pgAdmin 4 and I saw this value.

> --
> Adrian Klaver
> adrian.kla...@aklaver.com
Re: PostgreSQL 9.6 Temporary files
2018-03-19 17:45 GMT+01:00 Adrian Klaver <adrian.kla...@aklaver.com>:
> On 03/19/2018 09:31 AM, Jimmy Augustine wrote:
> > Dear Friends,
> >
> > I am newbie to postgresql.
> > I have 162 GB on my database but when I check size of all tables, I approximately obtain 80 GB.
> > I also see that I have 68GB of temporary files however I only found 2.4MB at postgres/data/base/pgsql_tmp.
>
> Exactly how did you determine this?

I used this command and sum result for all database:
SELECT pg_size_pretty(pg_total_relation_size('table_name'));

And this for complete database:
SELECT pg_size_pretty(pg_database_size('Database Name'));

> > Could you tell me what are those temporary files and where are they at?
> > Can I delete some of them?
> >
> > All values come from pgAdmin 4 and checked by my own SQL queries (postgresql-9.6).
>
> Can you show actual queries used?
>
> > I already run vacuum full and there is few dead tuples.
> >
> > Best regards,
> > Jimmy AUGUSTINE
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
Re: PostgreSQL 9.6 Temporary files
Hi Andreas, thanks for your response,

2018-03-19 17:44 GMT+01:00 Andreas Kretschmer <andr...@a-kretschmer.de>:
> On 19 March 2018 17:31:20 CET, Jimmy Augustine <jimmy.august...@enyx.fr> wrote:
> > I am newbie to postgresql.
> > I have 162 GB on my database but when I check size of all tables, I approximately obtain 80 GB.
>
> Indexes?

Indexes are included in the 80 GB that I mentioned.

> > I also see that I have 68GB of temporary files however I only found 2.4MB at postgres/data/base/pgsql_tmp.
>
> Where can you see that?

I used pgAdmin 4, and I see statistics on my global database.

> > Could you tell me what are those temporary files and where are they at? Can I delete some of them?
>
> No, never delete files in the datadir!
>
> > All values come from pgAdmin 4 and checked by my own SQL queries (postgresql-9.6).
> > I already run vacuum full and there is few dead tuples.
>
> A few dead tuples aren't a real problem.
>
> --
> 2ndQuadrant - The PostgreSQL Support Company
PostgreSQL 9.6 Temporary files
Dear Friends,

I am a newbie to postgresql. I have 162 GB in my database, but when I check the size of all tables, I obtain approximately 80 GB. I also see that I have 68GB of temporary files; however, I only found 2.4MB at postgres/data/base/pgsql_tmp.

Could you tell me what those temporary files are and where they are? Can I delete some of them?

All values come from pgAdmin 4 and were checked by my own SQL queries (postgresql-9.6). I already ran vacuum full and there are few dead tuples.

Best regards,
Jimmy AUGUSTINE