Hi
I'd like to know how to get information about which PostgreSQL entities
are in the kernel cache, if that is possible.
--
Regards,
Sergey Konoplev
You may find this query useful:
SELECT
    granted,
    count(1) AS locks,
    pid,
    now() - xact_start AS xact_age,
    now() - query_start AS query_age,
    current_query
FROM
    pg_locks AS l
    LEFT JOIN pg_stat_activity AS a ON
        l.pid = a.procpid
GROUP BY 1, 3, 4, 5, 6
ORDER BY 1 DESC, 2 DESC;
-- ORDER BY 4 DESC
Hello,
On 17 July 2010 12:50, Srikanth wrote:
> I am sending u the query along with execution plan. Please help
>
It would be better if you start with these:
http://www.postgresql.org/docs/8.4/interactive/indexes.html
http://www.mohawksoft.org/?q=node/56
--
Sergey Konoplev
Blog: http://gray-hemp.blogspot.com
> ------Sequential Output------ --Sequential Input- --Random-
> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine      Size  K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> dbtest     64240M  78829  99 266172 42 47904   6 58410  72 116247  9 767.9   1
Hi,
On 12 November 2011 00:18, Stephen Frost wrote:
> In a crash, unlogged tables are automatically truncated.
BTW, I wonder why they are truncated in the crash case?
--
Sergey Konoplev
Blog: http://gray-hemp.blogspot.com
LinkedIn: http://ru.linkedin.com/in/grayhemp
JID/GTalk: gray...@gmail.com Skype: gray-hemp
Does it truncate all the records of the table, or only several recent ones?
>
> --
> Richard Huxton
> Archonet Ltd
>
--
Sergey Konoplev
Blog: http://gray-hemp.blogspot.com
LinkedIn: http://ru.linkedin.com/in/grayhemp
JID/GTalk: gray...@gmail.com Skype: gray-hemp
On 14 November 2011 14:17, Richard Huxton wrote:
> On 14/11/11 10:08, Sergey Konoplev wrote:
>>
>> On 14 November 2011 12:58, Richard Huxton wrote:
> Let's say you were doing something like "UPDATE unlogged_table SET x=1 WHERE
> y=2". If a crash occurs durin
--
Sergey Konoplev
a database architect, software developer at PostgreSQL-Consulting.com
http://www.postgresql-consulting.com
Jabber: gray...@gmail.com Skype: gray-hemp Phone: +7916068620
See the article http://www.westnet.com/~gsmith/content/postgresql/pgbench-scaling.htm
where you will find some hints for your case.
Also look at the playback tools
http://wiki.postgresql.org/wiki/Statement_Playback.
--
Sergey Konoplev
a database architect, software developer at PostgreSQL-Consulting.com
> 08:11:53  all  17.53  0.00  3.13  0.68  0.00  78.65
>
with triggers if you need to get
counts fast.
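The trigger-based approach mentioned above can be sketched as follows; the table, function and trigger names are hypothetical, since the original message does not include code:

```sql
-- Single-row counter table kept up to date by triggers.
-- All names (items, items_count) are illustrative.
CREATE TABLE items_count (n bigint NOT NULL);
INSERT INTO items_count VALUES (0);

CREATE FUNCTION items_count_update() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE items_count SET n = n + 1;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE items_count SET n = n - 1;
    END IF;
    RETURN NULL;  -- the result of an AFTER trigger is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_count_trigger
    AFTER INSERT OR DELETE ON items
    FOR EACH ROW EXECUTE PROCEDURE items_count_update();

-- Then, instead of a slow SELECT count(*) FROM items:
SELECT n FROM items_count;
```

Note that the single counter row becomes a serialization point under heavy concurrent writes, so this trades some write throughput for fast reads.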
--
Sergey Konoplev
a database and software architect
http://www.linkedin.com/in/grayhemp
Jabber: gray...@gmail.com Skype: gray-hemp Phone: +14158679984
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to
d.aspx?tid=974484
>
> 11.10.2012, 01:30, "Sergey Konoplev" :
>> On Wed, Oct 10, 2012 at 9:09 AM, Korisk wrote:
>>
>>> Hello! Is it possible to speed up the plan?
>>> Sort (cost=573977.88..573978.38 rows=200 width=32) (actual
>>> tim
s.
> (9 rows)
>
> Postgresql 9.2.1 was configured and built with default settings.
>
> Thank you.
--
Sergey Konoplev
a database and software architect
http://www.linkedin.com/in/grayhemp
Jabber: gray...@gmail.com Skype: gray-hemp Phone: +14158679984
;0.5'::text)
-> Bitmap Index Scan on h_idx1 (cost=0.00..1616.10 rows=102367
width=0) (actual time=19.027..19.027 rows=100271 loops=1)
(5 rows)
--
Sergey Konoplev
a database and software architect
http://www.linkedin.com/in/grayhemp
Jabber: gray...@gmail.com Skype: gray-hemp Pho
040.96 rows=25990002 width=32) (actual time=0.121..3624.624 rows=25990002 loops=1)
>   Output: name
>   Heap Fetches: 0
> Total runtime: 7272.735 ms
> (6 rows)
>
> 11.10.2012, 21:55, "Sergey Konoplev" :
>> On Thu, Oct
2) (actual time=21731.551..21733.277 rows=4001 loops=1)
>   Output: name, count(name)
>   ->  Seq Scan on public.hashcheck (cost=0.00..435452.02 rows=25990002 width=32) (actual time=29.431..13383.812 rows=25990002 loops=1)
>         Output: id, na
--
Sergey Konoplev
Database and Software Architect
http://www.linkedin.com/in/grayhemp
Phones:
USA +1 415 867 9984
Russia, Moscow
A common case is when a backup (pg_dump*) is running: TRUNCATE has
to wait for it because TRUNCATE acquires an access exclusive lock on the table,
and then all other queries, including INSERTs, have to wait for the TRUNCATE.
Check the backup case first.
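A sketch of a query to see such lock queues; the column names assume the pre-9.2 catalogs used elsewhere in this thread (procpid, current_query), on 9.2+ they are pid and query:

```sql
-- Sessions waiting on ungranted locks, longest-waiting first.
-- Assumes PostgreSQL 9.1 or earlier.
SELECT a.procpid, l.mode, l.relation::regclass AS relation,
       now() - a.query_start AS waiting, a.current_query
FROM pg_locks AS l
JOIN pg_stat_activity AS a ON a.procpid = l.pid
WHERE NOT l.granted
ORDER BY waiting DESC;
```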
> Our previous Postgresql 8.2 instance did not have this problem
Maybe one of these links will help you:
- http://www.postgresql.org/docs/9.2/static/file-fdw.html
- http://pgxn.org/dist/odbc_fdw/.
>
> thanks
--
Sergey Konoplev
Database and Software Architect
http://www.linkedin.com/in/grayhemp
Phones:
USA +1 415 867 9984
Russia, Moscow +7 901 903 0499
Russia, K
Have you had a lot of updates/deletes on rows where exit_state is null?
Try to reindex tbl_tracker_performance_1_idx.
To reindex it without locks, create a new index concurrently under a
temporary name, drop the old one, and rename the new one to the old
name.
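The rebuild can be sketched like this; the indexed column and the predicate are assumptions, adjust them to the real index definition:

```sql
-- 1. Build a fresh copy of the index without taking heavy locks.
CREATE INDEX CONCURRENTLY tbl_tracker_performance_1_idx_new
    ON tbl_tracker (some_column)
    WHERE exit_state IS NULL;

-- 2. Drop the bloated original.
DROP INDEX tbl_tracker_performance_1_idx;

-- 3. Move the old name onto the new index.
ALTER INDEX tbl_tracker_performance_1_idx_new
    RENAME TO tbl_tracker_performance_1_idx;
```

If the concurrent build fails it leaves an invalid index behind, which has to be dropped before retrying.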
--
Sergey Konoplev
Database
fter with it
will be slow. It also depends on the index column values.
--
Sergey Konoplev
Database and Software Architect
http://www.linkedin.com/in/grayhemp
Phones:
USA +1 415 867 9984
Russia, Moscow +7 901 903 0499
Russia, Krasnodar +7 988 888 1979
Skype: gray-hemp
Jabber: gray...@gmail.com
always worked ok, perhaps it makes us particularly vulnerable to
> kernel/scheduler changes.
>
> I would be very grateful for any suggestions as to the best way to diagnose
> the source of this problem and/or general recommendations?
--
Sergey Konoplev
Database and Software Ar
a number of kernel (and not only kernel) tuning issues
with short explanations, to prevent them from affecting database behavior
badly. Try to follow them:
https://code.google.com/p/pgcookbook/wiki/Database_Server_Configuration
--
Sergey Konoplev
Database and Software Architect
http://www.linkedin.com/in/grayh
o_id,
>        ag3.extra_coins
> FROM (aggregation2 ag2
>      left join aggregation3 ag3
>        ON (( ag2.missionid = ag3.missionidtemp )));
>
>
as it
has a lot of performance improvements.
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
Profile: http://www.linkedin.com/in/grayhemp
Phone: USA +1 (415) 867-9984, Russia +7 (901) 903-0499, +7 (988) 888-1979
Skype: gray-hemp
Jabber: gray...@gmail.com
, a
comparison against a non-immutable function such as CURRENT_TIMESTAMP
cannot be optimized, since the planner cannot know which partition the
function value might fall into at run time."
http://www.postgresql.org/docs/9.2/static/ddl-partitioning.html#DDL-PARTITIONING-CAVEATS
--
Kind re
Index Cond: (k = 1942)
>> Total runtime: 481.600 ms
These are plans of two different queries. Please show the second one
(the one with d2, g2, etc.) with seqscans off.
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
Profile: http://www.linkedin.com/in/grayhemp
So my question is:
>
>
>
> Is there any performance tips for creating index on Postgres?
>
> how to monitor the progress the creation process?
>
>
>
> Thanks and best regards,
>
> Suya Huang
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
Profile: http://www.linkedin.com/in/grayhemp
As a solution for the pagination (OFFSET) problem you might
also use the "prev/next" technique, like
SELECT * FROM table
WHERE id > :current_last_id
ORDER BY id LIMIT 10
for "next", and
SELECT * FROM (
    SELECT * FROM table
    WHERE id < :current_first_id
    ORDER BY id DESC LIMIT 10
) AS sub ORDER BY id

for "prev".
one page.
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
Profile: http://www.linkedin.com/in/grayhemp
Phone: USA +1 (415) 867-9984, Russia +7 (901) 903-0499, +7 (988) 888-1979
Skype: gray-hemp
Jabber: gray...@gmail.com
This parameter gives the planner a hint of how much it would cost to
perform a random page read, which is used by index scans. It looks like
you need to decrease random_page_cost.
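For example, on a server where most reads are served from cache or SSD (the value 1.1 is illustrative, not a recommendation for every setup; ALTER SYSTEM needs 9.4+, on older versions edit postgresql.conf instead):

```sql
-- The default random_page_cost is 4.0, tuned for spinning disks.
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();
```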
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
Profile: http://www.linkedin.com/in/grayhemp
Phone: USA +1 (415) 867-9984
, however, my assumption is
that it was IDLE in transaction. You mentioned the "incomplete message
from client" error, so it might somehow be a network problem that led
to a hanging connection to pgbouncer, which made pgbouncer keep a
connection to postgres after the transaction was started.
-
Set log_min_duration_statement to
a value less than the age of the hanging inserts, and set debug_print_parse,
debug_print_rewritten, debug_print_plan and debug_pretty_print to
'on'. It will allow you to log what is happening with these inserts
and what takes so much time.
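In postgresql.conf that might look like the fragment below; the 1000 ms threshold is a placeholder, pick a value below the observed age of the hanging inserts:

```
log_min_duration_statement = 1000   # in ms; must be below the inserts' age
debug_print_parse = on
debug_print_rewritten = on
debug_print_plan = on
debug_pretty_print = on
```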
--
Kind regards,
Sergey Konop
the symptoms look similar.
Another thing that might cause it is network. Try to monitor it at the
time of these stalls.
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com
--
guration. This
> is my values for this kernel settings:
Then it is definitely not THP.
ps. BTW, pgcookbook was moved to GitHub several weeks ago:
https://github.com/grayhemp/pgcookbook.
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linkedin.com/in/grayhemp
+1 (
while true; do
    psql -c "select count(1) from pg_stat_activity"
    sleep 1
done > activity.log

and check its correlation with slowdowns.
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.co
atabase level one.
[1] http://www.postgresql.org/docs/9.3/static/sql-prepare.html
[2] http://search.cpan.org/dist/DBD-Pg/Pg.pm#prepare
[3] https://github.com/dimitri/preprepare
[4] https://github.com/ohmu/pgmemcache/
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linke
ning of the table with DELETEs, or maybe you use UPDATEs for some
other reason?
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com
On Thu, Dec 19, 2013 at 12:54 PM, Sev Zaslavsky wrote:
> On 12/19/2013 3:34 PM, Sergey Konoplev wrote:
>> On Thu, Dec 12, 2013 at 9:30 AM, Sev Zaslavsky wrote:
>>> Table rt_h_nbbo contains several hundred million rows. All rows for a
>>> given
>>> entry_d
Dave, in case you need to archive old partitions to compressed
files outside of your database, you can use this tool [1]. Consult the
configuration example [2] and look at the ARCHIVE_* parameters.
[1] https://github.com/grayhemp/pgcookbook/blob/master/bin/archive_tables.sh
[2] https://github.com/grayhem
Debian.
Could you please show EXPLAIN ANALYZE for both cases, the current one
and with feed_user_id_active_id_added_idx dropped?
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gr
aven't had time yet to verify whether it goes back to 65% after
> vacuum full (that will take time, maybe a month).
Try pgcompact, it was designed particularly for cases like yours:
https://github.com/grayhemp/pgtoolkit.
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and D
On Sun, Feb 9, 2014 at 2:58 PM, Claudio Freire wrote:
> On Sun, Feb 9, 2014 at 7:32 PM, Sergey Konoplev wrote:
>> Try pgcompact, it was designed particularily for such cases like yours
>> https://github.com/grayhemp/pgtoolkit.
>
> It's a pity that that requires sever
. It should do the trick. If it doesn't, please show the
plans.
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com
https://github.com/grayhemp/pgcookbook/blob/master/statement_statistics_collecting_and_reporting.md
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com
The problem appears when hot_standby is set to on, so you need to
turn it off. Also, take a look at the link below:
http://www.databasesoup.com/2013/12/why-you-need-to-apply-todays-update.html
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linkedin.com/in/grayhemp
+1 (415)
ified one as a
resulting one. Another risk is the case when you need to update two
tables on different servers and keep their modified_timestamp fields
in sync. In this case you need to determine the new value of the column
in the application.
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
r, in your index
only by using expressions in it and in the query, e.g.
...USING gin (strip(fts_data))
and
... WHERE strip(fts_data) @@ q
[1] http://www.postgresql.org/docs/9.3/static/textsearch-features.html
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linkedin.com/in/
On Thu, Apr 24, 2014 at 4:34 AM, Heikki Linnakangas
wrote:
> On 04/24/2014 01:56 AM, Sergey Konoplev wrote:
>> My guess is that you could use strip() function [1] to get rid of
>> weights in your table or, that would probably be better, in your index
>> only by using express
"GIN improvements part 3: ordering in index"
patch, was it committed?
http://www.postgresql.org/message-id/flat/capphfduwvqv5b0xz1dzuqaw29erdculzp2wotfjzdbs7bhp...@mail.gmail.com
Ivan, there is hope that we could get a more effective FTS solution
than any others I have heard about with
dom_bytes() you set a
search_path that allows seeing get_byte(), while the search_path that
was set before the gen_random() call doesn't allow it.
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (499) 346-7196, +7 (988) 8