Hello,
Time for a broad question. I'm aware of some specific select queries that will
generate disk writes - for example, a sort operation when there's not enough
work_mem can cause PG to write out temporary spill files (the correct term is
"temporary files" rather than "temp tables"). That scenario is easily remedied by enabling
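One way to see such spills, as a sketch (table and column names hypothetical): EXPLAIN ANALYZE reports whether a sort stayed in memory or spilled, and log_temp_files makes the server log every temporary file it writes.

```sql
-- Hypothetical example: watch a sort spill to disk, then give it room.
-- "Sort Method: external merge  Disk: ..." in the output means temp files were written.
EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_col;

-- Log every temporary file created (threshold in kB; 0 = log all):
SET log_temp_files = 0;

-- Raise work_mem for this session so the sort can stay in memory:
SET work_mem = '64MB';
EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_col;
-- "Sort Method: quicksort  Memory: ..." indicates no disk spill.
```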
Hello
2012/7/6 CSS c...@morefoo.com:
Hello,
Time for a broad question. I'm aware of some specific select queries that
will generate disk writes - for example, a sort operation when there's not
enough work_mem can cause PG to write out temporary spill files (the correct
term is "temporary files" rather than "temp tables").
On 07/06/2012 02:20 PM, Pavel Stehule wrote:
Hello
2012/7/6 CSS c...@morefoo.com:
So my question is, what else can generate writes when doing read-only
operations? I know it sounds like a simple question, but I'm just not finding
a concise answer anywhere.
statistics
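For context on that one-word answer: even read-only queries update per-table statistics counters, which the statistics collector periodically flushes to disk. A sketch of where to look (setting names from the 8.4-to-14 era; stats_temp_directory was removed in PostgreSQL 15):

```sql
-- Per-table read counters are maintained even for pure SELECT workloads:
SELECT relname, seq_scan, idx_scan FROM pg_stat_user_tables;

-- The collector flushes these counters to files in this directory:
SHOW stats_temp_directory;

-- track_counts controls whether the counters are gathered at all:
SHOW track_counts;
```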
Hello,
My question below is almost an exact copy of the one on SO:
http://stackoverflow.com/questions/11311079/postgresql-db-30-tables-with-number-of-rows-100-not-huge-the-fastest-way
The post on SO drew a few answers, all unanimously stating USE ONLY TRUNCATE
- that is the fastest.
Also I think I've
I want to implement a paged query feature, where the user can enter in
a dialog how many rows he wants to see. After displaying the first page
of rows, he can push a button to display the next/previous page.
On the database level I could use LIMIT to implement this feature. My
problem now is,
I have grabbed one day slow query log and analyzed it by pgfouine, to my
surprise, the slowest query is just a simple select statement:
*select diggcontent_data_message.thing_id, diggcontent_data_message.KEY,
diggcontent_data_message.value, diggcontent_data_message.kind FROM
On Thu, Jul 5, 2012 at 11:17 PM, Yan Chunlu springri...@gmail.com wrote:
I have grabbed one day slow query log and analyzed it by pgfouine, to my
surprise, the slowest query is just a simple select statement:
select diggcontent_data_message.thing_id, diggcontent_data_message.KEY,
On 07/06/2012 02:17 PM, Yan Chunlu wrote:
so I wonder, could this simple select be innocent and just badly affected
by other queries? How could I find the queries that are really slowing down
the database?
It might not be other queries. Your query could be taking that long
because it was blocked by a
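A common way to check for lock waits, sketched against the system catalogs (column names per PostgreSQL 9.2; older releases use procpid/current_query, and on 9.6+ pg_blocking_pids() is simpler):

```sql
-- Find sessions waiting on a lock, paired with a session holding a
-- conflicting granted lock on the same object.
SELECT waiting.pid   AS waiting_pid,
       waiting.query AS waiting_query,
       blocker.pid   AS blocking_pid,
       blocker.query AS blocking_query
FROM pg_locks wl
JOIN pg_locks bl
  ON  bl.locktype = wl.locktype
  AND bl.database      IS NOT DISTINCT FROM wl.database
  AND bl.relation      IS NOT DISTINCT FROM wl.relation
  AND bl.transactionid IS NOT DISTINCT FROM wl.transactionid
  AND bl.pid <> wl.pid
JOIN pg_stat_activity waiting ON waiting.pid = wl.pid
JOIN pg_stat_activity blocker ON blocker.pid = bl.pid
WHERE NOT wl.granted
  AND bl.granted;
```

Setting log_lock_waits = on (with deadlock_timeout) also records long lock waits in the server log after the fact.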
On 07/03/2012 11:22 PM, Stanislaw Pankevich wrote:
I cannot! use transactions.
Everything in PostgreSQL uses transactions, they are not optional.
I'm assuming you mean you can't use explicit transaction demarcation, ie
BEGIN and COMMIT.
need the fastest cleaning strategy for such case
On 07/06/2012 07:29 PM, Craig Ringer wrote:
On 07/03/2012 11:22 PM, Stanislaw Pankevich wrote:
I cannot! use transactions.
Everything in PostgreSQL uses transactions, they are not optional.
I'm assuming you mean you can't use explicit transaction demarcation,
ie BEGIN and COMMIT.
need
On Fri, Jul 6, 2012 at 4:29 AM, Craig Ringer ring...@ringerc.id.au wrote:
1) Truncate each table. It is too slow, I think, especially for empty
tables.
Really?!? TRUNCATE should be extremely fast, especially on empty tables.
You're aware that you can TRUNCATE many tables in one run, right?
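The multi-table form, combined with identity reset, covers both halves of the original question in one statement; a sketch with hypothetical table names:

```sql
-- Truncate several tables in one statement. CASCADE follows foreign-key
-- references; RESTART IDENTITY resets the tables' owned sequences back
-- to their start values (available since PostgreSQL 8.4).
TRUNCATE TABLE table1, table2, table3 RESTART IDENTITY CASCADE;
```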
Yan Chunlu wrote:
I have grabbed one day slow query log and analyzed it by pgfouine, to
my surprise, the slowest query
is just a simple select statement:
select diggcontent_data_message.thing_id,
diggcontent_data_message.KEY,
diggcontent_data_message.value, diggcontent_data_message.kind FROM
What language are you using? Usually there is an iterator with a chunked-fetch
option (like setFetchSize in Java JDBC). So you pass the query without a
limit and then read as many results as you need. Note that the query plan in
this case won't be optimized for your limit, and I don't remember if
postgres
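The server-side counterpart of a chunked fetch like JDBC's setFetchSize is an explicit cursor; a sketch (query and cursor name hypothetical):

```sql
BEGIN;  -- cursors live inside a transaction
DECLARE page_cur CURSOR FOR
    SELECT * FROM items ORDER BY id;
FETCH 100 FROM page_cur;   -- first chunk of rows
FETCH 100 FROM page_cur;   -- next chunk; repeat as needed
CLOSE page_cur;
COMMIT;
```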
Hermann Matthes wrote:
I want to implement a paged query feature, where the user can enter in
a dialog how many rows he wants to see. After displaying the first page
of rows, he can push a button to display the next/previous page.
On the database level I could use LIMIT to implement this
On Wed, Jul 4, 2012 at 6:25 AM, Hermann Matthes hermann.matt...@web.dewrote:
I want to implement a paged query feature, where the user can enter in a
dialog how many rows he wants to see. After displaying the first page of
rows, he can push a button to display the next/previous page.
On
On 07/06/2012 07:38 PM, Daniel Farina wrote:
On Fri, Jul 6, 2012 at 4:29 AM, Craig Ringer ring...@ringerc.id.au wrote:
1) Truncate each table. It is too slow, I think, especially for empty
tables.
Really?!? TRUNCATE should be extremely fast, especially on empty tables.
You're aware that you
Stanislaw Pankevich wrote:
PostgreSQL db, 30 tables with number of rows 100 (not huge) - the
fastest way to clean each
non-empty table and reset unique identifier column of empty ones
I wonder, what is the fastest way to accomplish this kind of task in
PostgreSQL. I am
Hi Hermann,
Well,
It's not clear how you get rows for the user without paging.
If it is some query:
SELECT columns FROM table WHERE UserHasPermission(rowPK, userid)
Paging would be:
SELECT columns FROM table WHERE UserHasPermission(rowPK, userid) LIMIT
NoOfRecords OFFSET page*NoOfRecords
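Concretely, with 20 rows per page, page 3 (zero-based) could be fetched as below. Note that OFFSET still makes the server read and discard the skipped rows, so keyset pagination (continuing after the last key shown) scales better for deep pages. All names in this sketch are hypothetical, and :last_seen_pk is a client-supplied placeholder:

```sql
-- OFFSET pagination: page * rows-per-page rows are read and thrown away.
SELECT col_a, col_b FROM table_name
WHERE UserHasPermission(rowPK, userid)
ORDER BY rowPK
LIMIT 20 OFFSET 3 * 20;

-- Keyset alternative: remember the last rowPK displayed and continue after it.
SELECT col_a, col_b FROM table_name
WHERE UserHasPermission(rowPK, userid)
  AND rowPK > :last_seen_pk
ORDER BY rowPK
LIMIT 20;
```

Keyset paging requires a stable ORDER BY on a unique column, but a page never shifts when rows are inserted or deleted between clicks.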
Kind
On Friday, July 06, 2012 01:38:56 PM Daniel Farina wrote:
Well, I don't know a
mechanism besides slow file system truncation time that would explain
why DELETE would be significantly faster.
There is no filesystem truncation happening. The heap and the indexes get
pointed at new, empty relation files.
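That relation-file swap can be observed directly (pg_relation_filenode is available since PostgreSQL 9.0; table name hypothetical):

```sql
CREATE TABLE t (id int);
SELECT pg_relation_filenode('t');  -- note the file number
TRUNCATE t;
SELECT pg_relation_filenode('t');  -- a different number: a new, empty file
```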
On 07/06/2012 09:45 PM, Stanislaw Pankevich wrote:
Question: Is there a possibility in PostgreSQL to do DELETE on many
tables massively, like TRUNCATE allows. Like DELETE table1, table2, ...?
Yes, you can do it with a writable common table expression, but you
wanted version portability.
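The writable-CTE form (PostgreSQL 9.1+) looks like this sketch; table names are hypothetical, and as noted it sacrifices portability to other databases:

```sql
-- One statement that deletes from several tables:
WITH d1 AS (DELETE FROM table1),
     d2 AS (DELETE FROM table2)
DELETE FROM table3;
```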
Stanislaw Pankevich wrote:
PostgreSQL db, 30 tables with number of rows 100 (not huge) - the
fastest way to clean each
non-empty table and reset unique identifier column of empty ones
I wonder, what is the fastest way to accomplish this kind of task in
PostgreSQL. I am
Hi to all,
I run PostgreSQL 8.3.9 on a dedicated server running Debian 5.0.4, a
strong dual quad-core machine with 16 GB of RAM. My biggest db contains at least 100,000
tables. Last time I started a Java process I use to make some changes on it, it
created 170 new tables and that took a full minute.
Stanislaw Pankevich wrote:
PostgreSQL db, 30 tables with number of rows 100 (not huge) - the
fastest way to clean each
non-empty table and reset unique identifier column of empty ones
Hello,
2 'exotic' ideas:
- use dblink_send_query to do the job in multiple threads (I doubt
On Fri, Jul 6, 2012 at 4:29 AM, Craig Ringer ring...@ringerc.id.au wrote:
On 07/03/2012 11:22 PM, Stanislaw Pankevich wrote:
1) Truncate each table. It is too slow, I think, especially for empty
tables.
Really?!? TRUNCATE should be extremely fast, especially on empty tables.
You're aware
On 07/03/2012 08:22 AM, Stanislaw Pankevich wrote:
PostgreSQL db, 30 tables with number of rows 100 (not huge) -
the fastest way to clean each non-empty table and reset unique
identifier column of empty ones
I wonder, what is the fastest way to accomplish this kind of task in
On Fri, Jul 6, 2012 at 8:15 AM, Sylvain CAILLET scail...@alaloop.com wrote:
Hi to all,
I run PostgreSQL 8.3.9 on a dedicated server running Debian 5.0.4, a
strong dual quad-core machine with 16 GB of RAM. My biggest db contains at least 100,000
tables. Last time I started a Java process I use to make
On 06/07/12 16:15, Sylvain CAILLET wrote:
Hi to all,
I run PostgreSQL 8.3.9 on a dedicated server running Debian 5.0.4,
a strong dual quad-core machine with 16 GB of RAM. My biggest db contains at least
100,000 tables.
That is a *lot* of tables and it's probably going to be slow whatever
you do.
Daniel Farina wrote:
On Fri, Jul 6, 2012 at 4:29 AM, Craig Ringer ring...@ringerc.id.au wrote:
1) Truncate each table. It is too slow, I think, especially for empty
tables.
Really?!? TRUNCATE should be extremely fast, especially on empty tables.
You're aware that you can TRUNCATE many
On 07/06/2012 11:15 PM, Sylvain CAILLET wrote:
Hi to all,
I run PostgreSQL 8.3.9 on a dedicated server running Debian
5.0.4, a strong dual quad-core machine with 16 GB of RAM. My biggest db contains at
least 100,000 tables. Last time I started a Java process I use to
make some changes on it, it