On 06/07/12 16:15, Sylvain CAILLET wrote:
> Hi to all,
> I run PostgreSQL 8.3.9 on a dedicated server running Debian 5.0.4, a
> strong dual quad-core with 16 GB of RAM. My biggest db contains at
> least 100,000 tables.
That is a *lot* of tables and it's probably going to be slow whatever
you do.
Stanislaw Pankevich wrote:
>>> PostgreSQL db, 30 tables with number of rows < 100 (not huge) - the
>>> fastest way to clean each non-empty table and reset the unique
>>> identifier column of empty ones
Hello,
2 'exotic' ideas:
- use dblink_send_query to do the job in multiple threads (a sketch
  follows below)
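For illustration, a minimal sketch of that first idea, assuming the
dblink extension is installed (connection strings and table names are
placeholders):

    SELECT dblink_connect('conn1', 'dbname=mydb');
    SELECT dblink_connect('conn2', 'dbname=mydb');
    -- fire both cleanups without waiting for either to finish
    SELECT dblink_send_query('conn1', 'TRUNCATE TABLE table1, table2');
    SELECT dblink_send_query('conn2', 'TRUNCATE TABLE table3, table4');
    -- collect the command status once each connection is done
    SELECT * FROM dblink_get_result('conn1') AS t(status text);
    SELECT * FROM dblink_get_result('conn2') AS t(status text);
    SELECT dblink_disconnect('conn1');
    SELECT dblink_disconnect('conn2');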
Hi to all,
I run PostgreSQL 8.3.9 on a dedicated server running Debian 5.0.4, a
strong dual quad-core with 16 GB of RAM. My biggest db contains at least
100,000 tables. Last time I started a Java process I use to make some
changes on it, it created 170 new tables and it took one full minute.
On 07/06/2012 09:45 PM, Stanislaw Pankevich wrote:
> Question: Is there a possibility in PostgreSQL to do DELETE on many
> tables massively, like TRUNCATE allows? Like DELETE table1, table2, ...?
Yes, you can do it with a writable common table expression, but you
wanted version portability.
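For illustration, a minimal sketch of that approach on PostgreSQL 9.1 or
later, where data-modifying CTEs were introduced (table names are
placeholders; as noted, this is not portable to older versions):

    WITH del1 AS (DELETE FROM table1),
         del2 AS (DELETE FROM table2),
         del3 AS (DELETE FROM table3)
    SELECT 1;  -- one statement, one transaction, three tables emptied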
On Friday, July 06, 2012 01:38:56 PM Daniel Farina wrote:
> Well, I don't know a mechanism besides slow file system truncation
> time that would explain why DELETE would be significantly faster.
There is no filesystem truncation happening. The heap and the indexes get
mapped into a new file.
Hi Hermann,
Well, it's not clear how you get the rows for a user without paging. If
it is some query:
SELECT columns FROM table WHERE UserHasPermission(rowPK, userid)
paging would be:
SELECT columns FROM table WHERE UserHasPermission(rowPK, userid)
LIMIT NoOfRecords OFFSET page*NoOfRecords
Kind Regards
What language are you using? Usually there is an iterator with a chunked
fetch option (like setFetchSize in Java JDBC). So you pass the query
without a limit and then read as many results as you need. Note that the
query plan in this case won't be optimized for your limit.
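For illustration, a minimal SQL sketch of the server-side cursor that is
roughly what such chunked fetching does under the hood (identifiers and
the chunk size are placeholders):

    BEGIN;
    DECLARE page_cur CURSOR FOR
        SELECT columns FROM table ORDER BY rowPK;
    FETCH 50 FROM page_cur;  -- first chunk of rows
    FETCH 50 FROM page_cur;  -- next chunk, picking up where we left off
    CLOSE page_cur;
    COMMIT;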
On Fri, Jul 6, 2012 at 4:29 AM, Craig Ringer wrote:
> On 07/03/2012 11:22 PM, Stanislaw Pankevich wrote:
>> 1) Truncate each table. It is too slow, I think, especially for empty
>> tables.
>
> Really?!? TRUNCATE should be extremely fast, especially on empty tables.
>
> You're aware that you can TRUNCATE many tables in one run, right?
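For illustration, a minimal sketch of the multi-table form (table and
sequence names are placeholders; RESTART IDENTITY, which also resets
owned sequences, needs PostgreSQL 8.4 or later):

    TRUNCATE TABLE table1, table2, table3 RESTART IDENTITY;
    -- on 8.3, reset each sequence separately instead:
    ALTER SEQUENCE table1_id_seq RESTART WITH 1;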
On 07/03/2012 11:22 PM, Stanislaw Pankevich wrote:
> I cannot! use transactions.
Everything in PostgreSQL uses transactions; they are not optional.
I'm assuming you mean you can't use explicit transaction demarcation, ie
BEGIN and COMMIT.
> need the fastest cleaning strategy for such a case
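For illustration, a minimal sketch of the distinction being drawn here
(table names are placeholders):

    TRUNCATE TABLE table1;   -- implicit: runs in its own transaction
    BEGIN;                   -- explicit demarcation
    TRUNCATE TABLE table1, table2;
    COMMIT;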
On 07/06/2012 02:17 PM, Yan Chunlu wrote:
> So I wonder, could this simple select be innocent and affected badly by
> other queries? How could I find those queries that really slow down
> the database?
It might not be other queries. Your query could be taking that long
because it was blocked by a lock.
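For illustration, a rough sketch of one way to look for such lock waits
on pre-9.2 servers (on 9.2+, procpid becomes pid and current_query
becomes query; this only catches relation-level waits):

    SELECT w.procpid       AS waiting_pid,
           w.current_query AS waiting_query,
           h.procpid       AS holding_pid,
           h.current_query AS holding_query
    FROM pg_locks lw
    JOIN pg_stat_activity w ON w.procpid = lw.pid
    JOIN pg_locks lh ON lh.relation = lw.relation AND lh.granted
    JOIN pg_stat_activity h ON h.procpid = lh.pid
    WHERE NOT lw.granted
      AND lh.pid <> lw.pid;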
I have grabbed one day's slow query log and analyzed it with pgfouine;
to my surprise, the slowest query is just a simple select statement:
select diggcontent_data_message.thing_id, diggcontent_data_message.KEY,
diggcontent_data_message.value, diggcontent_data_message.kind FROM
diggcontent_data_message
I want to implement a "paged query" feature, where the user can enter in
a dialog how many rows he wants to see. After displaying the first page
of rows, he can push a button to display the next/previous page.
On database level I could use "limit" to implement this feature.
Hello,
My question below is almost an exact copy of the one on SO:
http://stackoverflow.com/questions/11311079/postgresql-db-30-tables-with-number-of-rows-100-not-huge-the-fastest-way
The post on SO drew a few answers, all as one stating "DO ONLY TRUNCATION
- this is the fastest".