On Fri, Jul 24, 2015 at 12:13 PM, Chris Withers wrote:
> On 24/07/2015 19:21, Jan Lentfer wrote:
>
>>
>> I've been doing some lightweight load testing with
>> “pgbench -c8 -j8 -T10”
>> When run locally on the postgres server I'm testing, this gives
>> around 5000tps
>> When I
On 24/07/2015 19:21, Jan Lentfer wrote:
I've been doing some lightweight load testing with
“pgbench -c8 -j8 -T10”
When run locally on the postgres server I'm testing, this gives
around 5000tps
When I do it from a server that has a 13ms ping latency, it drops to
37tps.
This is using the defa
>> On 24.07.2015 at 18:59, Chris Withers wrote:
>>
>> Hi all,
>>
>> I've been doing some lightweight load testing with
>> “pgbench -c8 -j8 -T10”
>>
>> When run locally on the postgres server I'm testing, this gives around
>> 5000tps
>>
>> When I do it from a server that has a 13ms ping la
That seems to be a large drop. On the other hand, 13 ms is a very large
network latency; on a LAN you're usually in the sub-millisecond range. So going
from e.g. 0.2 ms to 13 ms is a 65-fold increase in latency. What is the network
topology like?
Jan
Sent from my iPad
> On 24.07.2015 at 18:59, Chris Withers wrote:
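For a rough sanity check (my own arithmetic, not from the thread): the default
TPC-B-like pgbench transaction is, as far as I recall, the seven statements
below, and each one is a separate client/server round trip. At 13 ms per round
trip that is roughly 7 x 13 = 91 ms per transaction, i.e. about 11 tps per
connection and well under 100 tps for 8 clients before any server-side work is
counted, which is the same order of magnitude as the reported 37 tps.

    -- approximate default pgbench script; :aid, :tid, :bid, :delta are
    -- random values chosen by pgbench for each transaction
    BEGIN;
    UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
    SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
    UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
    UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
    INSERT INTO pgbench_history (tid, bid, aid, delta, mtime)
        VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
    END;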
On 07/24/2015 02:32 AM, Andres Freund wrote:
> On 2015-07-24 10:29:21 +0100, Tim Smith wrote:
>> That's not the point. Backups are important, but so is the concept of
>> various layers of anti-fat-finger protection. Restoring off backups
>> should be last resort, not first.
> Oh, come on. Install a TRUNCATE trigger and let this thread die.
On Fri, Jul 24, 2015 at 5:17 AM, Zdeněk Bělehrádek
<zdenek.belehra...@superhosting.cz> wrote:
> What about creating a SAVEPOINT before each INSERT, and if the INSERT
> returns an error, then ROLLBACK TO SAVEPOINT?
Make sure you release the savepoint if there was no error. Otherwise you will
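For illustration, a minimal sketch of that pattern, assuming a hypothetical
target table called items; each row gets its own subtransaction, so one bad
row does not abort the whole batch:

    BEGIN;
    -- repeated for every input row:
    SAVEPOINT row_insert;
    INSERT INTO items (id, label) VALUES (42, 'example');  -- may fail
    -- if the INSERT errored: ROLLBACK TO SAVEPOINT row_insert; and carry on
    -- if it succeeded: release it so subtransactions do not accumulate
    RELEASE SAVEPOINT row_insert;
    -- ... next row ...
    COMMIT;  -- or ROLLBACK everything if any failure invalidates the batch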
Hi all,
I've been doing some lightweight load testing with
“pgbench -c8 -j8 -T10”
When run locally on the postgres server I'm testing, this gives around
5000tps
When I do it from a server that has a 13ms ping latency, it drops to 37tps.
This is using the default pgbench script, is it to be
On Thu, Jul 23, 2015 at 5:55 AM, JPLapham wrote:
> Hello,
>
> I have an application that occasionally performs large batch inserts of user
> hand-generated data. Input is a tab delimited file with typically hundreds
> to a thousand lines of data.
>
> Because the data is generated by hand, there
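For context, a file like that is usually loaded with COPY; the sketch below
(staging table, with table and file names invented for the example, and not
necessarily the approach the thread settles on) is one way to get the
tab-delimited rows into the database so they can be checked before touching
the real table:

    -- psql's \copy; text format uses tab as the delimiter by default
    CREATE TEMP TABLE staging (LIKE target_table INCLUDING DEFAULTS);
    \copy staging FROM 'batch.tsv' WITH (FORMAT text)
    -- validate / fix rows in staging, then:
    INSERT INTO target_table SELECT * FROM staging;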
On Fri, Jul 24, 2015 at 3:27 AM, Tim Smith wrote:
> On 23 July 2015 at 19:25, Scott Marlowe wrote:
>> stick to triggers, they're faster
>
>
> Erm, not according to your beloved manual !!!
>
> 38.7. Rules Versus Triggers
>
> " a rule issuing one extra command is likely to be faster than a trigger"
On 07/24/2015 02:24 AM, Tim Smith wrote:
Just in case it has not been made obvious yet, rules are silently
deprecated. They still exist because views depend on them, but it is
generally considered best practice not to use them outside that realm.
Well, the manual doesn't seem to reflect that
What about creating a SAVEPOINT before each INSERT, and if the INSERT returns
an error, then ROLLBACK TO SAVEPOINT? This way you will have all the
insertable data in your table, and you can still ROLLBACK the whole
transaction, or COMMIT it if there were no errors.
It will probably be quite slo
On Fri, Jul 24, 2015 at 10:24:55AM +0100, Tim Smith wrote:
>
> If that's how the developers feel about rules, i.e they can't be
> bothered to make any changes to the rules code any more, no matter how
> minor (e.g. this TRUNCATE issue)
Who is this "they"? As I think I suggested in another mail,
On 2015-07-24 10:29:21 +0100, Tim Smith wrote:
> That's not the point. Backups are important, but so is the concept of
> various layers of anti-fat-finger protection. Restoring off backups
> should be last resort, not first.
Oh, come on. Install a TRUNCATE trigger and let this thread die.
Andres
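For anyone following along, a minimal sketch of that protection (the table
name is a placeholder); note that TRUNCATE triggers are statement-level:

    CREATE FUNCTION forbid_truncate() RETURNS trigger AS $$
    BEGIN
        RAISE EXCEPTION 'TRUNCATE blocked on %', TG_TABLE_NAME;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER protect_from_truncate
        BEFORE TRUNCATE ON important_table
        FOR EACH STATEMENT EXECUTE PROCEDURE forbid_truncate();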
On 24 July 2015 at 01:37, Rob Sargent wrote:
> Fair enough but both blackhats and the authorized are just as likely to drop
> the database as truncate something (intentionally or not) and backups
> stashed everywhere is the first order of business.
That's not the point. Backups are important,
Thanks for sharing the results of your experiment !
In fact I did not partition by *fk_job* but by the *id* (primary key) instead,
though partitioning by *fk_job* was my first idea.
I use various columns when querying the table, *fk_job* is not always there.
Yes, partitioning by fk_job can significantly improve the performance of this
update.
All the SELECTs with a specified fk_job can be faster as well.
What you should check carefully are the SELECTs without a specified fk_job.
2015-07-24 17:18 GMT+09:00 twoflower :
> Thank you, I will look into those sugges
Thank you, I will look into those suggestions.
Meanwhile, I started experimenting with partitioning the table into smaller
tables, each holding rows with IDs spanning 1 million values; using this
approach, the UPDATE takes 300 ms. I have to check if all the SELECTs I am
issuing against the origi
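A rough sketch of the kind of layout being described, assuming a hypothetical
parent table translation split into children of 1 million id values each
(9.x-style inheritance partitioning; table and column names are guesses, and
inserts would also need a routing trigger, omitted here):

    CREATE TABLE translation_p0 (
        CHECK (id >= 0 AND id < 1000000)
    ) INHERITS (translation);

    CREATE TABLE translation_p1 (
        CHECK (id >= 1000000 AND id < 2000000)
    ) INHERITS (translation);

    -- with constraint_exclusion, a statement constrained by id only touches
    -- the child table whose CHECK range can match (values are illustrative)
    UPDATE translation SET fk_job = 123 WHERE id BETWEEN 1500000 AND 1500999;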
OK.
In your case, I cannot see any reason why
using a temp table to do a join update would be faster than updating directly.
And from the execution plan, the index scan takes very little time;
most of the time is spent on the insert/delete.
As you mentioned, fk_assignment is updated frequently,
and di
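For reference, the two alternatives being compared would look roughly like
this (table and column names are assumptions, not the thread's actual schema):

    -- direct update
    UPDATE translation SET fk_assignment = 123 WHERE id = 42;

    -- temp-table join update: stage the new values, apply them in one statement
    CREATE TEMP TABLE new_values (id bigint PRIMARY KEY, fk_assignment bigint);
    -- ... fill new_values with COPY or INSERT ...
    UPDATE translation t
       SET fk_assignment = n.fk_assignment
      FROM new_values n
     WHERE t.id = n.id;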