Thanks Lukas,

we changed the implementation to:

   getContext()
         .loadInto(table)
         .batchAll()
         .onDuplicateKeyUpdate()
         .loadRecords(records)
         .fields(fields)
         .execute();


It seems to work so far for both PostgreSQL and SQL Server.

Best Regards
-- Benjamin

On Wednesday, July 6, 2016 at 15:44:15 UTC+2, Lukas Eder wrote:
>
> Hi Benjamin,
>
> Thank you very much for your enquiry and for your interest in jOOQ. I will 
> comment directly inline
>
> 2016-07-05 16:25 GMT+02:00 <benjami...@gmail.com>:
>
>> Hello everyone,
>>
>> we are currently evaluating jOOQ in a prototype, in order to find out
>> whether we can use it in production later. So far it seems pretty
>> nice, with almost perfect documentation.
>>
>> We need to support different PostgreSQL and SQL Server types.
>>
>> One problem we could not yet solve is the following:
>>
>> We want to insert a batch of 1000 entries into a table. If
>> some entries already exist, we want to update some of the fields instead.
>>
>> We used:
>>
>> InsertValuesStep insertStep = context.insertInto(Table, SomeFields);
>> List<UpdateTableRecord> records = SomeInitializedRecords();
>>
>> records.stream().forEach(record -> {
>>     insertStep.values(record.getField1(), record.getField2()...);
>>     insertStep.onDuplicateKeyUpdate().set(Field1, record.getField1());
>> });
>>
>> insertStep.execute();
>>
>> This works for PostgreSQL as expected, but on SQL Server it only 
>> inserts one row per execution instead of 1000.
>>
>
> Hmm, this currently works because the jOOQ API is not (yet) immutable, so 
> you can keep calling methods on your insertStep to modify its internal 
> model, even if there isn't really any database that supports adding several 
> VALUES() clauses and ON DUPLICATE KEY UPDATE clauses. I would strongly 
> recommend not following such a pattern. I'm actually surprised that this 
> would work for PostgreSQL.
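>
> To illustrate the shape that a single multi-row upsert statement needs 
> to have, here is a hand-assembled sketch (plain Java, no jOOQ; table 
> "t" and columns "id"/"field1" are made up): one VALUES list containing 
> all rows, and exactly one ON DUPLICATE KEY UPDATE clause at the end:

```java
import java.util.List;

// Illustration only: a multi-row "insert or update" must be ONE
// statement with a single ON DUPLICATE KEY UPDATE clause (MySQL
// syntax; table "t" and columns "id"/"field1" are hypothetical).
public class UpsertSqlSketch {
    static String buildUpsert(List<int[]> rows) {
        StringBuilder sql = new StringBuilder("INSERT INTO t (id, field1) VALUES ");
        for (int i = 0; i < rows.size(); i++) {
            if (i > 0)
                sql.append(", ");
            int[] r = rows.get(i);
            sql.append("(").append(r[0]).append(", ").append(r[1]).append(")");
        }
        // One single update clause for all rows, not one per row:
        sql.append(" ON DUPLICATE KEY UPDATE field1 = VALUES(field1)");
        return sql.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildUpsert(List.of(new int[] { 1, 10 }, new int[] { 2, 20 })));
    }
}
```

> (This is MySQL's upsert syntax; PostgreSQL 9.5+ uses INSERT ... ON 
> CONFLICT DO UPDATE instead, and SQL Server uses a MERGE statement.)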
>  
>
>> We also tried:
>>
>> context.batchStore(records).execute();
>>
>> But this throws a duplicate key exception instead of updating the row, 
>> and it seems somewhat slower than the other option on PostgreSQL.
>>
>
> With this approach, the decision whether an INSERT or an UPDATE is 
> executed is made on the client, based on how you created the individual 
> records. If you never fetched any record from the database, this will only 
> produce INSERT statements - just like when you call record.store();
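>
> A minimal sketch of that client-side decision (not jOOQ's actual 
> implementation, just the principle; the "fetched" flag stands in for 
> the internal record state an ORM would track):

```java
// Illustration only: store()-style logic chooses INSERT vs UPDATE on
// the client, based on whether the record was originally fetched from
// the database. A record created from scratch always produces an
// INSERT, which is why batchStore() ran into duplicate key errors here.
public class RecordStoreSketch {
    static final class Rec {
        final boolean fetched; // would be tracked internally by the ORM

        Rec(boolean fetched) {
            this.fetched = fetched;
        }

        String store() {
            return fetched ? "UPDATE" : "INSERT";
        }
    }

    public static void main(String[] args) {
        System.out.println(new Rec(false).store()); // new record
        System.out.println(new Rec(true).store());  // previously fetched record
    }
}
```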
>
> The reason it might be slower is that each record currently generates 
> an individual SQL string.
>
>> Is this the right way to do what we'd like to achieve, or is there 
>> something better?
>>
>
> I think you might prefer the Loader API, which works with CSV data:
> http://www.jooq.org/doc/latest/manual/sql-execution/importing/importing-csv
>
> ... or also with records, as in your case:
>
> http://www.jooq.org/doc/latest/manual/sql-execution/importing/importing-records
>
> The Loader API lets you fine-tune the transaction commit size, batch 
> size, and bulk size, as you might want to use different sizes for each 
> database.
>
> Let me know if this helps, and if you have any additional questions about 
> the loader API.
>
> Best Regards,
> Lukas
>

-- 
You received this message because you are subscribed to the Google Groups "jOOQ 
User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jooq-user+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
