Indeed, the JDK's Stream API doesn't offer any way to create chunks /
substreams based on chunk sizes. Perhaps you might find an appropriate
abstraction in jOOλ:
https://github.com/jOOQ/jOOL
But really, an imperative approach based on the org.jooq.Cursor type might
be the easiest way forward.
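To illustrate the kind of chunking the Stream API lacks, here is a plain-JDK sketch that pulls fixed-size chunks from an Iterator (which is essentially what a jOOQ Cursor is); the class and method names are placeholders, not jOOQ API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class Chunker {

    // Pull up to `size` elements from the iterator into one chunk.
    // The last chunk may be smaller if the source runs out of elements.
    static <T> List<T> nextChunk(Iterator<T> it, int size) {
        List<T> chunk = new ArrayList<>();
        while (it.hasNext() && chunk.size() < size) {
            chunk.add(it.next());
        }
        return chunk;
    }

    public static void main(String[] args) {
        Iterator<Integer> it = Arrays.asList(1, 2, 3, 4, 5, 6, 7).iterator();
        while (it.hasNext()) {
            System.out.println(nextChunk(it, 3));
        }
    }
}
```

With a jOOQ Cursor, the same loop shape applies: keep fetching a bounded number of records, process them, and repeat until the cursor is exhausted.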
It's easy to explain.
A "jOOQ Stream" is really just a wrapper around a "jOOQ Cursor" with some
convenient API.
A "jOOQ Cursor" is really just an Iterator wrapper around a JDBC ResultSet
with some convenient API.
Now, every time a "jOOQ Stream" or a "jOOQ Cursor" pulls another value from
the underlying JDBC ResultSet, the driver fetches further rows from the
server as needed.
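That layering can be sketched in plain JDK terms: any Iterator (standing in for a jOOQ Cursor) can be exposed as a lazy Stream, which is conceptually what jOOQ does. This is an illustrative sketch, not jOOQ's actual implementation:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.Spliterators;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public class CursorAsStream {

    // Wrap an Iterator (standing in for a jOOQ Cursor) in a lazy,
    // sequential Stream. Elements are only pulled from the iterator
    // as the stream is consumed.
    static <T> Stream<T> stream(Iterator<T> cursor) {
        return StreamSupport.stream(
            Spliterators.spliteratorUnknownSize(cursor, 0), false);
    }

    public static void main(String[] args) {
        Iterator<String> cursor = Arrays.asList("r1", "r2", "r3").iterator();
        System.out.println(stream(cursor).collect(Collectors.toList()));
    }
}
```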
Of course. Just one last question regarding jOOQ, if you don't mind.
With a Cursor I can understand how it works... like, it maintains an open
ResultSet and you can fetch X records in a loop, do stuff with them, and
repeat. But how does jOOQ handle that with a stream? Is it abstracted?
>
> Out of curiosity: is this the PortalSuspended and multiple Execute limit 1
> stuff that indicates whether it works?
>
Probably :)
If you want to be sure, I think that the PostgreSQL mailing lists, or Stack
Overflow are more appropriate channels...
Alright, thanks. I'm seeing this when doing chunks of 1:
2018-03-08 09:52:59.811 TRACE 28417 --- [ main] o.postgresql.core.v3.QueryExecutorImpl : FE=> Parse(stmt=null,query="REDACTED",oids={1043,0,0})
2018-03-08 09:52:59.811 TRACE 28417 --- [ main]
Hello,
The API usage is correct:
- fetchSize() overrides the JDBC driver's default, which in the case of
PostgreSQL is 0 (reading the source code), meaning that all rows are
fetched in one go by default.
- You're using jOOQ's fetchStream(), which keeps an open JDBC ResultSet
behind the scenes until the Stream is closed.
Hi,
I'm not sure I'm using lazy fetching with streams correctly... I'm using
jOOQ 3.9.6 and PostgreSQL 9.5.
This is my (simplified) repo method:
public Stream<Record> findLazy() {
    return dslContext.select(...)
                     .fetchSize(100)
                     .fetchStream();
}
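One detail worth stressing: whoever consumes the returned Stream must close it, since the underlying ResultSet stays open until then. The plain-JDK sketch below mimics that contract with Stream.onClose() and a try-with-resources block; StreamCloseDemo and findLazy() are hypothetical stand-ins, not jOOQ API:

```java
import java.util.stream.Stream;

public class StreamCloseDemo {

    static boolean closed = false;

    // Stand-in for a lazy repo method: the stream carries a close hook,
    // much like jOOQ's fetchStream() closes its ResultSet when the
    // Stream itself is closed.
    static Stream<String> findLazy() {
        return Stream.of("r1", "r2", "r3").onClose(() -> closed = true);
    }

    public static void main(String[] args) {
        // try-with-resources guarantees the close hook runs, even if
        // processing throws.
        try (Stream<String> s = findLazy()) {
            s.forEach(System.out::println);
        }
        System.out.println("closed=" + closed);
    }
}
```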