I've opened the PR: https://github.com/jeremyevans/sequel/pull/1711

I tried the change in my scripts and it works as expected, but I couldn't 
find a way to trigger the integration tests for the sequel repo.
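
For anyone who finds this thread later, the gist of the patch I had been 
running locally is below. It's a rough sketch and may not match the PR 
exactly; the column handling is simplified from the adapter's actual code.

```ruby
# Rough sketch of the streaming approach (may differ from the PR):
# pull rows from the ODBC statement one at a time instead of
# materializing the whole result set with fetch_all.
module Sequel
  module ODBC
    class Dataset
      def fetch_rows(sql)
        execute(sql) do |s|
          i = -1
          cols = s.columns(true).map { |c| [output_identifier(c.name), c.type, i += 1] }
          self.columns = cols.map { |c| c[0] }
          # ODBC::Statement#fetch returns a single row (or nil when done),
          # so rows are yielded as they arrive from the driver.
          while row = s.fetch
            hash = {}
            cols.each { |n, t, j| hash[n] = convert_odbc_value(row[j], t) }
            yield hash
          end
        end
      end
    end
  end
end
```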

> You can use Dataset#paged_each if you want to load the information in 
> chunks.

I've tried it, but it was quite slow with Snowflake.
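
For reference, this is roughly what I tried (the `:id` ordering column is 
just an example; paged_each requires an ordered dataset):

```ruby
# paged_each fetches rows in batches (1000 by default) by issuing a
# separate query per page, so rows aren't all held in memory at once.
connection[:big_table].order(:id).paged_each(rows_per_fetch: 10_000) do |row|
  p row
end
```

My guess is that the extra per-page round trips are what make it slow 
against Snowflake.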

On Tuesday, August 25, 2020 at 4:37:26 PM UTC+2 Jeremy Evans wrote:

> On Tuesday, August 25, 2020 at 6:24:21 AM UTC-7, Michal Wrobel wrote:
>>
>> Hi,
>>
>> I just started using Sequel with ruby-odbc to connect to SnowflakeDB.
>>
>> I've noticed that Sequel eagerly loads the whole result set before 
>> handing any rows back to the caller.
>>
>> E.g.:
>>
>> ```
>> connection['SELECT * from big_table'].each do |row|
>>   p row
>> end
>> ```
>> This will not yield any rows until the entire result set has been 
>> loaded into memory.
>>
>> The problem seems to be the `fetch_all` call here:
>>
>> https://github.com/jeremyevans/sequel/blob/67cc4245227a6c01ecb20d0138aaf37b05a5aebc/lib/sequel/adapters/odbc.rb#L95
>>
>> I can monkey-patch it, but I would prefer to have the right fix in 
>> master. Does anyone know why this behaviour is there? It doesn't seem 
>> right.
>>
>
> That's how the postgres adapter works by default as well.  You can use 
> Dataset#paged_each if you want to load the information in chunks.  That 
> being said, if you send in a pull request that streams records and it 
> passes all of the integration tests, I wouldn't have a problem merging it.
>
> Thanks,
> Jeremy
>
