On Mon, Jun 13, 2011 at 4:43 PM, Alucard <alucard...@gmail.com> wrote:

>
> As far as I know, by using DIH it will read all the documents from the
> database (I am using SQLite v3) into memory.


That is incorrect. DIH does not read all rows into memory; only the set of
rows needed to create a single Solr document is kept in memory at any given
time. The documents are streamed.


>
> Now I would like to ask: if I have a lot of records (let's say 7 million),
> will it put all 7 million records in memory? How can I avoid that?
>
> There is a piece of documentation that says to set
> responseBuffering="adaptive" (MS SQL Server) or batchSize="-1" (MySQL),
> but there is no such attribute documented for SQLite. Can we use those
> parameters? What other parameters can SQLite users use?
>
>
Those parameters are JDBC driver-specific settings. For example, the MySQL
JDBC driver reads all rows into memory unless you set batchSize="-1".
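
For illustration, here's a minimal data-config.xml sketch showing where
batchSize goes on the JdbcDataSource. The driver class, URL, credentials,
and table/field names are placeholders, not taken from this thread:

  <dataConfig>
    <!-- batchSize="-1" is DIH's signal to pass Integer.MIN_VALUE as the
         JDBC fetch size; for MySQL Connector/J this enables row-by-row
         streaming instead of buffering the full result set. -->
    <dataSource type="JdbcDataSource"
                driver="com.mysql.jdbc.Driver"
                url="jdbc:mysql://localhost/mydb"
                user="user" password="pass"
                batchSize="-1"/>
    <document>
      <entity name="item" query="SELECT id, name FROM item">
        <field column="id" name="id"/>
        <field column="name" name="name"/>
      </entity>
    </document>
  </dataConfig>

Whether the same batchSize hint has any effect for other databases depends
entirely on how their JDBC driver interprets the fetch size.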

You'll have to look at the SQLite JDBC driver's docs to see whether it reads
all rows into memory or has a switch to stream rows one at a time to the
client.

-- 
Regards,
Shalin Shekhar Mangar.
