On 20 November 2017 at 18:58, Karthik Ram <[email protected]> wrote:

> Argg. Thank you Thomas. It did run longer (17 min as opposed to 10 min)
> this time after I un-commented those lines, but I still saw the same issue.
> Is there any limitation in Jupyter such that it cannot handle more than a
> certain number of GB of data, or query more than a certain number of
> million or billion rows from a PostgreSQL DB?
>

JupyterHub is the only bit that could impose such a limit, and you've
already found the config options for it.
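
If the limit you're hitting is memory on the single-user server, the usual
knob is the spawner memory limit in jupyterhub_config.py. A sketch (the 4G
value is just an illustration, and only spawners that support it, such as
DockerSpawner or KubeSpawner, actually enforce it):

    # jupyterhub_config.py
    # Cap each user's single-user server memory; enforced only by
    # spawners that implement mem_limit.
    c.Spawner.mem_limit = '4G'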


> It's strange, because when I run the SQL script from SQL Workbench
> locally on my machine (which is less powerful than the server on which
> Jupyter is running), I do get the resulting rows.
>

I would guess that SQL Workbench doesn't load all of the selected rows
into memory at once. Databases are designed to work with data larger than
will fit into memory, so database software is usually written to load a
bit, process it, discard it, and load the next bit. But if you try to
make a pandas DataFrame from an SQL query, pandas does try to load all the
rows into memory at once.
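
As a workaround, you can ask pandas to pull the results in chunks instead
of all at once. A rough sketch (the connection string, query and chunk
size are placeholders, not taken from your setup):

    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://user:password@dbhost:5432/mydb")

    row_count = 0
    # With chunksize set, read_sql returns an iterator of DataFrames,
    # so only one chunk is held in memory at a time.
    for chunk in pd.read_sql("SELECT * FROM big_table", engine,
                             chunksize=100000):
        row_count += len(chunk)  # process/aggregate each chunk, then drop it
    print(row_count)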

Thomas
