> Second, the feature needed to do this without even
> using a cursor was added 1.5 years ago (PQsetSingleRowMode). The DBD::Pg
> was just not taught how to use it yet.
True. And we were hoping for something better, so we can make one request
On Mar 12, 2014, at 5:57 AM, matshyeq wrote:
> I don't see why? I can't think of any single SQL tool I've been working with
> that didn't have this functionality, really.
> The principle I find very simple and useful.
> There is a defined "fetch row size" parameter (each tool gives it its own
On 03/12/2014 06:05 AM, matshyeq wrote:
Don't fully agree with last statement. FETCH is WAY better and more
functional than LIMIT.
PQsetSingleRowMode() is not meant for partial resultsets, but it can
be used if you accept the downsides.
Given the 'discretion mode' I'm still not
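For readers following the thread, here is a minimal C sketch of the single-row-mode loop under discussion. It assumes an open PGconn *conn and a hypothetical big_table; error handling is abbreviated, and it needs a live server to actually run.

```c
#include <stdio.h>
#include <libpq-fe.h>

/* Stream a large result one row at a time with PQsetSingleRowMode().
 * conn is assumed to be an open connection; big_table is a placeholder. */
static void stream_rows(PGconn *conn)
{
    if (!PQsendQuery(conn, "SELECT id FROM big_table"))
        return;                          /* dispatch failed */

    /* Must be requested immediately after the query is dispatched,
     * before the first PQgetResult(). */
    if (!PQsetSingleRowMode(conn))
        fprintf(stderr, "single-row mode not available\n");

    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL) {
        if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
            printf("%s\n", PQgetvalue(res, 0, 0));  /* one row per result */
        /* A final PGRES_TUPLES_OK with zero rows marks end of the set. */
        PQclear(res);                    /* free each row before the next */
    }
}
```

Note that, as discussed in the thread, there is no client-side fetch-size knob here: the server streams the whole result and TCP flow control is the only throttle; the client merely avoids accumulating it.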
On Wed, Mar 12, 2014 at 9:30 AM, Marko Kreen wrote:
> On Tue, Mar 11, 2014 at 12:39:12PM +, matshyeq wrote:
>
> - when using PQsetSingleRowMode() function - does it give an option to
> define how many rows to cache on client's side (like JDBC setFetchSize()
> does) or leaves it at pqlib's discretion?
>
> > Well, I don't know what "streamed" exactly means here.
> > If server pushes sequentially all the data not asking client if ready to
> > receive then that's what the issue is about.
>
> This problem is handled in kernel's TCP stack - it will slow down
> the connection if userspace does not
On Wed, Mar 12, 2014 at 10:57:03AM +, matshyeq wrote:
> On Wed, Mar 12, 2014 at 9:30 AM, Marko Kreen wrote:
> > This option would not make sense as you are not "fetching" anything,
> > full resultset is being streamed from server over TCP connection.
>
> Well, I don't know what "streamed" exactly means here.
On Tue, Mar 11, 2014 at 12:39:12PM +, matshyeq wrote:
> - when using PQsetSingleRowMode() function - does it give an option to
> define how many rows to cache on client's side (like JDBC setFetchSize()
> does) or leaves it at pqlib's discretion?
This option would not make sense as you are not "fetching" anything,
full resultset is being streamed from server over TCP connection.
Marko, Tom, Adrian, Jeff, Daniel - thank you all for valuable feedback!
Two general questions:
- when using PQsetSingleRowMode() function - does it give an option to
define how many rows to cache on client's side (like JDBC setFetchSize()
does) or leaves it at pqlib's discretion?
- is it/would
matshyeq wrote:
> The only solution is CURSOR based, which I find an awkward low-level hack
> compared to the elegant option supported by the native library.
That's not the only solution, even with the current DBD::Pg
you could do:
$dbh->do("COPY (sql-query) TO STDOUT");
my $data;
while ($dbh->pg_getcopydata($data) >= 0) {
    # each iteration places one row of COPY output in $data
}
On Sun, Mar 9, 2014 at 6:43 AM, matshyeq wrote:
> Hello,
>
> I've found an issue when I tried to implement fetching rows from a big table
> (2mln rows) in my app.
> Basically I don't find an elegant and easy way (other than always using
> cursors) to limit the number of rows returned.
> This causes my application to break due to the excessive memory consumption.
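For comparison, the cursor-based workaround the thread keeps coming back to can be sketched in C roughly as follows. PGconn *conn, the cursor name c, the batch size of 1000, and big_table are all illustrative assumptions, and only minimal error handling is shown.

```c
#include <stdio.h>
#include <libpq-fe.h>

/* Fetch a large result in bounded batches via a server-side cursor.
 * conn is assumed open; names and batch size are placeholders. */
static void fetch_in_batches(PGconn *conn)
{
    PQclear(PQexec(conn, "BEGIN"));  /* cursors require a transaction */
    PQclear(PQexec(conn,
        "DECLARE c NO SCROLL CURSOR FOR SELECT id FROM big_table"));

    for (;;) {
        PGresult *res = PQexec(conn, "FETCH 1000 FROM c");
        int n = PQntuples(res);
        for (int i = 0; i < n; i++)
            printf("%s\n", PQgetvalue(res, i, 0));  /* process one row */
        PQclear(res);
        if (n == 0)
            break;                   /* empty batch: cursor exhausted */
    }

    PQclear(PQexec(conn, "CLOSE c"));
    PQclear(PQexec(conn, "COMMIT"));
}
```

Here the FETCH count plays the role of JDBC's setFetchSize(): at most one batch of rows is held in client memory at a time, at the cost of one round trip per batch.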
matshyeq writes:
> If 'SingleRowMode' goes row-by-row then again it's not a solution,
> especially given that this particular issue applies to rather large row
> sets.
Perhaps you should actually experiment with that solution instead of
rejecting it out of hand. Or at least RTFM about it.
It do
On 03/10/2014 04:51 AM, matshyeq wrote:
Albe Laurenz wrote:
I would believe the stackoverflow
(http://stackoverflow.com/questions/21960121/perl-script-fails-when-selecting-data-from-big-postgresql-table)
question referred to explains the issue well.
You can retrieve the full result set,
not an option because of client memory limitations.
>Albe Laurenz wrote:
I would believe the stackoverflow (
http://stackoverflow.com/questions/21960121/perl-script-fails-when-selecting-data-from-big-postgresql-table)
question referred to explains the issue well.
> You can retrieve the full result set,
not an option because of client memory limitations.
Fully agree with Laurenz.
LIMIT in some (limited!) cases could be seen as a workaround but it's far
from being elegant (what if your end user types the query?)
If 'SingleRowMode' goes row-by-row then again it's not a solution,
especially given that this particular issue applies to rather large row sets.
On Mon, Mar 10, 2014 at 06:58:26AM +, Albe Laurenz wrote:
> Daniel Verite wrote:
> > matshyeq wrote:
>
> [ runs out of memory on the client because all results from a large query are
> retrieved at once ]
>
> >> "Unfortunately, this is a limitation in the underlying driver (libpq)
> >> rather than DBD::Pg itself. There have been talks over the years of
> >> supporting this, but nothing concrete yet. Your best bet would be to
> >> ask about this on the Postgres lists"
On 3/9/2014 6:43 AM, matshyeq wrote:
Hello,
I've found an issue when I tried to implement fetching rows from a big
table (2mln rows) in my app.
Basically I don't find an elegant and easy way (other than always using
cursors) to limit the number of rows returned.
This causes my application to break due to the excessive memory consumption.
matshyeq wrote:
> Postgresql has for a good while been perceived as one of the best (or just
> simply the best!?)
> available open source DB solutions, so I'm really surprised this
> functionality is not yet supported...
You can retrieve the full result set,
you can retrieve it row by row,
you ca
Daniel Verite wrote:
> matshyeq wrote:
[ runs out of memory on the client because all results from a large query are
retrieved at once ]
>> "Unfortunately, this is a limitation in the underlying driver (libpq) rather
>> than DBD::Pg itself. There have been talks over the years of supporting
>> this, but nothing concrete yet. Your best bet would be to ask about
>> this on the Postgres lists"
matshyeq wrote:
> "Unfortunately, this is a limitation in the underlying driver (libpq) rather
> than DBD::Pg itself. There have been talks over the years of supporting
> this, but nothing concrete yet. Your best bet would be to ask about this on
> the Postgres lists"
>
> Would you consider
On 03/09/2014 06:43 AM, matshyeq wrote:
Hello,
I've found an issue when I tried to implement fetching rows from a big table
(2mln rows) in my app.
Basically I don't find an elegant and easy way (other than always using
cursors) to limit the number of rows returned.
This causes my application to break due to the excessive memory consumption.
Hello,
I've found an issue when I tried to implement fetching rows from a big table
(2mln rows) in my app.
Basically I don't find an elegant and easy way (other than always using
cursors) to limit the number of rows returned.
This causes my application to break due to the excessive memory consumption.