Hey, I am also facing a similar issue, though in a different context (related to coprocessors), so it's really a separate use case (I still need to look into it).
But in this case, will reducing the scanner cache size help (at least temporarily)? That is, in the case where the scanner is busy collecting/computing rows just to fill the client-side cache.

Himanshu

On Thu, May 19, 2011 at 5:09 PM, Vidhyashankar Venkataraman <[email protected]> wrote:
> >> maybe you could bump the timeouts high enough so that you don't
> >> hit the issue at all?
> Don't you think setting a high timeout might be a little ad hoc? This might
> just work, except that it could lead to a really long delay in cases where
> there should be a timeout. Also, since we have non-homogeneous data, the
> timeout may have to be set differently for different access sizes.
>
> I can go through the patch and ping you guys back.
>
> Vidhya
>
> On 5/19/11 3:42 PM, "Jean-Daniel Cryans" <[email protected]> wrote:
>
> The latest patch would need some more work; I did more than what's
> really required.
>
> If you are really taking more than a minute to do a single next()
> call, maybe you could bump the timeouts high enough so that you don't
> hit the issue at all? The default is pretty arbitrary.
>
> J-D
>
> On Thu, May 19, 2011 at 3:37 PM, Vidhyashankar Venkataraman
> <[email protected]> wrote:
> > I had spoken a while back about this problem (clients timing out when
> > the scanner has not returned a row yet; search for "A possible bug in
> > the scanner").
> >
> > I am trying to fix the problem in the next few days: our system is a
> > little crippled without the fix (we use filters in scans, and the bug
> > crops up with filters that return sparse sets).
> >
> > JD, can you let me know the status of the most recent patch attached
> > (31/Mar/10)?
> >
> > Thank you
> > Vidhya
> >
> >
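
For what it's worth, here is roughly what I mean by shrinking the scanner
cache, as a minimal client-side sketch (the table name is made up and I am
assuming the 0.90-era client API; untested):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;

    public class SmallCacheScan {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable");  // hypothetical table
        Scan scan = new Scan();
        // Ask for one row per next() RPC instead of the configured batch,
        // so each server-side call only has to accumulate a single
        // matching row before it returns to the client.
        scan.setCaching(1);
        ResultScanner scanner = table.getScanner(scan);
        try {
          for (Result r : scanner) {
            // process r ...
          }
        } finally {
          scanner.close();
          table.close();
        }
      }
    }

As far as I can tell this only buys time when the cost is in accumulating
many matching rows; with a filter so sparse that nothing matches for a long
stretch, even a single-row fetch could blow past the timeout, which sounds
like the case Vidhya is hitting.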

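And if bumping the timeout ends up being the stopgap, the relevant knob in
this version is, if I remember correctly, the scanner lease period on the
region servers, e.g. in hbase-site.xml (the region servers need a restart
to pick it up):

    <property>
      <name>hbase.regionserver.lease.period</name>
      <!-- default is 60000 ms (one minute); raising it to e.g. five
           minutes keeps a slow next() from expiring the scanner lease,
           at the cost of holding on to scanners whose clients really
           did die -->
      <value>300000</value>
    </property>

That is exactly the trade-off Vidhya points out, so it papers over the
problem rather than fixing it.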