Vlad,

> >>> These connections perform only a few heavy weight SQL statements
> >>> (taking max 3-4 of real execution time).
> >>>
> >>> Most of the time is spent in the Firebird engine waiting for the
> >>> next fetch, due to network latencies.
> >>
> >> In the engine ?
> >
> > Yes, the engine would be **waiting** for the next fetch request from the
> > client.
>
> Engine never waits for the client ! This is against client-server
> architecture.
Oh yes it does! I can execute the SQL, and in 23 seconds the first "page" of
rows will be returned to the client. At this point, the server stops and waits
for the client to perform a "fetch". So, until I navigate the result set to
the point where the client app determines that more rows must be requested,
nothing will be executing in the server process. (A minimal sketch of this
fetch loop is at the end of this message.)

> > Based on your logic, the SELECT would be killed after 10 minutes, with only
> > 3,000 rows having been processed by the client application.
> >
> > Following my logic, any time waiting for a "fetch" would not count,
> > and thus all 10,000 rows would be processed --
> > but the transaction/connection would be 'active' for 33 minutes.
>
> Following *logic* developer should set timeout based on application
> processing time or (much better) fetch whole resultset, *commit* ASAP, and
> then process data.
>
> You describe very bad application (sorry) which holds open transaction 33
> times longer than necessary.

"Bad" or not has nothing to do with my point. I am saying that applications
are not perfect; we need a solution that provides the best possible outcome
for all usage patterns. The fact that the transaction is open for longer than
it should be has nothing to do with my issue. I want to use a timeout to
control very bad SQL statements -- which is a separate, unrelated issue from
the length of a transaction.

Perhaps we are talking about different timeout values? Execution vs.
Transaction vs. Connection? I am more concerned with the Execution timeout,
since it is the value that represents the direct CPU cost of a SQL statement.

> >> If you insist on changes in implementation, please, specify
> >> exactly what you need and where it is implemented in a such way.
> >
> > Add logic to stop and start the timer in locations where the server is
> > waiting for client requests/"fetch" operations.
>
> No, sorry, without me. This is against my feeling of common sence and
> against all my experience. I could agree to completely exclude fetches from
> timeout scope, i.e. stop timer right after execute()\open(), but i'm not
> sure it is correct way.

Well, Jiri and Mark agree with me, so my POV is not unreasonable.

> > 1- I don't know of any other engines which allow for results to be fetched
> > in "pages".
> >
> > It is the fact that the results can be fetch in "pages" with Firebird
> > that, IMO, raises the need for the additional level of 'accounting'.
>
> I see no relation of batch fetches (if you speak about it) with all said
> above.

My point was that I don't know whether those engines allow results to be
returned in pages, or whether the result is returned as a single block. In the
latter case, the execution time == the "cost" of the query, since there is no
interaction with the client (there are no page fetches). Whereas your
implementation currently just represents the elapsed time since the query
started -- including time during which the engine is doing nothing.

> MSSQL implements timeouts at client side:
> Server-side statement timeouts implemented in MySQL:
> PostgreSQL docs is very limited:

Do you/anyone know if these engines return full result sets or follow the
"page set" approach?

Sean
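P.S. For concreteness, here is a minimal sketch of the fetch loop I am
describing, written against plain JDBC (so it would apply to Jaybird as well).
The connection URL, table name and fetch size below are my own placeholders,
not anything from this thread. With a fetch size of N, the engine produces a
"page" of rows, hands it to the driver, and then does nothing for that
statement until ResultSet.next() exhausts the page and the driver requests the
next one -- so the wall-clock time between fetches is client/network time, not
engine execution time.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class FetchLoopDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details -- adjust to your environment
            try (Connection con = DriverManager.getConnection(
                     "jdbc:firebirdsql://localhost:3050/employee",
                     "SYSDBA", "masterkey");
                 Statement stmt = con.createStatement()) {

                // Ask the driver to fetch rows in "pages" of roughly 200
                stmt.setFetchSize(200);

                long started = System.currentTimeMillis();
                try (ResultSet rs = stmt.executeQuery(
                         "SELECT * FROM big_table")) {   // hypothetical table
                    while (rs.next()) {
                        // While we sit here doing client-side work, the engine
                        // has already produced the current page and is simply
                        // waiting for the next fetch request.
                        Thread.sleep(5);   // simulate slow per-row processing
                    }
                }
                // Elapsed time here is dominated by client-side work and
                // network round trips, not by engine execution time.
                System.out.println("Elapsed: "
                        + (System.currentTimeMillis() - started) + " ms");
            }
        }
    }

The sketch is only illustrative; whether those "idle between fetches" intervals
should be charged against an execution timeout is exactly the point we are
debating.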