25.02.2017 2:55, Leyne, Sean wrote:
> Vlad,
>
>>>>> These connections perform only a few heavy-weight SQL statements
>>>>> (taking max 3-4 of real execution time).
>>>>> Most of the time is spent in the Firebird engine waiting for the
>>>>> next fetch, due to network latencies.
>>>>
>>>> In the engine?
>>>
>>> Yes, the engine would be **waiting** for the next fetch request from
>>> the client.
>>
>> The engine never waits for the client! This is against the
>> client-server architecture.
>
> Oh yes it does!
The engine is passive (except for event notification). It does nothing on
its own initiative.

> I can execute the SQL, and in 23 seconds the first "page" of rows will
> be returned to the client.
> At this point, the server stops and waits for the client to perform a
> "fetch". So, only when I

First, stop mixing up the server and the engine. Second, the *server* at
this point doesn't stop: it continues to fetch records from the *engine*
and caches them, so that it can send them immediately to the client when
the next op_fetch arrives.

...

>>> Based on your logic, the SELECT would be killed after 10 minutes, with
>>> only 3,000 rows having been processed by the client application.
>>>
>>> Following my logic, any time waiting for a "fetch" would not count,
>>> and thus all 10,000 rows would be processed --
>>> but the transaction/connection would be 'active' for 33 minutes.
>>
>> Following *logic*, the developer should set the timeout based on the
>> application's processing time or (much better) fetch the whole result
>> set, *commit* ASAP, and then process the data.
>> You describe a very bad application (sorry) which holds a transaction
>> open 33 times longer than necessary.
>
> "Bad" or not has nothing to do with my point.

I consider that it does.

> I am saying that applications are not perfect, we need a solution that
> provides the best possible outcome for all usage patterns.

We need a solution for *most* usage patterns, and I am against supporting
bad practices.

> The fact that the transaction is open for longer than it should be has
> nothing to do with my issue.

It is impossible (and very wrong) to ignore such important details.

> I want to use the timeout to control very bad SQL statements -- which is
> a separate/unrelated issue to the length of a transaction.

Statement and transaction lifetimes are bound together, despite your
wishes.

> Perhaps we are talking about different timeout values? Execution vs.
> Transaction vs. Connection?

I am speaking about the statement execution timeout.
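To make the server-vs-engine distinction above concrete, here is a minimal Python sketch (illustration only, not Firebird's actual server code; `engine_cursor`, `server_prefetch`, and the queue-based cache are all hypothetical names). The engine stays passive and only yields rows on demand, while the server pulls rows into a cache ahead of the client, so a later op_fetch can be answered immediately:

```python
import queue
import threading

def engine_cursor():
    """Stand-in for the engine: passive, yields rows only when pulled."""
    for i in range(100):
        yield ("row", i)

def server_prefetch(cursor, cache):
    """Stand-in for the server: actively drains the engine into a cache
    so the next client op_fetch can be answered without engine work."""
    for row in cursor:
        cache.put(row)
    cache.put(None)  # end-of-stream marker

cache = queue.Queue()
t = threading.Thread(target=server_prefetch, args=(engine_cursor(), cache))
t.start()
t.join()  # by now the whole result set sits in the server-side cache

# The "client" fetches from the already-filled cache; the engine is idle.
rows = []
while True:
    row = cache.get()
    if row is None:
        break
    rows.append(row)

print(len(rows))  # 100
```

The point of the sketch: the time the *client* takes between fetches shows up as idle time at the cache, not as the engine "waiting".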
And I already wrote here that there is no "transaction timeout" -- did you
read that?

> I am more concerned with the Execution timeout -- since it is the value
> that represents a direct CPU cost of a SQL statement.

It represents tens of things besides CPU cost. If you need to control CPU
cost, ask for it explicitly, but that is a very, very different thing.

>>>> If you insist on changes to the implementation, please specify
>>>> exactly what you need and where it is implemented in such a way.
>>>
>>> Add logic to stop and start the timer in locations where the server is
>>> waiting for client requests/"fetch" operations.
>>
>> No, sorry, not with me. This is against my sense of common sense and
>> against all my experience. I could agree to completely exclude fetches
>> from the timeout scope, i.e. stop the timer right after
>> execute()/open(), but I'm not sure that is the correct way.
>
> Well, Jiri and Mark agree with me. So, my POV is not unreasonable.

You mixed several things into one, found no confirmation that such a
feature exists elsewhere, and don't listen to the arguments -- yes, of
course, that is very reasonable!

>>> 1- I don't know of any other engines which allow for results to be
>>> fetched in "pages".
>>> It is the fact that the results can be fetched in "pages" with
>>> Firebird that, IMO, raises the need for the additional level of
>>> 'accounting'.
>>
>> I see no relation between batch fetches (if that is what you mean) and
>> all said above.
>
> My point was that I don't know if those engines allow for results to be
> returned in pages or whether the result is returned as a single block.
>
> In which case, the execution time == the "cost" of the query, since
> there would be no interaction with the client (there are no page
> fetches).
>
> Whereas your implementation currently just represents the elapsed time
> since the query started --

Yes, exactly.

> including time that the engine is doing nothing.

And many other "interesting" times you do not want to consider.
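The claim that elapsed time represents many things besides CPU cost is easy to demonstrate. A minimal Python sketch (hypothetical names, not engine code) compares wall-clock time against CPU time for a "statement" whose lifetime is dominated by waiting (network latency, a slow-fetching client, a lock):

```python
import time

def measure(fn):
    """Return (wall_seconds, cpu_seconds) for a call to fn."""
    w0, c0 = time.monotonic(), time.process_time()
    fn()
    return time.monotonic() - w0, time.process_time() - c0

def waiting_statement():
    # Simulates a statement that mostly waits rather than computes.
    time.sleep(0.2)

wall, cpu = measure(waiting_statement)
print(wall > 0.15)  # elapsed time counts the wait: True
print(cpu < 0.1)    # CPU cost is almost nil: True
```

An elapsed-time timeout (what the current implementation measures) fires on `wall`; a CPU-cost limit would have to be built on something like `cpu`, which is a very different mechanism.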
>> MSSQL implements timeouts at the client side:
>> Server-side statement timeouts are implemented in MySQL:
>> PostgreSQL docs are very limited:
>
> Do you/anyone know if these engines return full result sets or follow
> the "page set" approach?

If you think about it a bit more deeply, it will be clear that there is no
other way than to send big result sets in parts over the wire. Also, why
don't you ask whether they fully fetch the result set at the server side?

What you offer is very hard to use in practice, as nobody would be able to
explain why a statement was cancelled at a given moment, and it is
impossible to predict the moment when the timeout should fire.

Well, since we can't agree, I offer to choose one of the following:

1. Leave it as is
2. Completely exclude fetches from the timeout scope
3. Remove the whole feature

Vlad

PS You don't even consider my sample cases and arguments, and give no
comments on them. I think this is not fair, and I suspend my participation
in this one-way conversation.

Firebird-Devel mailing list, web interface at
https://lists.sourceforge.net/lists/listinfo/firebird-devel
