On 07/25/14 18:43, Jim Starkey wrote:
> If an interface is incompatible, existing applications have to be recoded.
> If they need to be recoded, there isn't any real purpose to retaining an
> interface "style."
For existing applications we support the legacy ISC interface. Certainly, when new features (such as schemas) are added, they will be accessible only from the new one.

> I don't understand the difference between creating a statement in prepared
> state and prepareStatement. Could you explain?

The presence of prepareStatement in the API suggests to the user that the same statement instance can be prepared multiple times.

> And explain why Connection::prepareStatement is bad.

I see two ways prepareStatement may be implemented. First, it can be allowed only once per statement (like setCursorName() in the ISC API). In that case we just have a useless API call: all of its functionality might as well be moved to statement creation. Second, calling prepareStatement multiple times on the same statement object may be allowed. In that case I see the need for a lot of related activity in the client program (such as changing parameter formats) that has to be kept in sync. Changing existing data structures instead of creating new ones is a potential source of errors. There is also the question of what to do with a cursor name set for the statement. According to SQL CLI it should be cleared, but I'm sure not all users will like that approach: it is not intuitively logical that re-preparing a statement drops its cursor name.

> Fetching values from a result set does require a virtual function call.
> That's may 10 nanoseconds on the client side, maybe. On the other hand, it
> completely isolates the client from whatever the database engine generated as
> a type.

Is that good in all cases? Data types (especially numeric ones) are rather conservative things; they usually match between the engine and the language. Hiding the fact that the engine in some particular case uses, for example, a 16-bit value behind an int in the API can become a source of errors instead of a good service.

> Remember that the original interface was designed for use with a preprocessor
> that allowed free references to database values within a database block.
> They was a very good idea in its day, but preprocessors have gone by the way
> side.

Yes, but on the other hand accessing the data buffer today does not necessarily require a preprocessor. A set of templates (probably combined with macros) in C++ provides enough service for anyone who wants to use the API without any additional layer over it (a hypothetical sketch is given at the end of this message).

> Coersing data into predeclared structures no longer makes sense,

Why? Imagine a visual design component associated with an SQL statement. The user opens a select statement, possibly changes the data types of the returned values, and the component stores them and builds an interface (IMessageMetadata in the FB3 case) which is passed to openCursor() at run time. fetchNext() then returns buffers in exactly the form the component needs, and how it parses them later is the component's business. I.e. yes, we do not have a 'struct isc_371' with a set of fields in the source code, but the structure of the data buffer may be predeclared in another way.

> and managing dynamically structured buffers is a royal pain in the butt.

Here I do agree.

> My guess is that 80+% of all database client access is through Java.

Maybe, but most of the bug reports we get are from Delphi clients.

> Moving the database engine, API, and protocol closer to JDBC would go a long
> way to eliminating unnecessary overhead.
>
> And I don't understand what you mean about JDBC being "too high level."
> Could you explain?

I meant first of all the use of function calls to access the value of every single field returned by the DB engine.
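To make the "templates instead of a preprocessor" point above more concrete, here is a minimal, self-contained sketch. It is not part of the Firebird API: the FieldRef helper, the field names and the fixed offsets are all hypothetical, standing in for whatever buffer layout the client agreed with the engine (e.g. via IMessageMetadata) before fetching.

    // Hypothetical sketch: typed access to a flat message buffer without a
    // preprocessor. The layout (offsets, types) is hard-coded here for brevity;
    // in a real client it would follow the metadata negotiated with the engine.
    #include <cstdint>
    #include <cstring>
    #include <iostream>

    // Reads a value of type T stored at a fixed offset inside a raw buffer.
    template <typename T, unsigned Offset>
    struct FieldRef
    {
        static T get(const unsigned char* buffer)
        {
            T value;
            std::memcpy(&value, buffer + Offset, sizeof(T)); // alignment-safe copy
            return value;
        }
    };

    // "Predeclared structure" of one fetched row: a 32-bit id and a 16-bit flag.
    using RowId   = FieldRef<std::int32_t, 0>;
    using RowFlag = FieldRef<std::int16_t, 4>;

    int main()
    {
        // Pretend this buffer came back from a fetch in the agreed format.
        unsigned char buffer[6] = {0};
        std::int32_t id = 42;
        std::int16_t flag = 1;
        std::memcpy(buffer, &id, sizeof(id));
        std::memcpy(buffer + 4, &flag, sizeof(flag));

        std::cout << "id=" << RowId::get(buffer)
                  << " flag=" << RowFlag::get(buffer) << '\n';
        return 0;
    }

The point is only that the compiler, not a preprocessor, enforces the field types: no per-field virtual call is needed to read a value, yet the buffer structure is still "predeclared" in the source.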