On 09/05/13 18:45, Jim Starkey wrote:
> On 9/5/2013 4:28 AM, Alex Peshkoff wrote:
>> On 09/04/13 18:11, Jim Starkey wrote:
>>
>>> So here's another seriously heretical thought. If you're planning for
>>> Firebird to hang around another 27 years, it's time to define a modern
>>> OO API that is easy to use from modern languages, is flexible and
>>> extensible, supports an efficient remote protocol, and can fully support
>>> the legacy SQL APIs as a proper client layer and maybe even BLR, if
>>> that's still important.
>>
>> That's what was done in FB3, except probably one point - what do you
>> mean by 'supports an efficient remote protocol'? The API is not
>> _directly_ related to the remote protocol. In FB3 we kept the remote
>> protocol more or less (rather more) as it was before. But a new OO API,
>> working with that protocol, was defined. The SQLDA API is implemented as
>> a layer over it (in FB3 - directly in yvalve, i.e. yvalve currently
>> supports both APIs).
>
> OK, I haven't looked at the protocol for about a million years, so
> consider this a critique of my earlier work, not necessarily the current
> state of affairs.
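For readers who have not looked at the new interfaces yet, the FB3 OO API
mentioned above boils down to code roughly like the sketch below. It is
only illustrative: the names follow firebird/Interface.h as I remember
them, the "employee" database and the error handling are mine, and exact
signatures may differ between snapshots.

// Minimal, illustrative sketch of an attachment through the FB3 OO API.
#include <firebird/Interface.h>
#include <stdio.h>

using namespace Firebird;

int main()
{
    IMaster* master = fb_get_master_interface();
    ThrowStatusWrapper status(master->getStatus());
    IProvider* prov = master->getDispatcher();  // entry point into yvalve

    try
    {
        // yvalve routes the attachment to the proper provider
        // (local engine or remote redirector)
        IAttachment* att = prov->attachDatabase(&status, "employee", 0, NULL);
        ITransaction* tra = att->startTransaction(&status, 0, NULL);

        // ... prepare and execute statements via IStatement here ...

        tra->commit(&status);
        att->detach(&status);
    }
    catch (const FbException& error)
    {
        char buf[256];
        master->getUtilInterface()->formatStatus(buf, sizeof(buf), error.getStatus());
        fprintf(stderr, "%s\n", buf);
    }

    prov->release();
    return 0;
}

The legacy SQLDA-style calls then live on top of these interfaces inside
yvalve, which is why both APIs can coexist in one client library.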
On the other hand, I do not know what it was like a million years ago :)
But I can guess that some enhancements, like prefetching records in big
portions, were added after that, right? If so, a million years is a really
long time.

> Network efficiency has a number of elements. The most important is
> reducing the number of round trips to a bare minimum. Following that
> (distantly) is minimizing the number of bytes sent. Last is minimizing
> the CPU load to encode/decode communication packets.
>
> The keys to reducing round trips are to bundle metadata with data to
> eliminate the need for "info" trips and, most importantly, to batch
> record retrievals and batch inserts into single large packets. Sending a
> MB worth of records is obviously a great win. So is implementing the
> equivalent of the JDBC batch update semantics.

Moving big typical tasks to the server is also useful sometimes. Currently
Firebird supports running gbak as a service, with the backup file
transferred over the wire using the services API. Users reported that for
databases accessed over the internet, backup performance grew more than
10 times. (Certainly this helped satisfy the most important requirement -
the number of round trips was seriously reduced.)

> The encoding we've been talking about for record encoding is also an
> excellent encoding for communication, being both dense and platform
> independent, eliminating the need for the separate homogeneous /
> heterogeneous modes of the original Interbase protocol. And, since all
> numbers and strings can be arbitrarily large without wasting bytes,
> there are no tradeoffs necessary on field size (as Ann loves to point
> out, I'm a victim of the bit depression and tend to make things a
> little smaller than I should to save space).
>

As far as I understand, this also means that the DB format becomes closer
to endianness-independent? I.e. it remains to solve the problems with
transaction numbers and other data outside the record format?
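To make "dense and platform independent" concrete, here is a generic
sketch of the idea (the usual varint/zigzag scheme, not the actual record
encoding being discussed): the byte order is fixed by the algorithm rather
than by the host CPU, small values cost one or two bytes, and arbitrarily
large values are still represented exactly. The helper names are mine.

// Illustrative varint/zigzag encoding: dense and endianness-independent.
#include <stdint.h>
#include <stdio.h>
#include <vector>

// Emit an unsigned value 7 bits at a time, low bits first; the high bit
// of each byte says "more bytes follow". Output is identical on any CPU.
static void putVarint(std::vector<uint8_t>& out, uint64_t v)
{
    while (v >= 0x80)
    {
        out.push_back(static_cast<uint8_t>(v) | 0x80);
        v >>= 7;
    }
    out.push_back(static_cast<uint8_t>(v));
}

// Zigzag-map signed values so that small negative numbers stay small.
static void putSigned(std::vector<uint8_t>& out, int64_t v)
{
    uint64_t zz = (static_cast<uint64_t>(v) << 1) ^ (v < 0 ? ~uint64_t(0) : 0);
    putVarint(out, zz);
}

int main()
{
    std::vector<uint8_t> buf;
    putSigned(buf, 1);                 // 1 byte
    putSigned(buf, -300);              // 2 bytes
    putSigned(buf, INT64_C(1) << 40);  // 6 bytes, still exact
    printf("encoded %zu bytes\n", buf.size());
    return 0;
}

Decoding runs the same loop in reverse on any platform, which is what
removes the need for separate homogeneous / heterogeneous wire modes.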