On Tue, Mar 16, 2010 at 11:54 AM, Olaf Schmidt <s...@online.de> wrote:
> "HLS" <sant9...@gmail.com> schrieb im
> Newsbeitrag
> news:9cf5931e1003160705v38335b5g47a3d91193c28...@mail.gmail.com...
>> ...we have a ISAM/BTREE database under our full
>> reader/writer and exclusive locking controls...
> From your posted code-snippet I don't see any
> "heavy SQL-Query-usage" - so, first the question,
> why would you want to change your current system?

Thank you!  Trust me, we are trying to figure that out.  I guess the
justification is all of what you stated.   Overall:

1) Improve the search system
2) Allow for 3rd party growth (add 3rd party fields)
3) "Social Networking" ideas.
4) Low footprint installation
5) API compatibility with existing installed base of client applets.
6) Extend the API to be more "SQL" ready

> If you need only something like a "fast entry" into
> some of your DataFile-Chunks (per FileNameAreaKey)
> and then enumerate (record-)content in these
> Data-Chunks, respecting some "sort-order" - why choose
> a system which is based on SQL-query-processing?

Trying to see if we can make a transparent replacement. Right now, the
API has over 250 functions. The databases are all structured the same
way. For example, for files we have 5 keys:

//! Files Database Function Keys

const int FileNameAreaKey = 0;
const int FileAreaNameKey = 1;
const int FileAreaDateKey = 2;
const int FileUploaderKey = 3;
const int FileDateAreaKey = 4;

and a set of traversal and I/O functions.  For example, the
client function:

BOOL  APIENTRY GetFirstFileRec(DWORD keynum, TFileRecord &f, DWORD &tid);

On the backend, the server stub is:

/* [fault_status][comm_status] */ error_status_t WcsGetFirstFileRec(
   /* [in] */ TWildcatContextHandle wch,
   /* [in] */ unsigned long keynum,
   /* [ref][out] */ TFileRecord __RPC_FAR *f,
   /* [ref][out] */ unsigned long __RPC_FAR *tid)
{
   ZeroMemory(f, sizeof(TFileRecord));
   TClientContext *cc = GetClientContext(wch);
   if (!cc->LoggedIn()) return WC_USER_NOT_LOGGED_IN;
   TUserContext *uc = cc->GetUserContext();

   TReaderGrabber grab(FileAreas);

   // release/create new SQL cursor
   if (cc->qFiles) delete cc->qFiles;
   cc->qFiles = new CSqliteQuery(*sqlFilesDb);

   CString sql = "select * from files";
   switch (keynum) {
      case FileNameAreaKey: sql += " order by Name, Area"; break;
      case FileAreaNameKey: sql += " order by Area, Name"; break;
      case FileAreaDateKey: sql += " order by Area, FileTime"; break;
      case FileUploaderKey: sql += " order by UploaderId"; break;
      case FileDateAreaKey: sql += " order by FileTime, Area"; break;
   }

   sqlite3_stmt *stm = cc->qFiles->get_result(sql.GetString());
   if (!stm) {
      delete cc->qFiles;
      cc->qFiles = NULL;
      return WC_RECORD_NOT_FOUND;
   }

   // save tid point; not used, but checked in Next/Prev
   *tid = (unsigned long)&cc->qFiles;

   for (;;) {
      if (!cc->qFiles->fetch_row()) {
         delete cc->qFiles;
         cc->qFiles = NULL;
         return WC_RECORD_NOT_FOUND;
      }
      DWORD area = cc->qFiles->getuval("Area");
      BOOL bCheckFileAccess = FALSE;
      if (area < FileAreas.Count() &&
          CheckFileAreaAccess(cc, area, luds, bCheckFileAccess)) {
         if (uc && bCheckFileAccess) {
            // Private Files Logic
            DWORD pvtid = cc->qFiles->getuval("PrivateUserId");
            DWORD upid  = cc->qFiles->getuval("UploaderId");
            if (!CheckFileOwnership(pvtid, upid, uc->User.Info.Id))
               continue;   // caller may not see this private file
         }
         // perform full SQL conversion and return record
         TFullFileRecord ff = {0};
         *f = ff.Info;
         return 0;
      }
   }
   return 0;
}

And for the Next (and Prev) functions, the logic is the same, except
that an existing cc->qFiles context cursor is expected.

So yes, the methodology does not fit SQL well, since SQL requires a
cursor-release concept. The expectation is that a new level of API
functions will handle this, e.g. by adding a GetFileRecClose() function.

But as you said, the direction is to move more of the work to the
client, so there is a lot of truth in saying we are battling an older
I/O design.

I think SQLITE3 is just a way to explore this, because it may not
matter which SQL engine we use.  But we want to use "generic" SQL so
that different engines can be used.

I explored Berkeley DB back in 1998 or so and it appeared we could do
something, but it never happened - the goals above were not as
pressing at the time.

> We also use an RPC-server, with the SQLite-engine
> as the serverside backend - but we "completely isolate"
> the serverside-cursors from the clientside - meaning, we
> perform our SQL-query at the serverside - and serialize
> (copy) all the records, according to the "set the SQL-string
> describes" into a "transferrable container-object" (a Recordset,
> or ResultSet) first.  This container-objects content is then
> compressed and transferred over sockets to the clientside
> (after all the sqlite-handles for the query in question were
> freed).

I see.  But as you mentioned, you needed to change the API approach -
something I will have no choice but to do myself. :)  Yes, this is
understood.

> I understand, that you "don't want to rewrite, what you
> currently have" - but (at least for an *SQLite*-engine in the
> backend) you will have to IMO.

Right,  it just seemed, from what I was reading, that I could do it.

> Nonetheless, such a "disconnected approach" is in the
> meantime something like "common-practise" - given all
> the Browser-based clients which work against
> "RPC-Servers, or Application-Servers - or let's call them
> *WebServers* ;-) ...over sockets (e.g. receiving JSON-
> serialized Resultsets over http) - ... no matter what SQL-
> engine is working in the backend of such a WebSite ...
> "clientside-controlled Servercursors" are "a thing of the
> past" IMO, since they "introduce more problems than
> they solve".

I agree.  But we would probably "go out of business" if we altered the
basic client API without providing a backward-compatibility module or
proxy.   That's the critical part of this exploration.  Adding a more
transaction-based API is not the issue; it's trying to provide the same
functionality through the existing API as it is currently understood.
The same API shown in C/C++ above is available in different languages,
including a server-side p-code embedded language (WCBASIC, a Basic-like
language):

   dim f as TFileRecord
   dim tid as integer
   dim n as integer = 0
   if GetFirstFileRec(FileNameAreaKey, f, tid) then
      do
         ... do something ...
      loop while GetNextFileRec(FileNameAreaKey, f, tid)
   end if

Our customers have many scripts like this for their needs.  So this is
what we are trying to accomplish here - how to keep a persistent
SQLITE3 cursor per thread context for the current API implementation.
If we told customers that they now have to change the above logic for
no reason other than that we changed the backend database, we would get
a lot of resistance and probably fewer upgrades.

sqlite-users mailing list