RE: [sqlite] Question regarding schema change error

2006-04-03 Thread Andrew Shakinovsky
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: Saturday, April 01, 2006 7:08 AM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Question regarding schema change error ...
> It is easy to put wrappers around sqlite3_prepare(), sqlite3_step(), 
> and sqlite3_finalize() to fix this.
> ...

Thanks. That works great. I would agree with Dennis that this
functionality should belong in the API; however, after reading the
following post, I can see why it is not done this way:

http://www.mail-archive.com/sqlite-users@sqlite.org/msg11398.html
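For reference, a wrapper along those lines might look roughly like the
sketch below (illustration only; the helper name run_sql() is made up, and
since with the pre-_v2 prepare interface the detailed SQLITE_SCHEMA code
may come back from sqlite3_finalize() rather than sqlite3_step(), both are
checked):

#include <sqlite3.h>

static int run_sql(sqlite3 *db, const char *sql)
{
    int rc, fin;
    do {
        sqlite3_stmt *stmt = 0;
        rc = sqlite3_prepare(db, sql, -1, &stmt, 0);
        if (rc != SQLITE_OK) return rc;

        while ((rc = sqlite3_step(stmt)) == SQLITE_ROW) {
            /* read columns with sqlite3_column_*() here */
        }

        fin = sqlite3_finalize(stmt);
        /* If the schema changed underneath us, re-prepare and retry. */
    } while (rc == SQLITE_SCHEMA || fin == SQLITE_SCHEMA);

    return (rc == SQLITE_DONE) ? SQLITE_OK : fin;
}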


RE: [sqlite] Re: Question regarding schema change error

2006-03-31 Thread Andrew Shakinovsky
 

> -Original Message-
> From: Igor Tandetnik [mailto:[EMAIL PROTECTED] 
> Sent: Friday, March 31, 2006 12:14 PM
> To: SQLite
> Subject: [sqlite] Re: Question regarding schema change error
> 
...
> 
> > The thing is, in 99.9%
> > of the time, the error would be received on the first call to 
> > sqlite3_step.
> 
> 100% of the time, actually. Once step succeeds, and until 
> reset or finalize, there is a read transaction (a SHARED 
> lock) against the database. Any schema change is a write 
> operation and cannot proceed in the presence of active readers.

OK, I was under the assumption that a schema change after one or more
calls to step could cause the SQLITE_SCHEMA error. Thanks for pointing
that out.


[sqlite] Question regarding schema change error

2006-03-31 Thread Andrew Shakinovsky
I am having a minor issue with schema changes and am looking for the
best solution/workaround. I am using the latest released version of
SQLite, by the way.

I have a database which needs to be accessed by multiple
users/processes. During its lifetime it will have tables added to it
periodically through CREATE TABLE statements. The problem I am running
into is that when the sqlite3_step function is called, I receive a
SQLITE_SCHEMA error notifying me that the schema has changed. According
to the docs, I am asked to finalize the statement and re-prepare it.
Unfortunately this doesn't fit nicely into my architecture. The thing
is, 99.9% of the time the error would be received on the first call to
sqlite3_step. It looks like the call to sqlite3_prepare does not check
for schema changes, whereas the call to sqlite3_step does. Example with
two processes connected to the db:
 
Process 1 issues CREATE TABLE XYZ
Process 2 issues a sqlite3_prepare to run a SELECT STATEMENT (no error
returned even though the schema has changed since process 2 opened the
database).
Process 2 then issues a sqlite3_step which comes into the code handled
by the "case OP_VerifyCookie". At this point it sees that the schema has
changed and I get the error. 
 
Obviously it would be more helpful if the prepare function performed the
schema check. I am also aware that process 1 may issue a CREATE TABLE
between the time the prepare is run and the first step is called (as
opposed to before prepare is called). To mitigate that, I was thinking
of having process 1 start an exclusive transaction, thereby causing
process 2 to wait before running the SELECT. The problem is, as I
mentioned, the schema check is not done during a prepare. My other
problem is that if I start an exclusive transaction in process 1,
process 2 still allows the prepare to go through, but fails (or waits)
on the first call to step.

Has anyone else encountered this?
My workaround would be to use PRAGMA schema_version to get the initial
schema version, and then read it again before the prepare. If it has
changed, then since I can find no clean way of getting the schema
reloaded, I would either close and reopen the database, or issue a
SELECT against a bogus table, which apparently causes the schema to be
flagged for reload. This would solve most of my issue. To solve the
other part, I might have process 2 start an exclusive transaction before
it does the prepare and end it after the data has been read. However,
this would make the database less responsive to multiple users.
Any other ideas?
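For illustration, the schema_version check could look roughly like this
(sketch only; the helper name is hypothetical):

static int current_schema_version(sqlite3 *db)
{
    sqlite3_stmt *stmt = 0;
    int ver = -1;
    if (sqlite3_prepare(db, "PRAGMA schema_version;", -1, &stmt, 0) == SQLITE_OK
        && sqlite3_step(stmt) == SQLITE_ROW) {
        ver = sqlite3_column_int(stmt, 0);
    }
    sqlite3_finalize(stmt);
    return ver;
}

The application would remember the value returned when the database was
opened and compare it against a fresh call just before each prepare.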
 


RE: [sqlite] SUM and NULL values

2005-09-09 Thread Andrew Shakinovsky
I'll second that opinion, FWIW.

>>-Original Message-
>>From: Jay Sprenkle [mailto:[EMAIL PROTECTED] 
>>Sent: Friday, September 09, 2005 4:22 PM
>>To: sqlite-users@sqlite.org
>>Subject: Re: [sqlite] SUM and NULL values
>>Importance: Low
>>
>>Dr. Hipp,
>>
>>If my opinion is worth anything I'd prefer to stay with the 
>>standard, even though it might be a pain, etc. I've had too 
>>much bad experience with people writing code that depends on 
>>the quirks in other people's code.
>>
>>The decision is ultimately yours, and thanks for putting so 
>>much effort into it!
>>


[sqlite] Test suite fails on AIX using xlc

2005-03-11 Thread Andrew Shakinovsky
Has anyone had any luck compiling SQLite successfully on this platform?
These are the results from the test suite (with some successful runs
removed to preserve space).

AIX 5.2 Power4+ 64 bit using xlc 7.0:
The test aborted at some point before completion.

attach2-4.1.1...
Expected: [main unlocked temp closed file2 unlocked]
 Got: []
attach2-4.1.2...
Expected: [main unlocked temp closed file2 unlocked]
 Got: []
attach2-4.2... Ok
attach2-4.2.1...
Expected: [main shared temp closed file2 unlocked]
 Got: []
attach2-4.2.2...
Expected: [main unlocked temp closed file2 unlocked]
 Got: []
attach2-4.3... Ok
attach2-4.3.1...
Expected: [main shared temp closed file2 unlocked]
 Got: []
attach2-4.3.2...
Expected: [main unlocked temp closed file2 unlocked]
 Got: []
attach2-4.4... Ok
attach2-4.4.1...
Expected: [main shared temp closed file2 unlocked]
 Got: []
attach2-4.4.2...
Expected: [main unlocked temp closed file2 unlocked]
 Got: []
attach2-4.5... Ok
attach2-4.5.1...
Expected: [main shared temp closed file2 unlocked]
 Got: []
attach2-4.5.2...
Expected: [main unlocked temp closed file2 reserved]
 Got: []
attach2-4.6.1... Ok
attach2-4.6.1.1...
Expected: [main shared temp closed file2 shared]
 Got: []
attach2-4.6.1.2...
Expected: [main unlocked temp closed file2 reserved]
 Got: []
attach2-4.6.2... Ok
attach2-4.6.2.1...
Expected: [main shared temp closed file2 shared]
 Got: []
attach2-4.6.2.2...
Expected: [main unlocked temp closed file2 reserved]
 Got: []
attach2-4.7... Ok
attach2-4.7.1...
Expected: [main shared temp closed file2 shared]
 Got: []
attach2-4.7.2...
Expected: [main reserved temp closed file2 reserved]
 Got: []
attach2-4.8... Ok
attach2-4.8.1...
Expected: [main shared temp closed file2 shared]
 Got: []
attach2-4.8.2...
Expected: [main reserved temp closed file2 reserved]
 Got: []
attach2-4.9... Ok
attach2-4.9.1...
Expected: [main shared temp closed file2 shared]
 Got: []
attach2-4.9.2...
Expected: [main reserved temp closed file2 reserved]
 Got: []
attach2-4.10... Ok
attach2-4.10.1...
Expected: [main shared temp closed file2 shared]
 Got: []
attach2-4.10.2...
Expected: [main pending temp closed file2 reserved]
 Got: []
attach2-4.11... Ok
attach2-4.11.1...
Expected: [main unlocked temp closed file2 unlocked]
 Got: []
attach2-4.11.2...
Expected: [main pending temp closed file2 reserved]
 Got: []
attach2-4.12...
Expected: [0 {}]
 Got: [1 {disk I/O error}]
attach2-4.12.1...
Expected: [main unlocked temp closed file2 unlocked]
 Got: []
attach2-4.12.2...
Expected: [main unlocked temp closed file2 unlocked]
 Got: []
attach2-4.13... Ok
attach2-4.14... Ok
attach2-4.15...
Expected: [1 2 1 2]
 Got: [1 2]
attach2-5.1... Ok
attach2-5.2...
Error: disk I/O error
autovacuum-ioerr-2.2.1...
Error: no such function: randstr
autovacuum-ioerr-2.2.2... Ok
autovacuum-ioerr-2.2.3...
Expected: [1]
 Got: [0]
autovacuum-ioerr-3.1.1...
Error: no such function: randstr
autovacuum-ioerr-3.1.2... Ok
autovacuum-ioerr-3.1.3...
Expected: [1]
 Got: [0]
autovacuum-ioerr-5.1.3...
Expected: [1]
 Got: [0]
autovacuum-ioerr-7.2.1... Ok
autovacuum-ioerr-7.2.2... Ok
autovacuum-ioerr-7.2.3... Ok
autovacuum-ioerr2-1.1.1...
Error: no such function: randstr
autovacuum-ioerr2-1.1.2... Ok
autovacuum-ioerr2-1.1.3...
Expected: [1]
 Got: [0]
autovacuum-ioerr2-2.1.1...
Error: no such function: randstr
autovacuum-ioerr2-2.1.2... Ok
autovacuum-ioerr2-2.1.3...
Expected: [1]
 Got: [0]
autovacuum-ioerr2-3.1.1... Ok
autovacuum-ioerr2-3.1.2... Ok
autovacuum-ioerr2-3.1.3... Ok
autovacuum-ioerr2-4.1.1...
Error: no such function: randstr
autovacuum-ioerr2-4.1.2... Ok
autovacuum-ioerr2-4.1.3...
Expected: [1]
 Got: [0]
bigfile-1.1... Ok
bigfile-1.2...
Error: file is encrypted or is not a database
bigfile-1.3...
Error: file is encrypted or is not a database
bigfile-1.4...
Error: file is encrypted or is not a database
bigfile-1.5...
Error: file is encrypted or is not a database
bigfile-1.6...
Error: file is encrypted or is not a database
bigfile-1.7...
Error: file is encrypted or is not a database
bigfile-1.8...
Error: file is encrypted or is not a database
bigfile-1.9...
Error: file is encrypted or is not a database
bigfile-1.10...
Error: file is encrypted or is not a database
bigfile-1.11...
Error: file is encrypted or is not a database
bigfile-1.12...
Error: file is encrypted or is not a database
bigfile-1.13...
Error: file is encrypted or is not a database
bigfile-1.14...
Error: file is encrypted or is not a database
bigfile-1.15...
Error: file is encrypted or is not a database
bigfile-1.16...
Error: file is encrypted or is not a database
collate4-1.1.3...
Expected: [{} A B a b sort]
 Got: [{} A B a b nosort]
collate4-1.1.4... Ok
collate4-1.1.5... Ok
collate4-1.1.6...
Expected: [{} A a B b sort]
 Got: [{} A a B b nosort]
collate4-1.1.7... Ok
collate4-1.1.8... Ok
collate4-1.1.9... Ok
collate4-1.1.10...
Expected: [{} B a sort]

RE: [sqlite] What has changed for AUTOINCREMENT in 3.1?

2005-02-02 Thread Andrew Shakinovsky
>>With just INTEGER PRIMARY KEY, if you delete the largest
>>key in the table then insert a new row with a NULL key,
>>the key that was deleted will get reused.  If you add 
>>AUTOINCREMENT, keys will never be reused this way.  The value 
>>inserted will be one more than the largest key 
>>that has ever been in the table since the table was
>>first created.
>>-- 
>>D. Richard Hipp <[EMAIL PROTECTED]>

Is AUTOINCREMENT more efficient (for speed)?


[sqlite] quick question regarding INTEGER PRIMARY KEY default value

2005-02-02 Thread Andrew Shakinovsky
Will the default value of an INTEGER PRIMARY KEY column always start at
1?
Example:
CREATE TABLE xyz (id INTEGER PRIMARY KEY);
INSERT INTO xyz VALUES (NULL);

Will the value of the id column always be 1 in this example? I tried
numerous tests, and it always seems to start at 1; I just wanted to make
sure this would always be the case before I start relying on it in my
code.



[sqlite] How efficiently will SQLite work as a queue?

2005-01-13 Thread Andrew Shakinovsky
I am contemplating using SQLite as a message queueing mechanism, and was
wondering whether this would be efficient. In particular, I will be adding
new records to a table, while another process removes those records when it
is ready to handle that "message". 
My question relates to possible fragmentation that may occur within the
database file as records are deleted. Will SQLite efficiently reclaim the
space used by the deleted records?

I am looking at the PRAGMA auto_vacuum command. How efficient is this as
far as time is concerned? (I am not concerned about space.)



RE: [sqlite] excessive malloc() calls

2005-01-10 Thread Andrew Shakinovsky
I have noticed with SQLite (at least this was true with 2.8.x; I'm not sure
about 3.x) that if you use an ORDER BY on a table that doesn't have an index
on the field you are ordering by, it will do the entire sort (presumably
just the keys) in memory. This will cause you to run out of memory if the
table is too large.

>>-Original Message-
>>From: Will Leshner [mailto:[EMAIL PROTECTED] 
>>Sent: Monday, January 10, 2005 4:49 PM
>>To: sqlite-users@sqlite.org
>>Subject: Re: [sqlite] excessive malloc() calls
>>
>>
>>On Mon, 10 Jan 2005 13:43:20 -0800, 
>>[EMAIL PROTECTED] 
>>> This does point out however, that Win32 users who need to 
>>use ORDER BY 
>>> on a large table are paying a hefty price.
>>
>>Is that true even if they are using ORDER BY properly?
>>


RE: [sqlite] SQLite on 64-bit unix

2004-09-07 Thread Andrew Shakinovsky
You can try HP TestDrive (www.testdrive.compaq.com) or the sourceforge.net
compile farm.

>>-Original Message-
>>From: D. Richard Hipp [mailto:[EMAIL PROTECTED] 
>>Sent: Tuesday, September 07, 2004 1:01 PM
>>To: [EMAIL PROTECTED]
>>Subject: [sqlite] SQLite on 64-bit unix
>>
>>
>>I keep hearing rumors that SQLite is not working right on 
>>64-bit unix.  But lacking access to any 64-bit hardware, 
>>there isn't much I can do about it.
>>
>>Does anybody out there have a 64-bit unix box that they
>>can give me ssh access to so that I can test and debug
>>SQLite?  Access can be intermittant (in other words, I
>>call you up to let you know that I need to log on) and
>>can come from a fixed IP address.
>>-- 
>>D. Richard Hipp -- [EMAIL PROTECTED] -- 704.948.4565
>>


[sqlite] archive of this list

2004-04-16 Thread Andrew Shakinovsky
Does anyone know if there is an archive of this list anywhere?



RE: [sqlite] How do I coalesce values in a join?

2003-12-22 Thread Andrew Shakinovsky
This is how I have always done it and currently do it using SQLite. I'm not
sure if this is standard, but some DBMSs I have encountered support an
'immediate-if' type function (on Oracle it's called DECODE; on Access and
FoxPro it's slightly different and is called IIF). This function can be
used to do this transpose-type operation with an aggregate query. For
example, say you have a table like the original poster's with names and
phone numbers. We'll simplify it to a single table with name, phone type,
and phone number. Let's say the phone types are A, B and C, and a person
has one entry in this table for each phone number:

Andrew A 000-
Andrew B 111-
Andrew C 222-

So to get the desired result, on Oracle you would do:

SELECT name, max(decode(type,'A',phone)) as phoneA,
   max(decode(type,'B',phone)) as phoneB,
   max(decode(type,'C',phone)) as phoneC
FROM phones GROUP BY name

I implement a function called equalcheck for SQLite using
sqlite_create_function. This function takes three parameters: it returns
the value of the third parameter if the first two are equal, and otherwise
it returns NULL:

void equalCheck(sqlite_func* fn, int nump, const char** parameters)
{
    /* If the first two arguments are equal, return the third; otherwise
       leave the result unset, which SQLite reports as NULL.  The NULL
       checks guard against NULL column values, which would otherwise
       crash strcmp(). */
    if (parameters[0] && parameters[1] && parameters[2]
        && strcmp(parameters[0], parameters[1]) == 0) {
        sqlite_set_result_string(fn, parameters[2], -1);
    }
}
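(For completeness, the function would be registered roughly like this,
assuming db is the open sqlite* handle and the SQLite 2 argument order I
recall for sqlite_create_function:

sqlite_create_function(db, "equalcheck", 3, equalCheck, 0);
)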

Then I can execute the following statement to get the results, which in the
original poster's database would be one row for each person, and phones
broken out into columns:

select name,  
  max(equalcheck(type,'A',phone)) as aphone,
  max(equalcheck(type,'B',phone)) as bphone,
  max(equalcheck(type,'C',phone)) as cphone
from phones group by name

So my suggestion would be to add the equivalent of the Oracle DECODE
function to the list of supported functions, since it would facilitate this
type of transpose operation.


-Original Message-
From: D. Richard Hipp [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 22, 2003 4:27 PM
To: [EMAIL PROTECTED]
Subject: Re: [sqlite] How do I coalesce values in a join?


Scott Lamb wrote:
> Ron Aaron wrote:
> 
>> I have two tables:
>> people (id,first,last)
>>
>> and phones (id, type, phone)
> 
> 
> ...
> 
>> What I would like to do, is 'coalesce' the phones to produce another
>> table:
>>
>> peopleandphones (id,first,last,home,cel,work,fax)
>>
>> So I could make a much nicer report.  But I cannot figure out how to 
>> do this!
> 
> 
> Raw SQL doesn't really do that well.

Can anybody suggest some kind of extended SQL syntax that
could be added to SQLite that would make this kind of thing easier?

-- 
D. Richard Hipp -- [EMAIL PROTECTED] -- 704.948.4565



RE: [sqlite] Performance tuning question

2003-11-14 Thread Andrew Shakinovsky
The sqlite_bind() function is brand new, as far as I can tell from looking
at CVS. You would have to get the latest sources. It also looks like it
would be simpler to use sqlite_bind() than the way I am doing it, which is
to use sqlite_create_function() and call the function in my SQL to get the
parameter values.

From what I can tell, sqlite_bind() will allow you to just set your
parameter values between calls to sqlite_step and sqlite_reset.

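From reading the CVS sources, the bind/step/reset loop would look roughly
like the sketch below. I haven't run this yet, so treat the exact
signatures and the '?' placeholder syntax as my reading of the new API
rather than gospel (db, row, and n are assumed to already exist):

sqlite_vm *vm = 0;
char *err = 0;
int i, ncol;
const char **vals, **names;

sqlite_compile(db, "INSERT INTO t VALUES(?,?)", 0, &vm, &err);
for (i = 0; i < n; i++) {
    sqlite_bind(vm, 1, row[i].a, -1, 1);   /* negative len = nul-terminated,
                                              last arg = make a private copy */
    sqlite_bind(vm, 2, row[i].b, -1, 1);
    sqlite_step(vm, &ncol, &vals, &names); /* runs the INSERT */
    sqlite_reset(vm, &err);                /* rewind for the next row */
}
sqlite_finalize(vm, &err);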

-Original Message-
From: Arthur Hsu [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 14, 2003 3:04 PM
To: Andrew Shakinovsky; [EMAIL PROTECTED]
Subject: Re: [sqlite] Performance tuning question


Hello Andrew,

I'm a little confused about the precompiled SQL.  According to Wiki, there
should be something like sqlite_bind() that I can use.  However, I can't
find sqlite_bind() in my sqlite source ...

I'll try the sqlite_create_function().  Thanks.

Regards,

Arthur

- Original Message ----- 
From: "Andrew Shakinovsky" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Friday, November 14, 2003 7:24 AM
Subject: RE: [sqlite] Performance tuning question


> Arthur,
> For pre-compiled execution in SQLite, try using the sqlite_reset()
> function. It allows you to avoid having to re-compile your SQL every
> time you execute it. You can use sqlite_create_function() to do the
> equivalent of "parameterized" queries by creating a function which
> takes a parameter position and returns the parameter value. I use this
> a lot in my code since I do a lot of inserts and want them to run fast
> without invoking the query compiler for each one.
>
>
> -Original Message-
> From: Arthur C. Hsu [mailto:[EMAIL PROTECTED]
> Sent: Friday, November 14, 2003 3:50 AM
> To: 'Greg Obleshchuk'; [EMAIL PROTECTED]
> Subject: RE: [sqlite] Performance tuning question
>
>
> Hello Greg,
>
> The insert is not in transaction.  I do the insert and update like
>
> CCriticalSection cs;
>
> void CDB::insert()
> {
> cs.Lock();
> sqlite_exec_printf(pDB, "insert into db values(%d, %q, null, null, 0, 
> null", 0, 0, 1, "A"); cs.Unlock();
> }
>
> void CDB::update()
> {
> cs.Lock();
> sqlite_exec_printf(pDB, "update db set str=%q where id=%d", 0, 0, "A", 
> 1); cs.Unlock();
> }
>
> As described in my previous mail, I have two threads that 
> simultaneously accessing this in-mem DB.  Thus I have to use locks to 
> be sure of thread-safety.  I keep pumping data into DB via insert from 
> one thread.  My sqlite version is 2.8.6 and I didn't compile it with
> -DTHREAD_SAFE=1.
>
> I have 1G bytes of memory in my development machine and it's hard to
> believe that I didn't have enough memory for a 50M database :)
>
> I use MDAC via .Net framework.  I use ADO.Net in-memory table
> (System.Data.DataTable) and does not connect to any data source.
>
> BTW, I use Intel VTune and try to find out the bottleneck of my 
> program. Execution of sqlite vbe is the most time-consuming (55%).  
> However, yyparser
> of sqlite contributes 30% of load.  I tend to believe this explains why
> the first 30 seconds the ADO.Net is faster than sqlite.  SQL statements
> in ADO.Net always run precompiled, thus it saves time for parsing SQL
> commands.
> I'm trying to do precompiled execution in sqlite, and getting 
> frustrated about that.
>
> The decaying phoenomenon is quite weird.  My wild guesses are
>
> A. Issues of memory indexing: page index table grows bigger and 
> bigger, thus indirection of memory tends to slow down B. SQLite try to
> do "safe"
> memory allocations, that is, malloc new memory block, memcpy old data, 
> then free old memory block.
>
> These are just guesses.  I'm trying to dig out why.  Maybe you guys 
> who are
> more familiar with sqlite internals can show me the answer :)
>
> Regards,
>
> Arthur
>
>
> -
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
>
>
>



RE: [sqlite] Performance tuning question

2003-11-14 Thread Andrew Shakinovsky
Arthur,
For pre-compiled execution in SQLite, try using the sqlite_reset() function.
It allows you to avoid having to re-compile your SQL every time you execute
it. You can use sqlite_create_function() to do the equivalent of
"parameterized" queries by creating a function which takes a parameter
position and returns the parameter value.
I use this a lot in my code since I do a lot of inserts and want them to run
fast without invoking the query compiler for each one. 
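To make that concrete, the trick looks roughly like this (sketch only;
g_params, the function name parm, and the phones table are all made up for
illustration):

#include <stdlib.h>
#include "sqlite.h"

static const char *g_params[8];

static void parmFunc(sqlite_func *fn, int argc, const char **argv)
{
    /* Return the application-supplied value for parameter N, or NULL. */
    int i = argv[0] ? atoi(argv[0]) : -1;
    if (i >= 0 && i < 8 && g_params[i]) {
        sqlite_set_result_string(fn, g_params[i], -1);
    }
}

/*
** Register once:  sqlite_create_function(db, "parm", 1, parmFunc, 0);
** Compile once:   INSERT INTO phones VALUES(parm(0), 'A', parm(1))
** Then for each row, fill in g_params[0..1] and re-run the compiled
** statement, calling sqlite_reset() between executions.
*/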


-Original Message-
From: Arthur C. Hsu [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 14, 2003 3:50 AM
To: 'Greg Obleshchuk'; [EMAIL PROTECTED]
Subject: RE: [sqlite] Performance tuning question


Hello Greg,

The insert is not in transaction.  I do the insert and update like

CCriticalSection cs;

void CDB::insert()
{
    cs.Lock();
    sqlite_exec_printf(pDB, "insert into db values(%d, %q, null, null,"
        " 0, null)", 0, 0, 1, "A");
    cs.Unlock();
}

void CDB::update()
{
    cs.Lock();
    sqlite_exec_printf(pDB, "update db set str=%q where id=%d", 0, 0,
        "A", 1);
    cs.Unlock();
}

As described in my previous mail, I have two threads simultaneously
accessing this in-mem DB.  Thus I have to use locks to be sure of
thread-safety.  I keep pumping data into the DB via insert from one
thread.  My sqlite version is 2.8.6 and I didn't compile it with
-DTHREAD_SAFE=1.

I have 1G bytes of memory in my development machine and it's hard to
believe that I didn't have enough memory for a 50M database :)

I use MDAC via the .Net framework.  I use an ADO.Net in-memory table
(System.Data.DataTable) and do not connect to any data source.

BTW, I used Intel VTune to try to find the bottleneck of my program.
Execution of the sqlite vbe is the most time-consuming (55%).  However,
the yyparser of sqlite contributes 30% of the load.  I tend to believe
this explains why ADO.Net is faster than sqlite for the first 30 seconds:
SQL statements in ADO.Net always run precompiled, so they save the time
spent parsing SQL commands.  I'm trying to do precompiled execution in
sqlite, and getting frustrated about that.

The decaying phenomenon is quite weird.  My wild guesses are:

A. Issues of memory indexing: the page index table grows bigger and
bigger, so indirection of memory tends to slow down.
B. SQLite tries to do "safe" memory allocations, that is, malloc a new
memory block, memcpy the old data, then free the old memory block.

These are just guesses.  I'm trying to dig out why.  Maybe you guys who are
more familiar with sqlite internals can show me the answer :)

Regards,

Arthur



RE: [sqlite] Left field question about non windows platforms

2003-11-10 Thread Andrew Shakinovsky
On unix (that is to say Solaris and Linux, and possibly others) you use
dlopen() (in place of LoadLibrary), dlsym() (in place of GetProcAddress),
and dlclose() (in place of FreeLibrary). They work in a pretty similar
fashion to their Windows counterparts. On HP-UX however, you would want to
use shl_load(), shl_findsym(), and shl_unload().
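
A bare-bones illustration of the dlopen()/dlsym()/dlclose() sequence (the
library name libfoo.so and the symbol foo_init are placeholders):

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *h = dlopen("libfoo.so", RTLD_NOW);      /* like LoadLibrary() */
    if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* Look up an exported function, like GetProcAddress(). */
    int (*foo_init)(void) = (int (*)(void))dlsym(h, "foo_init");
    if (foo_init) foo_init();

    dlclose(h);                                   /* like FreeLibrary() */
    return 0;
}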

-Original Message-
From: Greg Obleshchuk [mailto:[EMAIL PROTECTED] 
Sent: Sunday, November 09, 2003 8:45 PM
To: [EMAIL PROTECTED]
Subject: [sqlite] Left field question about non windows platforms


Hi ,
This may seem like a question un-related to SQLite but there is a link.

On Windows platform to dynamically load a DLL into your program you use
LoadLibrary and GetProcAddress to get the reference to an exported function
in the DLL.

My question is how do non Windows platforms do this?  

regards
Greg O
