Hi Jose,

On Sun, 13 Jan 2008 23:47:04 -0500, "jose isaias cabrera"
<[EMAIL PROTECTED]> wrote:

>"Kees Nuyt" trying to help me said...
>
>> On Fri, 11 Jan 2008 15:32:28 -0500, "jose isaias cabrera"
>> <[EMAIL PROTECTED]> wrote:
>>
>>>
>>>Greetings.
>>>
>>>I have a problem.  I have this shared DB amongst 4 co-workers where I am
>>>getting this error:
>>>
>>>SQL error: database disk image is malformed
>>>
>>>that is after I do a,
>>>
>>>select * from LSOpenProjects;
>>>
>>>and it displays 1006 records and then I get that error above...
>>>
>>>1006|1006|1006|NEW||...ASCII data deleted...||||||||||||||||||
>>>SQL error: database disk image is malformed
>>>sqlite>
>>>
>>>How can I recover the data all the way to record 1006?
>>>
>>>thanks,
>>>
>>>josé
>>
>> You can try to use the .dump command of the command line tool to
>> recover what's left. Sometimes it works.
>I had to go back to the previous day's backup and was able to recover it.
>
>> But it is weird the database is corrupted in the first place.
>I agree.
>
>
>> Did you use any dangerous PRAGMA's to improve speed?
>No.  The only PRAGMA command I have is "PRAGMA table_info(TableName);".
>
>> What's the operating system?
>Let me explain: I have created a program-management application.  This software 
>handles two DBs:
>1. a local DB on the client's machine
>2. a shared DB on a shared drive.
>The latter is the damaged DB.  It lives on a shared drive handled 
>by a product called Hummingbird.  The Hummingbird client software is 
>installed on the XP client machines to connect to the shared drive, which is 
>hosted on a UNIX server.  Yes, I know.  The shared DB is used to keep track 
>of unique records, so each client will always have a unique record number.

That's probably the culprit.
You can either work around it yourself or ask for support at
connectivity.hummingbird.com. Hummingbird is quite an expensive
product, so bugs like these (file-locking problems) ought to be
corrected.
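
As an aside, the .dump recovery mentioned earlier can also be driven programmatically. This is a sketch using Python's sqlite3 module, whose iterdump() is the library's equivalent of the command-line tool's .dump (the table name used below is just the one from your error output):

```python
import sqlite3

def dump_database(path):
    """Return the database contents as a SQL script, like the
    command-line tool's .dump.  On a corrupted file this may stop
    partway through, but whatever was read can still be loaded
    into a fresh database."""
    con = sqlite3.connect(path)
    try:
        return "\n".join(con.iterdump())
    finally:
        con.close()

def rebuild(dump_sql, new_path):
    """Load the recovered SQL script into a brand-new database file."""
    con = sqlite3.connect(new_path)
    try:
        con.executescript(dump_sql)
    finally:
        con.close()
```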

There are a few workarounds:

1) Another way to guarantee uniqueness is to assign every client
a range of numbers (for example increments of 1000) to work
with.

That range could be obtained from the shared database initially
(and again whenever a client uses up its assigned range). It's a
bit more difficult to program, but the concurrency at the server
would drop tremendously.
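
Workaround 1 might look like this in Python. The one-row id_counter table is an assumption for the sketch, not part of your schema:

```python
import sqlite3

def reserve_block(shared_db_path, block=1000):
    """Atomically reserve a contiguous range of IDs from the shared DB.
    Assumes a one-row bookkeeping table:
        CREATE TABLE id_counter(next_id INTEGER);
    Returns (start, end), end exclusive."""
    con = sqlite3.connect(shared_db_path, isolation_level=None)
    try:
        # BEGIN IMMEDIATE takes the write lock up front, so two
        # clients can never read the same counter value.
        con.execute("BEGIN IMMEDIATE")
        (start,) = con.execute("SELECT next_id FROM id_counter").fetchone()
        con.execute("UPDATE id_counter SET next_id = ?", (start + block,))
        con.execute("COMMIT")
        return start, start + block
    finally:
        con.close()
```

Each client then hands out numbers from its private range locally, touching the shared file only once per thousand inserts.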

2) Prefix a locally generated non-unique serial number with a
unique client ID.
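
Workaround 2 is nearly a one-liner; a sketch, with a hypothetical client-ID format:

```python
def make_global_id(client_id, local_serial):
    """Combine a per-client prefix (e.g. "PM03", assigned once per
    installation -- hypothetical) with a locally incremented serial.
    Two clients can never collide because their prefixes differ."""
    return "%s-%06d" % (client_id, local_serial)
```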

3) Use a real database server for handing out ID's. PostgreSQL
or MySQL or FireBird.

4) Compile SQLite with options to use another locking
mechanism, based on lock files. It will be significantly slower,
but it should work.
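
If recompiling SQLite isn't practical, the same idea can be approximated in the application: guard every access to the shared file with a lock file created via O_CREAT|O_EXCL. This is only a sketch, and note that exclusive-create is itself not reliable on very old NFS implementations:

```python
import os
import time
import sqlite3

def with_lockfile(db_path, work, timeout=30.0):
    """Run work(connection) while holding <db_path>.lock.
    O_CREAT | O_EXCL makes the create-if-absent step atomic, so
    only one client at a time can hold the lock."""
    lock = db_path + ".lock"
    deadline = time.time() + timeout
    while True:
        try:
            fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            break
        except FileExistsError:
            if time.time() >= deadline:
                raise TimeoutError("could not acquire %s" % lock)
            time.sleep(0.1)  # another client holds the lock; retry
    try:
        con = sqlite3.connect(db_path)
        try:
            return work(con)
        finally:
            con.close()
    finally:
        os.close(fd)
        os.remove(lock)  # release the lock for the next client
```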

>> Is your database on a network share / NFS / SAMBA drive, and if
>> so, is the file locking of your filesystem fail-safe?

Hummingbird seems to be an NFS implementation.

>I don't know whether it is working or not, but the corruption started 
>after lots of activity (new record ID requests; these are INSERTs or UPDATEs).
>
>> Or do you use a server layer that connects your users to the
>> database?
>I don't understand the server layer part.

That would be a program (a daemon or service) running on the
server that handles all SQLite transactions on the shared
database on behalf of the clients. It would listen on a socket
for commands ("give me a new ID") and return an answer. All
database requests can be serialized by such a service, so there
is no concurrent access to the database file.
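
A minimal sketch of such a service in Python; the NEWID command and the id_counter table are made up for illustration:

```python
import sqlite3
import socketserver
import threading

class IdHandler(socketserver.StreamRequestHandler):
    """Answer 'NEWID' requests with the next integer ID.  Only this
    server process ever opens the database file, so all requests are
    serialized and clients never contend for file locks."""
    def handle(self):
        if self.rfile.readline().strip() == b"NEWID":
            with self.server.db_lock:
                con = self.server.con
                con.execute("UPDATE id_counter SET next_id = next_id + 1")
                (new_id,) = con.execute(
                    "SELECT next_id FROM id_counter").fetchone()
                con.commit()
            self.wfile.write(b"%d\n" % new_id)

def make_server(db_path, host="127.0.0.1", port=0):
    """Create a threaded TCP server; a lock serializes DB access
    across handler threads."""
    srv = socketserver.ThreadingTCPServer((host, port), IdHandler)
    srv.con = sqlite3.connect(db_path, check_same_thread=False)
    srv.db_lock = threading.Lock()
    return srv
```

A client would open a TCP connection, send "NEWID\n", and read back the number.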

>> Does your application handle exceptions properly?
>I think so.  This is how the new-record INSERT is called:
>
>    try
>    {
>      dba.execute (cmd);
>    }
>    catch (DBIException dbe)
>    {
>      DBIExceptionCaught(dbe,__LINE__);
>    }
>
>cmd contains the SQL INSERT command.
>
>thanks for the help. 

I hope this helps.
-- 
  (  Kees Nuyt
  )
c[_]

