Re: [sqlite] Loading a existing database 100% into memory

2008-08-06 Thread RaghavendraK 70574
Hi  Stephen Woodbridge,

> > (although it might
> > be an interesting academic exercise to make a VFS port of SQLite
> > that uses memory arrays for read/write ops.)

Do you have any reference implementation of this kind?

regards
ragha



- Original Message -
From: Stephen Woodbridge <[EMAIL PROTECTED]>
Date: Thursday, August 7, 2008 7:24 am
Subject: Re: [sqlite] Loading a existing database 100% into memory

> Stephen Oberholtzer wrote:
> > On Wed, Aug 6, 2008 at 6:26 PM, Brown, Daniel <[EMAIL PROTECTED]> wrote:
> > 
> >> Good afternoon list,
> >>
> >> I would like to load my current database file completely into memory,
> >> mostly as an experiment to check SQLite's maximum memory footprint,
> >> however searching through the documentation I can only find references
> >> about how to create new databases that are completely memory resident.
> >> Is there a way to do this?  I'm currently using the SQLite console
> >> application for my testing if that makes a difference.
> > 
> > What, exactly, is it you're after?  I can load a SQLite database 100% into
> > memory quite quickly:
> > 
> > int fd = open("sqlitedb.dat", O_RDONLY);
> > struct stat info;
> > fstat(fd, &info);
> > char *buf = malloc(info.st_size);
> > read(fd, buf, info.st_size);
> > 
> > I find it extremely unlikely that this is what you want (although it might
> > be an interesting academic exercise to make a VFS port of SQLite that uses
> > memory arrays for read/write ops.)
> > 
> > At the other end of the spectrum, you could just dump the entire database on
> > disk and then insert all the data into a :memory: database.  However, this
> > doesn't seem like it would be very useful, either.
> > 
> > This sounds like an XY problem.  What are you really trying to accomplish?
> > What constraints are preventing you from simply using an on-disk database?
> 
> Another interesting option might be to mmap the DB file so you use the
> OS virtual memory paging to map the file to memory as you need access to
> it. But this probably has the downside that writes are not sync'd to
> disk, so in the event of a crash you are out of luck, but that is the
> case with any memory DB. The upside is that when you shut down, your DB
> is sync'd to disk, and the OS paging is pretty efficient.
> 
> -Steve W
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Loading a existing database 100% into memory

2008-08-06 Thread Jay A. Kreibich
On Wed, Aug 06, 2008 at 03:26:30PM -0700, Brown, Daniel scratched on the wall:
> Good afternoon list,
> 
> I would like to load my current database file completely into memory,
> mostly as an experiment to check SQLite's maximum memory footprint;
> however, searching through the documentation I can only find references
> about how to create new databases that are completely memory resident.
> Is there a way to do this?  I'm currently using the SQLite console
> application for my testing, if that makes a difference.

  If the database is fairly static in size, just figure out how many pages
  the database file takes up (default page size is 1K).  Make the page cache
  a bit larger than this via PRAGMA commands.  As you use the database,
  pages will be loaded into memory and generally stay there.  Adjust the
  write policy (sync, etc.) as desired, if you're looking for performance
  comparisons.

  Pages in the cache have an overhead of roughly 0.5K per page.  That
  should give you some idea of your memory footprint.

  See the website for more details.
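  Jay's sizing recipe can be sketched with Python's built-in sqlite3 module
  (an illustrative sketch only; the file, table, and row count are invented,
  and note that the 1K default page size was current in 2008, while newer
  SQLite builds default to 4096 bytes):

```python
import os
import sqlite3
import tempfile

# Build a small throwaway database so the numbers are concrete.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (x)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
con.commit()

# How many pages does the database file occupy?
page_size = con.execute("PRAGMA page_size").fetchone()[0]
pages = os.path.getsize(path) // page_size

# Make the page cache a bit larger than the whole file, so pages that
# have been read once tend to stay resident in memory.
con.execute("PRAGMA cache_size = %d" % (pages + 100))
print("pages:", pages,
      "cache_size:", con.execute("PRAGMA cache_size").fetchone()[0])
```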

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"'People who live in bamboo houses should not throw pandas.' Jesus said that."
   - "The Ninja", www.AskANinja.com, "Special Delivery 10: Pop!Tech 2006"


Re: [sqlite] Loading a existing database 100% into memory

2008-08-06 Thread Stephen Woodbridge
Stephen Oberholtzer wrote:
> On Wed, Aug 6, 2008 at 6:26 PM, Brown, Daniel <[EMAIL PROTECTED]> wrote:
> 
>> Good afternoon list,
>>
>> I would like to load my current database file completely into memory,
>> mostly as an experiment to check SQLite's maximum memory footprint;
>> however, searching through the documentation I can only find references
>> about how to create new databases that are completely memory resident.
>> Is there a way to do this?  I'm currently using the SQLite console
>> application for my testing, if that makes a difference.
> 
> 
> What, exactly, is it you're after?  I can load a SQLite database 100% into
> memory quite quickly:
> 
> int fd = open("sqlitedb.dat", O_RDONLY);
> struct stat info;
> fstat(fd, &info);
> char *buf = malloc(info.st_size);
> read(fd, buf, info.st_size);
> 
> I find it extremely unlikely that this is what you want (although it might
> be an interesting academic exercise to make a VFS port of SQLite that uses
> memory arrays for read/write ops.)
> 
> At the other end of the spectrum, you could just dump the entire database on
> disk and then insert all the data into a :memory: database.  However, this
> doesn't seem like it would be very useful, either.
> 
> This sounds like an XY problem.  What are you really trying to accomplish?
> What constraints are preventing you from simply using an on-disk database?
> 

Another interesting option might be to mmap the DB file so you use the
OS virtual memory paging to map the file to memory as you need access to
it. But this probably has the downside that writes are not sync'd to
disk, so in the event of a crash you are out of luck, but that is the
case with any memory DB. The upside is that when you shut down, your DB
is sync'd to disk, and the OS paging is pretty efficient.
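For reference, the mmap idea looks like this in a generic (not
SQLite-specific) Python sketch; the file contents here are just a stand-in:

```python
import mmap
import os
import tempfile

# Stand-in for a database file; the first bytes mimic the SQLite header.
path = os.path.join(tempfile.mkdtemp(), "data.db")
with open(path, "wb") as f:
    f.write(b"SQLite format 3\x00" + b"\x00" * 4096)

# Map the file into the address space: the OS pages it in on demand and,
# for a writable mapping, flushes dirty pages back at msync/close time,
# which is exactly the "sync'd on shutdown, not on every write" trade-off.
with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 0)
    header = bytes(mem[:15])   # reads go through the OS page cache
    mem.flush()                # explicit sync point (msync)
    mem.close()
print(header)
```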

-Steve W


Re: [sqlite] Loading a existing database 100% into memory

2008-08-06 Thread Stephen Oberholtzer
On Wed, Aug 6, 2008 at 6:26 PM, Brown, Daniel <[EMAIL PROTECTED]> wrote:

> Good afternoon list,
>
> I would like to load my current database file completely into memory,
> mostly as an experiment to check SQLite's maximum memory footprint;
> however, searching through the documentation I can only find references
> about how to create new databases that are completely memory resident.
> Is there a way to do this?  I'm currently using the SQLite console
> application for my testing, if that makes a difference.


What, exactly, is it you're after?  I can load a SQLite database 100% into
memory quite quickly:

int fd = open("sqlitedb.dat", O_RDONLY);
struct stat info;
fstat(fd, &info);
char *buf = malloc(info.st_size);
read(fd, buf, info.st_size);

I find it extremely unlikely that this is what you want (although it might
be an interesting academic exercise to make a VFS port of SQLite that uses
memory arrays for read/write ops.)

At the other end of the spectrum, you could just dump the entire database on
disk and then insert all the data into a :memory: database.  However, this
doesn't seem like it would be very useful, either.

This sounds like an XY problem.  What are you really trying to accomplish?
What constraints are preventing you from simply using an on-disk database?
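The dump-and-reload route mentioned above can be sketched with Python's
sqlite3 module (the table and values are invented for illustration):

```python
import os
import sqlite3
import tempfile

# A small on-disk source database.
path = os.path.join(tempfile.mkdtemp(), "src.db")
src = sqlite3.connect(path)
src.execute("CREATE TABLE t (x)")
src.execute("INSERT INTO t VALUES (42)")
src.commit()

# Replay the full SQL dump of the disk database into a :memory: one.
mem = sqlite3.connect(":memory:")
mem.executescript("\n".join(src.iterdump()))
print(mem.execute("SELECT x FROM t").fetchone()[0])
```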

-- 
-- Stevie-O
Real programmers use COPY CON PROGRAM.EXE


[sqlite] Loading a existing database 100% into memory

2008-08-06 Thread Brown, Daniel
Good afternoon list,

I would like to load my current database file completely into memory,
mostly as an experiment to check SQLite's maximum memory footprint;
however, searching through the documentation I can only find references
about how to create new databases that are completely memory resident.
Is there a way to do this?  I'm currently using the SQLite console
application for my testing, if that makes a difference.

Cheers,

Daniel Brown 
"The best laid schemes o' mice an' men, gang aft agley"




Re: [sqlite] Creating Indexes

2008-08-06 Thread Kees Nuyt
On Wed, 6 Aug 2008 16:45:35 -0400, Jeffrey Becker wrote:

> On Wed, Aug 6, 2008 at 4:10 PM, 
> Kees Nuyt <[EMAIL PROTECTED]> wrote:

>> Additionally: NodeID and ParentID shouldn't be blobs.
>> Integer is the most suitable type for IDs.
>
>The blobs I'm inserting are actually a binary representation designed
>_specifically_ to index properly.  So yes they should be blobs.

Aha, ok, all clear now.
-- 
  (  Kees Nuyt
  )
c[_]


Re: [sqlite] Creating Indexes

2008-08-06 Thread Jeffrey Becker
On Wed, Aug 6, 2008 at 4:10 PM, Kees Nuyt <[EMAIL PROTECTED]> wrote:
> On Tue, 05 Aug 2008 17:22:05 -0500, you wrote:
>
>>Jeffrey Becker wrote:
>>> I have a table 'SiteMap' defined as :
>>>
>>> Create Table SiteMap
>>> (
>>>  NodeID blob not null PRIMARY KEY,
>>>  Title text NOT NULL UNIQUE,
>>>  Url text NOT NULL
>>> );
>>>
>>> I'd like to index on the node's parent value as defined by the
>>> expression ancestor(NodeID,1).  'ancestor' being a user defined
>>> function.
> >>> However sqlite doesn't permit indices on expressions, only columns.
> >>> Why is this? More importantly, what's my best bet on achieving
> >>> something similar?
>>
>>Create Table SiteMap
>>(
>>   NodeID blob not null PRIMARY KEY,
>>   ParentID blob not null,
>>   Title text NOT NULL UNIQUE,
>>   Url text NOT NULL
>>);
>>
>>Then index on (ParentID, NodeID) or something like that.
>>
>>-Steve
>
> Additionally: NodeID and ParentID shouldn't be blobs.
> Integer is the most suitable type for IDs.
>
> http://article.gmane.org/gmane.comp.db.sqlite.general/17286/match=managing+trees+database
> and
> http://www.sitepoint.com/article/hierarchical-data-database
> may help on maintaining trees in a relational database.
>
> Or search the net on "Joe Celko" and "nested sets".
> --
>  (  Kees Nuyt
>  )
> c[_]

The blobs I'm inserting are actually a binary representation designed
_specifically_ to index properly.  So yes they should be blobs.


Re: [sqlite] Creating Indexes

2008-08-06 Thread Kees Nuyt
On Tue, 05 Aug 2008 17:22:05 -0500, you wrote:

>Jeffrey Becker wrote:
>> I have a table 'SiteMap' defined as :
>> 
>> Create Table SiteMap
>> (
>>  NodeID blob not null PRIMARY KEY,
>>  Title text NOT NULL UNIQUE,
>>  Url text NOT NULL
>> );
>> 
>> I'd like to index on the node's parent value as defined by the
>> expression ancestor(NodeID,1).  'ancestor' being a user defined
>> function.
>> However sqlite doesn't permit indices on expressions, only columns.
>> Why is this? More importantly, what's my best bet on achieving
>> something similar?
>
>Create Table SiteMap
>(
>   NodeID blob not null PRIMARY KEY,
>   ParentID blob not null,
>   Title text NOT NULL UNIQUE,
>   Url text NOT NULL
>);
>
>Then index on (ParentID, NodeID) or something like that.
>
>-Steve

Additionally: NodeID and ParentID shouldn't be blobs.
Integer is the most suitable type for IDs.

http://article.gmane.org/gmane.comp.db.sqlite.general/17286/match=managing+trees+database
and
http://www.sitepoint.com/article/hierarchical-data-database
may help on maintaining trees in a relational database.

Or search the net on "Joe Celko" and "nested sets".
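The ParentID-column workaround quoted above can be sketched like this (the
blob values are arbitrary placeholders, not the poster's actual encoding):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE SiteMap (
  NodeID   BLOB NOT NULL PRIMARY KEY,
  ParentID BLOB NOT NULL,
  Title    TEXT NOT NULL UNIQUE,
  Url      TEXT NOT NULL
);
-- Index the materialized parent value instead of an expression.
CREATE INDEX SiteMap_parent ON SiteMap (ParentID, NodeID);
""")
con.execute("INSERT INTO SiteMap VALUES (?, ?, ?, ?)",
            (b"\x01\x01", b"\x01", "About", "/about"))
con.execute("INSERT INTO SiteMap VALUES (?, ?, ?, ?)",
            (b"\x01\x02", b"\x01", "News", "/news"))
# All children of node b"\x01", served by the (ParentID, NodeID) index.
children = con.execute(
    "SELECT Title FROM SiteMap WHERE ParentID = ? ORDER BY NodeID",
    (b"\x01",)).fetchall()
print(children)
```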
-- 
  (  Kees Nuyt
  )
c[_]


[sqlite] Possible Error in 3.6.0?

2008-08-06 Thread Peter Holmes
FWIW, on a Dell Inspiron 530 running Ubuntu 7.10 with Tcl8.4 installed,

 ../configure --with-tcl=/usr/lib/tcl8.4
 make

works fine, but

 make fulltest

generates the following:

 perm-memsubsys1.tcl-1.1...
 Expected: [1 {wrong # args: should be "sqlite3 HANDLE FILENAME 
?-vfs VFSNAME? ?-readonly BOOLEAN? ?-create BOOLEAN? ?-nomutex BOOLEAN?"}]
  Got: [1 {wrong # args: should be "really_sqlite3 HANDLE FILENAME 
?-vfs VFSNAME? ?-readonly BOOLEAN? ?-create BOOLEAN? ?-nomutex BOOLEAN?"}]
   .
   .
   .
 1 errors out of 600511 tests
 Failures on these tests: perm-memsubsys1.tcl-1.1
 All memory allocations freed - no leaks
 Memory used:  now -8  max 911680
 Page-cache used:  now  0  max  0
 Page-cache overflow:  now  0  max 769024
 Scratch memory used:  now  0  max  0
 Scratch overflow: now  0  max   6936
 Maximum alloc size:   6936
 Maximum memory usage: 911680 bytes
 Current memory usage: -8 bytes
 Number of malloc()  : -1 calls

If you need any more info, feel free to contact me.




Re: [sqlite] Realtime backup of database

2008-08-06 Thread Kees Nuyt
On Tue, 5 Aug 2008 18:57:10 +0200, you wrote:

>Hi everyone,
>I need to back up a SQLite database on a remote device for
>configuration redundancy purposes. Due to a really slow connection
>between the devices, triggering rsync is not the best solution:
>although rsync is efficient, it transfers kilobytes of data just to see
>what it has to update. Because the updates are always one-way (from
>the live database to the backup database), it should be sufficient only
>to transfer the updates.
>I can't find the right approach for my application. Something like
>capturing the queries that update the database and transmitting them to the
>remote system would fit the purpose, but teaching the database to do that
>seems very complicated.

I would add a "last update date time stamp" column to every
table you want to synchronize and maintain it using a
trigger (on update and on insert).
Then SELECT the rows which were updated since the last
synchronisation, in a format proper for import at the other
side.

Alternatively, at every synchronisation, .dump your
database, and run a diff on the .dump result and the
previous version of the .dump output.
Transfer the diff and use it to patch the other side (also
.dump format). After the patch, create a new database with
the patched .dump file.
Initially you transfer a full dump to build your initial
target database.
Update the status on the source machine after a successful
update of the target. 

You need quite some smart scripting. Not easy, but probably
feasible.

Both methods only work if your schema is stable.
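The timestamp-column-plus-trigger idea can be sketched for a single table
(the schema and names are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE settings (
  key        TEXT PRIMARY KEY,
  value      TEXT,
  updated_at TEXT NOT NULL DEFAULT (datetime('now'))
);
-- Maintain the stamp on every update (inserts get it via the DEFAULT).
CREATE TRIGGER settings_touch AFTER UPDATE ON settings
BEGIN
  UPDATE settings SET updated_at = datetime('now') WHERE key = NEW.key;
END;
""")
con.execute("INSERT INTO settings (key, value) VALUES ('host', 'a')")
con.execute("UPDATE settings SET value = 'b' WHERE key = 'host'")
# Rows changed since the last synchronisation point.
changed = con.execute(
    "SELECT key, value FROM settings WHERE updated_at >= ?",
    ("2000-01-01 00:00:00",)).fetchall()
print(changed)
```

(Recursive triggers are off by default, so the trigger's own UPDATE does
not re-fire it.)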

>I'm stuck, so perhaps someone here has a really clever idea.
>
>Sorry for my bad english,

No problem, it's clear.

>greetings from hamburg, germany
>Till

Greetings from Rotterdam, Netherlands.
-- 
  (  Kees Nuyt
  )
c[_]


Re: [sqlite] Realtime backup of database

2008-08-06 Thread Robert Simpson
The new VFS implementation is probably the way to go.  If you wrote some
wrapper code around the default VFS, you could capture all the writes that
go to the main db and clone/wire-transfer those writes to a 2nd sync file.

Just an idea -- haven't worked with VFS's yet, though I plan on implementing
them in the ADO.NET provider.  SQLite may be one of the very few database
engines that can work on Microsoft's "Isolated Storage" mechanism fairly
soon!  If it works as well as I think, I could probably implement several
different VFS implementations in the provider for doing sync stuff, SQLite
over streams, etc.

Robert

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Till Steinbach
Sent: Tuesday, August 05, 2008 9:57 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] Realtime backup of database

Hi everyone,
I need to back up a SQLite database on a remote device for
configuration redundancy purposes. Due to a really slow connection
between the devices, triggering rsync is not the best solution:
although rsync is efficient, it transfers kilobytes of data just to see
what it has to update. Because the updates are always one-way (from
the live database to the backup database), it should be sufficient only
to transfer the updates.
I can't find the right approach for my application. Something like
capturing the queries that update the database and transmitting them to the
remote system would fit the purpose, but teaching the database to do that
seems very complicated.

I'm stuck, so perhaps someone here has a really clever idea.

Sorry for my bad english,
greetings from hamburg, germany
Till




[sqlite] Realtime backup of database

2008-08-06 Thread Till Steinbach
Hi everyone,
I need to back up a SQLite database on a remote device for
configuration redundancy purposes. Due to a really slow connection
between the devices, triggering rsync is not the best solution:
although rsync is efficient, it transfers kilobytes of data just to see
what it has to update. Because the updates are always one-way (from
the live database to the backup database), it should be sufficient only
to transfer the updates.
I can't find the right approach for my application. Something like
capturing the queries that update the database and transmitting them to the
remote system would fit the purpose, but teaching the database to do that
seems very complicated.

I'm stuck, so perhaps someone here has a really clever idea.

Sorry for my bad english,
greetings from hamburg, germany
Till


[sqlite] Fts3 and JOIN sometimes results in inability to use MATCH operator

2008-08-06 Thread Alexandre Courbot
Hello everybody,

Using sqlite 3.6.0 with fts3 enabled:

create table indexes(docid int);
create virtual table texts using fts3(reading);
insert into texts values("text1");
insert into indexes values(last_insert_rowid());
select * from indexes join texts on texts.docid == indexes.docid where
texts.reading match "text1";

Last line outputs:
SQL error: unable to use function MATCH in the requested context

I first noticed this behavior in a much larger query with plenty of
joins. Reordering them gives me a query that works. Is there any
reason for this? I don't know SQLite well enough to say this is a bug,
but it looks suspicious to me. Shall I file a Trac ticket?

Alex.


[sqlite] "Unload to variable" and Openstring functionality

2008-08-06 Thread Alexey Pechnikov
Hello!

I read the presentation on SQL Anywhere® 11
http://www.sybase.com/files/White_Papers/Sybase_SQLAnywhere_Top10newFeatures_wp.pdf

and it describes some very useful features which could make replication very
simple. How might this be realized in SQLite?

"1. UNLOAD to a Variable
   The LOAD and UNLOAD statements now work with variables as well as files. I 
don’t mean just using a variable to hold a
file specification, I mean loading and unloading data stored in variables as 
well as files. You can UNLOAD all the rows and
columns of a table into a single string variable, and the reverse: LOAD a 
table from a string:
   BEGIN
DECLARE s LONG VARCHAR;
UNLOAD TABLE t1 TO s;
LOAD TABLE t2 FROM s;
   END;
"

And 

"2. FROM OPENSTRING
   The new OPENSTRING clause lets you name a file or variable in the FROM 
clause of a SELECT and have SQL Anywhere
treat the data inside that file or variable as a set of rows and columns. 
That’s a little bit like using a proxy table to treat a file
like a table via ODBC remote access middleware, but it’s way easier and more 
flexible, not to mention that OPENSTRING lets
you use variables as well as files.

...

CREATE VARIABLE @var LONG VARCHAR;
UNLOAD
   SELECT *
  FROM SYSDOMAIN
INTO VARIABLE @var;
CREATE VIEW V AS
   SELECT *
  FROM OPENSTRING ( VALUE @var ) WITH
   ( domain_id   SMALLINT,
 domain_num VARCHAR ( 128 ),
 type_id SMALLINT,
 "precision" SMALLINT )
   AS T;
"
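SQLite has no UNLOAD/OPENSTRING, but a similar effect can be approximated in
application code. A hedged sketch using a CSV string as the "variable" (tables
and values are invented; this is not an SQLite feature):

```python
import csv
import io
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (a, b)")
con.execute("CREATE TABLE t2 (a, b)")
con.executemany("INSERT INTO t1 VALUES (?, ?)", [(1, "x"), (2, "y")])

# Analogue of "UNLOAD TABLE t1 TO s": render the rows into one string.
buf = io.StringIO()
csv.writer(buf).writerows(con.execute("SELECT a, b FROM t1"))
s = buf.getvalue()

# Analogue of "LOAD TABLE t2 FROM s": parse the string back into rows.
con.executemany("INSERT INTO t2 VALUES (?, ?)", csv.reader(io.StringIO(s)))
print(con.execute("SELECT count(*) FROM t2").fetchone()[0])
```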


Best regards, Alexey.


Re: [sqlite] Clone SQLite databases

2008-08-06 Thread csabi81

I have done another test; it's nice to work with you guys :)

So, I queried all the PRAGMAs from the original database and noted them
temporarily on a sheet of paper;
then, after doing a .dump, I obtained the whole schema of the database (CREATE
TABLE, INDEX, INSERT, etc.) saved in a file. I prepended the
PRAGMAs, and then created the database file based on this schema file. Good
and bad news: the good news is that the size of the file is exactly the same
as the original one; the bad news is that Firefox 3 still cannot work with
the cloned database.
Is there a priority on how to apply PRAGMAs? This is the only thing I can
imagine that could affect the cloned database file. :(

Thanks to all



csabi81 wrote:
> 
> Thanks Steve, I think I am on the right way.
> It is a difference between the page sizes of the two databases. I have
> opened each database with Firefox SQLite Manager, and at the DB Settings I
> have seen a lot of differences, such as page size, Cache size, max page
> count etc. How can I Copy all these settings from the original DB to the
> clone. Is there a safe way, or I can only copy one by one, by querying and
> setting these Settings?
> 
> Thanks
> 
> 
> Steve Kallenborn wrote:
>> 
>> Could there be a different page_size, resulting in differing amounts of 
>> 'wasted' space?
>> 
>> Check with "PRAGMA page_size;" or sqlite3_analyzer program.
>> 
>> Thanks
>>Steve
>> 
>> csabi81 wrote:
>>> I have made .dump for each database, original and clone, and saved into
>>> separated files. Than compare them.
>>> Differences are in INSERT INTO entries, in the clone database those
>>> entries
>>> are missing which I did not copy from the original. But the CREATE
>>> TABLE,
>>> CREATE INDEX, CREATE TRIGGER Commands are the same 100%
>>> 
>> 
>> 
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Clone-SQLite-databases-tp18827472p18850113.html
Sent from the SQLite mailing list archive at Nabble.com.



[sqlite] jdbc:sqlite:// - db connection problem

2008-08-06 Thread Kyuuu

First of all, hello to the board and all users!

Now to my problem: I want a jdbc:sqlite connection via
"mysql-connector-java-5.1.6-bin.jar". Everything works fine, as long as the
db-file is not packed with the rest of the program into a jar-file. I found
out that the files in a jar-file are reachable by expressions with an
exclamation mark, so the path to my db-file should be:
"jar:file://C:/af.jar!/af.db"
Now I tried to use this expression with the jdbc:sqlite driver, with this
outcome:

connection =
DriverManager.getConnection("jdbc:sqlite://jar:file:/C:/af.jar!/af.db");

but it doesn't work at all...
So the main question is: is that even possible? Because it works fine when I
just take a normal path to a second testing db-file next to my jar-file, like
this:

connection = DriverManager.getConnection("jdbc:sqlite://C:/af.db");

One more thing: when I execute my program with the expression
"jdbc:sqlite://jar:file:/C:/af.jar!/af.db", it takes a huge amount of time to
realise that it can't connect to this database; when I change it to
"jdbc:sqlite://jar:file:/C:/af.jar!/af222.db", it needs no time to
realise that this file doesn't exist at all. So I think that my program is
able to connect to the db-file, but can't read it, or something like that.
Maybe that is a problem with jar-files?!

Thank you for all replies, kyuuu

PS: sorry for my bad English
-- 
View this message in context: 
http://www.nabble.com/jdbc%3Asqlite%3A-db-connection-problem-tp18849621p18849621.html
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] Clone SQLite databases

2008-08-06 Thread csabi81

Thanks Steve, I think I am on the right way.
There is a difference between the page sizes of the two databases. I have
opened each database with the Firefox SQLite Manager, and in the DB Settings I
have seen a lot of differences, such as page size, cache size, max page
count, etc. How can I copy all these settings from the original DB to the
clone? Is there a safe way, or can I only copy them one by one, by querying
and setting these settings?

Thanks


Steve Kallenborn wrote:
> 
> Could there be a different page_size, resulting in differing amounts of 
> 'wasted' space?
> 
> Check with "PRAGMA page_size;" or sqlite3_analyzer program.
> 
> Thanks
>Steve
> 
> csabi81 wrote:
>> I have made .dump for each database, original and clone, and saved into
>> separated files. Than compare them.
>> Differences are in INSERT INTO entries, in the clone database those
>> entries
>> are missing which I did not copy from the original. But the CREATE TABLE,
>> CREATE INDEX, CREATE TRIGGER Commands are the same 100%
>> 
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Clone-SQLite-databases-tp18827472p1884.html
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] Clone SQLite databases

2008-08-06 Thread Steve Kallenborn
Could there be a different page_size, resulting in differing amounts of 
'wasted' space?

Check with "PRAGMA page_size;" or sqlite3_analyzer program.
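One pragma-ordering detail that matters when cloning: PRAGMA page_size only
takes effect before the new database has been written to. A Python sketch
(the 8192 value and table are illustrative):

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()

# Source database with a non-default page size (must be set before the
# first table is created).
src = sqlite3.connect(os.path.join(tmp, "src.db"))
src.execute("PRAGMA page_size = 8192")
src.execute("CREATE TABLE t (x)")
src.execute("INSERT INTO t VALUES (1)")
src.commit()

# On the clone, apply the pragma *before* replaying the dump; once the
# database already has pages, PRAGMA page_size is silently ignored.
page = src.execute("PRAGMA page_size").fetchone()[0]
dst = sqlite3.connect(os.path.join(tmp, "dst.db"))
dst.execute("PRAGMA page_size = %d" % page)
dst.executescript("\n".join(src.iterdump()))
print(dst.execute("PRAGMA page_size").fetchone()[0])
```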

Thanks
   Steve

csabi81 wrote:
> I have made .dump for each database, original and clone, and saved into
> separated files. Than compare them.
> Differences are in INSERT INTO entries, in the clone database those entries
> are missing which I did not copy from the original. But the CREATE TABLE,
> CREATE INDEX, CREATE TRIGGER Commands are the same 100%
> 


Re: [sqlite] Clone SQLite databases

2008-08-06 Thread csabi81

I have made a .dump of each database, original and clone, and saved them into
separate files, then compared them.
The differences are in the INSERT INTO entries: in the clone database, the
entries which I did not copy from the original are missing. But the CREATE
TABLE, CREATE INDEX, and CREATE TRIGGER commands are 100% the same.


Derrell Lipman wrote:
> 
> On Tue, Aug 5, 2008 at 10:31 AM, csabi81 <[EMAIL PROTECTED]> wrote:
> 
>>
>> I have made the test with following results:
>> Useing "SELECT ALL * FROM  WHERE... order by id;" on the
>> original
>> database and
>> "SELECT ALL * FROM  order by id;" on the cloned database give
>> me
>> the same results.
>> outputing the results from SELECT in different files give me two 100%
>> identical files.
> 
> 
> You might also try doing ".dump" on each database from the sqlite shell
> and
> compare the CREATE entries for tables, triggers, etc. to ensure that they
> were created identically.
> 
> Derrell
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Clone-SQLite-databases-tp18827472p18846161.html
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] Realtime Backup of Database

2008-08-06 Thread Till Steinbach
Hi Stephen,
I have a slightly different requirement. Because the second device is
the hot-standby part of my device hosting the database, I must be sure
that the change was successful. Anything else would lead to
confusion when there is a failover and some data has not yet been changed.
I'm just looking into Ingo's code (thanks for that!); I had no idea
how to log these statements. Now I'm far closer to the solution.

To conform with my requirements, I thought of changing the triggers.
My idea is to write a custom function (with the extension API) that
takes the statement as an argument. That function does the transfer to my
redundant database on the second device and returns success or an
error.
When there is an error due to a lost link between the devices, I can
do a RAISE() to show the user program that the setting could not be
saved.

Do you think that is a straight solution for my problem?
greetings Till



> The way I accomplished this is by giving each setting a "dirty" flag.
> When the setting is changed for any reason, the "dirty" flag is set.
> When the device reports in (so I know that it is still working), it
> checks for any "dirty" settings and includes them in the report.  When
> the server they report to receives and stores those settings, it sends
> back a response indicating such. Upon receiving that response, the
> device clears the "dirty" flag for all settings.
>
> This scenario works fine so long as it is impossible for a setting to
> be changed while the device is reporting in.
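The dirty-flag scheme described above, sketched for a single settings table
(schema invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE settings (
  key   TEXT PRIMARY KEY,
  value TEXT,
  dirty INTEGER NOT NULL DEFAULT 1   -- new rows still need reporting
);
CREATE TRIGGER settings_dirty AFTER UPDATE OF value ON settings
BEGIN
  UPDATE settings SET dirty = 1 WHERE key = NEW.key;
END;
""")
con.execute("INSERT INTO settings (key, value) VALUES ('host', 'a')")

# Device reports in and the server acknowledges: everything is clean.
con.execute("UPDATE settings SET dirty = 0")

# A later change marks just that setting dirty again, via the trigger.
con.execute("UPDATE settings SET value = 'b' WHERE key = 'host'")
pending = con.execute(
    "SELECT key, value FROM settings WHERE dirty = 1").fetchall()
print(pending)
```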
