Re: [sqlite] Limiting the amount of memory usage for Sqlite

2009-02-16 Thread Roger Binns

Gokila Dorai wrote:
> It would be great if someone could help me with how to limit the amount of
> memory usage to a fixed size for SQLite version 3.3.6.

Look more closely at the source code.  You can provide your own memory
allocation functions and tune them.  These have changed over the releases.

> At present the amount of memory used increases as the number of
> objects inserted into the database increases.

That would be the cache you are seeing.  You can use pragmas to alter
how much memory is used for caching.
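
For example, a minimal sketch (check the exact behaviour against the 3.3.6
documentation; a C-level sqlite3_soft_heap_limit() call also existed around
that era, though in some builds it only has effect when compiled with
SQLITE_ENABLE_MEMORY_MANAGEMENT):

   -- Cap the page cache at 500 pages. With the default 1 KB page size
   -- that is roughly 0.5 MB, plus some per-page overhead.
   PRAGMA cache_size = 500;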

Roger


[sqlite] Database path in Windows

2009-02-16 Thread Jibin Scaria
Hi,

I am new to SQLite. I am trying to develop a small application in C# using
sqlite-dotnet2.

I am facing a problem with spaces in the database path: I am able to open
the database, but queries return "no such table: table name".
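
As a quick check, note that SQLite silently creates a new, empty database
when handed a path it cannot find, which could explain the missing table.
Listing the tables in whatever file the connection actually opened shows
this:

   -- An empty result suggests a fresh, empty database was created at
   -- some path other than the intended one.
   SELECT name FROM sqlite_master WHERE type = 'table';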

Can somebody help me?

Thanks in advance

-- 
Jibin


[sqlite] Limiting the amount of memory usage for Sqlite

2009-02-16 Thread Gokila Dorai

It would be great if someone could help me with how to limit the amount of
memory usage to a fixed size for SQLite version 3.3.6. At present the amount
of memory used increases as the number of objects inserted into the database
increases.

I want to set a maximum allowable limit for the memory usage. 

Kindly help me with your valuable suggestions. 
Thanks in advance. 



Re: [sqlite] 100 million records will be fine?

2009-02-16 Thread Rajesh Nair
I have a real-time program which logs more than 30,000 records, each
record about 200 bytes, per day, and the company where it is installed
operates 24/365. I installed the project in August 2005 and it has worked
fine to date. It performs some report generation (4 or 5 reports) every
day. The data is dumped to an SQLite database, and I don't know the
current size of that database. But the PC has just 256 MB of RAM, a
160 GB hard disk, and an 800 MHz CPU. It has been running for the last
3-4 years without shutting down and without any data corruption or
program crash. It is designed to VACUUM the database on the 20th of each
month or the nearest Sunday.
The database was 4 GB when I last checked and may have grown to 6 or
7 GB by now.
I modified the program long ago to split the database on a yearly basis,
but the company is not ready to accept the modification.

But still it works fine. Is that enough for you?



On 2/17/09, Alexey Pechnikov  wrote:
> Hello!
>
> In a message on Monday 16 February 2009 22:14:03, Jay A. Kreibich wrote:
> > Of course, write operations must be grouped because memory allocation
> > for a write transaction is proportional to database size (see the official site).
>>
>>   This limitation was removed about a year ago, around 3.5.7.  Rather than
>>   using a static bit-map for dirty pages, there is a bitvec class that
>>   implements a sparse array, removing most of the memory size dependencies.
>
> That's very good! Thanks!
>
> Best regards, Alexey.


-- 
Regards
Rajesh Nair


Re: [sqlite] SQLite- Memory heap usage query

2009-02-16 Thread P Kishor
Didn't you ask this question 4 days ago, and wasn't it answered? The only
difference in your question this time around seems to be that you are
asking for a way to reduce the DB file size. You could use a compression
library on your data; check the mail archives and also Google for that
information.

On Mon, Feb 16, 2009 at 10:02 PM, jaya_kumar wrote:
>
> Hi All,
>
> In the following use case, when I insert the following numbers of
> objects, the SQLite DB file size is very large:
>
> 1. 1k objects - DB file size was 264 KB
> 2. 41k objects - DB file size was 11 MB
> 3. 100k objects - DB file size was 26 MB
>
> Is there any way to reduce the DB file size?
>
> Thanks in advance,
> Jai
>
>
>
> Kees Nuyt wrote:
>>
>> On Thu, 12 Feb 2009 19:14:44 -0800 (PST), jaya_kumar wrote in General
>> Discussion of SQLite Database:
>>
>>>
>>>I am trying to measure the peak heap usage for creating a table with a
>>>few entries, and the measured peak heap usage increases as the number of
>>>items inserted into the table increases.
>>>
>>>Following are the steps done,
>>>1. Table created
>>>2. Table updated by insert statement
>>>3. Query for a single tuple from the table based on a primary key
>>>
>>>Please find below the heap usage when SQLite was operated in file mode:
>>>1k entries - 315 KB
>>>10k entries - 2.3 MB
>>>100k entries - 5.7 MB
>>>
>>>Please find below the heap usage when SQLite was operated in memory mode
>>>(using ":memory:"):
>>>1k entries - 318 KB
>>>10k entries - 11.6 MB
>>>100k entries - 29.6 MB
>>>
>>>I am a newbie to SQLite, so please let me know if these figures are
>>>expected for SQLite.
>>
>> In general, yes, they are expected.
>>
>>>Also, please let me know why the heap usage increases with the total
>>>number of entries written to the database.
>>
>> For the file database you see the cache growing.
>> There is a maximum on the cache size, which can be
>> influenced by
>>
>>   PRAGMA page_size;
>>   PRAGMA default_cache_size;
>>   PRAGMA cache_size;
>>
>> Every page in cache has some administrative overhead, so the
>> memory occupied will be more than just
>>   page_size * cache_size.
>>
>> For the :memory: database, you see the rows being stored in
>> memory and perhaps also some cache.
>>
>> Obviously, both cases also use some memory for the data structure
>> that holds the interpreted schema and other housekeeping.
>>
>>>Thanks in advance,
>>>Jai
>>
>> More detail can be found on the site (architecture etc.) and
>> in the (well-documented) source.
>> --
>>   (  Kees Nuyt
>>   )
>> c[_]


Re: [sqlite] Double entry bookkeeping

2009-02-16 Thread Rich Shepard
On Tue, 17 Feb 2009, BareFeet wrote:

> I'm trying to design a schema for double entry book-keeping. Is there
> already a schema available that provides this functionality?

   Well, if you're going to re-invent the wheel, take a look at the source
code for gnucash or, better yet, SQL-Ledger.

Rich

-- 
Richard B. Shepard, Ph.D.         |  Integrity - Credibility
Applied Ecosystem Services, Inc.  |  Innovation
 Voice: 503-667-4517  Fax: 503-667-8863


Re: [sqlite] SQLite- Memory heap usage query

2009-02-16 Thread jaya_kumar

Hi All,

In the following use case, when I insert the following numbers of
objects, the SQLite DB file size is very large:

1. 1k objects - DB file size was 264 KB
2. 41k objects - DB file size was 11 MB
3. 100k objects - DB file size was 26 MB

Is there any way to reduce the DB file size?

Thanks in advance,
Jai



Kees Nuyt wrote:
> 
> On Thu, 12 Feb 2009 19:14:44 -0800 (PST), jaya_kumar wrote in General
> Discussion of SQLite Database:
> 
>>
>>I am trying to measure the peak heap usage for creating a table with a
>>few entries, and the measured peak heap usage increases as the number of
>>items inserted into the table increases.
>>
>>Following are the steps done,
>>1. Table created
>>2. Table updated by insert statement
>>3. Query for a single tuple from the table based on a primary key
>>
>>Please find below the heap usage when SQLite was operated in file mode:
>>1k entries - 315 KB
>>10k entries - 2.3 MB
>>100k entries - 5.7 MB
>>
>>Please find below the heap usage when SQLite was operated in memory mode
>>(using ":memory:"):
>>1k entries - 318 KB
>>10k entries - 11.6 MB
>>100k entries - 29.6 MB
>>
>>I am a newbie to SQLite, so please let me know if these figures are
>>expected for SQLite.
> 
> In general, yes, they are expected.
> 
>>Also, please let me know why the heap usage increases with the total
>>number of entries written to the database.
> 
> For the file database you see the cache growing.
> There is a maximum on the cache size, which can be
> influenced by 
> 
>   PRAGMA page_size;
>   PRAGMA default_cache_size;
>   PRAGMA cache_size;
> 
> Every page in cache has some administrative overhead, so the
> memory occupied will be more than just
>   page_size * cache_size.
> 
> For the :memory: database, you see the rows being stored in
> memory and perhaps also some cache.
> 
> Obviously, both cases also use some memory for the data structure
> that holds the interpreted schema and other housekeeping.
> 
>>Thanks in advance,
>>Jai
> 
> More detail can be found on the site (architecture etc.) and
> in the (well-documented) source.
> -- 
>   (  Kees Nuyt
>   )
> c[_]




[sqlite] Double entry bookkeeping

2009-02-16 Thread BareFeet
Hi all,

I'm trying to design a schema for double entry book-keeping. Is there  
already a schema available that provides this functionality?

As I see it, I need to start with something like the schema below. Any  
other approaches?

create table Accounts
(
  Code text unique collate nocase
, Name text collate nocase
)
;
create table "Moves"-- Move of money in one direction as 
part of an  
event
(
  ID integer primary key
, Event integer --> Events.ID
, Account text  --> Accounts.Code
, Amount integer-- money in cents, +ve or -ve
)
;
create table Descriptions   -- For each Move, statement text from the bank or manual description
(
  ID integer primary key    --> Moves.ID
, Description text          -- text shown on bank statement or manually entered
)
;
create table "Events"   -- Event where money moves into and 
from 2 or  
more accounts
(
  ID integer primary key
, Date date -- julianday when event occured
)
;
create view "Moves Entry"
as
select
  Moves.ID as rowid
, Moves.ID as ID
, Moves.Event as Event
, date(Events.Date, '0.1 seconds', 'localtime') as Date
, case when Amount < 0 then round(-Amount/100.0,2) end as Debit
, case when Amount >= 0 then round(Amount/100.0,2) end as Credit
, Accounts.Code
, Accounts.Name
, Description
from Moves
left join Events on Moves.Event = Events.ID
left join Descriptions on Moves.ID = Descriptions.ID
left join Accounts on Moves.Account = Accounts.Code
;
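
One consistency check falls naturally out of this schema (a sketch against
the tables above, not part of the design itself): in double entry, the
Moves of every Event should sum to zero.

select Event, sum(Amount) as Imbalance
from Moves
group by Event
having sum(Amount) <> 0     -- any row returned is an unbalanced event
;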


Re: [sqlite] 100 million records will be fine?

2009-02-16 Thread Alexey Pechnikov
Hello!

In a message on Monday 16 February 2009 22:14:03, Jay A. Kreibich wrote:
> > Of course, write operations must be grouped because memory allocation
> > for a write transaction is proportional to database size (see the official site).
>
>   This limitation was removed about a year ago around 3.5.7.  Rather than
>   using a static bit-map for dirty pages, there is a bitvec class that
>   implements a sparse array, removing most of the memory size dependencies.

That's very good! Thanks!

Best regards, Alexey.


Re: [sqlite] 100 million records will be fine?

2009-02-16 Thread Jay A. Kreibich
On Mon, Feb 16, 2009 at 07:55:33PM +0300, Alexey Pechnikov scratched on the 
wall:

> Of course, write operations must be grouped because memory allocation
> for a write transaction is proportional to database size (see the official site).

  This limitation was removed about a year ago around 3.5.7.  Rather than
  using a static bit-map for dirty pages, there is a bitvec class that
  implements a sparse array, removing most of the memory size dependencies.

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Our opponent is an alien starship packed with atomic bombs.  We have
 a protractor."   "I'll go home and see if I can scrounge up a ruler
 and a piece of string."  --from Anathem by Neal Stephenson


Re: [sqlite] Visual Studio 2005 Linker Problems - SQLite v3.6.3

2009-02-16 Thread Ian Thomas
Hi Shane,

Thanks very much for the suggestion. I excluded the 'rtree.c' source 
file from my build configuration and it now links fine (release and debug).

I guess there's still an underlying issue but as I don't need the rtree 
module it doesn't affect me!

Thanks again,

Ian


Shane Harrelson wrote:
> rtree support is an extension to SQLite and is not needed to use the
> core functionality.  Try leaving the rtree source files out of your
> builds to see if this corrects your problem.  You might also consider
> using the SQLITE_OMIT_LOAD_EXTENSION option as well.
>
> http://www.sqlite.org/compile.html
>
>
> On Mon, Feb 16, 2009 at 9:29 AM, Ian Thomas  wrote:
>   
>> Hello,
>>
>> I'm trying to compile my own debug build of SQLite v3.6.3 in Visual
>> Studio 2005 so that I can step through a problem I'm having with my
>> application. I have followed the instructions found here:
>>
>> http://www.sqlite.org/cvstrac/wiki?p=HowToCompileWithVsNet
>>
>> Using the standard v3.6.3 source tree (not the amalgamation version)
>> everything compiles fine but fails to link with this error:
>>
>> 1>rtree.obj : error LNK2005: _sqlite3_api already defined in
>> fts3_tokenizer.obj
>> 1>   Creating library
>> D:\x_mirror\buildman\tools\sqlite-3.6.3\SQLiteVS2005\Debug\SQLiteVS2005.lib
>> and object
>> D:\x_mirror\buildman\tools\sqlite-3.6.3\SQLiteVS2005\Debug\SQLiteVS2005.exp
>> 1>D:\x_mirror\buildman\tools\sqlite-3.6.3\SQLiteVS2005\Debug\SQLiteVS2005.dll
>> : fatal error LNK1169: one or more multiply defined symbols found
>>
>> I have only two SQLite preprocessor defines set in my project:
>>
>> SQLITE_ENABLE_COLUMN_METADATA
>> SQLITE_ENABLE_FTS3
>>
>> I get this error both with and without the SQLITE_ENABLE_FTS3 define.
>>
>> I get the same linker error when trying to build SQLite v3.6.10 and when
>> trying to use Visual Studio 2003 instead of 2005.
>>
>> I'm sure there must be an easy fix for this problem - any advice would
>> be really appreciated. I imagine I could use the /FORCE:MULTIPLE linker
>> option to get past this but I'm reluctant to do so without checking first.
>>
>> Thanks in advance,
>>
>> Ian Thomas
>>
>>

-- 
Ian Thomas
Scientific Software Engineer
CCDC
12 Union Road
Cambridge
CB2 1EZ
UK

Tel - +44 1223 763884




Re: [sqlite] Visual Studio 2005 Linker Problems - SQLite v3.6.3

2009-02-16 Thread Shane Harrelson
rtree support is an extension to SQLite and is not needed to use the
core functionality.  Try leaving the rtree source files out of your
builds to see if this corrects your problem.  You might also consider
using the SQLITE_OMIT_LOAD_EXTENSION option as well.

http://www.sqlite.org/compile.html


On Mon, Feb 16, 2009 at 9:29 AM, Ian Thomas  wrote:
> Hello,
>
> I'm trying to compile my own debug build of SQLite v3.6.3 in Visual
> Studio 2005 so that I can step through a problem I'm having with my
> application. I have followed the instructions found here:
>
> http://www.sqlite.org/cvstrac/wiki?p=HowToCompileWithVsNet
>
> Using the standard v3.6.3 source tree (not the amalgamation version)
> everything compiles fine but fails to link with this error:
>
> 1>rtree.obj : error LNK2005: _sqlite3_api already defined in
> fts3_tokenizer.obj
> 1>   Creating library
> D:\x_mirror\buildman\tools\sqlite-3.6.3\SQLiteVS2005\Debug\SQLiteVS2005.lib
> and object
> D:\x_mirror\buildman\tools\sqlite-3.6.3\SQLiteVS2005\Debug\SQLiteVS2005.exp
> 1>D:\x_mirror\buildman\tools\sqlite-3.6.3\SQLiteVS2005\Debug\SQLiteVS2005.dll
> : fatal error LNK1169: one or more multiply defined symbols found
>
> I have only two SQLite preprocessor defines set in my project:
>
> SQLITE_ENABLE_COLUMN_METADATA
> SQLITE_ENABLE_FTS3
>
> I get this error both with and without the SQLITE_ENABLE_FTS3 define.
>
> I get the same linker error when trying to build SQLite v3.6.10 and when
> trying to use Visual Studio 2003 instead of 2005.
>
> I'm sure there must be an easy fix for this problem - any advice would
> be really appreciated. I imagine I could use the /FORCE:MULTIPLE linker
> option to get past this but I'm reluctant to do so without checking first.
>
> Thanks in advance,
>
> Ian Thomas
>
>


Re: [sqlite] 100 million records will be fine?

2009-02-16 Thread Alexey Pechnikov
Hello!

In a message on Monday 16 February 2009 17:42:25, danjenkins wrote:
> I fully understand that performance will depend on the coding, database
> structure and indexing (& hardware) but, assuming these are taken care of,
> should a 100 million record table perform loosely in the same performance
> class as other popular databases?

I have tested SQLite databases up to 100 GB in size, with 1,000,000 blobs
or 100,000,000 strings in a single table. Of course, write operations must
be grouped because memory allocation for a write transaction is
proportional to database size (see the official site). I did not find any
explicit non-linear size effects.
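
For illustration, a sketch of grouped writes (the table name here is
hypothetical):

BEGIN;
INSERT INTO log VALUES (1, 'first row of the batch');
INSERT INTO log VALUES (2, 'second row of the batch');
-- ... many more inserts in the same transaction ...
COMMIT;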

P.S. In real projects I prefer to split databases into pieces of a few
gigabytes and attach these when needed. For example, many datasets can be
split by month.
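
A minimal sketch of that approach (the file and table names here are
hypothetical):

ATTACH DATABASE 'data_2009_01.db' AS jan;
ATTACH DATABASE 'data_2009_02.db' AS feb;
-- query a piece through its schema name:
SELECT count(*) FROM jan.items;
DETACH DATABASE feb;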

On Debian etch/lenny hosts with an ext3 filesystem and RAM > 1 GB, these
options may be useful:

pragma page_size=4096;
PRAGMA default_cache_size=20;
PRAGMA cache_size=20;

P.P.S. SQLite works very effectively with huge databases where database
size >> RAM size, and the databases are compact. So a 4.5 GB SQLite
database works fine, but an 18 GB PostgreSQL database with the same data
(yes, PostgreSQL databases are not compact) works very badly on a Linux
host with 1 GB of RAM.

Best regards, Alexey.


[sqlite] "analyze" command slow for large amounts of data; alternative possible?

2009-02-16 Thread John Wood
The problem
---

I'm using the "ANALYZE" command of SQLite to try to improve
performance for queries.

However, I'm finding that the actual ANALYZE call can take a long time
to run for large numbers of rows (e.g. >2). This is especially the
case if a lot of data has changed between calls to ANALYZE. It's
taking tens of seconds to run.

The typical problematic pattern is as follows: start up a new blank
install of my app. ANALYZE is run at startup, which of course is
instantaneous since no data exists. The user then does his stuff,
resulting in lots of new rows being added. At the next restart, ANALYZE
is run again, which then takes ages.

Interestingly, on subsequent restarts, if the user has added no new
data, the next ANALYZE is very quick, suggesting this is an
incremental cost.


One solution: Manually force the query analyzer to pick statements
--

This is the solution described in the SQLite docs (where you tell the
query engine to ignore certain fields for the purposes of index
selection). This is the last resort for me.
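
For reference, the mechanism I believe the docs describe is the unary "+"
operator, which disqualifies a term from using an index (names here are
hypothetical):

-- The "+" stops the optimizer from using an index on category, so the
-- chosen plan no longer depends on ANALYZE statistics for that term.
SELECT * FROM samples WHERE +category = 7 AND created > 20090101;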

Possible Solution 2: Have hard coded results


I've heard of other people doing this, but I don't know the details.
What you do is set up your DB to reflect a "typical" set of data. You
then run ANALYZE and hard-code the resulting data into your app's DB
table. You never actually run ANALYZE in the wild.
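
As far as I understand it, the data in question lives in the ordinary
sqlite_stat1 table, which can be written directly; a sketch (table and
index names are hypothetical):

-- Run ANALYZE once so sqlite_stat1 exists, then replace its contents.
ANALYZE;
DELETE FROM sqlite_stat1;
-- stat format: "<rows in table> <average rows per distinct index value>"
INSERT INTO sqlite_stat1 (tbl, idx, stat)
VALUES ('samples', 'samples_by_category', '1000000 10');

SQLite reads sqlite_stat1 when it loads the schema, so hard-coded values
should take effect on the next connection.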

So my questions:

1) Will this work?
2) Is it "dangerous"? E.g. could this completely confuse the query
optimiser, or is it the case that as long as the hard-coded values are
"realistic" it doesn't matter that they don't reflect the reality of
the table (which is what the docs imply)?
3) Does anyone have experience of trying this?


Thanks,

John


Re: [sqlite] Full Table Read

2009-02-16 Thread Nathan Biggs


>  
>
> Griggs, Donald wrote:
>   
>>  > On linux you can do
>>   
>> 
>>> dd if=database.db of=/dev/null bs=1M
>>> and after perform "select ..."
>>> 
>>>   
>> =
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Nathan Biggs
>> Subject: Re: [sqlite] Full Table Read
>>
>> We are actually using Windows XP for an OS.  (I know, lots of
>> limitations, but that is what we have to use).
>>
>> =
>>
>> I think the windows equivalent would be:
>>copy /b database.db nul
>>
>> In either case, we're assuming your database fits into available
>> ram-cached virtual space.
>> 
> =
> Biggs wrote:
>
> Is that just creating an in-memory database then?  I'm not familiar with
> a copy to a null location.
> =  
>
> Nothing quite that fancy.   It's just attempting to get most or all of
> your database into the read-cache of the operating system.   (I probably
> was not accurate earlier when I wrote "virtual space" since I don't know
> how the read-cache is implemented.)
>
> Your performance, after a successful loading into cache, should be about
> that of a ram-based  database -- without the data integrity and
> synchronization problems a ramdisk-based database can cause.
>
> You didn't mention any "order by" clause on your "select *".   If you
> *do* require an ordering of the output, then an index, of course, could
> possibly make a large difference.
>
> Instead of the copy to a nul device, you could also try running a vacuum
> command -- since it can "defragment" a table whose portions are spread
> throughout the file.
>
> All this assumes that the time to perform the copy or vacuum is
> tolerable in your application.
Thanks for the detailed information.  The real reason I was asking is
that I created a custom function in C++ with pre-allocated memory to
hold query results, which are just arrays that hold counts.  Anyway, I
noticed that the function ran faster when I did not use a WHERE clause
in the SQL, even though it was returning a lot more data.  My guess is
that this is due to how SQLite needs to create temporary buffers to get
the specific data you requested.  If you don't request specific data,
maybe it just reads from the table itself and bypasses the additional
memory allocations.

I'm not sure that my database will fit into the cache; it's about
100 MB in size.


[sqlite] 100 million records will be fine?

2009-02-16 Thread danjenkins

Hi.  I've started a SQLite C++ project that could peak at 100 million records
(250 bytes per record spread over 20 fields) and would like to ask if anyone
has seen SQLite projects of this magnitude.

The Windows data logging project will add up to 1 million records per day
and run queries containing approximately 20,000 records a few times a day. 

I fully understand that performance will depend on the coding, database
structure and indexing (& hardware) but, assuming these are taken care of,
should a 100 million record table perform loosely in the same performance
class as other popular databases?

My concern is, for example, hitting a brick wall in performance after 2
million records, or whatever, so I wanted to post the question here
before I go too far down the development road.

I'd appreciate your input on whether you've seen SQLite databases this
large.

Best regards,
Dan Jenkins




Re: [sqlite] Full Table Read

2009-02-16 Thread Griggs, Donald
 

Griggs, Donald wrote:
>  > On linux you can do
>   
>> dd if=database.db of=/dev/null bs=1M
>> and after perform "select ..."
>> 
>
> =
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Nathan Biggs
> Subject: Re: [sqlite] Full Table Read
>
> We are actually using Windows XP for an OS.  (I know, lots of
> limitations, but that is what we have to use).
>
> =
>
> I think the windows equivalent would be:
>copy /b database.db nul
>
> In either case, we're assuming your database fits into available
> ram-cached virtual space.
=
Biggs wrote:

Is that just creating an in-memory database then?  I'm not familiar with
a copy to a null location.
=  

Nothing quite that fancy.   It's just attempting to get most or all of
your database into the read-cache of the operating system.   (I probably
was not accurate earlier when I wrote "virtual space" since I don't know
how the read-cache is implemented.)

Your performance, after a successful loading into cache, should be about
that of a ram-based  database -- without the data integrity and
synchronization problems a ramdisk-based database can cause.

You didn't mention any "order by" clause on your "select *".   If you
*do* require an ordering of the output, then an index, of course, could
possibly make a large difference.

Instead of the copy to a nul device, you could also try running a vacuum
command -- since it can "defragment" a table whose portions are spread
throughout the file.

All this assumes that the time to perform the copy or vacuum is
tolerable in your application.


Re: [sqlite] Full Table Read

2009-02-16 Thread Alexey Pechnikov
Hello!

In a message on Monday 16 February 2009 17:18:04, Nathan Biggs wrote:
> Is that just creating an in-memory database then?  I'm not familiar with
> a copy to a null location.

Writes to an in-memory database are not saved to disk. If you only want
to increase read speed, then warming the OS file cache with your database
file can help you.

Best regards, Alexey.


[sqlite] Visual Studio 2005 Linker Problems - SQLite v3.6.3

2009-02-16 Thread Ian Thomas
Hello,

I'm trying to compile my own debug build of SQLite v3.6.3 in Visual 
Studio 2005 so that I can step through a problem I'm having with my 
application. I have followed the instructions found here:

http://www.sqlite.org/cvstrac/wiki?p=HowToCompileWithVsNet

Using the standard v3.6.3 source tree (not the amalgamation version) 
everything compiles fine but fails to link with this error:

1>rtree.obj : error LNK2005: _sqlite3_api already defined in 
fts3_tokenizer.obj
1>   Creating library 
D:\x_mirror\buildman\tools\sqlite-3.6.3\SQLiteVS2005\Debug\SQLiteVS2005.lib 
and object 
D:\x_mirror\buildman\tools\sqlite-3.6.3\SQLiteVS2005\Debug\SQLiteVS2005.exp
1>D:\x_mirror\buildman\tools\sqlite-3.6.3\SQLiteVS2005\Debug\SQLiteVS2005.dll 
: fatal error LNK1169: one or more multiply defined symbols found

I have only two SQLite preprocessor defines set in my project:

SQLITE_ENABLE_COLUMN_METADATA
SQLITE_ENABLE_FTS3

I get this error both with and without the SQLITE_ENABLE_FTS3 define.

I get the same linker error when trying to build SQLite v3.6.10 and when 
trying to use Visual Studio 2003 instead of 2005.

I'm sure there must be an easy fix for this problem - any advice would 
be really appreciated. I imagine I could use the /FORCE:MULTIPLE linker 
option to get past this but I'm reluctant to do so without checking first.

Thanks in advance,

Ian Thomas




Re: [sqlite] Full Table Read

2009-02-16 Thread Nathan Biggs
Is that just creating an in-memory database then?  I'm not familiar with 
a copy to a null location.



Griggs, Donald wrote:
>  > On linux you can do
>   
>> dd if=database.db of=/dev/null bs=1M
>> and after perform "select ..."
>> 
>
> =
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Nathan Biggs
> Subject: Re: [sqlite] Full Table Read
>
> We are actually using Windows XP for an OS.  (I know, lots of
> limitations, but that is what we have to use).
>
> =
>
> I think the windows equivalent would be:
>copy /b database.db nul
>
> In either case, we're assuming your database fits into available
> ram-cached virtual space.


Re: [sqlite] Full Table Read

2009-02-16 Thread Griggs, Donald

 > On linux you can do
> dd if=database.db of=/dev/null bs=1M
> and after perform "select ..."

=
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Nathan Biggs
Subject: Re: [sqlite] Full Table Read

We are actually using Windows XP for an OS.  (I know, lots of
limitations, but that is what we have to use).

=

I think the windows equivalent would be:
   copy /b database.db nul

In either case, we're assuming your database fits into available
ram-cached virtual space.


Re: [sqlite] Full Table Read

2009-02-16 Thread Nathan Biggs
We are actually using Windows XP for an OS.  (I know, lots of 
limitations, but that is what we have to use).

Alexey Pechnikov wrote:
> Hello!
>
> In a message on Saturday 14 February 2009 00:33:38, Nathan Biggs wrote:
>   
>> Is there a faster way to read an entire table other then:
>>
>> select * from table
>>
>> Not that is is slow, just curious.
>> 
>
> On linux you can do 
> dd if=database.db of=/dev/null bs=1M
> and after perform "select ..."
>
> This trick puts your db into the OS file cache, and all subsequent read
> operations will be extremely fast.
>
> Best regards, Alexey.


Re: [sqlite] having the Top N for each group

2009-02-16 Thread Sylvain Pointeau
Hello,
I lost the query that I used before :-(
(I only put the file into Git after my second query.)

I will however have another try and I will post it to you.
I was also wondering if my indices were correctly defined.

Many thanks for your answers, I appreciate a lot your help.

Cheers,
Sylvain



On Mon, Feb 16, 2009 at 8:19 AM, Edzard Pasma wrote:

> Hello,
>
> Your current solution is theoretically not optimal, as it evaluates a
> sub-query for each row in table T, whereas a construction with LIMIT does
> this only once for each group. If you wish I may look at the 'infinite'
> query, just mail it. Otherwise we at least have proved SQLite's incredible
> speed in doing UPDATE :)
>
> Edzard Pasma
>
> --- sylvain.point...@gmail.com wrote:
>
> From: Sylvain Pointeau 
> To: edz...@volcanomail.com, General Discussion of SQLite Database <
> sqlite-users@sqlite.org>
> Subject: Re: [sqlite] having the Top N for each group
> Date: Sun, 15 Feb 2009 21:44:58 +0100
>
> Hello,
> on my large dataset, it took an infinite time.
> I finished with:
>
> update T
> set ranknum = (select count(*) from T a where ... a.value >= T.value  )
>
> and it works fast enough, in a few minutes.
>
> if you have better solution, I would be glad to change.
>
> Cheers,
> Sylvain
>
> On Sun, Feb 15, 2009 at 10:06 AM, Edzard Pasma wrote:
>
> > Hello again,
> >
> > The following solution is more elegant than my earlier group_concat idea,
> > and is just as fast. I had not expected that as it seems what you started
> > with.
> >
> > select period.period, sales.product
> > from period
> > join sales on sales.rowid in (
> >select rowid
> >from sales
> >where sales.period = period.period
> >order by sales.qty desc
> >limit 3);
> >
> > -- Edzard Pasma
> >
> >
> > --- sylvain.point...@gmail.com wrote:
> >
> > From: Sylvain Pointeau 
> > To: sqlite-users@sqlite.org
> > Subject: [sqlite] having the Top N for each group
> > Date: Sat, 14 Feb 2009 09:21:15 +0100
> >
> > Hello all,
> > I am wondering if we have a method faster than the INNER JOIN, which
> > can be very slow for large numbers of rows, which is my case.
> > I was thinking of a UDF that increments a number while the concatenation
> > of the key columns (or group columns) stays the same, meaning:
> > select col1, col2, udf_topN(col1||col2) from TTT order by value group by
> > col1, col2
> >
> > will result into
> >
> > 1,1,1
> > 1,1,2
> > 1,1,3
> > 2,1,1
> > 2,1,2
> > 2,1,3
> > 4,3,1
> > 4,3,2
> > etc
> >
> >
> > however I don't really see how to keep, initialize, and destroy a
> > variable in a UDF for the duration of a query execution
> >
> > do you have some ideas?
> > is a TopN function planned for a future version of SQLite?
> >
> > Many thanks,
> > Sylvain