Re: [sqlite] Database File Size

2013-08-30 Thread Simon Slavin

On 30 Aug 2013, at 5:37am, techi eth  wrote:

> I am using the JFFS2 file system. As of now I am using all the SQLite
> defaults. I am not sure whether, with this information, it is possible to
> get an approximation such as: with 'X' KB of data, the database file will be 'Y' KB or MB.

Your best way to figure this out is to make a set of files yourself with 
different amounts of data in them, and 
plot how the file size grows with data size on your hardware, with your 
operating system, your storage device, and your file system.  If you can work 
out your own formula then you can assume it will continue working until you 
change something.
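A minimal sketch of that experiment, using Python's built-in sqlite3 module (the schema, the 10-byte strings, and the file names are illustrative, not from the thread):

```python
import os
import sqlite3
import tempfile

def db_size_for(row_count):
    """Build a throwaway database with row_count 10-byte text rows
    and return the resulting file size in bytes."""
    path = os.path.join(tempfile.mkdtemp(), "probe.db")
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, txt TEXT)")
    con.executemany("INSERT INTO t (txt) VALUES (?)",
                    (("x" * 10,) for _ in range(row_count)))
    con.commit()
    con.close()
    return os.path.getsize(path)

# Sample a few data sizes and plot (or eyeball) the growth curve.
for n in (100, 1_000, 10_000):
    print(n, db_size_for(n))
```

The absolute numbers only mean something when measured on the target hardware and file system; figures from a desktop machine will not transfer to JFFS2.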

Also note that a page in the file contains data only from one table or one 
index.  So the result is a little different depending on whether you added 500 
records to one table and 80 to another table, or just added 580 records to one 
table.


On 30 Aug 2013, at 5:43am, techi eth  wrote:

> I am thinking of using auto_vacuum INCREMENTAL and PRAGMA
> incremental_vacuum(N) to make sure pages can be freed whenever
> required.
> 
> I assume this will be a better option compared to FULL.

This would be complicated, involve fragmentation, and would tie the system up 
for a lot of time.  It would be better to stick to just one way of doing it: 
either

A) set no PRAGMAs and just VACUUM immediately after a DELETE FROM which deleted 
lots of data

or

B) use "PRAGMA auto_vacuum = FULL" and just let the system do what it wants.

Once again I point out that these things only make a difference if you have 
deleted significant amounts of data from a file and have not already used up 
the released space with new data.  Almost no uses of SQLite do this; most of 
them have database files which grow and don't shrink much.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database File Size

2013-08-29 Thread techi eth
I am thinking of using auto_vacuum INCREMENTAL and PRAGMA
incremental_vacuum(N) to make sure pages can be freed whenever
required.

I assume this will be a better option compared to FULL.

I assume I can perform this operation at any point in the sequence after
opening a successful connection to the database. I hold the connection
handle open for as long as it is required in the system.

Thanks..


Cheers -
Techi

On 8/30/13, techi eth  wrote:
> I am using the JFFS2 file system. As of now I am using all the SQLite
> defaults. I am not sure whether, with this information, it is possible to
> get an approximation such as: with 'X' KB of data, the database file will be 'Y' KB or MB.
>
>> Regarding default file size.
> I understand that the database file size is calculated from
> max_page_count and the size of each page.
> What is the default setting for max_page_count? (I have learned that the
> default page size is 1 KB.)
>
> Cheers -
> Techi
>
> On 8/29/13, Simon Slavin  wrote:
>>
>> On 29 Aug 2013, at 9:11am, techi eth  wrote:
>>
>>> 1) Is there any way I can calculate the approximate file size based on the data?
>>>
>>> Ex:  Let us say I am creating a table with (rowID INT, Test TEXT).
>>>
>>> Each text string is 10 bytes. If I create 100 rows of this in the
>>> database, what would be the maximum size of the created database file?
>>
>> Assuming that you are talking about a database file created from scratch,
>> you might be able to take a guess at the maximum file size by just
>> creating
>> a file like that yourself and seeing how big it is.  But the answer for
>> your
>> users would depend on which file system and which operating system they
>> are
>> using.  This is because those things govern the page size selected by
>> SQLite:
>>
>> 
>>
>> You would have to allow so much 'slop' it might be easier just to pick an
>> acceptable number -- say half a megabyte, and use that.
>>
>>> 2) How can I cap the database file size to make sure I have not
>>> exceeded
>>> the max page count?
>>>
>>> http://www.sqlite.org/pragma.html#pragma_max_page_count
>>
>> No need.  SQLite will itself refuse to make a database file bigger than
>> that.  I believe it generates an SQLITE_FULL error for the command that
>> tries to exceed that page count.
>>
>> Simon.
>>
>


Re: [sqlite] Database File Size

2013-08-29 Thread techi eth
I am using the JFFS2 file system. As of now I am using all the SQLite
defaults. I am not sure whether, with this information, it is possible to
get an approximation such as: with 'X' KB of data, the database file will be 'Y' KB or MB.

> Regarding default file size.
I understand that the database file size is calculated from
max_page_count and the size of each page.
What is the default setting for max_page_count? (I have learned that the default page size is 1 KB.)

Cheers -
Techi

On 8/29/13, Simon Slavin  wrote:
>
> On 29 Aug 2013, at 9:11am, techi eth  wrote:
>
>> 1) Is there any way I can calculate the approximate file size based on the data?
>>
>> Ex:  Let us say I am creating a table with (rowID INT, Test TEXT).
>>
>> Each text string is 10 bytes. If I create 100 rows of this in the
>> database, what would be the maximum size of the created database file?
>
> Assuming that you are talking about a database file created from scratch,
> you might be able to take a guess at the maximum file size by just creating
> a file like that yourself and seeing how big it is.  But the answer for your
> users would depend on which file system and which operating system they are
> using.  This is because those things govern the page size selected by
> SQLite:
>
> 
>
> You would have to allow so much 'slop' it might be easier just to pick an
> acceptable number -- say half a megabyte, and use that.
>
>> 2) How can I cap the database file size to make sure I have not
>> exceeded
>> the max page count?
>>
>> http://www.sqlite.org/pragma.html#pragma_max_page_count
>
> No need.  SQLite will itself refuse to make a database file bigger than
> that.  I believe it generates an SQLITE_FULL error for the command that
> tries to exceed that page count.
>
> Simon.
>


Re: [sqlite] Database File Size

2013-08-29 Thread Simon Slavin

On 29 Aug 2013, at 9:11am, techi eth  wrote:

> 1) Is there any way I can calculate the approximate file size based on the data?
> 
> Ex:  Let us say I am creating a table with (rowID INT, Test TEXT).
> 
> Each text string is 10 bytes. If I create 100 rows of this in the
> database, what would be the maximum size of the created database file?

Assuming that you are talking about a database file created from scratch, you 
might be able to take a guess at the maximum file size by just creating a file 
like that yourself and seeing how big it is.  But the answer for your users 
would depend on which file system and which operating system they are using.  
This is because those things govern the page size selected by SQLite:



You would have to allow so much 'slop' it might be easier just to pick an 
acceptable number -- say half a megabyte, and use that.

> 2) How can I cap the database file size to make sure I have not exceeded
> the max page count?
> 
> http://www.sqlite.org/pragma.html#pragma_max_page_count

No need.  SQLite will itself refuse to make a database file bigger than that.  
I believe it generates an SQLITE_FULL error for the command that tries to 
exceed that page count.
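A sketch of that limit in action, via Python's built-in sqlite3 module (page cap and blob sizes are illustrative); in builds I have tried, hitting the cap surfaces as SQLITE_FULL ("database or disk is full"):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "capped.db")
con = sqlite3.connect(path)
con.execute("PRAGMA max_page_count = 16")   # cap the database at 16 pages
con.execute("CREATE TABLE t (payload BLOB)")

error = None
for _ in range(1000):
    try:
        # 4 KB blobs quickly exhaust the 16-page budget.
        con.execute("INSERT INTO t VALUES (?)", (b"\x00" * 4096,))
        con.commit()
    except sqlite3.OperationalError as exc:
        error = str(exc)
        break

print(error)   # typically: database or disk is full
```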

Simon.


[sqlite] Database File Size

2013-08-29 Thread techi eth
Database file size:

1) Is there any way I can calculate the approximate file size based on the data?

 Ex:  Let us say I am creating a table with (rowID INT, Test TEXT).

Each text string is 10 bytes. If I create 100 rows of this in the
database, what would be the maximum size of the created database file?

2) How can I cap the database file size to make sure I have not exceeded
the max page count?

http://www.sqlite.org/pragma.html#pragma_max_page_count

Cheers -

Techi


Re: [sqlite] Database file size

2008-12-03 Thread Christian Smith
On Thu, Nov 27, 2008 at 08:12:02AM +, Simon Bulman wrote:
> Morning,
> 
> Table 1
> 
> BIGINT (index),  VARCHAR(30), VARCHAR(10)
> 
>  
> 
> Table 2
> 
> BIGINT (index), FLOAT


For the second table, the index will contain the BIGINT value and the table
rowid, which is almost as big as the actual row, so use of a separate index
will literally double storage used for the second table including index.

If your BIGINT fields are just record ids, then you are better off using
an "integer primary key" for these fields, as this is used as the table
rowid, and no separate index will be required. The "integer primary key"
type is good for 64 bit ids, so should be adequate.
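A sketch of the size difference (illustrative schema and row counts, Python's built-in sqlite3 module): the same id/value data stored once with a BIGINT column plus a separate index, and once with INTEGER PRIMARY KEY acting as the rowid.

```python
import os
import sqlite3
import tempfile

def build(path, schema, extra_index=None):
    """Create a database from schema, optionally add an index, fill it,
    and return the file size in bytes."""
    con = sqlite3.connect(path)
    con.execute(schema)
    if extra_index:
        con.execute(extra_index)
    con.executemany("INSERT INTO t VALUES (?, ?)",
                    ((i, i * 0.5) for i in range(50_000)))
    con.commit()
    con.close()
    return os.path.getsize(path)

tmp = tempfile.mkdtemp()

# BIGINT column plus a separate index: id is stored in the table AND the index.
indexed = build(os.path.join(tmp, "indexed.db"),
                "CREATE TABLE t (id BIGINT, val FLOAT)",
                "CREATE INDEX t_id ON t (id)")

# INTEGER PRIMARY KEY is the rowid itself: no separate index needed.
rowid = build(os.path.join(tmp, "rowid.db"),
              "CREATE TABLE t (id INTEGER PRIMARY KEY, val FLOAT)")

print(indexed, rowid)   # the rowid variant is markedly smaller
```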

Regards,
Christian


Re: [sqlite] Database file size

2008-11-28 Thread D. Richard Hipp

On Nov 28, 2008, at 8:13 AM, Simon Bulman wrote:

> Ok, I am having trouble counting this morning.
>
> No of rows in Table 1: 1165
> No of rows in Table 2: 376295
>
> Analysis is attached.
>
> Sorry for the continued confusion.

The SUMMARY_VECTORS table consumes 99.55% of the file.  I assume that  
this is "Table 2".  We can ignore the rest.

I see that the index on SUMMARY_VECTORS is as big as the table  
itself.  This is surprising.  What does your schema look like?  You  
must be indexing every column of the table.  Is that really  
necessary?  You might save some space by only
indexing the columns you really need to index.

The storage efficiency for entries in SUMMARY_VECTORS is 62.2%. 
This is lower than normal.  SQLite usually gives a storage efficiency  
in the 80-90% range.  The reason it is lower here is that each row of  
your table is small - only 10.51 bytes on average.  Thus the few bytes  
of fixed overhead at the beginning of each row amount to a larger  
fraction of the total.  Even so, it is not a bad ratio.

Of the 10.51 bytes per row, 4 bytes is probably devoted to storing  
type and string-length information.  You likely omit the type  
information in your proprietary format, but I suppose you still need  
to store the string information.  So you could be using as little as  
8.51 bytes per row.   That still gives 3.2MB - not 1.7MB as you  
claim.  So you must also be doing some kind of compression in your  
proprietary format.

Bottom line:  The SQLite database file is densely packed.  I think you  
will be hard pressed to find another general-purpose database engine  
that will store your data more compactly.  If you want something to  
store data more compactly than what SQLite is currently doing, I  
suspect you will need to develop a proprietary one-off solution, as  
you have done.
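The arithmetic in the last two paragraphs can be checked directly (the row count and per-row averages are the analyzer figures quoted above):

```python
rows = 376_295     # rows in SUMMARY_VECTORS, per the analyzer output
avg_row = 10.51    # average bytes per row reported by sqlite3_analyzer
trimmed = 8.51     # per-row bytes if ~2 bytes of type info were dropped

print(round(rows * avg_row / 1e6, 1))   # raw payload, in MB
print(round(rows * trimmed / 1e6, 1))   # still ~3.2 MB, not 1.7 MB
```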


>
>
> Cheers,
> S.
>
> -Original Message-
> From: Jens Miltner [mailto:[EMAIL PROTECTED]
> Sent: 28 November 2008 12:58
> To: [EMAIL PROTECTED]
> Cc: 'General Discussion of SQLite Database'
> Subject: Re: [sqlite] Database file size
>
>
> On 28.11.2008 at 13:37, Simon Bulman wrote:
>
>> Ahhh, sorry, I wrongly calculated the number of rows in table 2. It
>> actually
> has 29581 rows. Still surprised at the 7x size increase, but perhaps
> your estimate is
> not based on the overheads?
>
> I still can't reproduce your database sizes - creating 3000 rows in
> table 1 and 3 in table 2 I end up with 745kB database file size.
> Even with 3 rows in table 1 I end up with only 2.9MB file size...
>
> And the overhead I was talking about is overhead that sqlite maintains
> for each row in every database, so that should be the same on your end
> and my end.
>
> Unless your row counts are much, much higher, I suspect that there's
> either
> (1) other tables that contribute the major part to the database file
> size
> (2) your database does have lots of free space and needs to be  
> vacuumed
>
> You could try to run sqlite3_analyzer on your database to see the
> memory usage of each table, free space, etc.
> Are you using a stock sqlite3 installation or did you modify/customize
> it through build settings?
>
> 
>
>
>>
>> -Original Message-----
>> From: Jens Miltner [mailto:[EMAIL PROTECTED]
>> Sent: 28 November 2008 08:38
>> To: [EMAIL PROTECTED]
>> Cc: General Discussion of SQLite Database
>> Subject: Re: [sqlite] Database file size
>>
>>
> On 28.11.2008 at 09:20, Simon Bulman wrote:
>>
>>> Hi Jens,
>>>
>>> Thanks for your input. UTF-8 did not make a difference. I expected
>>> that
>>> SQLite file would be larger on disk than our proprietary format
>>> because of
>>> the overheads that you mention - I am surprised however it at least
>>> 7x
>>> larger.
>>
>> To be honest - given your table definitions below, I'm surprised the
>> database is _that_ large, too:
>>
>> Table 1 - according to your definition - should contain at most about
>> 50 bytes of pure data per row (plus the overhead needed by SQLite).
>> Table 2 would only contain ~ 16 bytes of data per row.
>>
>> Dividing the database disk size by the total number of rows you
>> mentioned, would indicate a whopping 8k per row.
>>
>> I did a quick test and created a schema similar to what you outlined
>> and filled it with data (the same number of rows you mentioned and 28
>> and ~20 characters per row for the two varchar columns) and my
>> database ended up being 71kB in size instead of the 11.8 MB you  
>> saw...
>>
>> Are there any other tables that contain non-negligible amounts of
>> data?
>>

Re: [sqlite] Database file size

2008-11-28 Thread Simon Bulman
Ok, I am having trouble counting this morning.

No of rows in Table 1: 1165
No of rows in Table 2: 376295

Analysis is attached.

Sorry for the continued confusion.

Cheers,
S.

-Original Message-
From: Jens Miltner [mailto:[EMAIL PROTECTED] 
Sent: 28 November 2008 12:58
To: [EMAIL PROTECTED]
Cc: 'General Discussion of SQLite Database'
Subject: Re: [sqlite] Database file size


On 28.11.2008 at 13:37, Simon Bulman wrote:

> Ahhh, sorry, I wrongly calculated the number of rows in table 2. It  
> actually
> has 29581 rows. Still surprised at the 7x size increase, but perhaps  
> your estimate is
> not based on the overheads?

I still can't reproduce your database sizes - creating 3000 rows in  
table 1 and 3 in table 2 I end up with 745kB database file size.
Even with 3 rows in table 1 I end up with only 2.9MB file size...

And the overhead I was talking about is overhead that sqlite maintains  
for each row in every database, so that should be the same on your end  
and my end.

Unless your row counts are much, much higher, I suspect that there's  
either
(1) other tables that contribute the major part to the database file  
size
(2) your database does have lots of free space and needs to be vacuumed

You could try to run sqlite3_analyzer on your database to see the  
memory usage of each table, free space, etc.
Are you using a stock sqlite3 installation or did you modify/customize  
it through build settings?




>
> -Original Message-
> From: Jens Miltner [mailto:[EMAIL PROTECTED]
> Sent: 28 November 2008 08:38
> To: [EMAIL PROTECTED]
> Cc: General Discussion of SQLite Database
> Subject: Re: [sqlite] Database file size
>
>
> On 28.11.2008 at 09:20, Simon Bulman wrote:
>
>> Hi Jens,
>>
>> Thanks for your input. UTF-8 did not make a difference. I expected
>> that
>> SQLite file would be larger on disk than our proprietary format
>> because of
>> the overheads that you mention - I am surprised however it at least  
>> 7x
>> larger.
>
> To be honest - given your table definitions below, I'm surprised the
> database is _that_ large, too:
>
> Table 1 - according to your definition - should contain at most about
> 50 bytes of pure data per row (plus the overhead needed by SQLite).
> Table 2 would only contain ~ 16 bytes of data per row.
>
> Dividing the database disk size by the total number of rows you
> mentioned, would indicate a whopping 8k per row.
>
> I did a quick test and created a schema similar to what you outlined
> and filled it with data (the same number of rows you mentioned and 28
> and ~20 characters per row for the two varchar columns) and my
> database ended up being 71kB in size instead of the 11.8 MB you saw...
>
> Are there any other tables that contain non-negligible amounts of  
> data?
> Are the data sizes indeed what's indicated in the schema (since SQLite
> doesn't really care about the varchar size constraints you can
> actually put any amount of data into a varchar(30) column) ?
>
> 
>
>
>>
>> I am actually recreating the whole database (delete file and  
>> recreate)
>> programmatically, so vacuuming has no effect.
>>
>> Cheers,
>> S.
>>
>> -Original Message-
>> From: Jens Miltner [mailto:[EMAIL PROTECTED]
>> Sent: 27 November 2008 13:48
>> To: General Discussion of SQLite Database
>> Cc: [EMAIL PROTECTED]
>> Subject: Re: [sqlite] Database file size
>>
>>
>> On 27.11.2008 at 09:12, Simon Bulman wrote:
>>
>>> I have been playing around with SQLite to use as an alternative to
>>> one of
>>> our proprietary file formats used to read large amounts of data. Our
>>> proprietary format performs very badly i.e. takes a long time to
>>> load some
>>> data; as expected SQLite is lightning quick in comparison - great!
>>>
>>> One considerable stumbling block is the footprint (size) of the
>>> database
>>> file on disk. It turns out that SQLite is roughly 7x larger than our
>>> proprietary format - this is prohibitive. The data is pretty simple
>>> really,
>>> 2 tables
>>>
>>> Table 1
>>>
>>> BIGINT (index),  VARCHAR(30), VARCHAR(10)
>>>
>>>
>>> Table 2
>>>
>>> BIGINT (index), FLOAT
>>>
>>>
>>> For a particular data set Table1 has 1165 rows and Table 2 has 323
>>> rows,
>>> however typically Table 2 becomes bigger for larger models. The size
>>> on disk
>>> of this file is 11.8 Mb (compared to 1.7 Mb for our proprietary
>>> format). I
>>> have noticed that if I drop the indexes the size drops  
>>> dra

Re: [sqlite] Database file size

2008-11-28 Thread Jens Miltner

On 28.11.2008 at 13:37, Simon Bulman wrote:

> Ahhh, sorry, I wrongly calculated the number of rows in table 2. It  
> actually
> has 29581 rows. Still surprised at the 7x size increase, but perhaps  
> your estimate is
> not based on the overheads?

I still can't reproduce your database sizes - creating 3000 rows in  
table 1 and 3 in table 2 I end up with 745kB database file size.
Even with 3 rows in table 1 I end up with only 2.9MB file size...

And the overhead I was talking about is overhead that sqlite maintains  
for each row in every database, so that should be the same on your end  
and my end.

Unless your row counts are much, much higher, I suspect that there's  
either
(1) other tables that contribute the major part to the database file  
size
(2) your database does have lots of free space and needs to be vacuumed

You could try to run sqlite3_analyzer on your database to see the  
memory usage of each table, free space, etc.
Are you using a stock sqlite3 installation or did you modify/customize  
it through build settings?




>
> -Original Message-
> From: Jens Miltner [mailto:[EMAIL PROTECTED]
> Sent: 28 November 2008 08:38
> To: [EMAIL PROTECTED]
> Cc: General Discussion of SQLite Database
> Subject: Re: [sqlite] Database file size
>
>
> On 28.11.2008 at 09:20, Simon Bulman wrote:
>
>> Hi Jens,
>>
>> Thanks for your input. UTF-8 did not make a difference. I expected
>> that
>> SQLite file would be larger on disk than our proprietary format
>> because of
>> the overheads that you mention - I am surprised however it at least  
>> 7x
>> larger.
>
> To be honest - given your table definitions below, I'm surprised the
> database is _that_ large, too:
>
> Table 1 - according to your definition - should contain at most about
> 50 bytes of pure data per row (plus the overhead needed by SQLite).
> Table 2 would only contain ~ 16 bytes of data per row.
>
> Dividing the database disk size by the total number of rows you
> mentioned, would indicate a whopping 8k per row.
>
> I did a quick test and created a schema similar to what you outlined
> and filled it with data (the same number of rows you mentioned and 28
> and ~20 characters per row for the two varchar columns) and my
> database ended up being 71kB in size instead of the 11.8 MB you saw...
>
> Are there any other tables that contain non-negligible amounts of  
> data?
> Are the data sizes indeed what's indicated in the schema (since SQLite
> doesn't really care about the varchar size constraints you can
> actually put any amount of data into a varchar(30) column) ?
>
> 
>
>
>>
>> I am actually recreating the whole database (delete file and  
>> recreate)
>> programmatically, so vacuuming has no effect.
>>
>> Cheers,
>> S.
>>
>> -Original Message-
>> From: Jens Miltner [mailto:[EMAIL PROTECTED]
>> Sent: 27 November 2008 13:48
>> To: General Discussion of SQLite Database
>> Cc: [EMAIL PROTECTED]
>> Subject: Re: [sqlite] Database file size
>>
>>
>> On 27.11.2008 at 09:12, Simon Bulman wrote:
>>
>>> I have been playing around with SQLite to use as an alternative to
>>> one of
>>> our proprietary file formats used to read large amounts of data. Our
>>> proprietary format performs very badly i.e. takes a long time to
>>> load some
>>> data; as expected SQLite is lightning quick in comparison - great!
>>>
>>> One considerable stumbling block is the footprint (size) of the
>>> database
>>> file on disk. It turns out that SQLite is roughly 7x larger than our
>>> proprietary format - this is prohibitive. The data is pretty simple
>>> really,
>>> 2 tables
>>>
>>> Table 1
>>>
>>> BIGINT (index),  VARCHAR(30), VARCHAR(10)
>>>
>>>
>>> Table 2
>>>
>>> BIGINT (index), FLOAT
>>>
>>>
>>> For a particular data set Table1 has 1165 rows and Table 2 has 323
>>> rows,
>>> however typically Table 2 becomes bigger for larger models. The size
>>> on disk
>>> of this file is 11.8 Mb (compared to 1.7 Mb for our proprietary
>>> format). I
>>> have noticed that if I drop the indexes the size drops  
>>> dramatically -
>>> however the query performance suffers to an unacceptable level.
>>>
>>> For a larger model the DB footprint is 2.2 Gb compared to 267 Mb for
>>> the
>>> proprietary format.
>>>
>>> Does anybody have any comments on this? Are there any configuration
>>> options
>>> or ideas I could use to redu

Re: [sqlite] Database file size

2008-11-28 Thread Simon Bulman
Ahhh, sorry, I wrongly calculated the number of rows in table 2. It actually
has 29581 rows. Still surprised at the 7x size increase, but perhaps your
estimate is not based on the overheads?

Sorry for the misleading info,

Cheers,
S.

-Original Message-
From: Jens Miltner [mailto:[EMAIL PROTECTED] 
Sent: 28 November 2008 08:38
To: [EMAIL PROTECTED]
Cc: General Discussion of SQLite Database
Subject: Re: [sqlite] Database file size


On 28.11.2008 at 09:20, Simon Bulman wrote:

> Hi Jens,
>
> Thanks for your input. UTF-8 did not make a difference. I expected  
> that
> SQLite file would be larger on disk than our proprietary format  
> because of
> the overheads that you mention - I am surprised however it at least 7x
> larger.

To be honest - given your table definitions below, I'm surprised the  
database is _that_ large, too:

Table 1 - according to your definition - should contain at most about  
50 bytes of pure data per row (plus the overhead needed by SQLite).
Table 2 would only contain ~ 16 bytes of data per row.

Dividing the database disk size by the total number of rows you  
mentioned, would indicate a whopping 8k per row.

I did a quick test and created a schema similar to what you outlined  
and filled it with data (the same number of rows you mentioned and 28  
and ~20 characters per row for the two varchar columns) and my  
database ended up being 71kB in size instead of the 11.8 MB you saw...

Are there any other tables that contain non-negligible amounts of data?
Are the data sizes indeed what's indicated in the schema (since SQLite  
doesn't really care about the varchar size constraints you can  
actually put any amount of data into a varchar(30) column) ?




>
> I am actually recreating the whole database (delete file and recreate)
> programmatically, so vacuuming has no effect.
>
> Cheers,
> S.
>
> -Original Message-
> From: Jens Miltner [mailto:[EMAIL PROTECTED]
> Sent: 27 November 2008 13:48
> To: General Discussion of SQLite Database
> Cc: [EMAIL PROTECTED]
> Subject: Re: [sqlite] Database file size
>
>
> On 27.11.2008 at 09:12, Simon Bulman wrote:
>
>> I have been playing around with SQLite to use as an alternative to
>> one of
>> our proprietary file formats used to read large amounts of data. Our
>> proprietary format performs very badly i.e. takes a long time to
>> load some
>> data; as expected SQLite is lightning quick in comparison - great!
>>
>> One considerable stumbling block is the footprint (size) of the
>> database
>> file on disk. It turns out that SQLite is roughly 7x larger than our
>> proprietary format - this is prohibitive. The data is pretty simple
>> really,
>> 2 tables
>>
>> Table 1
>>
>> BIGINT (index),  VARCHAR(30), VARCHAR(10)
>>
>>
>> Table 2
>>
>> BIGINT (index), FLOAT
>>
>>
>> For a particular data set Table1 has 1165 rows and Table 2 has 323
>> rows,
>> however typically Table 2 becomes bigger for larger models. The size
>> on disk
>> of this file is 11.8 Mb (compared to 1.7 Mb for our proprietary
>> format). I
>> have noticed that if I drop the indexes the size drops dramatically -
>> however the query performance suffers to an unacceptable level.
>>
>> For a larger model the DB footprint is 2.2 Gb compared to 267 Mb for
>> the
>> proprietary format.
>>
>> Does anybody have any comments on this? Are there any configuration
>> options
>> or ideas I could use to reduce the footprint of the db file?
>
>
> I don't think you'll be able to make SQLite as efficient (regarding
> storage size) as a custom file format, because it has to have some
> overhead for indexes, etc.
>
> However, one thing that comes to mind is the way string data is  
> stored:
> If you're concerned about disk space and your string data is mostly
> ASCII, make sure your strings are stored as UTF-8 - for ASCII string
> data, this will save you one byte per character in the string data
> storage.
> To enforce UTF-8 string storage, execute "PRAGMA encoding='UTF-8'" as
> the first command when creating the database (before you create any
> tables).
> You can query the format using "PRAGMA encoding" - UTF-16 encodings
> will store two bytes / character, regardless of the actual  
> characters...
>
> Note that this doesn't mean your database size will shrink to half the
> size - it merely means you'll be able to fit more rows onto a single
> page, thus eventually you should see a decrease in file size when
> comparing UTF-16 vs. UTF-8 databases.
>
> BTW: are you aware that SQLite databases won't shrink by themselves?
> You'll have to vacuum them to reclaim unused space (see
> <http://www.sqlite.org/faq.html#q12
>> )
>
> HTH,
> 
>
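The quoted encoding advice can be sketched with Python's built-in sqlite3 module (table name and text are illustrative); note that PRAGMA encoding must be issued before any tables exist:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA encoding = 'UTF-8'")   # must come before any CREATE TABLE
con.execute("CREATE TABLE t (txt TEXT)")
con.execute("INSERT INTO t VALUES ('ascii text stays one byte per char')")

encoding = con.execute("PRAGMA encoding").fetchone()[0]
print(encoding)   # UTF-8
```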



Re: [sqlite] Database file size

2008-11-28 Thread D. Richard Hipp
Download the sqlite3_analyzer.exe utility from the SQLite website 
(http://www.sqlite.org/download.html 
) and run it against your database file.  The output will tell you  
where the disk space is being used.  You might want to post the output  
to this list.

D. Richard Hipp
[EMAIL PROTECTED]





Re: [sqlite] Database file size

2008-11-28 Thread Jens Miltner

On 28.11.2008 at 09:20, Simon Bulman wrote:

> Hi Jens,
>
> Thanks for your input. UTF-8 did not make a difference. I expected  
> that
> SQLite file would be larger on disk than our proprietary format  
> because of
> the overheads that you mention - I am surprised however it at least 7x
> larger.

To be honest - given your table definitions below, I'm surprised the  
database is _that_ large, too:

Table 1 - according to your definition - should contain at most about  
50 bytes of pure data per row (plus the overhead needed by SQLite).
Table 2 would only contain ~ 16 bytes of data per row.

Dividing the database disk size by the total number of rows you  
mentioned, would indicate a whopping 8k per row.

I did a quick test and created a schema similar to what you outlined  
and filled it with data (the same number of rows you mentioned and 28  
and ~20 characters per row for the two varchar columns) and my  
database ended up being 71kB in size instead of the 11.8 MB you saw...

Are there any other tables that contain non-negligible amounts of data?
Are the data sizes indeed what's indicated in the schema (since SQLite  
doesn't really care about the varchar size constraints you can  
actually put any amount of data into a varchar(30) column) ?




>
> I am actually recreating the whole database (delete file and recreate)
> programmatically, so vacuuming has no effect.
>
> Cheers,
> S.
>
> -Original Message-
> From: Jens Miltner [mailto:[EMAIL PROTECTED]
> Sent: 27 November 2008 13:48
> To: General Discussion of SQLite Database
> Cc: [EMAIL PROTECTED]
> Subject: Re: [sqlite] Database file size
>
>
> On 27.11.2008 at 09:12, Simon Bulman wrote:
>
>> I have been playing around with SQLite to use as an alternative to
>> one of
>> our proprietary file formats used to read large amounts of data. Our
>> proprietary format performs very badly i.e. takes a long time to
>> load some
>>> data; as expected SQLite is lightning quick in comparison - great!
>>
>> One considerable stumbling block is the footprint (size) of the
>> database
>> file on disk. It turns out that SQLite is roughly 7x larger than our
>> proprietary format - this is prohibitive. The data is pretty simple
>> really,
>> 2 tables
>>
>> Table 1
>>
>> BIGINT (index),  VARCHAR(30), VARCHAR(10)
>>
>>
>> Table 2
>>
>> BIGINT (index), FLOAT
>>
>>
>> For a particular data set Table1 has 1165 rows and Table 2 has 323
>> rows,
>> however typically Table 2 becomes bigger for larger models. The size
>> on disk
>> of this file is 11.8 Mb (compared to 1.7 Mb for our proprietary
>> format). I
>> have noticed that if I drop the indexes the size drops dramatically -
>> however the query performance suffers to an unacceptable level.
>>
>> For a larger model the DB footprint is 2.2 Gb compared to 267 Mb for
>> the
>> proprietary format.
>>
>> Does anybody have any comments on this? Are there any configuration
>> options
>> or ideas I could use to reduce the footprint of the db file?
>
>
> I don't think you'll be able to make SQLite as efficient (regarding
> storage size) as a custom file format, because it has to have some
> overhead for indexes, etc.
>
> However, one thing that comes to mind is the way string data is  
> stored:
> If you're concerned about disk space and your string data is mostly
> ASCII, make sure your strings are stored as UTF-8 - for ASCII string
> data, this will save you one byte per character in the string data
> storage.
> To enforce UTF-8 string storage, execute "PRAGMA encoding='UTF-8'" as
> the first command when creating the database (before you create any
> tables).
> You can query the format using "PRAGMA encoding" - UTF-16 encodings
> will store two bytes / character, regardless of the actual  
> characters...
>
> Note that this doesn't mean your database size will shrink to half the
> size - it merely means you'll be able to fit more rows onto a single
> page, thus eventually you should see a decrease in file size when
> comparing UTF-16 vs. UTF-8 databases.
>
> BTW: are you aware that SQLite databases won't shrink by themselves?
> You'll have to vacuum them to reclaim unused space (see
> <http://www.sqlite.org/faq.html#q12
>> )
>
> HTH,
> 
>

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database file size

2008-11-28 Thread Jens Miltner

On 28.11.2008 at 09:20, Simon Bulman wrote:

> Hi Jens,
>
> Thanks for your input. UTF-8 did not make a difference. I expected that
> the SQLite file would be larger on disk than our proprietary format
> because of the overheads that you mention - I am surprised, however,
> that it is at least 7x larger.

To be honest - given your table definitions below, I'm surprised the  
database is _that_ large, too:

Table 1 - according to your definition - should contain at most about  
50 bytes of pure data per row (plus the overhead needed by SQLite).
Table 2 would only contain ~ 16 bytes of data per row.

Dividing the database disk size by the total number of rows you
mentioned indicates a whopping ~8 KB per row.

I did a quick test and created a schema similar to what you outlined  
and filled it with data (the same number of rows you mentioned and 28  
and ~20 characters per row for the two varchar columns) and my  
database ended up being 71kB in size instead of the 11.8 MB you saw...

Are there any other tables that contain non-negligible amounts of data?
Are the data sizes indeed what's indicated in the schema? (Since SQLite
doesn't really care about the varchar size constraints, you can actually
put any amount of data into a varchar(30) column.)




>
> I am actually recreating the whole database (delete file and recreate)
> programmatically so vacuuming has no effect.
>
> Cheers,
> S.
>



Re: [sqlite] Database file size

2008-11-28 Thread Simon Bulman
Hi Jens,

Thanks for your input. UTF-8 did not make a difference. I expected that
the SQLite file would be larger on disk than our proprietary format
because of the overheads that you mention - I am surprised, however,
that it is at least 7x larger.

I am actually recreating the whole database (delete file and recreate)
programmatically so vacuuming has no effect.

Cheers,
S.




Re: [sqlite] Database file size

2008-11-27 Thread Jens Miltner

On 27.11.2008 at 09:12, Simon Bulman wrote:

> I have been playing around with SQLite to use as an alternative to  
> one of
> our proprietary file formats used to read large amounts of data. Our
> proprietary format performs very badly i.e. takes a long time to  
> load some
> data; as expected SQLite is lightning quick in comparison - great!
>
> One considerable stumbling block is the footprint (size) of the  
> database
> file on disk. It turns out that SQLite is roughly 7x larger than our
> proprietary format - this is prohibitive. The data is pretty simple  
> really,
> 2 tables
>
> Table 1
>
> BIGINT (index),  VARCHAR(30), VARCHAR(10)
>
>
> Table 2
>
> BIGINT (index), FLOAT
>
>
> For a particular data set Table1 has 1165 rows and Table 2 has 323  
> rows,
> however typically Table 2 becomes bigger for larger models. The size  
> on disk
> of this file is 11.8 Mb (compared to 1.7 Mb for our proprietary  
> format). I
> have noticed that if I drop the indexes the size drops dramatically -
> however the query performance suffers to an unacceptable level.
>
> For a larger model the DB footprint is 2.2 Gb compared to 267 Mb for  
> the
> proprietary format.
>
> Does anybody have any comments on this? Are there any configuration  
> options
> or ideas I could use to reduce the footprint of the db file?


I don't think you'll be able to make SQLite as efficient (regarding  
storage size) as a custom file format, because it has to have some  
overhead for indexes, etc.

However, one thing that comes to mind is the way string data is stored:
If you're concerned about disk space and your string data is mostly
ASCII, make sure your strings are stored as UTF-8 - for ASCII string  
data, this will save you one byte per character in the string data  
storage.
To enforce UTF-8 string storage, execute "PRAGMA encoding='UTF-8'" as  
the first command when creating the database (before you create any
tables).
You can query the format using "PRAGMA encoding" - UTF-16 encodings  
will store two bytes / character, regardless of the actual characters...

Note that this doesn't mean your database size will shrink to half the  
size - it merely means you'll be able to fit more rows onto a single  
page, thus eventually you should see a decrease in file size when  
comparing UTF-16 vs. UTF-8 databases.
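
To make this concrete, here is a minimal sketch of the encoding advice above, using Python's built-in sqlite3 module (the in-memory database is purely illustrative):

```python
import sqlite3

# Sketch only: PRAGMA encoding must be issued before the first table is
# created; after that, the encoding of the database file is fixed.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA encoding = 'UTF-8'")
con.execute("CREATE TABLE t (name TEXT)")          # encoding is now locked in
con.execute("INSERT INTO t VALUES ('ascii data')")

# Reading the pragma back confirms how strings will be stored on disk.
print(con.execute("PRAGMA encoding").fetchone()[0])  # UTF-8
con.close()
```

The key point is that the pragma runs before any table exists; issued later, it is silently ignored and the file keeps its original encoding.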

BTW: are you aware that SQLite databases won't shrink by themselves?  
You'll have to vacuum them to reclaim unused space (see 
)

HTH,




[sqlite] Database file size

2008-11-27 Thread Simon Bulman
Morning,

 

I have been playing around with SQLite to use as an alternative to one of
our proprietary file formats used to read large amounts of data. Our
proprietary format performs very badly i.e. takes a long time to load some
data; as expected SQLite is lightning quick in comparison - great!

 

One considerable stumbling block is the footprint (size) of the database
file on disk. It turns out that SQLite is roughly 7x larger than our
proprietary format - this is prohibitive. The data is pretty simple really,
2 tables

 

Table 1

BIGINT (index),  VARCHAR(30), VARCHAR(10)

 

Table 2

BIGINT (index), FLOAT

 

For a particular data set Table1 has 1165 rows and Table 2 has 323 rows,
however typically Table 2 becomes bigger for larger models. The size on disk
of this file is 11.8 Mb (compared to 1.7 Mb for our proprietary format). I
have noticed that if I drop the indexes the size drops dramatically -
however the query performance suffers to an unacceptable level.

 

For a larger model the DB footprint is 2.2 Gb compared to 267 Mb for the
proprietary format.

 

Does anybody have any comments on this? Are there any configuration options
or ideas I could use to reduce the footprint of the db file?

 

Many thanks,

Simon

 

 

--

Simon Bulman

Petrel Reservoir Engineering Architect

Schlumberger

Lambourn Court, Wyndyke Furlong,

Abingdon Business Park, Abingdon,

Oxfordshire, OX14 1UJ, UK

Tel: +44 (0)1235 543 401

 

Registered Name: Schlumberger Oilfield UK PLC

Registered Office: 8th Floor, South Quay Plaza 2, 183 Marsh Wall, London.
E14 9SH

Registered in England No. 4157867

 



Re: [sqlite] database file size isn't really very small

2008-07-19 Thread Corey Nelson
Ah, ha! I had actually originally planned on doing that but when I read the
primary key could only be an integer I assumed it was a 32 bit integer so I
would need a separate column for the date. But you're right of course and I
see now that as of version 3, I can use 64 bit integers as the primary key!
This brings my database file down to a very reasonable 513024 bytes and
speeds up my code to boot.

What's more, since nearly all the prices in my data can be represented in 6
characters or less, I found I can save even more space by storing them as
strings like this:

CREATE TABLE StockName (date INTEGER PRIMARY KEY, price TEXT);

This brings my test database file size down to the bargain-basement size of
431104 bytes! Right on par with my text file format (which I will no longer
be needing).

Thanks a lot guys!

Corey
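
Corey's before/after can be sketched like this (Python's sqlite3 module; the row count, timestamps, and file names are made up for illustration - the point is only that the INTEGER PRIMARY KEY schema avoids the implicit index's second copy of every date):

```python
import os
import sqlite3
import tempfile

def build(path, schema, make_key):
    """Create a database with the given schema and return its file size."""
    con = sqlite3.connect(path)
    con.execute(schema)
    rows = [(make_key(1202680806000 + i * 1000), 96.18) for i in range(10000)]
    con.executemany("INSERT INTO StockName VALUES (?, ?)", rows)
    con.commit()
    con.close()
    return os.path.getsize(path)

with tempfile.TemporaryDirectory() as d:
    # Original schema: the UNIQUE constraint builds an index holding a
    # second copy of every 13-character date string.
    text_size = build(
        os.path.join(d, "text.db"),
        "CREATE TABLE StockName (date TEXT UNIQUE ON CONFLICT REPLACE, price REAL)",
        str)
    # Suggested schema: the date is the 64-bit rowid itself, so no extra index.
    int_size = build(
        os.path.join(d, "int.db"),
        "CREATE TABLE StockName (date INTEGER PRIMARY KEY, price REAL)",
        int)
    print(text_size > int_size)  # True: the INTEGER PRIMARY KEY file is smaller
```

The exact sizes depend on the SQLite version and page size, but the ordering holds because the second schema stores each date exactly once.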

On Fri, Jul 18, 2008 at 1:32 PM, Filip Navara <[EMAIL PROTECTED]> wrote:

> Not really two copies as the integer could be primary key ...
> something along the lines of
>
> CREATE TABLE StockName (date INTEGER PRIMARY KEY, price REAL);
>
> Regards,
> F.
>
> On Fri, Jul 18, 2008 at 10:03 PM, Jay A. Kreibich <[EMAIL PROTECTED]> wrote:
> > On Fri, Jul 18, 2008 at 11:58:16AM -0700, Corey Nelson scratched on the
> wall:
> >> > sqlite3 Ticks.db ".dump"
> >> BEGIN TRANSACTION;
> >> CREATE TABLE StockName (date TEXT UNIQUE ON CONFLICT REPLACE, price
> REAL);
> >
> >> I would expect the database file to store a bit of "extra" data but it's
> >> 2.17 times bigger than the text file!
> >
> >  As others have pointed out, the issue is with the index, which is
> >  created automatically by the "UNIQUE" constraint.
> >
> >  In SQLite an index holds a full copy of the data.  Since the vast
> >  majority of your data (byte wise) is the date string, slightly more
> >  than 2x sounds just about right.
> >
> >  You can try to convert the date to a large integer.  SQLite supports
> >  up to 64 bit ints (signed), which should hold your current
> >  representation without problems.  You'll still have two copies, but
> >  the data should be much smaller.
> >
> >   -j
> >
> > --
> > Jay A. Kreibich < J A Y  @  K R E I B I.C H >
> >
> > "'People who live in bamboo houses should not throw pandas.' Jesus said
> that."
> >   - "The Ninja", www.AskANinja.com, "Special Delivery 10: Pop!Tech 2006"
>


Re: [sqlite] database file size isn't really very small

2008-07-18 Thread John Stanton
Try making your date a REAL and using the SQLite date and time 
functions.  You will use extra space for the rowid, the key of the row 
and for the b-tree index.

You would expect the indexed rows to be about double the raw text data 
since the numbers are 64 bit FP.

Corey Nelson wrote:
> I'm developing some software that helps with day trading. I need to store
> years worth of tick prices. At first I was going to write a library that
> would write and read this information to and from files. Then I thought
> "don't be silly", this is the sort of thing databases were made for. I have
> little experience with databases but I thought surely I could find a small,
> fast, easy to use database I could use for this purpose.
> 
> It didn't take me long to get some test data into an SQLite3 database file.
> But there's a problem, the database file is almost three times bigger than
> storing the information in text files the way I had planned. Even writing
> the contents of the database to text produces a text file less than half the
> size of the database file.
> 
> 
>>sqlite3 Ticks.db ".dump"
> 
> BEGIN TRANSACTION;
> CREATE TABLE StockName (date TEXT UNIQUE ON CONFLICT REPLACE, price REAL);
> INSERT INTO "StockName" VALUES('1202680806000',96.18);
> INSERT INTO "StockName" VALUES('1202680807000',96.16);
> INSERT INTO "StockName" VALUES('1202680821000',96.15);
> [...]
> INSERT INTO "StockName" VALUES('1202767171000',96.71);
> INSERT INTO "StockName" VALUES('1202767174000',96.7);
> INSERT INTO "StockName" VALUES('1202767197000',96.68);
> COMMIT;
> 
> 
>>sqlite3 Ticks.db "select rowid,* from StockName" > Ticks.txt
>>more  Ticks.txt
> 
> 1|1202680806000|96.18
> 2|1202680807000|96.16
> 3|1202680821000|96.15
> [...]
> 12278|1202767171000|96.71
> 12279|1202767174000|96.7
> 12280|1202767197000|96.68
> 
> Ticks.db  =  1334272 bytes
> Ticks.txt  =  613702 bytes
> 
> I would expect the database file to store a bit of "extra" data but it's
> 2.17 times bigger than the text file! That means the "extra" data uses more
> room than the stuff I'm interested in. Is this size difference normal and to
> be expected or am I doing something wrong? Is this just the price I have to
> pay for the convenience of a database?
> 
> I don't think it's relevant but my software is written in Java and I'm using
> SQLiteJDBC from http://www.zentus.com/sqlitejdbc/. For the date I'm just
> storing the long returned by java.util.Date.getTime() since it is easy to
> work with and I figured that the string would be shorter than a normal date
> string.
> 
> Thanks for reading.
> 
> Corey
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] database file size isn't really very small

2008-07-18 Thread Filip Navara
Not really two copies as the integer could be primary key ...
something along the lines of

CREATE TABLE StockName (date INTEGER PRIMARY KEY, price REAL);

Regards,
F.

On Fri, Jul 18, 2008 at 10:03 PM, Jay A. Kreibich <[EMAIL PROTECTED]> wrote:
> On Fri, Jul 18, 2008 at 11:58:16AM -0700, Corey Nelson scratched on the wall:
>> > sqlite3 Ticks.db ".dump"
>> BEGIN TRANSACTION;
>> CREATE TABLE StockName (date TEXT UNIQUE ON CONFLICT REPLACE, price REAL);
>
>> I would expect the database file to store a bit of "extra" data but it's
>> 2.17 times bigger than the text file!
>
>  As others have pointed out, the issue is with the index, which is
>  created automatically by the "UNIQUE" constraint.
>
>  In SQLite an index holds a full copy of the data.  Since the vast
>  majority of your data (byte wise) is the date string, slightly more
>  than 2x sounds just about right.
>
>  You can try to convert the date to a large integer.  SQLite supports
>  up to 64 bit ints (signed), which should hold your current
>  representation without problems.  You'll still have two copies, but
>  the data should be much smaller.
>
>   -j
>
> --
> Jay A. Kreibich < J A Y  @  K R E I B I.C H >
>
> "'People who live in bamboo houses should not throw pandas.' Jesus said that."
>   - "The Ninja", www.AskANinja.com, "Special Delivery 10: Pop!Tech 2006"


Re: [sqlite] database file size isn't really very small

2008-07-18 Thread Jay A. Kreibich
On Fri, Jul 18, 2008 at 11:58:16AM -0700, Corey Nelson scratched on the wall:
> > sqlite3 Ticks.db ".dump"
> BEGIN TRANSACTION;
> CREATE TABLE StockName (date TEXT UNIQUE ON CONFLICT REPLACE, price REAL);

> I would expect the database file to store a bit of "extra" data but it's
> 2.17 times bigger than the text file!

  As others have pointed out, the issue is with the index, which is
  created automatically by the "UNIQUE" constraint.

  In SQLite an index holds a full copy of the data.  Since the vast
  majority of your data (byte wise) is the date string, slightly more
  than 2x sounds just about right.
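
The automatically created index is visible in the schema table; a small sketch (Python's sqlite3 module, reusing the thread's table definition):

```python
import sqlite3

# Sketch: the UNIQUE constraint silently creates an index; listing the
# schema objects for the table makes it visible.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE StockName (date TEXT UNIQUE ON CONFLICT REPLACE, price REAL)")
rows = con.execute(
    "SELECT type, name FROM sqlite_master WHERE tbl_name = 'StockName'").fetchall()
# The auto-index (named like sqlite_autoindex_StockName_1) appears
# alongside the table entry, and it holds a full copy of the date column.
print(rows)
con.close()
```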

  You can try to convert the date to a large integer.  SQLite supports
  up to 64 bit ints (signed), which should hold your current
  representation without problems.  You'll still have two copies, but
  the data should be much smaller.

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"'People who live in bamboo houses should not throw pandas.' Jesus said that."
   - "The Ninja", www.AskANinja.com, "Special Delivery 10: Pop!Tech 2006"


Re: [sqlite] database file size isn't really very small

2008-07-18 Thread Igor Tandetnik
Corey Nelson <[EMAIL PROTECTED]>
wrote:
> It didn't take me long to get some test data into an SQLite3 database
> file. But there's a problem, the database file is almost three times
> bigger than storing the information in text files the way I had
> planned.

Well, you can't get something for nothing. That additional space 
(occupied by  an index) gives you the ability to quickly search through 
the database.

> CREATE TABLE StockName (date TEXT UNIQUE ON CONFLICT REPLACE, price
> REAL);

If you don't want your table indexed, remove UNIQUE and ON CONFLICT 
clauses.

Igor Tandetnik





[sqlite] database file size isn't really very small

2008-07-18 Thread Corey Nelson
I'm developing some software that helps with day trading. I need to store
years worth of tick prices. At first I was going to write a library that
would write and read this information to and from files. Then I thought
"don't be silly", this is the sort of thing databases were made for. I have
little experience with databases but I thought surely I could find a small,
fast, easy to use database I could use for this purpose.

It didn't take me long to get some test data into an SQLite3 database file.
But there's a problem, the database file is almost three times bigger than
storing the information in text files the way I had planned. Even writing
the contents of the database to text produces a text file less than half the
size of the database file.

> sqlite3 Ticks.db ".dump"
BEGIN TRANSACTION;
CREATE TABLE StockName (date TEXT UNIQUE ON CONFLICT REPLACE, price REAL);
INSERT INTO "StockName" VALUES('1202680806000',96.18);
INSERT INTO "StockName" VALUES('1202680807000',96.16);
INSERT INTO "StockName" VALUES('1202680821000',96.15);
[...]
INSERT INTO "StockName" VALUES('1202767171000',96.71);
INSERT INTO "StockName" VALUES('1202767174000',96.7);
INSERT INTO "StockName" VALUES('1202767197000',96.68);
COMMIT;

> sqlite3 Ticks.db "select rowid,* from StockName" > Ticks.txt
> more  Ticks.txt
1|1202680806000|96.18
2|1202680807000|96.16
3|1202680821000|96.15
[...]
12278|1202767171000|96.71
12279|1202767174000|96.7
12280|1202767197000|96.68

Ticks.db  =  1334272 bytes
Ticks.txt  =  613702 bytes

I would expect the database file to store a bit of "extra" data but it's
2.17 times bigger than the text file! That means the "extra" data uses more
room than the stuff I'm interested in. Is this size difference normal and to
be expected or am I doing something wrong? Is this just the price I have to
pay for the convenience of a database?

I don't think it's relevant but my software is written in Java and I'm using
SQLiteJDBC from http://www.zentus.com/sqlitejdbc/. For the date I'm just
storing the long returned by java.util.Date.getTime() since it is easy to
work with and I figured that the string would be shorter than a normal date
string.

Thanks for reading.

Corey


Re: [sqlite] Database file size

2007-10-21 Thread radu_d

Do you know how I can determine programmatically if the database has a lot of
empty space? For example, how can I determine if 30% of the data space is
empty?


Trey Mack wrote:
> 
> 
>> I add records to database tables. Then when I delete them the database does
>> not reduce in size. I add BLOB elements. Do you know what can be the problem?
>> Thanks
> 
> http://www.sqlite.org/lang_vacuum.html 
> 
> 
> 
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Database-file-size-tf4666479.html#a13330456
Sent from the SQLite mailing list archive at Nabble.com.


-
To unsubscribe, send email to [EMAIL PROTECTED]
-
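
One way to answer radu_d's question programmatically (a sketch using Python's sqlite3 module; the blob sizes and temporary file are illustrative): PRAGMA freelist_count reports the pages on the free list and PRAGMA page_count the total, so their ratio approximates the empty fraction of the file.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE blobs (data BLOB)")
con.executemany("INSERT INTO blobs VALUES (?)", [(b"x" * 10000,)] * 100)
con.commit()

con.execute("DELETE FROM blobs")  # frees pages; the file does not shrink
con.commit()

free = con.execute("PRAGMA freelist_count").fetchone()[0]
total = con.execute("PRAGMA page_count").fetchone()[0]
print("%.0f%% of the file is free pages" % (100.0 * free / total))

con.execute("VACUUM")             # rebuilds the file and releases the space
con.close()
```

If the free/total ratio crosses a threshold (say 30%), that is a reasonable trigger for running VACUUM.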



Re: [sqlite] Database file size

2007-10-21 Thread Trey Mack


I add records to database tables. Then when I delete them the database does
not reduce in size. I add BLOB elements. Do you know what can be the problem?
Thanks


http://www.sqlite.org/lang_vacuum.html 






[sqlite] Database file size

2007-10-21 Thread radu_d

I add records to database tables. Then when I delete them the database does not
reduce in size. I add BLOB elements. Do you know what can be the problem?
Thanks
-- 
View this message in context: 
http://www.nabble.com/Database-file-size-tf4666479.html#a13330175
Sent from the SQLite mailing list archive at Nabble.com.





Re: [sqlite] Database File size not exceeding 2GB

2007-07-05 Thread RaghavendraK 70574
Please check whether you can create a normal file larger than 2 GB from your
program; if not, then some options must be missing.

regards
ragha






[sqlite] Database File size not exceeding 2GB

2007-07-05 Thread Krishnamoorthy, Priya (IE10)
Hi all,

 

This is with reference to the problem mentioned below:

 

1)   I am using NTFS file system

2)   I tried in windows 2000 and Windows XP

 

But still I am able to log only 2 GB of data. SQLite stops logging data
beyond that, but it does not throw any exception either.

 

Is there any setting that needs to be done in Windows XP (enable LFS, etc.)
so as to log more than 2 GB of data?

 

Please help me in this regard.

 

Regards,

Priya

 





 

I am using SQLite3 database in my application.

 

My application runs on Windows XP (32 bit) platform. I am not able to
store more than 2 GB of data in my database. Is it not possible to store
more than 2 GB of data in Windows XP?

 

I used SQlite3 in Linux and could store more than 2GB.

 

Please help me in this regard

 

Best regards,

Priya 

 

 



Re: [sqlite] Database File size not exceeding 2GB

2007-07-04 Thread Andrew Finkenstadt

According to http://en.wikipedia.org/wiki/File_Allocation_Table , the limit
on FAT16 is 2 gigabytes per file, on FAT32 it's 4 gigabytes per file, and on
NTFS it's very, very large.

In my application I needed to deal with splitting my data into 2 gigabyte
(maximum) database file sizes, and I had two options:

I could implement the DISKIO subfeature of SQLite3 which would let me
emulate a very large file system on top of smaller file chunks, or...

I could just implement a "Collection" object which implements the same C++
interface to my database, but splits the data across multiple databases,
each of which are limited in size to 2 gigabytes.

I found for my application that the latter choice was much easier and faster
to implement.

For each database file opened on a new connection, my code executes "PRAGMA
max_page_count=XXX;" after figuring out how large a page is and dividing my
desired maximum size by it.  As of the 3.4.0 release, this max_page_count is
per-connection and not per-file.  And it's only checked when allocating new
pages to the file through the pager allocation routines.
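
A sketch of that per-connection cap (Python's sqlite3 module; the 2 GB figure mirrors the split size discussed above and is illustrative):

```python
import sqlite3

MAX_BYTES = 2 * 1024**3              # example cap: 2 GB per database file

con = sqlite3.connect(":memory:")
page_size = con.execute("PRAGMA page_size").fetchone()[0]

# Limit the number of pages so that page_size * max_page_count <= MAX_BYTES.
# Note: the limit is per connection, so it must be set on every connection.
con.execute("PRAGMA max_page_count = %d" % (MAX_BYTES // page_size))
limit = con.execute("PRAGMA max_page_count").fetchone()[0]
print(limit * page_size <= MAX_BYTES)  # True
con.close()
```

Once the page limit is reached, further inserts fail with a "database or disk is full" error, which the application can catch to switch to the next file in the collection.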

--andy






Re: [sqlite] Database File size not exceeding 2GB

2007-07-04 Thread Ian Frosst

Is the file system holding your file FAT32, or NTFS?  If it's FAT32, it may
be the source of your problem, as it doesn't support very large files.

Ian




[sqlite] SQLite Database File Size

2004-07-28 Thread sankara . narayanan
Hi,

I have started to use SQLite recently. I have an interesting situation in 
deciding the database schema for my solution. In one of the tables I 
need to store contents of size 2000 * 20. If I could create 4 rows 
containing only one or two columns, my schema is quite convenient and 
extendable at a later time. If I restrict that data to be 2000 rows 
with 20 columns, my solution is not extendable at a later time because in 
future I may need to extend to 25 columns. I learn that to add a column, I 
need to create a temp table and do copy operations (I got these details 
from the SQLite FAQ).

Here is my question.

If I have 4 rows with two columns in each row, I find the db file size 
to be around 1.2 MB. But if I store the same data in 2000 rows with 21 
columns, I get a DB file size of around 300 KB. Why is there this huge 
difference in file size? I am also quite new to databases. Could any one of 
you clarify whether I am doing something wrong, or is this a feature of the 
SQLite file structure?

Please advise.

With regards,
Sankara Narayanan B