On Thu, Nov 10, 2011 at 12:28:24AM +0100, GB scratched on the wall:
> Ok, just thought it worth mentioning. But a VACUUMed database may be
> more efficient if you have multiple columns with multiple indexes,
> because otherwise you get a mixed sequence of data and index pages while
> inserting data.
Doing a file copy has similar behavior, so as long as the file is cached
everything is copacetic. That's what leads me to believe it's head thrashing
that causes this behavior.
./sqlite3 index2.db
> However, reboot again and add "select count(*) from a;" as the first line of
> gendat2.sql
Fabian wrote on 09.11.2011 23:10:
I'm running these tests on a very simple database: 1 table and 1 column, so
ANALYZE shouldn't have any effect. And I already tested running VACUUM on
the database, but it didn't help (which seems logical, because I start with
a fresh db each time).
2011/11/9 GB
> Maybe you could try to use a page size that matches the size of a disk
> allocation unit or memory page. On Windows, since a typical NTFS partition
> has a cluster size of 4 KiB - which also happens to be the size of a memory
> page - a page size of 4096 bytes seems to
On Nov 9, 2011, at 10:24 PM, Fabian wrote:
> And I'd like to avoid having some fuzzy logic
> that tries to predict which of the two methods is going to be faster.
Perhaps an alternative to your conundrum is the good, old "divide and conquer"
approach. In other words, you could partition your
Maybe you could try to use a page size that matches the size of a disk
allocation unit or memory page. On Windows, since a typical NTFS
partition has a cluster size of 4 KiB - which also happens to be the size
of a memory page - a page size of 4096 bytes seems to be a good compromise
between
On Wed, Nov 9, 2011 at 3:24 PM, Fabian wrote:
> 2011/11/9 Nico Williams
>> I don't get it. You're reading practically the whole file in a random
>> manner, which is painfully slow, so why can't you read the file in one
>> fell swoop (i.e.,
Subject: EXT :Re: [sqlite] INDEX Types
2011/11/9 Black, Michael (IS) <michael.bla...@ngc.com>
OK...you're right...a reboot kills it.
>
I'm glad someone was able to reproduce this on Linux, ruling out the
possibility that it's a Windows-specific issue.
> However, reboot again and
On Nov 9, 2011, at 10:24 PM, Fabian wrote:
> It seems that FTS doesn't need to read the whole index from
> disk, so I'm trying to pinpoint the difference. My best guess is that it
> creates a fresh b-tree for the additional inserts, causing the boost in
> performance.
Indeed.
Quoting the fine
2011/11/9 Black, Michael (IS)
OK...you're right...a reboot kills it.
>
I'm glad someone was able to reproduce this on Linux, ruling out the
possibility that it's a Windows-specific issue.
> However, reboot again and add "select count(*) from a;" as the first line
> of gendat2.sql
>
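Michael's trick of running `select count(*) from a;` first can be sketched like this (schema and row count are invented for illustration): the full scan pulls the table and index pages into the OS cache before any timed inserts begin, which only matters when the cache is cold, e.g. right after a reboot.

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "gendat2.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE a (t TEXT)")
conn.execute("CREATE INDEX a_t ON a(t)")
conn.executemany("INSERT INTO a VALUES (?)",
                 ((f"row{i:08d}",) for i in range(10000)))
conn.commit()

# Warming pass: scans the smallest covering b-tree, faulting its pages
# into the OS cache before the real workload starts.
warm = conn.execute("SELECT count(*) FROM a").fetchone()[0]
print(warm)  # 10000
conn.close()
```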
2011/11/9 Nico Williams
>
> I don't get it. You're reading practically the whole file in a random
> manner, which is painfully slow, so why can't you read the file in one
> fell swoop (i.e., sequential reads)??
>
I'm only reading the whole file when the number of
Advanced GEOINT Solutions Operating Unit
Northrop Grumman Information Systems
From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on
behalf of Fabian [fabianpi...@gmail.com]
Sent: Wednesday, November 09, 2011 1:36 PM
To: General Discussion of SQLite Database
Subject: EXT :Re
On Wed, Nov 9, 2011 at 1:53 PM, Fabian wrote:
> 2011/11/9 Nico Williams
>> What's wrong with reading the whole file into memory at boot time as a
>> way to prime the cache? Rebooting always takes some time, mostly the
>> time to read all sorts of
2011/11/9 Nico Williams
>
> What's wrong with reading the whole file into memory at boot time as a
> way to prime the cache? Rebooting always takes some time, mostly the
> time to read all sorts of files.
>
>
It's a desktop application, I cannot pre-cache anything before
Fabian,
What's wrong with reading the whole file into memory at boot time as a
way to prime the cache? Rebooting always takes some time, mostly the
time to read all sorts of files.
Nico
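Nico's suggestion, priming the OS cache with one sequential pass over the file, could look roughly like this (the `prime_cache` helper is invented for illustration, not something from the thread):

```python
import os, tempfile

def prime_cache(path, chunk=1 << 20):
    """Read the file start-to-finish so its pages land in the OS cache,
    turning SQLite's later random reads into cache hits."""
    total = 0
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            total += len(block)
    return total  # bytes read == file size

# Demo on a throwaway file standing in for the database.
demo = os.path.join(tempfile.mkdtemp(), "demo.db")
with open(demo, "wb") as f:
    f.write(b"\0" * 3000000)
bytes_read = prime_cache(demo)
print(bytes_read)  # 3000000
```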
--
___
sqlite-users mailing list
sqlite-users@sqlite.org
On Wed, Nov 9, 2011 at 12:04 PM, Fabian wrote:
> 2011/11/9 Simon Slavin
>> Didn't someone recently note that entering the first million records was
>> fast, but if he then closed and reopened the database, entering the next
>> 100,000 records was slow
On 09-11-2011 20:14, Fabian wrote:
2011/11/9 Luuk
On 09-11-2011 17:23, Black, Michael (IS) wrote:
Did you reboot between the two inserts? The difference I'm
seeing is much larger than 38%. Did you test it on Linux or Windows?
2011/11/9 Luuk
> On 09-11-2011 17:23, Black, Michael (IS) wrote:
>
>> time sqlite3
>> 19.307u 0.082s 0:19.40 99.8%  0+0k 0+0io 0pf+0w
>>
>> time sqlite3 < index2.sql
>> 19.266u 0.092s 0:19.37 99.8%  0+0k 0+0io 0pf+0w
>
> $ time sqlite3
> real    0m21.094s
> user
2011/11/9 Black, Michael (IS)
> Are you sure you're using BEGIN/COMMIT on your transactions?
>
Yes
> I just used my benchmark data and inserted another 100,000 rows into the
> database in 2.3 seconds.
That is because you immediately insert those additional rows,
Are you sure you're using BEGIN/COMMIT on your transactions?
I just used my benchmark data and inserted another 100,000 rows into the
database in 2.3 seconds.
I made 1,100,000 records and cut the last 100,000 into a separate file, with
BEGIN/COMMIT on both.
time sqlite3 index.db
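The BEGIN/COMMIT batching described above can be sketched in Python (a minimal sketch; the file and table names are invented). Without the explicit transaction, every INSERT is its own transaction and forces a sync to disk:

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "index.db")
# isolation_level=None puts the connection in autocommit mode so we can
# issue BEGIN/COMMIT ourselves, as in the shell scripts above.
conn = sqlite3.connect(path, isolation_level=None)
conn.execute("CREATE TABLE a (t TEXT)")

conn.execute("BEGIN")
conn.executemany("INSERT INTO a VALUES (?)",
                 ((f"value{i}",) for i in range(100000)))
conn.execute("COMMIT")

n = conn.execute("SELECT count(*) FROM a").fetchone()[0]
print(n)  # 100000
conn.close()
```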
2011/11/9 Simon Slavin
>
> Didn't someone recently note that entering the first million records was
> fast, but if he then closed and reopened the database, entering the next
> 100,000 records was slow ?
>
>
Yes, and there is still no real explanation for it, other than
On 9 Nov 2011, at 4:42pm, Fabian wrote:
> 2011/11/9 Black, Michael (IS)
>
>> Hmmm...appears to be the same for this case which, I must say, I find
>> surprising.
>
> Thanks for actually benchmarking it. I'm also a bit surprised, because I
> always thought SQLite
2011/11/9 Black, Michael (IS)
> Hmmm...appears to be the same for this case which, I must say, I find
> surprising.
>
>
Thanks for actually benchmarking it. I'm also a bit surprised, because I
always thought SQLite handled INTEGER more efficiently than TEXT.
I also did
> But I'm wondering if SQLite can deal more efficiently with an INTEGER index
> (64 bits) vs. an 8-byte TEXT column (also 64 bits). I know the INTEGERs
> require less disk space because SQLite can store smaller values in fewer
> bytes, but are there any other differences that make them more preferable
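A rough, unscientific way to see the size difference being asked about is to build the same index once with INTEGER keys and once with fixed 8-byte TEXT keys, then compare file sizes (everything here is an invented micro-benchmark, not from the thread). Small integers are stored as variable-length serial types, so both the table and the index records come out shorter:

```python
import sqlite3, tempfile, os

def db_size(values):
    """Build a one-column indexed table from `values`, return file size."""
    path = os.path.join(tempfile.mkdtemp(), "t.db")
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE a (v)")
    conn.execute("CREATE INDEX a_v ON a(v)")
    conn.executemany("INSERT INTO a VALUES (?)", ((v,) for v in values))
    conn.commit()
    conn.close()
    return os.path.getsize(path)

n = 50000
int_size = db_size(range(n))                      # varint-coded integers
txt_size = db_size(f"{i:08d}" for i in range(n))  # 8-byte TEXT keys
print(int_size < txt_size)  # True
```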
On 9 Nov 2011, at 3:23pm, Fabian wrote:
> I'm having an issue where inserts into an un-cached database are very slow.
> The reason probably is that a large part of the existing index needs to be
> read from disk to be able to insert new rows into the index.
Are you doing a lot of INSERT commands
I'm having an issue where inserts into an un-cached database are very slow.
The reason probably is that a large part of the existing index needs to be
read from disk to be able to insert new rows into the index. The length of
the values in the indexed column is around 60 bytes, so I'm thinking about
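One way to shrink a 60-byte indexed value down to a fixed-size INTEGER key, in the spirit of the INTEGER-vs-TEXT question above, is to hash it to 64 bits (the `key64` helper is hypothetical; the thread does not show the author's actual approach, and hash keys only support equality lookups, not range scans):

```python
import hashlib

def key64(text):
    """Map an arbitrary-length value to a signed 64-bit integer that fits
    SQLite's INTEGER type (hypothetical helper, not from the thread)."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()[:8]
    return int.from_bytes(digest, "big", signed=True)

k = key64("x" * 60)
print(-(1 << 63) <= k < (1 << 63))  # True
```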