Thanks for the hints so far.
Here is my existing Mysql setup and what I've tried with Sqlite
*Mysql Table structure:*
CREATE TABLE `mydb`.`mytable` (
`c1`
On 22/02/2009 7:48 PM, Kim Boulton wrote:
> Thanks for the hints so far.
>
> Here is my existing Mysql setup and what I've tried with Sqlite
>
> *Mysql Table structure:*
> CREATE TABLE `mydb`.`mytable` (
> `c1`
>
On Sun, 22 Feb 2009 08:48:00 +, Kim Boulton
wrote in k...@jesk.co.uk, General Discussion
of SQLite Database :
> *Then queried the Sqlite3 table with:*
> PRAGMA cache_size = 2000; */uses up 1.5GB RAM regardless*/
cache_size is expressed in number of pages, not bytes.
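That point can be sanity-checked with Python's sqlite3 module: since cache_size counts pages, multiplying it by page_size gives the approximate cache ceiling in bytes.

```python
import sqlite3

con = sqlite3.connect(":memory:")
page_size = con.execute("PRAGMA page_size").fetchone()[0]
con.execute("PRAGMA cache_size = 2000")
cache_pages = con.execute("PRAGMA cache_size").fetchone()[0]
# cache_size counts pages, so the memory ceiling is pages * page_size bytes
cache_bytes = cache_pages * page_size
print(cache_pages, page_size, cache_bytes)
con.close()
```

With the default page size this is far less than 1.5 GB, so a cache_size of 2000 cannot by itself explain the memory use reported above.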
For starters, I think that loading the index into the cache in
MySQL is biasing your performance measures. SQLite will automatically
load pages of any necessary indexes into memory as part of executing
the query, but doing so takes time. By preloading the index on MySQL,
you're removing that cost from the comparison.
I deal with a relatively small database that fits into cache. The database is
accessed by a single application and a single thread for both read and write. I
also use PRAGMA locking_mode=exclusive.
As far as I understand, between transactions the cache is invalidated and all
pages are removed. I
On Sun, Feb 22, 2009 at 01:29:09PM +0100, Kees Nuyt wrote:
> > PRAGMA page_size = 2000; /*this doesn't make any difference*/
>
> PRAGMA page_size will only make a difference if you use it
> when creating the database (before the first table is
> created), or just before a VACUUM statement.
>
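Kees's point is easy to demonstrate with Python's sqlite3 module: on a fresh database file the pragma sticks because it is issued before the first table exists.

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "fresh.db")
con = sqlite3.connect(path)
# The pragma is issued before any table exists, so it takes effect
con.execute("PRAGMA page_size = 8192")
con.execute("CREATE TABLE t (x)")
con.commit()
size = con.execute("PRAGMA page_size").fetchone()[0]
con.close()
print(size)  # 8192
```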
Would loading the 30 million row CSV file via the command line be wrapped
inside a single transaction, thus building a very large rollback log? I like to
break my bulk loads into nice chunks.
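A chunked load along those lines might look like this (the one-column table `mytable` and the 50,000-row chunk size are illustrative, not from the original post):

```python
import sqlite3

def bulk_load(con, rows, chunk=50000):
    """Insert rows in chunks, committing after each chunk so the
    rollback journal covers one chunk at a time, not the whole load."""
    cur = con.cursor()
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == chunk:
            cur.executemany("INSERT INTO mytable VALUES (?)", batch)
            con.commit()
            batch.clear()
    if batch:  # flush the final partial chunk
        cur.executemany("INSERT INTO mytable VALUES (?)", batch)
        con.commit()

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (c1)")
bulk_load(con, ((i,) for i in range(120000)), chunk=50000)
count = con.execute("SELECT count(*) FROM mytable").fetchone()[0]
print(count)  # 120000
```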
Also it could be the MySQL parser does a better job of optimizing the
unusual select.
On Sun, Feb 22,
Greetings!
I have a very small database, well, 62.1 MB (65,150,976 bytes), and when I
do a search on it, the CPU only goes to 7% or 12% at the most. Sometimes
it takes a few seconds to return the results. I am not complaining
about the 5-10 seconds it takes to return the result, but
Since the dawn of digital computers the CPU has been waiting on the I/O.
Want to go faster? Get a faster mass storage device. Then your CPU usage
will most likely jump all the way up to 9% - 14%!
You can't believe what a 300 card per minute 80 column card reader does to
throughput when you use
On Sun, Feb 22, 2009 at 2:14 PM, Fred Williams wrote:
> Since the dawn of digital computers the CPU has been waiting on the I/O.
> Want to go faster? Get a faster mass storage device. Then your CPU usage
> will most likely jump all the way up to 9% - 14%!
>
> You can't
Hello!
On Sunday 22 February 2009 22:56:36 jose isaias cabrera wrote:
> I have left everything default, so I have not set any PRAGMA settings.
Try this:
pragma cache_size=1;
pragma page_size=16384;
vacuum;
Best regards.
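The page_size part of that recipe can be tried with Python's sqlite3 module; on an existing, populated database the new page_size only takes effect once VACUUM rebuilds the file.

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "existing.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (x)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
con.commit()
before = con.execute("PRAGMA page_size").fetchone()[0]
con.execute("PRAGMA page_size = 16384")
con.execute("VACUUM")  # VACUUM rebuilds the file with the new page size
after = con.execute("PRAGMA page_size").fetchone()[0]
con.close()
print(before, after)
```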
Suppose I have 2 columns: folders and tags:
~/Music|classic,rock
~/Music|classic,rock
~/Pictures|art,photos
~/Pictures|art,photos
~/Pictures|art,photos
To know the folder count I do:
sqlite3 test.db "select folders, count(folders) from t1 group by folders"
Returns:
~/Music|2
~/Pictures|3
How
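Assuming the column is named `folders` throughout (the posted query mixes `folder` and `folders`), the example data and counts can be reproduced with Python's sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (folders TEXT, tags TEXT)")
rows = [
    ("~/Music", "classic,rock"),
    ("~/Music", "classic,rock"),
    ("~/Pictures", "art,photos"),
    ("~/Pictures", "art,photos"),
    ("~/Pictures", "art,photos"),
]
con.executemany("INSERT INTO t1 VALUES (?, ?)", rows)
# GROUP BY collapses duplicate folder rows and counts each group
counts = dict(con.execute(
    "SELECT folders, count(*) FROM t1 GROUP BY folders"))
print(counts)  # {'~/Music': 2, '~/Pictures': 3}
```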
"Yuzem" wrote in
message news:22153722.p...@talk.nabble.com
> Suppose I have 2 columns: folders and tags:
> ~/Music|classic,rock
> ~/Music|classic,rock
> ~/Pictures|art,photos
> ~/Pictures|art,photos
> ~/Pictures|art,photos
>
> To know the folder count I do:
> sqlite3
On Feb 22, 2009, at 9:40 PM, efimd wrote:
>
> I deal with a relatively small database that fits into cache. The
> database is accessed by a single application and a single thread for
> both read and write. I also use PRAGMA locking_mode=exclusive.
> As far as I understand, between
On Sun, Feb 22, 2009 at 11:44 PM, jose isaias cabrera
wrote:
>>
>> Try this:
>> pragma cache_size=1;
>> pragma page_size=16384;
>> vacuum;
>
> Wow, thanks. That did help. Cool, so there was something that I could do.
> Here is a question: this DB is shared by other
Hello,
Thanks
The grep regex on the text file found around 10,000 lines over 5 minutes
(out of a total possible 200,000 rows), at which point I stopped it;
an interesting experiment anyway :-)
Using OR instead of UNION in MySQL is definitely slower, although maybe,
as someone has mentioned
You have a good point there, I've been using that wrongly. I'll try that.
Thanks
Kees Nuyt wrote:
> On Sun, 22 Feb 2009 08:48:00 +, Kim Boulton
> wrote in k...@jesk.co.uk, General Discussion
> of SQLite Database :
>
>
>> *Then queried the
Sorry, I forgot to mention that I was running the query twice, first to
cache it, second to measure the speed.
Yeah, it's a horrible query but it produces the fastest results in MySQL.
Maybe using OR instead works better on SQLite; working on that one.
The problem I have with indexes in SQLite is that
Hi, all SQLite3 users and experts,
I need your help, please give me a hand, thanks!
I am porting sqlite3_6_11 (amalgamation) to VxWorks (version 5.5).
Status of the port: SQLite compiled successfully with Tornado 2.22.
But when I ran the test
Woo, that's a bit better. Setting the page_size = 4096 gets the query
time down to 17 seconds, so it's now faster than Mysql. Result!
I also tried changing the query to:
SELECT * FROM mytable WHERE (c5 = 'W' AND c6 IN ('C','E','F') AND c7 IN
('S','Z')) OR
(c3 = 'W' AND c4 IN ('C','E','F') AND
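The message is cut off mid-query; as a sketch, with the second branch completed symmetrically (an assumption, not from the original post), one can check that the OR form returns the same rows as the equivalent UNION form:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (c3, c4, c5, c6, c7)")
rows = [
    ("W", "C", "X", "X", "X"),  # matches only the c3/c4 branch
    ("X", "X", "W", "E", "S"),  # matches only the c5/c6/c7 branch
    ("W", "F", "W", "C", "Z"),  # matches both branches
    ("X", "X", "X", "X", "X"),  # matches neither
]
con.executemany("INSERT INTO mytable VALUES (?,?,?,?,?)", rows)
# An index per branch lets SQLite consider each OR term separately
con.execute("CREATE INDEX i1 ON mytable (c5, c6, c7)")
con.execute("CREATE INDEX i2 ON mytable (c3, c4)")

or_query = """
SELECT * FROM mytable
WHERE (c5 = 'W' AND c6 IN ('C','E','F') AND c7 IN ('S','Z'))
   OR (c3 = 'W' AND c4 IN ('C','E','F'))
"""
union_query = """
SELECT * FROM mytable WHERE c5 = 'W' AND c6 IN ('C','E','F') AND c7 IN ('S','Z')
UNION
SELECT * FROM mytable WHERE c3 = 'W' AND c4 IN ('C','E','F')
"""
a = sorted(con.execute(or_query))
b = sorted(con.execute(union_query))
print(a == b)  # True
```

Note that UNION additionally removes fully duplicate rows, so the two forms only coincide exactly when the table has no identical rows matching a branch.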
Hi,
I am doing some performance analysis on my SQLite queries. The problem is
that SQLite seems to be caching query results.
I tried restarting my program; that did not help. Only if I don't access
the database for 2 days do I get proper profile data.
*Question*
1) How can I disable query
On 23/02/2009 5:14 PM, Kim Boulton wrote:
> Hello,
>
> Thanks
>
> The grep regex on the text file found around 10,000 lines over 5 minutes
> (out of a total possible 200,000 rows), at which time I stopped it,
> interesting experiment anyway :-)
Uh-huh ... so you'd estimate that it would take