On Wed, Jul 14, 2010 at 12:34:05PM -0500, Black, Michael (IS) scratched on the
wall:
> Was that a facetious remark???
>
> Rather than "here's a function/pragma that allows you to put the
> journal file where you want -- but BE CAREFUL BECAUSE..."
>
> Writing your own VFS is not for the casual
Jolly good...just the noose we need :-) I also was thinking of memory mode for
the journal. That's probably the best solution for this kind of case. Though
I think the original poster was wanting a recovery mechanism.
Michael D. Black
Senior Scientist
Northrop Grumman Mission Systems
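For anyone following along, the in-memory journal mode mentioned above is a one-line pragma. A minimal sketch in Python's stdlib sqlite3 module (the file path is just an illustration); note the trade-off: with the journal in RAM there is nothing on disk to roll back from if the process dies mid-transaction, so this only suits the import-then-discard-on-failure style of use discussed here.

```python
import os
import sqlite3
import tempfile

# Hypothetical file location; any on-disk database works the same way.
path = os.path.join(tempfile.mkdtemp(), "example.db")

conn = sqlite3.connect(path)
# Keep the rollback journal in RAM instead of beside the database file.
# Fast, and nothing is written next to the db -- but a crash mid-transaction
# leaves no journal on disk to recover from.
conn.execute("PRAGMA journal_mode=MEMORY")
mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
print(mode)  # → memory
conn.close()
```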
On Wed, Jul 14, 2010 at 1:51 PM, Richard Hipp wrote:
>
>
> On Wed, Jul 14, 2010 at 1:34 PM, Black, Michael (IS) <
> michael.bla...@ngc.com> wrote:
>
>> Was that a facetious remark???
>>
>> Rather than "here's a function/pragma that allows you to put the journal
>> file where you
It sounds like you've been drinking Microsoft's kool-aid -- don't let the user
do what they need to do...it's too dangerous. Just try and print a directory
list from your file browser in Windows for example (not that it's
dangerous..but it's pretty obvious someone might want to do it).
On 14 Jul 2010, at 5:02pm, Black, Michael (IS) wrote:
> Does anybody know how to make the journal file go to a different location
> than the database? Apparently it's not treated as a "temporary" file.
> Perhaps it should be??
It's essential not to treat the journal file as a temporary file.
On Wed, Jul 14, 2010 at 1:34 PM, Black, Michael (IS) wrote:
> Was that a facetious remark???
>
> Rather than "here's a function/pragma that allows you to put the journal
> file where you want -- but BE CAREFUL BECAUSE..."
>
> Writing your own VFS is not for the casual
Was that a facetious remark???
Rather than "here's a function/pragma that allows you to put the journal file
where you want -- but BE CAREFUL BECAUSE..."
Writing your own VFS is not for the casual user...
I was trying just to find where the journal filename was created but there's no
On Wed, Jul 14, 2010 at 12:31 PM, Black, Michael (IS) <
michael.bla...@ngc.com> wrote:
> If you could set the journal location BEFORE you open the database that
> wouldn't be such a bad thing. Giving us the ability to do this would allow
> for the flexibility when needed with appropriate
Yuck...do you know what bandwidth you have? Looks kind of like a T1 line to me.
That should allow 100Meg to come across in about 13 minutes if it's not being
used for anything else. Unless you're monitoring time usage in your
application how do you know where your time is being spent? If
Werner Smit wrote:
> After taking out count(*) and adding a few pragmas and saving 6000
> records rather than 500 at a time I've got it down to 34 minutes.
> If I build it on a local drive it takes 28 minutes (with chunks of 500).
Why not do an apples-to-apples test and commit the same number
If you don't know why it's slow you'll be shooting in the dark. And
doing compression on a local network link isn't likely to buy you much.
Might even hurt.
In other words, is it latency or bandwidth? Given 8K/sec I'm guessing
it's latency unless you're running a 64KBit line. Are you THAT
If you could set the journal location BEFORE you open the database that
wouldn't be such a bad thing. Giving us the ability to do this would allow for
the flexibility when needed with appropriate warnings about how to recover.
In particular, if you only access the database through your own
Black, Michael (IS) wrote:
> Does anybody know how to make the journal file go to a different
> location than the database? Apparently it's not treated as a "temporary"
> file. Perhaps it should be??
Seems like you'd have to communicate the journal location to other
processes, meaning
Now I'm confused...how can you be 50% faster if 90% of the time is in
retrieving from Oracle?
Michael D. Black
Senior Scientist
Northrop Grumman Mission Systems
From: sqlite-users-boun...@sqlite.org on behalf of Werner Smit
Sent: Wed 7/14/2010 11:11 AM
To:
>>According to my math your final database size should be on the order of 100Meg?
>>
>>That means at 200 minutes and 1,000,000 records:
>>83 inserts per second
>>8333 bytes per second
>>Both of these values are terrible.
>>#1 What kind of network connection do you have? 100BaseT?
>>#2 What
Can you answer #3 though?
Does anybody know how to make the journal file go to a different location than
the database? Apparently it's not treated as a "temporary" file. Perhaps it
should be??
Michael D. Black
Senior Scientist
Northrop Grumman Mission Systems
>>On 14 Jul 2010, at 11:55am, Werner Smit wrote:
>> ps. When I started with sqlite it took 500 minutes to save the 1 million
>> records.
>> I've got it down to just less than 200 minutes with current settings.
>> Clarion does it in between 100 and 200 minutes.
>Do you have any indexes defined?
On 14 Jul 2010, at 11:55am, Werner Smit wrote:
> ps. When I started with sqlite it took 500 minutes to save the 1 million
> records.
> I've got it down to just less than 200 minutes with current settings.
> Clarion does it in between 100 and 200 minutes.
Do you have any indexes defined? It
-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Paul Corke
Sent: 14 July 2010 01:03 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Sqlite Insert Speed Optimization
On 14 July 2010 11:56, Werner Smit wrote:
> 3. I'm saving to a network drive.
Is this a one-off data import into a new clean sqlite db? If so have
you considered writing to a db file on a local drive and then copying
the whole file to the network drive afterwards?
Paul.
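Paul's build-locally-then-copy suggestion can be sketched as follows (Python's stdlib sqlite3 for brevity; the paths are placeholders, with a temp directory standing in for both the local drive and the network share). The point is that all the random journal/page I/O happens on the fast local disk, and the share only ever sees one sequential file copy.

```python
import os
import shutil
import sqlite3
import tempfile

workdir = tempfile.mkdtemp()                      # stand-in for a local drive
local_path = os.path.join(workdir, "import_local.db")

# 1. Do the whole import against a file on the local drive.
local = sqlite3.connect(local_path)
local.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, val TEXT)")
with local:  # a single transaction around the whole batch
    local.executemany("INSERT INTO t(val) VALUES (?)",
                      (("row%d" % i,) for i in range(1000)))
local.close()

# 2. One sequential copy to the (here simulated) network share.
share_path = os.path.join(workdir, "on_network_share.db")
shutil.copyfile(local_path, share_path)

check = sqlite3.connect(share_path)
print(check.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # → 1000
```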
> Thanks for all the feedback!
>
> It helped a lot.
> 1. I'm going to try and see what happens if I leave the "end transaction"
> until 5 inserts were done.
This is what I meant also when I said 500 was too small.
John
Thanks for all the feedback!
It helped a lot.
1. I'm going to try and see what happens if I leave the "end transaction"
until 5 inserts were done.
2. I'm going to increase cache_size from 8192 to 16384
Extra info,
1. This program saved data to a Clarion file before and in sqlite it's about
-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of John Drescher
Sent: Tuesday, July 13, 2010 12:37 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Sqlite Insert Speed Optimization
> I also wrap my stateme
Griggs, Donald wrote:
> I guess I was wondering if the fastest records-per-transaction value
> would depend on the page cache and be more or less independent of the
> total records to be imported.
I think the page cache is one of a great many variables.
> So, the records-per-transaction
Griggs, Donald wrote:
> Is the percentage of the final rowcount really a criterion?
The answer to that, according to my brief exploration, is somewhere between
"yes" and "very much yes", depending on various factors.
Thanks, Eric. I guess I was wondering if the fastest
Werner Smit wrote:
> My question(s)
> If I want to use the "insert or replace" to populate my database of
> around 1 million records.
> And I want to do it as fast as possible.
> What are all the tricks I can use?
Obey the first rule of optimization: don't do it unless you're
sure you
On 13 Jul 2010, at 5:33pm, Werner Smit wrote:
> I currently use the following pragmas (for speed)
> temp_store = 2
> page_size=4096
> cache_size=8192
> synchronous=off
>
> Any others I could try?
Don't get too involved in the PRAGMAs until you have a good reason to. The
default values are
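For reference, the pragmas Werner quoted above look like this when spelled out (a sketch in Python's stdlib sqlite3; an in-memory database here purely for illustration). As noted above, these are worth touching only with a reason: `synchronous=OFF` and an in-memory temp_store trade crash-safety for speed, and `page_size` only takes effect if set before the first table is written.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

conn.execute("PRAGMA temp_store = 2")     # 2 = MEMORY
conn.execute("PRAGMA page_size = 4096")   # must precede the first write
conn.execute("PRAGMA cache_size = 8192")  # unit is pages, not bytes
conn.execute("PRAGMA synchronous = OFF")  # no fsync; risky on power loss

print(conn.execute("PRAGMA cache_size").fetchone()[0])  # → 8192
```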
On Tue, Jul 13, 2010 at 12:48 PM, John Drescher wrote:
>> I was wondering if that's really so. Wouldn't the marginal speed
>> improvement be quite small? Is the percentage of the final rowcount really
>> a criterion?
>
> Each transaction costs at least 1 disk seek. Doing
> I was wondering if that's really so. Wouldn't the marginal speed improvement
> be quite small? Is the percentage of the final rowcount really a criterion?
Each transaction costs at least 1 disk seek. Doing thousands of seeks
would be very slow.
John
> After these 500 I take a few seconds to read more data so sqlite should
> have time to do any housekeeping it might need.
SQLite is not a Database Server. It has no background threads. So it
can't do any "housekeeping" until you call some sqlite3_* function.
Pavel
On Tue, Jul 13, 2010 at
> I also wrap my statements (about 500 inserts at a time) with a begin/end
> transaction.
> After these 500 I take a few seconds to read more data so sqlite should
> have time to do any housekeeping it might need.
>
Wrap more into a transaction. 500 is too small a percentage of a million.
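The batching advice in this thread boils down to one explicit transaction around the whole load (sketched in Python's stdlib sqlite3; table and row counts are made up for illustration). One commit means one sync and one seek, instead of one per 500-row chunk.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, val TEXT)")

rows = [("val%d" % i,) for i in range(10000)]

# One explicit transaction around the whole batch: one commit (and its
# disk seek) total, instead of one every 500 rows.
with conn:
    conn.executemany("INSERT OR REPLACE INTO t(val) VALUES (?)", rows)

print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # → 10000
```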