On Tue, Aug 4, 2015 at 10:45 AM, Simon Slavin <slavins at bigfraud.org> wrote:

>
> On 3 Aug 2015, at 1:58pm, Linquan Bai <linquan.bai at gmail.com> wrote:
>
> > I am trying to read large data from the database about 1 million records.
> > It takes around 1min for the first time read. But if I do the same
> process
> > thereafter, the time is significantly reduced to 3 seconds. How can I
> get a
> > fast speed for the first time read?
>
> You can't.  Some part of your computer has pulled that data into cache,
> and it's still in the cache when you run the process again, so the data
> doesn't need to be fetched from disk again.
>

That sounds correct to me. I don't know which OS the OP is running (likely
Windows, which I don't know well). But I wonder if there is a way to run a
program before he runs his application which tells the OS to "preload" the
file into the RAM cache. On Linux, I might do something like: "dd
if=/path/to/sqlite-database.sqlite3 of=/dev/null bs=1024 count=100", which
would, as a side effect, pull the first 100 KiB of the file into RAM.
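To warm the cache for the whole database rather than just the first 100 KiB, the same trick can be applied without a count limit. This is only a sketch; the path is a placeholder you would substitute with your own file, and whether it helps depends on the file fitting in free RAM:

```shell
# DB is a hypothetical path -- substitute your own database file.
DB=/path/to/sqlite-database.sqlite3

# Read the whole file once and discard the bytes; the only effect we
# want is the side effect of pulling its pages into the OS page cache.
dd if="$DB" of=/dev/null bs=1M
```

A plain `cat "$DB" > /dev/null` does the same thing; `dd` just gives you control over the block size and an easy way to limit how much is read.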

Of course, there is an expensive way: put your SQLite database on a PCIe
SSD drive (SATA III tops out at 6 Gb/s). The lowest cost for a reasonably
sized one which I saw on Amazon was USD 250 for 240 GiB.
http://smile.amazon.com/Kingston-Digital-Predator-SHPM2280P2H-240G/dp/B00V01C4RK
That is the price for me as an Amazon Prime member.

>
> If you tell us how big an average row is (how many bytes of data) then we
> can tell you whether 1 million rows in 1 minute is a reasonable time.  But
> my guess is that that's more or less the time you'd expect.
>
> Simon.
>
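To put rough numbers on Simon's point, here is a back-of-envelope estimate. The row size and timings are assumptions for illustration only (the OP has not told us the real figures):

```shell
# Hypothetical figures: 1,000,000 rows of ~200 bytes each,
# read in 60 seconds on the first (cold-cache) pass.
rows=1000000
bytes_per_row=200
seconds=60

# Total data volume in MiB (integer arithmetic).
total_mib=$(( rows * bytes_per_row / 1048576 ))
echo "total: ${total_mib} MiB"                        # -> total: 190 MiB
echo "throughput: $(( total_mib / seconds )) MiB/s"   # -> throughput: 3 MiB/s
```

A few MiB/s is far below a spinning disk's sequential rate, which is what you would expect if the reads involve a lot of seeking; once the file is in the page cache, the second pass is limited by RAM speed instead, which matches the 1 minute vs. 3 seconds the OP reports.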


-- 

Schrodinger's backup: The condition of any backup is unknown until a
restore is attempted.

Yoda of Borg, we are. Futile, resistance is, yes. Assimilated, you will be.

He's about as useful as a wax frying pan.

10 to the 12th power microphones = 1 Megaphone

Maranatha! <><
John McKown
