Hi,
I am using the SQLite GUI and clicked Import Table from CSV. I selected a .txt
file that contains a list of over 200,000 words. SQLite works fine with a
smaller list of 200-300 words, but when I import my big list it hangs for
ages or completely crashes my computer.
Does anyone know how I can impo
Ashish Singh wrote:
> 1: After the installation, where can I go to access the SQLite
> program on Red Hat Linux?
After installation (of the binary packages, at least), the command-line
tools will be available.
You also (and most importantly) have the SQLite libraries for use in
your own code...
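If you'd rather start from a scripting language than the C API, here's a minimal sketch using the sqlite3 module that ships with Python (the table and column names are just for illustration):

```python
import sqlite3

# Open (or create) a database file; ":memory:" gives a throwaway in-memory db.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE words (word TEXT)")
con.execute("INSERT INTO words VALUES (?)", ("hello",))
print(con.execute("SELECT count(*) FROM words").fetchone()[0])  # 1
con.close()
```

The same library that the command-line shell uses sits underneath, so anything you can do in the shell you can do from code.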
"Roger Binns" <[EMAIL PROTECTED]> wrote:
> Is it safe to have one sqlite3* in one thread doing various operations
> and then calling sqlite3_interrupt from another thread on that same
> sqlite3* pointer? Or does sqlite3_interrupt always have to be called
> in the same thread as the operations?
>
Hello everybody,
I am a new user to Linux, and I am trying to install SQLite on Red Hat
Linux.
My questions are:
1: After the installation, where can I go to access the SQLite
program on Red Hat Linux?
2: Is there anything else I need to do after $ make install and $ /sbin
/ldconfig?
3: I need to populate
> ... should I be using commit before end ...
Commit and end are synonyms; you don't need both.
> ... this is single user, I assume using immediate or exclusive is ok ...
It's OK but not necessary. A simple begin is just as good, since
no one else can apply a lock before yours is upgraded.
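You can see both points with Python's built-in sqlite3 module in manual-transaction mode (the table name here is made up):

```python
import sqlite3

# isolation_level=None disables the module's implicit transactions,
# so BEGIN/END pass straight through to SQLite.
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE t (x)")
con.execute("BEGIN")  # a plain BEGIN is enough in single-user use
con.execute("INSERT INTO t VALUES (1)")
con.execute("END")    # END is a synonym for COMMIT; no separate COMMIT needed
print(con.execute("SELECT count(*) FROM t").fetchone()[0])  # 1
```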
Reg
Is it safe to have one sqlite3* in one thread doing various operations
and then calling sqlite3_interrupt from another thread on that same
sqlite3* pointer? Or does sqlite3_interrupt always have to be called
in the same thread as the operations?
The doc doesn't say anything either way. The code
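For what it's worth, the Python binding exposes the same call as Connection.interrupt(), and there it does work when called from a different thread than the one running the query. A small sketch (this demonstrates the wrapper's behavior under those assumptions, not a guarantee about the C API's thread-safety contract):

```python
import sqlite3
import threading
import time

# One connection shared across threads; check_same_thread=False allows this.
con = sqlite3.connect(":memory:", check_same_thread=False)
errors = []

def worker():
    try:
        # An unbounded recursive CTE: this query never finishes on its own.
        con.execute(
            "WITH RECURSIVE c(x) AS (SELECT 1 UNION ALL SELECT x+1 FROM c) "
            "SELECT count(*) FROM c"
        ).fetchone()
    except sqlite3.OperationalError as e:
        errors.append(str(e))

t = threading.Thread(target=worker)
t.start()
time.sleep(0.3)
con.interrupt()  # called from a different thread than the running query
t.join()
print(errors)    # the query ends with an "interrupted" error
```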
OMG! I can't believe the speed difference!
I wrapped my inserts in a transaction, 3000 per batch, and ran three
batches, roughly:
for (y = 0; y < 3; y++)
{
    begin immediate transaction
    {
        for (x = 0; x < 3000; x++)
            insert record
    }
    end transaction
}
and inserting those 9000 records took maybe a second!!!
Tony Harris wrote:
> Hi all,
> I have a database where one table stores the bulk of logged data; it has 14
> columns, 11 integers and 3 reals.
> Initially I'm gonna have to import a lot of data into this database - about
> 9,000 entries per month's worth of data. I already know 3K entries takes about
* Tony Harris <[EMAIL PROTECTED]> [2006-06-24 19:05]:
> Is this about average, or is there a way I might be able to get
> a little more speed out of it?
Put a transaction around your INSERTs, at least around batches of
a few thousand each, and you’ll get much better speed.
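A sketch of that pattern with Python's built-in sqlite3 module (the column layout here is invented; a real table would use the 14 logging columns):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log (id INTEGER, value REAL)")

rows = [(i, i * 0.5) for i in range(9000)]

# One explicit transaction around the whole batch means a single
# commit/fsync at the end instead of one per INSERT.
with con:  # opens a transaction, commits on success, rolls back on error
    con.executemany("INSERT INTO log VALUES (?, ?)", rows)

print(con.execute("SELECT count(*) FROM log").fetchone()[0])  # 9000
```

Without the transaction, each INSERT is its own implicit transaction and pays the full durability cost, which is where the huge slowdown comes from.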
Regards,
--
Aristotle
Hi all,
I have a database where one table stores the bulk of logged data; it has 14
columns, 11 integers and 3 reals.
Initially I'm gonna have to import a lot of data into this database - about
9,000 entries per month's worth of data. I already know 3K entries takes about
192KBytes in the db f
> I have played with the new virtual table interface from CVS and found some
> shortcomings in the current implementation of the xRowid method:
>
> int (*xRowid)(sqlite3_vtab_cursor*, sqlite_int64 *pRowid);
>
> As far as I understand, this function is called by SQLite whenever it needs a
Chris Werner <[EMAIL PROTECTED]> wrote:
> So...
>
> sqlite-3.3.5
>
> Would you expect ./testfixture ./test/quick.test to pass with "0 errors out
> of 24922 tests"
>
> But ./testfixture ./test/all.test to bail abruptly with:
>
> btree2-2.5... Ok
> btree2-2.6...
> Error: invalid command name "bt