> Would there be any benefit to "pre-sorting" in this scenario?
somewhat related discussion:
http://www.mail-archive.com/sqlite-users@sqlite.org/msg19111.html
Shane Harrelson wrote:
We've declared an INTEGER column as the PRIMARY KEY on all tables as
below:
CREATE TABLE Strings ( StringId INTEGER PRIMARY KEY, Value VARCHAR(30)
UNIQUE )
It is my understanding that this makes it an alias to the "internal"
ROWID used by SQLite. On insertions we
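That understanding is correct: a column declared INTEGER PRIMARY KEY is an alias for SQLite's rowid, which is easy to verify directly. A minimal sketch, using Python's sqlite3 module as a stand-in for the C API used in this thread, with the table from Shane's message:

```python
import sqlite3

# Verify that StringId is an alias for the rowid: selecting both names
# returns the same underlying value.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE Strings (StringId INTEGER PRIMARY KEY, Value VARCHAR(30) UNIQUE)"
)
con.execute("INSERT INTO Strings(Value) VALUES ('foo')")
string_id, rowid = con.execute(
    "SELECT StringId, rowid FROM Strings WHERE Value = 'foo'"
).fetchone()
assert string_id == rowid  # same column, two names
```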
--- Shane Harrelson <[EMAIL PROTECTED]> wrote:
> On 2/5/07, Joe Wilson <[EMAIL PROTECTED]> wrote:
> > > Yes, it's typical. Each database instance is typically composed of
> > > around 50k records, all inserted in a single pass. If I could do
> > > larger transactions (or not do them at all) I
--- Shane Harrelson <[EMAIL PROTECTED]> wrote:
> On 2/5/07, Dennis Cote <[EMAIL PROTECTED]> wrote:
> > Shane Harrelson wrote:
> > > Perform 50 transactions of 1000 inserts each (50k total).
> > >
> >
> > Shane,
> >
> > Is this your normal usage pattern? Inserting records in blocks of around
>
On 2/5/07, Dennis Cote <[EMAIL PROTECTED]> wrote:
Shane Harrelson wrote:
> Perform 50 transactions of 1000 inserts each (50k total).
>
Shane,

Is this your normal usage pattern? Inserting records in blocks of around
1000 per transaction. Or would you be more likely to insert using 1000
transactions of 50 records, or perhaps only a single
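The trade-off Dennis is probing — how many inserts per transaction — can be sketched as below (Python sqlite3 for illustration; the table is from the thread, the batch size and 5000-row load are arbitrary assumptions):

```python
import sqlite3

def insert_in_batches(con, values, batch_size=1000):
    """Load rows in transactions of batch_size inserts each,
    e.g. 50 transactions of 1000 inserts for a 50k-record pass."""
    for i in range(0, len(values), batch_size):
        with con:  # context manager commits one transaction per batch
            con.executemany(
                "INSERT OR IGNORE INTO Strings(Value) VALUES (?)",
                ((v,) for v in values[i:i + batch_size]),
            )

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE Strings (StringId INTEGER PRIMARY KEY, Value VARCHAR(30) UNIQUE)"
)
insert_in_batches(con, [f"s{i}" for i in range(5000)])
```

Grouping inserts into transactions matters because each transaction is a separate fsync to disk; the batch size mainly trades memory and recovery granularity against per-transaction overhead.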
Shane Harrelson wrote:
However, in an effort to provide some useful performance data for
everyone, I'm making available the results of my testing with the
various suggested insertion algorithms below. The fastest was my
"Hack" version, followed closely by the "Insert/Select/Insert" version
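One way such an "insert, then select only on collision" variant can be written — a sketch only, not necessarily Shane's "Hack" version, again using Python sqlite3 in place of the C API — is to consult changes() after the INSERT OR IGNORE: when a row was actually inserted, last_insert_rowid() is trustworthy and the extra SELECT can be skipped.

```python
import sqlite3

def intern_fast(con, value):
    con.execute("INSERT OR IGNORE INTO Strings(Value) VALUES (?)", (value,))
    if con.execute("SELECT changes()").fetchone()[0] == 1:
        # The insert went through; last_insert_rowid() is valid.
        return con.execute("SELECT last_insert_rowid()").fetchone()[0]
    # Collision: the row already existed, so fall back to a SELECT.
    return con.execute(
        "SELECT StringId FROM Strings WHERE Value = ?", (value,)
    ).fetchone()[0]

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE Strings (StringId INTEGER PRIMARY KEY, Value VARCHAR(30) UNIQUE)"
)
```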
* [EMAIL PROTECTED] <[EMAIL PROTECTED]> [2007-02-04 13:35]:
> "A. Pagaltzis" <[EMAIL PROTECTED]> wrote:
> > It's a pity that INSERT OR IGNORE (apparently?) does not set
> > last_insert_id properly regardless of outcome,
>
> Consider this case:
>
> CREATE TABLE t1(x INTEGER PRIMARY KEY, y
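The caveat being quoted here can be demonstrated directly (Python sqlite3 for illustration; the y column's type and UNIQUE constraint are assumptions, since the quoted DDL is cut off):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1(x INTEGER PRIMARY KEY, y TEXT UNIQUE)")
con.execute("INSERT INTO t1(y) VALUES ('foo')")            # gets rowid 1
con.execute("INSERT INTO t1(y) VALUES ('bar')")            # gets rowid 2
con.execute("INSERT OR IGNORE INTO t1(y) VALUES ('foo')")  # conflict: ignored
stale = con.execute("SELECT last_insert_rowid()").fetchone()[0]
# last_insert_rowid() still reports the 'bar' insert (2), not the rowid
# of the pre-existing 'foo' row (1) -- the behavior lamented above.
assert stale == 2
```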
* Dennis Cote <[EMAIL PROTECTED]> [2007-02-03 17:20]:
> I suspect the reduction from executing three statements
> (insert, select, insert) down to only two (insert insert) would
> probably provide about the same performance increase as the 5%
> to 10% speedup he saw by replacing the separate
On 2/2/07, A. Pagaltzis <[EMAIL PROTECTED]> wrote:
5-10% in his tests, as he wrote a few mails up the thread.
Significant? No. Worthwhile? Apparently so, for his application.
I saw that as well, but based on this:
> is there a more formal mechanism for getting this value without having
> to do
* Dennis Cote <[EMAIL PROTECTED]> [2007-02-03 01:30]:
> If you wanted to model what the insert or ignore is doing more
> directly, you could do something like this.
>
> rowid = select rowid from Strings where value = 'foo';
> if rowid is null then
>     insert into Strings values ('foo');
>
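Dennis's pseudocode translates almost literally (Python sqlite3 as a stand-in for the C API; the helper name is mine): probe for the rowid first, and insert only on a miss.

```python
import sqlite3

def get_or_insert(con, value):
    # Probe first; on a miss, insert and return the new rowid.
    row = con.execute(
        "SELECT rowid FROM Strings WHERE Value = ?", (value,)
    ).fetchone()
    if row is not None:
        return row[0]
    return con.execute(
        "INSERT INTO Strings(Value) VALUES (?)", (value,)
    ).lastrowid

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE Strings (StringId INTEGER PRIMARY KEY, Value VARCHAR(30) UNIQUE)"
)
```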
Shane Harrelson wrote:
Yes, I could do that, but the point of this thread was to find some way
around doing the subsequent SELECT on a constraint collision during an
INSERT into the strings table.
I'm using the C API, and looking at the internals of the VDBE, the
rowid responsible for the
* Nicolas Williams <[EMAIL PROTECTED]> [2007-02-01 00:10]:
> Sounds like you should want to use INSERT OR IGNORE ... INTO
> Strings and then SELECT the rowid of the string for use in
> INSERTing INTO Object.
That was my first thought as well. Does SQLite return the row’s
last_insert_id correctly
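The pattern Nicolas Williams suggests — INSERT OR IGNORE unconditionally, then SELECT the rowid — is the simplest of the variants in the thread, at the cost of one extra query even when the insert succeeded. A sketch (Python sqlite3 for illustration; the helper name is mine):

```python
import sqlite3

def intern_string(con, value):
    # Insert if absent (the IGNORE swallows the UNIQUE-constraint
    # conflict), then look up the id either way.
    con.execute("INSERT OR IGNORE INTO Strings(Value) VALUES (?)", (value,))
    return con.execute(
        "SELECT StringId FROM Strings WHERE Value = ?", (value,)
    ).fetchone()[0]

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE Strings (StringId INTEGER PRIMARY KEY, Value VARCHAR(30) UNIQUE)"
)
```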