Re: [sqlite] Questions about your "Performance Matters" talk re SQLite

2020-01-04 Thread Simon Slavin
On 4 Jan 2020, at 5:49pm, Doug  wrote:

> for (i=1000; i--; i>0);

If you have optimization turned on, your compiler might turn that into "i = 0". 
 Optimization messes with a lot of benchmarks.  Checking which optimization 
setting was used is one aspect of what Howard Chu was talking about: benchmark 
figures in isolation don't mean much.

I'm not a big fan of general statements about which form is faster, uses less 
memory, whatever, than which other form.  I want to say "try both ways and keep 
the one which is best".
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Questions about your "Performance Matters" talk re SQLite

2020-01-03 Thread sky5walk
haha, that is a mangled way of saying I wrapped my db functions in a dll
for multiple app use. I did not expose this config setting as I never knew
its impact. To be honest, I still don't. ;)

On Fri, Jan 3, 2020 at 5:18 PM Tim Streater  wrote:

> On 03 Jan 2020, at 22:08, sky5walk  wrote:
>
> > Querying the config state is helpful for a dll wrapped database, ...
>
> What's one of them?
>
>
> --
> Cheers  --  Tim



Re: [sqlite] Questions about your "Performance Matters" talk re SQLite

2020-01-03 Thread Tim Streater
On 03 Jan 2020, at 22:08, sky5walk  wrote:

> Querying the config state is helpful for a dll wrapped database, ...

What's one of them?


-- 
Cheers  --  Tim


Re: [sqlite] questions

2018-08-19 Thread Balaji Ramanathan
>
>2. questions (am...@juno.com)
> From: "am...@juno.com" 
> To: sqlite-users@mailinglists.sqlite.org, am...@juno.com
> Cc:
> Bcc:
> Date: Sat, 18 Aug 2018 16:24:45 GMT
> Subject: [sqlite] questions
> August 18, 2018 Good Morning Good People: I am in the process of building
> a database for where I work. I can do a lot of tasks with SQLite. But I am
> stuck on the dilemmas below. If any of you have any programming solutions
> for any/all of these dilemmas I am in--in a way that I can download what yo
> have so someone who has no programming experience and can just 'click and
> use' (i.e., turnkey solutions, please advise. Once I hear back that you
> have any/all solutions to my dilemmas below, I will give you an e-mail
> address to send them to. Thank you vern much in advance for helping me
> eliminate these dilemmas. 1) How Can I do a search--say for people trained
> on one task, or on a given date, or for people who know how to operate one
> specific piece of equipment? 2) How can I set up SQLite so some people can
> only read SQLite? 3) How can I sent up a table/report of my results? 4) How
> can I put in ascending or descending order all the information on one
> field, say by date I gave out safety glasses? Respectfully yours, Alex
> Stavis
>
>
Hi Alex,

As others here have suggested, SQLite is just a database engine with no
user interface (except for the command line interface).  You need to build
your own interface by writing a program that includes SQLite as the data
storage and retrieval engine.  You can write such a program in most
programming languages or in a scripting language like Tcl.  I have also
experimented with using ODBC to use MS Access as a front end to a SQLite
database.  But if you have Access, you may not even need SQLite unless you
are dealing with large amounts of data that Access doesn't scale to
properly.

MS Access gives you the ability to use forms for data input, printable
reports, etc. that you can fine-tune to your heart's content.  It includes
the programming language Visual Basic for Access (VBA) that you can use to
do lots of things that SQL alone may not be able to do (or requires jumping
through hoops to accomplish).

An alternative to that is to use a generic user interface program.
There are several available out there, and the two most commonly used
ones seem to be SQLiteStudio (www.sqlitestudio.pl) and SQLite Expert (
www.sqliteexpert.com).  They allow you to administer an SQLite database in a
familiar windows environment, add and edit data in the tables, pull data
out using queries and views, etc.  You do have to be familiar with how to
design a relational database and be able to use SQL to access the data in
such a relational database.

Personally, I use Tcl for one of my uses of SQLite.  I am not an expert
in Tcl by any stretch of the imagination, and it can be quite frustrating
to get it to do exactly what you want.  Tcl allows you to create forms and
reports for interacting with the database.  If you use a programming
language or MS Access as a front end to the database, you can limit people
to read-only access if necessary, by locking down the forms and exposing
only the reports.

For another of my uses, I use SQLite Studio.  I find it very quick and
robust, and it has all the capabilities I need to simply add, edit,
and retrieve data in an intuitive and straightforward user interface.
However, it is a very generic tool that has limited customization
potential, for instance, for creating forms, reports, etc.  I experimented
with SQLite Expert, but found its user interface less to my liking.
Neither of these tools will limit some people to read-only access to the
database.

Hope I have given you some things to think about, and hopefully try out
to get an idea of what the advantages and disadvantages of each approach
are.

Balaji Ramanathan


Re: [sqlite] questions

2018-08-18 Thread R Smith

Hi Alex,

I like the way you type; it's how I imagine text will look in the future 
when we have successfully completed chip implants in the brain that 
directly output the text of what we are thinking.  :)


So if I understand you correctly, you have a database in SQLite which you 
have developed or somehow filled with some data (possibly using a DB 
manager of sorts) that is tailored to the needs at your place of work. 
Now you would like this data to be visible to some people in a useful way.


It's hard to know what the specific needs or specification is from the 
description, but I assume you would like someone to assist and make the 
program(s) for you/with you. I'm sure everyone here is capable, but most 
people here already have a full-time job, so you'd probably have more 
luck at a site like freelancer.com (where you can hire expert people on 
the cheap).


If you only say "I need help", the best we can answer is: "We can see 
that."  Questions here have to be somewhat specific; even if you just post 
some code in whatever programming platform you use, you will typically 
get some help with it here. To get the best answers, post a query that 
you have made, along with your table schema/layout, and then ask how to 
make that query better (for instance, how to order it ascending or filter 
it), or perhaps how to improve the schema itself; for that you should get 
loads of responses here.



Best of luck!
Ryan



On 2018/08/18 7:15 PM, am...@juno.com wrote:

August 18, 2018, Hello Roger, Thank you so much for your answer, and so fast at 
that! Going forward, the programming I am looking for does not have to be 
'off-the-shelf' It could be programming done by someone the s/he is willing to 
shine with me--as long as it solves my dilemmas. Also, this programming does 
not have to be 'industry-specific'. As always, if you have any questions, need 
more information and/or need any clarifications, ask away. Respectfully yours, 
Alex Stavis

-- Original Message --
From: Roger Schlueter 
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] questions
Date: Sat, 18 Aug 2018 09:42:38 -0700

Hi Alex,

This functionality is certainly something that SQLite or even any
moderately capable database system can solve.  However, for these
specific requirements there is very likely no off-the-shelf, ready-to-go
solution unless the requirement is very common or regulatory-required in
your industry.


On 8/18/2018 9:24, am...@juno.com wrote:

August 18, 2018 Good Morning Good People: I am in the process of building a 
database for where I work. I can do a lot of tasks with SQLite. But I am stuck 
on the dilemmas below. If any of you have any programming solutions for any/all 
of these dilemmas I am in--in a way that I can download what yo have so someone 
who has no programming experience and can just 'click and use' (i.e., turnkey 
solutions, please advise. Once I hear back that you have any/all solutions to 
my dilemmas below, I will give you an e-mail address to send them to. Thank you 
vern much in advance for helping me eliminate these dilemmas. 1) How Can I do a 
search--say for people trained on one task, or on a given date, or for people 
who know how to operate one specific piece of equipment? 2) How can I set up 
SQLite so some people can only read SQLite? 3) How can I sent up a table/report 
of my results? 4) How can I put in ascending or descending order all the 
information on one field, say by date I gave out safety glasses? Respectfully 
yours, Alex Stavis





Re: [sqlite] questions

2018-08-18 Thread am...@juno.com
August 18, 2018

Hello Roger,

Thank you so much for your answer, and so fast at that! Going forward, the 
programming I am looking for does not have to be 'off-the-shelf'. It could 
be programming done by someone who is willing to share it with me, as long 
as it solves my dilemmas. Also, this programming does not have to be 
industry-specific. As always, if you have any questions, need more 
information and/or need any clarifications, ask away.

Respectfully yours,
Alex Stavis

-- Original Message --
From: Roger Schlueter 
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] questions
Date: Sat, 18 Aug 2018 09:42:38 -0700

Hi Alex,

This functionality is certainly something that SQLite or even any 
moderately capable database system can solve.  However, for these 
specific requirements there is very likely no off-the-shelf, ready-to-go 
solution unless the requirement is very common or regulatory-required in 
your industry.


On 8/18/2018 9:24, am...@juno.com wrote:
> August 18, 2018 Good Morning Good People: I am in the process of building a 
> database for where I work. I can do a lot of tasks with SQLite. But I am 
> stuck on the dilemmas below. If any of you have any programming solutions for 
> any/all of these dilemmas I am in--in a way that I can download what yo have 
> so someone who has no programming experience and can just 'click and use' 
> (i.e., turnkey solutions, please advise. Once I hear back that you have 
> any/all solutions to my dilemmas below, I will give you an e-mail address to 
> send them to. Thank you vern much in advance for helping me eliminate these 
> dilemmas. 1) How Can I do a search--say for people trained on one task, or on 
> a given date, or for people who know how to operate one specific piece of 
> equipment? 2) How can I set up SQLite so some people can only read SQLite? 3) 
> How can I sent up a table/report of my results? 4) How can I put in ascending 
> or descending order all the information on one field, say by date I gave out 
> safety glasses? Respectfully yours, Alex Stavis
> 




Re: [sqlite] questions

2018-08-18 Thread Roger Schlueter

Hi Alex,

This functionality is certainly something that SQLite or even any 
moderately capable database system can solve.  However, for these 
specific requirements there is very likely no off-the-shelf, ready-to-go 
solution unless the requirement is very common or regulatory-required in 
your industry.



On 8/18/2018 9:24, am...@juno.com wrote:

August 18, 2018 Good Morning Good People: I am in the process of building a 
database for where I work. I can do a lot of tasks with SQLite. But I am stuck 
on the dilemmas below. If any of you have any programming solutions for any/all 
of these dilemmas I am in--in a way that I can download what yo have so someone 
who has no programming experience and can just 'click and use' (i.e., turnkey 
solutions, please advise. Once I hear back that you have any/all solutions to 
my dilemmas below, I will give you an e-mail address to send them to. Thank you 
vern much in advance for helping me eliminate these dilemmas. 1) How Can I do a 
search--say for people trained on one task, or on a given date, or for people 
who know how to operate one specific piece of equipment? 2) How can I set up 
SQLite so some people can only read SQLite? 3) How can I sent up a table/report 
of my results? 4) How can I put in ascending or descending order all the 
information on one field, say by date I gave out safety glasses? Respectfully 
yours, Alex Stavis






[sqlite] questions

2018-08-18 Thread am...@juno.com
August 18, 2018

Good Morning Good People:

I am in the process of building a database for where I work. I can do a lot 
of tasks with SQLite, but I am stuck on the dilemmas below. If any of you 
have programming solutions for any or all of these dilemmas, in a form that 
I can download so someone who has no programming experience can just 'click 
and use' (i.e., a turnkey solution), please advise. Once I hear back that 
you have any or all of the solutions, I will give you an e-mail address to 
send them to. Thank you very much in advance for helping me eliminate these 
dilemmas.

1) How can I do a search, say for people trained on one task, or trained on 
a given date, or for people who know how to operate one specific piece of 
equipment?
2) How can I set up SQLite so some people can only read the database?
3) How can I set up a table/report of my results?
4) How can I put all the information in one field in ascending or 
descending order, say by the date I gave out safety glasses?

Respectfully yours,
Alex Stavis



Re: [sqlite] Questions- New App

2017-09-20 Thread Jens Alfke


> On Sep 19, 2017, at 6:16 AM, Russell Duncan  wrote:
> 
> I've just started researching programs for coding/app-making and SQLite
> came up, is this a program that would be useful for something like that?

Yes, although SQLite is rather low-level, and learning to use it directly is 
already a sizable task if you're not already familiar with databases.

(The best example I can think of is that you decide to start a band, so you 
walk into a music store and look at an electric guitar pickup*. Is it useful 
for making music? Yes, but unless you want to dive into a rabbit hole of 
customizing your own guitars, you should probably start by buying a complete 
guitar and learning to play that.)

> I'm trying to make the app from scratch, and learn everything on my own so
> it is completely mine, I do not want to go through tools that have
> completed the coding for me.

I can understand that desire. But there's a difference between a tool that does 
all the work for you, and a tool that takes care of lower-level stuff so you 
can focus on the high-level app. And as a novice, it's best not to take on too 
much complexity all at once.

SQLite itself has a low-level C API, and on top of that it has a specialized 
language called SQL that you have to write commands in. There's a big gap 
between that and the language you'll probably write your app in (Swift, Java, 
C#, Objective-C…) and the kinds of things your app will want to do, like "save 
a Chat object to disk" or "find all the Person objects for people I've talked 
to lately". You could spend months just learning how to write code to bridge 
that gap.

You haven't said which OS platform you want to develop for, but they all have 
higher level data storage frameworks that act as glue between SQLite and apps. 
For example, Apple has Core Data. (I'm not an Android or .NET programmer, but 
they have database APIs too.) These frameworks make it pretty easy to store and 
load your application's objects, and to find objects matching particular 
criteria. You won't have to spend time worrying about how to encode String 
objects to UTF-8, how to invoke sqlite3_step, or what an INNER JOIN is.

Also worth looking at are cross-platform data libraries that aren't directly 
tied to SQLite, like Realm and Couchbase Lite [disclaimer: I work on Couchbase 
Lite.]

—Jens

* Modular synthesizers are an even better metaphor, but they're less well 
known. As a music newbie it would be a terrible idea to start out by buying an 
individual rack-mounted oscillator. Instead get an all-in-one synth like a 
Novation Circuit or a Korg Minilogue. I say this as someone who's had a Circuit 
for two years and is now just starting to get his feet wet with modular gear.


[sqlite] Questions- New App

2017-09-20 Thread Russell Duncan
Hello,

I'm completely new to everything SQLite, and app-coding in general.

I'm trying to make a service based app similar to Venmo (not for payments,
but similar in layout and size, with the idea of having the social feed
incorporated).

I've just started researching programs for coding/app-making and SQLite
came up, is this a program that would be useful for something like that?

I'm trying to make the app from scratch, and learn everything on my own so
it is completely mine, I do not want to go through tools that have
completed the coding for me.

Could you point me in the right direction and tell me if SQLite is useful
for this, or what other programs or resources would be?

I look forward to working further, and learning further from this.

Thank you,

Russell Duncan

Florida International University
Studying International Relations
(813) 493 5764
Instagram: @russell_duncan


[sqlite] Questions regarding the use of a "partial index" as a "covering index"

2017-09-09 Thread Martin Thierer
I get confusing (to me) results when toying with a query that I think
should use a partial index as a "covering index". This is using sqlite
3.20.1 on Arch Linux x86_64.

The examples listed below start from a new, empty database.

*** Given a table and an index like this

CREATE TABLE tab (x1 TEXT, x2 TEXT, x3 TEXT, x4 TEXT,
  PRIMARY KEY (x1, x3)
) WITHOUT ROWID;
CREATE INDEX ind ON tab (x2, x4) WHERE x4 IS NULL;

I get the result I would expect: the index is used as a "covering
index", even though, if I read the bytecode correctly, there's a != NULL
comparison that's not strictly necessary. But that would just be room
for further optimization:

sqlite> EXPLAIN QUERY PLAN SELECT x1, x2 FROM tab WHERE x4 IS NULL;
selectid|order|from|detail
0|0|0|SCAN TABLE tab USING COVERING INDEX ind

sqlite> EXPLAIN SELECT x1, x2 FROM tab WHERE x4 IS NULL;
addr  opcode         p1    p2    p3    p4             p5  comment
----  -------------  ----  ----  ----  -------------  --  -------------
0     Init           0     10    0                    00  Start at 10
1     OpenRead       1     3     0     k(4)           00  root=3 iDb=0; ind
2     Rewind         1     9     1     0              00
3       Column       1     1     1                    00  r[1]=tab.x4
4       NotNull      1     8     0                    00  if r[1]!=NULL goto 8
5       Column       1     2     2                    00  r[2]=tab.x1
6       Column       1     0     3                    00  r[3]=tab.x2
7       ResultRow    2     2     0                    00  output=r[2..3]
8     Next           1     3     0                    01
9     Halt           0     0     0                    00
10    Transaction    0     0     2     0              01  usesStmtJournal=0
11    Goto           0     1     0                    00

*** But if I remove the (redundant) column x4 from the index, it seems
to me like it's no longer used as a "covering index" for the same
query, even though all the output columns are still there and the
index *is* actually used:

DROP TABLE tab;
CREATE TABLE tab (x1 TEXT, x2 TEXT, x3 TEXT, x4 TEXT,
  PRIMARY KEY (x1, x3)
) WITHOUT ROWID;
CREATE INDEX ind ON tab (x2) WHERE x4 IS NULL;

sqlite> EXPLAIN QUERY PLAN SELECT x1, x2 FROM tab WHERE x4 IS NULL;
selectid|order|from|detail
0|0|0|SCAN TABLE tab USING INDEX ind

sqlite> EXPLAIN SELECT x1, x2 FROM tab WHERE x4 IS NULL;
addr  opcode         p1    p2    p3    p4             p5  comment
----  -------------  ----  ----  ----  -------------  --  -------------
0     Init           0     14    0                    00  Start at 14
1     OpenRead       0     2     0     k(2,,)         00  root=2 iDb=0; tab
2     OpenRead       1     3     0     k(3,,,)        00  root=3 iDb=0; ind
3     Rewind         1     13    1     0              00
4       Column       1     1     1                    00  r[1]=
5       Column       1     2     2                    00  r[2]=
6       NotFound     0     12    1     2              00  key=r[1..2]
7       Column       0     3     3                    00  r[3]=tab.x4
8       NotNull      3     12    0                    00  if r[3]!=NULL goto 12
9       Column       1     1     4                    00  r[4]=tab.x1
10      Column       1     0     5                    00  r[5]=tab.x2
11      ResultRow    4     2     0                    00  output=r[4..5]
12    Next           1     4     0                    01
13    Halt           0     0     0                    00
14    Transaction    0     0     47    0              01  usesStmtJournal=0
15    Goto           0     1     0                    00

*** It gets even more confusing: If I remove the "TEXT" type from
columns x3 and x4, the index isn't used at all:

DROP TABLE tab;
CREATE TABLE tab (x1 TEXT, x2 TEXT, x3, x4,
  PRIMARY KEY (x1, x3)
) WITHOUT ROWID;
CREATE INDEX ind ON tab (x2) WHERE x4 IS NULL;

sqlite> EXPLAIN QUERY PLAN SELECT x1, x2 FROM tab WHERE x4 IS NULL;
selectid|order|from|detail
0|0|0|SCAN TABLE tab

*** Except if I also remove the "TEXT" type from column x2, the index
is used again:

DROP TABLE tab;
CREATE TABLE tab (x1 TEXT, x2, x3, x4,
  PRIMARY KEY (x1, x3)
) WITHOUT ROWID;
CREATE INDEX ind ON tab (x2) WHERE x4 IS NULL;

sqlite> EXPLAIN QUERY PLAN SELECT x1, x2 FROM tab WHERE x4 IS NULL;
selectid|order|from|detail
0|0|0|SCAN TABLE tab USING INDEX ind

*** But only until I also remove the "TEXT" type from column x1, then
it's not used:

DROP TABLE tab;
CREATE TABLE tab (x1, x2, x3, x4,
  PRIMARY KEY (x1, x3)
) WITHOUT ROWID;
CREATE INDEX ind ON tab (x2) WHERE x4 IS NULL;

sqlite> EXPLAIN QUERY PLAN SELECT x1, x2 FROM tab WHERE x4 IS NULL;
selectid|order|from|detail
0|0|0|SCAN TABLE tab

Is this the expected behaviour?

Thanks!

Martin


Re: [sqlite] Questions about SQLite Encryption Extension (SEE)

2017-06-18 Thread Yuriy M. Kaminskiy
Jens Alfke  writes:

>> On Jun 17, 2017, at 7:02 AM, Yuriy M. Kaminskiy  wrote:
>> 
>> *) don't appear to be able to keep key in system-provided secure 
>> device/enclave;
>
> In their defense, I think this is out-of-scope for a cross-platform db
> encryption library, as there are so many different APIs for this on
> different platforms, and different valid choices even on one
> platform.

Sure, this feature is very unlikely to be present for /any/ user-level
file/db encryption.
But totally within scope for system-wide full-disk encryption.

> So I see this more as an application responsibility.
I'm not sure the application can do a lot here: if you are going to
perform encryption at user level, the key will be in application memory,
not in a secure enclave.

[...]
>> *) error handling looks problematic in a lot of places (no error
>> checks, there are memory/resource leaks on error paths).
>
> If you have notes on those, could you share them? It would be good to

Take any openssl function that can return an error (e.g. if you specify
PRAGMA cipher=aes-128-gcm, EVP_CipherFinal is expected to always return an
error on decryption, as sqlcipher does not provide the correct tag [or, more
precisely, *any* tag]).
There is no check for this error.

Take sqlcipher_codec_ctx_init:

  if((rc = sqlcipher_cipher_ctx_init(&ctx->read_ctx)) != SQLITE_OK) return rc; 
  if((rc = sqlcipher_cipher_ctx_init(&ctx->write_ctx)) != SQLITE_OK) return rc;
  
Suppose, first sqlcipher_cipher_ctx_init succeed, but second failed.
Who is going to release ctx->read_ctx (and ctx itself)?
And a lot more similar things.

Sure, nothing *terribly serious* (it is a leak on an error path, likely only
possible on OOM, when the program state is fragile and will most likely
crash or terminate anyway), but it still shows that hardly anyone has
seriously reviewed the code.

> get those cleaned up. (I don’t work on SQLCipher, but I do work on a
> library that uses it in some configurations.)



Re: [sqlite] Questions about SQLite Encryption Extension (SEE)

2017-06-17 Thread Jens Alfke

> On Jun 17, 2017, at 7:02 AM, Yuriy M. Kaminskiy  wrote:
> 
> *) don't appear to be able to keep key in system-provided secure 
> device/enclave;

In their defense, I think this is out-of-scope for a cross-platform db 
encryption library, as there are so many different APIs for this on different 
platforms, and different valid choices even on one platform. So I see this more 
as an application responsibility.

For example, on iOS you could store the key as a normal Keychain item or put it 
under Touch ID control, or make the user enter a passphrase. Storing or 
accessing the key may require user interaction, which means UI code that likely 
needs to be customized to the application. In some environments you might need 
to request the key from a key-server. Etc.

> *) error handling looks problematic in a lot of places (no error
> checks, there are memory/resource leaks on error paths).

If you have notes on those, could you share them? It would be good to get those 
cleaned up. (I don’t work on SQLCipher, but I do work on a library that uses it 
in some configurations.)

—Jens


Re: [sqlite] Questions about SQLite Encryption Extension (SEE)

2017-06-17 Thread Yuriy M. Kaminskiy
Jens Alfke  writes:
>> And any non-opensource crypto should be taken with triple caution. Or
>> even opensource, but not widely-used or otherwise not known to be
>> carefully peer-reviewed (FWIW, I looked at e.g. wxsqlite crypto code, it
>> looks not exactly promising too).
>
> What do you think of SQLCipher?

Disclaimer: I'm not a real cryptographer ^_^; I can notice some
outright problematic things, but easily miss others.

From quick overview:

*) they had the sense to avoid hand-coding crypto primitives, and use
openssl, libtomcrypt or (on macOS?) CommonCrypto;
   *) with openssl, they appear to support any cipher supported by
openssl (but they deprecated `PRAGMA cipher` in recent releases, so it
is aes-256-cbc by default);
   *) with the other backends, no flexibility at all: only aes-256-cbc is
supported;
*) use crypto provider's [strong] random for IV;
*) for (optional) integrity, uses HMAC-SHA1; AFAIK, while SHA1 is
getting more and more broken, HMAC-SHA1 is not broken yet; but it would
be good if they had a plan ahead.
*) don't support AEAD modes (such as AES-{GCM,CCM} or CHACHA20-POLY1305);
*) for kdf, uses PBKDF2-HMAC-SHA1 by default (same: it is not broken
yet, but).
*) don't appear to be able to keep key in system-provided secure
device/enclave;
*) don't appear to be able to easily change passphrase (only by
re-encrypting whole database) or use several passphrases (see LUKS for
comparison).
*) a nitpick, but I don't like how they constantly re-init the cipher
(and hmac) context on reading/writing each page (key setup is not
exactly an inexpensive thing; with some ciphers [e.g. Blowfish], it is
outright SLOW). (And slow crypto is *also* a security problem: if crypto
is expensive, people tend to avoid it.)
*) error handling looks problematic in a lot of places (no error
checks, there are memory/resource leaks on error paths).



Re: [sqlite] Questions about SQLite Encryption Extension (SEE)

2017-06-11 Thread Rowan Worth
On 9 June 2017 at 22:30, Yuriy M. Kaminskiy  wrote:
>
> Don't know about windows, but on linux no additional "debug privileges"
> needed. You can attach debugger (ptrace syscall) to any process running
> with under same user. Additional privileges needed only for debugging
> processes running under different users (or suid executables).
>

This is generally true, but might not be in the future. The linux kernel
does have an option to limit the processes on which ptrace is effective,
even within processes owned by a specific user. Archlinux at least enables
it by default, I guess time will tell if it sees widespread adoption. I
think it works by allowing ptrace only if invoked by root, or if the target
process is a child of the calling process. I can't find much documentation
on it but here's the arch description:
https://wiki.archlinux.org/index.php/security#ptrace_scope

-Rowan


Re: [sqlite] Questions about SQLite Encryption Extension (SEE)

2017-06-09 Thread Matthias-Christian Ott
On 2017-06-09 00:13, Wout Mertens wrote:
> Isn't it all just obfuscation? Any root user can read your key, if not from
> disk then from memory. Any normal user can't read your key, nor from disk,
> nor from memory; and they can't read your db file either.
> 
> So if the adversary is someone with access to your disk image, disk
> encryption trumps db encryption (unless the disk encryption is vulnerable
> to known-plaintext attacks, but I guess they probably apply to sqlite too).
> 
> If the adversary is another process on the same host, encrypting the db
> just adds obfuscation, which is security against lazy hackers.

When the discussion about DRM and Trusted Computing was more active,
this was widely discussed. Cory Doctorow gave a talk about DRM at
Microsoft that illustrates this misuse of cryptography [1]. Mark Stefik
described a scary vision of DRM over two decades ago [2]. Richard
Stallman has said and written a lot about DRM as well. So perhaps we
should not start another debate on this mailing list and read what has
already been written and said about it at great length.

My personal conclusion from the discussion about DRM and Trusted
Computing is that DRM can never work unless we do not own our computers,
and someone else, who controls a cryptographic chip inside them, does.
Unfortunately, this is the reality for devices running iOS and other
similar products.

SEE only protects the database if an attacker has access to the
storage medium of the database but not to the encryption key. No more and
no less. You can of course argue about how difficult it is to obtain
the encryption key, but that has nothing to do with SEE; it depends only
on concrete use cases, scenarios, and threat models. So can we
have this discussion on another mailing list?

[1] http://craphound.com/msftdrm.txt
[2]
http://www2.parc.com/istl/groups/uir/publications/items/UIR-1996-10-Stefik-InternetCommerce-IgnitingDreams.pdf


Re: [sqlite] Questions about SQLite Encryption Extension (SEE)

2017-06-09 Thread Jens Alfke

> On Jun 9, 2017, at 7:30 AM, Yuriy M. Kaminskiy  wrote:
> 
> On the other hand, application-level encryption should be used with great
> caution; it is way too often designed and implemented by
> non-cryptographers, does not use optimized or hardware-assisted crypto
> primitives (and, for AES, often uses naive implementations without
> protection against timing/cache attacks),

Hm. The file-encryption code I’ve seen generally delegates the actual crypto 
primitives to either OpenSSL, libSodium, or an OS-provided subsystem like 
Apple’s CommonCrypto. I agree that I’d have little trust in a library like this 
that tried to write its own implementation of AES, etc.

> does not use protected hardware for keeping keys;

Yeah, key management is generally “left as an exercise for the app developer”, 
who has little knowledge of security. The problem I’ve seen on mobile devices 
is that app developers will often pass the buck to the user, i.e. deriving the 
key from a passphrase. The user then has to type the damn passphrase every time 
they run the app, which heavily incents them to pick something short and 
trivially-crackable.
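
For illustration, passphrase-derived keys are usually produced with a deliberately slow key-derivation function such as PBKDF2; a minimal sketch using only Python's standard library (the function name and parameter choices are illustrative, not a recommendation for any particular app):

```python
import hashlib, os

def derive_key(passphrase: str, salt: bytes, iterations: int = 200_000) -> bytes:
    # PBKDF2-HMAC-SHA256: the iteration count slows brute-force guessing,
    # and the per-database random salt defeats precomputed tables.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

salt = os.urandom(16)   # stored alongside the database; need not be secret
key = derive_key("correct horse battery staple", salt)
print(len(key))  # 32 bytes, suitable as a 256-bit key
```

A short PIN remains trivially crackable no matter the KDF; the iteration count only raises the per-guess cost.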

This has gotten somewhat better with biometric sensors. On iOS it’s pretty easy 
to store a key in the Keychain protected by TouchID, which means the key itself 
resides in the CPU secure enclave, which will only release it when it gets a 
fingerprint. Not that fingerprints are massively secure, but they’re a lot 
better than a four-digit PIN, and require physical access to spoof.

(FYI, Apple has an excellent iOS security white-paper that covers all of this 
stuff in detail.)

> And any non-opensource crypto should be taken with triple caution. Or
> even opensource, but not widely used or otherwise not known to be
> carefully peer-reviewed (FWIW, I looked at e.g. the wxsqlite crypto code; it
> does not look exactly promising either).

What do you think of SQLCipher?

—Jens


Re: [sqlite] Questions about SQLite Encryption Extension (SEE)

2017-06-09 Thread Jens Alfke

> On Jun 8, 2017, at 3:13 PM, Wout Mertens  wrote:
> 
> Isn't it all just obfuscation? Any root user can read your key, if not from
> disk then from memory.

Keys on disk are [or should be!] generally stored by special OS subsystems 
(like the Keychain on Apple platforms) that use encrypted storage, the keys to 
which are in turn managed by a secure enclave in the CPU and/or derived from 
the user's login passphrase.

I believe (but don’t know the details) that on macOS it’s pretty difficult for 
a process to get access to another process’ address space, even one running as 
the same user. If this capability is covered by System Integrity Protection, 
then it would require more than just(!) root access, involving at least a 
reboot into system recovery mode to turn off SIP; i.e. needing physical access 
to the machine.

On iOS, processes are completely sandboxed from each other, most of the types 
of exploits used to get root are unavailable, and  getting any access to a 
locked or powered-down device is close to impossible, as the FBI found out in 
the San Bernardino case last year.

In any case, regardless of the technical benefits, there can be legal 
requirements for app-level encryption, for example apps storing health data 
which in the US fall under HIPAA. (It’s actually a bit vague about whether 
encryption is strictly required, but this tends to be interpreted as “if it’s 
feasible, encrypt it”: https://www.sookasa.com/resources/HIPAA-encryption/ )

—Jens


Re: [sqlite] Questions about SQLite Encryption Extension (SEE)

2017-06-09 Thread Yuriy M. Kaminskiy
Eric Grange  writes:

>> Isn't it all just obfuscation?
>
> Not really, the encryption protects the file, wherever it is, as long as
> the attacker does not have access to the application keys or application
> memory.
>
>> If the adversary is another process on the same host, encrypting the db
>> just adds obfuscation, which is security against lazy hackers.
>
> Another process would need debug privileges to access your application's
> memory.

Don't know about Windows, but on Linux no additional "debug privileges"
are needed. You can attach a debugger (via the ptrace syscall) to any process
running under the same user. Additional privileges are needed only for
debugging processes running under different users (or suid executables).

> If you rely on disk encryption primarily, then if that encryption is
> compromised, or if backups are compromised, or if a root user copies the
> wrong files in the wrong places, or just makes any error, then everything
> on that disk can be compromised.

> With application-level encryption, user error will only compromise that
> app's data, and you otherwise need the root user to be the attacker, which
> makes the problem quite different from the root user making a mistake.
>
> Finally, in the grand scheme of things, the likelihood of any disk
> encryption being broken (as an implementation) is extremely high, given it
> is such a juicy target.

And that's why they attract a lot of attention, and any bugs or even
traces of weakness were weeded out very long ago. I'm not so sure about
semi-closed application-level security solutions, like SEE. All I've
seen in public was not very encouraging (crypto constructs somewhat unusual
for disk encryption [with one of the primitives already considered
broken], nothing said about the KDF, nothing about IVs, nothing about the
design, etc.).

> And when it is broken, automated tools will be
> available for all lazy hackers to download and deploy with a single
> click.
>
> So while you can and should use disk encryption, it can only be seen as an
> added security layer, never as a primary security layer.

I'd say the opposite. System-wide encryption is a must-have (*especially*
swap, hibernation and temporary space encryption; there is *nothing*
that can be done about those at the application level!).

On the other hand, application-level encryption should be used with great
caution; it is way too often designed and implemented by
non-cryptographers, does not use optimized or hardware-assisted crypto
primitives (and, for AES, often uses naive implementations without
protection against timing/cache attacks), does not use protected
hardware for keeping keys; and in general provides very little or no
additional security over FDE plus file/directory permissions.

And any non-opensource crypto should be taken with triple caution. Or
even opensource, but not widely used or otherwise not known to be
carefully peer-reviewed (FWIW, I looked at e.g. the wxsqlite crypto code; it
does not look exactly promising either).

> On Fri, Jun 9, 2017 at 12:13 AM, Wout Mertens 
> wrote:
>
>> Isn't it all just obfuscation? Any root user can read your key, if not from
>> disk then from memory. Any normal user can't read your key, nor from disk,
>> nor from memory; and they can't read your db file either.
>>
>> So if the adversary is someone with access to your disk image, disk
>> encryption trumps db encryption (unless the disk encryption is vulnerable
>> to known-plaintext attacks, but I guess they probably apply to sqlite too).
>>
>> If the adversary is another process on the same host, encrypting the db
>> just adds obfuscation, which is security against lazy hackers.
>>
>> On Thu, Jun 8, 2017 at 9:04 PM Richard Hipp  wrote:
>>
>> > On 6/8/17, Wout Mertens  wrote:
>> > > Just musing: is an encrypted disk not more reliable? You have to store
>> > the
>> > > key somewhere…
>> >
>> > Maybe.  I guess it depends on your threat model.
>> >
>> > Encrypting the whole disk is a system setting.  Anybody who has
>> > access to the system can see everything on disk.  You also have to
>> > have administrator privileges to set it up.
>> >
>> > Encrypting a single database file is an application setting.  Some
>> > applications might want to hide their data from other applications on
>> > the same system, or from the user of the system.  Whole disk
>> > encryption won't help there.  And, database encryption requires no
>> > special privileges.
>> >
>> > --
>> > D. Richard Hipp
>> > d...@sqlite.org



[sqlite] Questions about SQLite Encryption Extension (SEE)

2017-06-09 Thread wout.mertens
Aha, that does make sense, thinking of each risk in terms of
likelihoods. So encrypting the db as well as the disk seems the safest
route here.


Re: [sqlite] Questions about SQLite Encryption Extension (SEE)

2017-06-09 Thread Eric Grange
> Isn't it all just obfuscation?

Not really, the encryption protects the file, wherever it is, as long as
the attacker does not have access to the application keys or application
memory.

> If the adversary is another process on the same host, encrypting the db
> just adds obfuscation, which is security against lazy hackers.

Another process would need debug privileges to access your application's
memory.

If you rely on disk encryption primarily, then if that encryption is
compromised, or if backups are compromised, or if a root user copies the
wrong files in the wrong places, or just makes any error, then everything
on that disk can be compromised.

With application-level encryption, user error will only compromise that
app's data, and you otherwise need the root user to be the attacker, which
makes the problem quite different from the root user making a mistake.

Finally, in the grand scheme of things, the likelihood of any disk
encryption being broken (as an implementation) is extremely high, given it
is such a juicy target. And when it is broken, automated tools will be
available for all lazy hackers to download and deploy with a single click.

So while you can and should use disk encryption, it can only be seen as an
added security layer, never as a primary security layer.

Eric


On Fri, Jun 9, 2017 at 12:13 AM, Wout Mertens 
wrote:

> Isn't it all just obfuscation? Any root user can read your key, if not from
> disk then from memory. Any normal user can't read your key, nor from disk,
> nor from memory; and they can't read your db file either.
>
> So if the adversary is someone with access to your disk image, disk
> encryption trumps db encryption (unless the disk encryption is vulnerable
> to known-plaintext attacks, but I guess they probably apply to sqlite too).
>
> If the adversary is another process on the same host, encrypting the db
> just adds obfuscation, which is security against lazy hackers.
>
> On Thu, Jun 8, 2017 at 9:04 PM Richard Hipp  wrote:
>
> > On 6/8/17, Wout Mertens  wrote:
> > > Just musing: is an encrypted disk not more reliable? You have to store
> > the
> > > key somewhere…
> >
> > Maybe.  I guess it depends on your threat model.
> >
> > Encrypting the whole disk is a system setting.  Anybody who has
> > access to the system can see everything on disk.  You also have to
> > have administrator privileges to set it up.
> >
> > Encrypting a single database file is an application setting.  Some
> > applications might want to hide their data from other applications on
> > the same system, or from the user of the system.  Whole disk
> > encryption won't help there.  And, database encryption requires no
> > special privileges.
> >
> > --
> > D. Richard Hipp
> > d...@sqlite.org


Re: [sqlite] Questions about SQLite Encryption Extension (SEE)

2017-06-08 Thread Simon Slavin


On 8 Jun 2017, at 11:13pm, Wout Mertens  wrote:

> So if the adversary is someone with access to your disk image, disk
> encryption trumps db encryption (unless the disk encryption is vulnerable
> to known-plaintext attacks, but I guess they probably apply to sqlite too).

Your hope is that the database is held on a server but the decryption key is on 
the computers users type on.  Or if you have a web-facing setup with multiple 
servers, your database is on the database computer running PHP and the key is 
in the JavaScript files on the web server.

But in the long run, physical possession always trumps encryption.  Once 
they’re in a situation where they can try keys endlessly, it’s just a case of 
how much time and money they’re willing to spend to get access to your data.  
Are you a target of the CIA?  Don’t rely on encryption.  Are you someone 
with no money and no interest in politics?  Then encryption is good at 
preventing casual theft by bored employees and thieves of opportunity.

Simon.


Re: [sqlite] Questions about SQLite Encryption Extension (SEE)

2017-06-08 Thread Wout Mertens
Isn't it all just obfuscation? Any root user can read your key, if not from
disk then from memory. Any normal user can't read your key, nor from disk,
nor from memory; and they can't read your db file either.

So if the adversary is someone with access to your disk image, disk
encryption trumps db encryption (unless the disk encryption is vulnerable
to known-plaintext attacks, but I guess they probably apply to sqlite too).

If the adversary is another process on the same host, encrypting the db
just adds obfuscation, which is security against lazy hackers.

On Thu, Jun 8, 2017 at 9:04 PM Richard Hipp  wrote:

> On 6/8/17, Wout Mertens  wrote:
> > Just musing: is an encrypted disk not more reliable? You have to store
> the
> > key somewhere…
>
> Maybe.  I guess it depends on your threat model.
>
> Encrypting the whole disk is a system setting.  Anybody who has
> access to the system can see everything on disk.  You also have to
> have administrator privileges to set it up.
>
> Encrypting a single database file is an application setting.  Some
> applications might want to hide their data from other applications on
> the same system, or from the user of the system.  Whole disk
> encryption won't help there.  And, database encryption requires no
> special privileges.
>
> --
> D. Richard Hipp
> d...@sqlite.org


Re: [sqlite] Questions about SQLite Encryption Extension (SEE)

2017-06-08 Thread Richard Hipp
On 6/8/17, Wout Mertens  wrote:
> Just musing: is an encrypted disk not more reliable? You have to store the
> key somewhere…

Maybe.  I guess it depends on your threat model.

Encrypting the whole disk is a system setting.  Anybody who has
access to the system can see everything on disk.  You also have to
have administrator privileges to set it up.

Encrypting a single database file is an application setting.  Some
applications might want to hide their data from other applications on
the same system, or from the user of the system.  Whole disk
encryption won't help there.  And, database encryption requires no
special privileges.

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Questions about SQLite Encryption Extension (SEE)

2017-06-08 Thread Wout Mertens
Just musing: is an encrypted disk not more reliable? You have to store the
key somewhere…

On Thu, Jun 8, 2017, 7:07 PM Richard Hipp  wrote:

> On 6/8/17, Karl Sanders  wrote:
> > I would like to know if an encrypted database allows hot backups and
> > page sizes different from the default one.
>
> Yes and Yes.
>
> >
> > Is encryption applied to everything that gets written to disk?
> > Including transient indices and materializations of views and subqueries?
> >
>
> The database file and rollback journal or WAL file are all encrypted.
> Actually, in the rollback journal and WAL file, the meta-data is not
> encrypted, just the page images that will be written back into the
> database.
>
> Transient indexes and materializations of views and subqueries are not
> encrypted.  I recommend you set "PRAGMA temp_store=MEMORY" so that
> those objects are never written to disk.
>
> --
> D. Richard Hipp
> d...@sqlite.org


Re: [sqlite] Questions about SQLite Encryption Extension (SEE)

2017-06-08 Thread Richard Hipp
On 6/8/17, Karl Sanders  wrote:
> I would like to know if an encrypted database allows hot backups and
> page sizes different from the default one.

Yes and Yes.

>
> Is encryption applied to everything that gets written to disk?
> Including transient indices and materializations of views and subqueries?
>

The database file and rollback journal or WAL file are all encrypted.
Actually, in the rollback journal and WAL file, the meta-data is not
encrypted, just the page images that will be written back into the
database.

Transient indexes and materializations of views and subqueries are not
encrypted.  I recommend you set "PRAGMA temp_store=MEMORY" so that
those objects are never written to disk.
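
The pragma can be applied and verified from application code; a small sketch using Python's built-in sqlite3 module (`PRAGMA temp_store` reports 0=DEFAULT, 1=FILE, 2=MEMORY):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Keep transient indexes and materializations of views/subqueries in RAM
# so they are never written, unencrypted, to disk.
con.execute("PRAGMA temp_store=MEMORY")
mode = con.execute("PRAGMA temp_store").fetchone()[0]
print(mode)  # 2 == MEMORY
```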

-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] Questions about SQLite Encryption Extension (SEE)

2017-06-08 Thread Karl Sanders
I would like to know if an encrypted database allows hot backups and
page sizes different from the default one.

Is encryption applied to everything that gets written to disk?
Including transient indices and materializations of views and subqueries?

Regards,
Karl


Re: [sqlite] Questions on big blobs and blob I/O

2017-03-29 Thread Stephen Chrzanowski
@Jens;

I'd take what David suggested one step further and just remove the BLOB
from the tables of concern and keep the blob on a 1:1 PK:FK relationship
and only look at the BLOB tables when required.  That way if you do an
occasional [select * from ...] you're not reading the BLOB because it's
living somewhere else.

I make this suggestion not just because of SQLite, but for any DBMS I've
developed for.  Keep the really big data somewhere else, with a
relationship to the primary resource, and then only query that
massive chunk when required, through a specialized view or select.
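
A minimal sketch of that 1:1 PK:FK layout, using Python's built-in sqlite3 module (table and column names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# The heavy payload lives in a side table, keyed 1:1 to the main row.
con.executescript("""
    CREATE TABLE doc (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE doc_blob (doc_id INTEGER PRIMARY KEY REFERENCES doc(id),
                           body BLOB);
""")
con.execute("INSERT INTO doc VALUES (1, 'report')")
con.execute("INSERT INTO doc_blob VALUES (1, ?)", (b"\xff" * 500_000,))
# An occasional 'SELECT * FROM doc' no longer drags the blob pages in.
print(con.execute("SELECT * FROM doc").fetchone())  # (1, 'report')
```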

On Wed, Mar 29, 2017 at 2:11 PM, David Raymond <david.raym...@tomtom.com>
wrote:

> Remember to make the blob field the very last field in your table schema,
> and avoid "select * from", otherwise you can create performance problems.
> The data for a record is stored in the same order as the fields are
> defined, and overflow pages are a linked list. So if you have your 100 MB
> blob first, and your ID second, then you will have to read through all 100
> MB of the blob to get to your ID.
>
> For the freeing up of space check out the incremental version of auto
> vacuum. Rather than freeing it all up at once it might be good to run it
> "incrementally" to avoid 1 big performance hit. Maybe have something run
> every so often to free up a set amount of pages, similar to a scheduled WAL
> checkpointer.
>
>
> -Original Message-
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
> On Behalf Of Jens Alfke
> Sent: Wednesday, March 29, 2017 1:14 PM
> To: SQLite mailing list
> Subject: [sqlite] Questions on big blobs and blob I/O
>
> I’m thinking of transitioning from external to internal storage of large
> blobs. That is, currently I store these as individual files in a directory
> next to the SQLite database. But it sounds like today’s new improved SQLite
> is better at managing big blobs, and supports streaming reads; and it would
> simplify my code to not have to deal with all those files.
>
> (These blobs are created by a “document attachments” API of the library I
> work on. What’s in them is up to the developer. They’re often fairly small,
> e.g. a thumbnail JPEG, but they could be large media files like movies, in
> the tens or hundreds of megabytes. There’s no upper limit.)
>
> I have two questions:
>
> * Do big blobs cause any performance problems for the db as a whole, like
> fragmentation? In particular, is the blob data stored contiguously in the
> file?
>
> * After a big blob is deleted, my understanding is that the db file won’t
> shrink, so the space isn’t reclaimed to the filesystem. Instead the freed
> pages in the db file will be reused later on. If I urgently need to shrink
> the file, do I have to run a full (expensive) vacuum? (The problem with
> vacuum is that it requires lots of free space to work, but blob deletions
> may be happening on behalf of the user, to free up storage when the
> [mobile] device is running out of free space, creating a catch-22…)
>
> * Is there any way to avoid a sqlite3_blob handle being invalidated when
> the blob’s row is deleted (via a DELETE on another db handle?) I’d prefer
> to have behavior like file deletion in Unix, where open file handles keep
> working even after a file is deleted. I’m wondering whether I can
> accomplish this by opening a read transaction before opening the blob, then
> leaving it open until the blob is closed. Since I’m using WAL, will this db
> handle keep a snapshot of the time before the blob’s row was deleted,
> allowing the blob handle to keep working?
>
> Thanks!
>
> —Jens


Re: [sqlite] Questions on big blobs and blob I/O

2017-03-29 Thread Richard Hipp
On 3/29/17, David Raymond  wrote:
> Remember to make the blob field the very last field in your table schema,
> and avoid "select * from", otherwise you can create performance problems.
> The data for a record is stored in the same order as the fields are defined,
> and overflow pages are a linked list. So if you have your 100 MB blob first,
> and your ID second, then you will have to read through all 100 MB of the
> blob to get to your ID.

David is correct.  You will do well to follow his advice.

However, for completeness let me add that sometimes SQLite is able to
avoid walking the whole linked list in order to get to data that lives
at the end.

There is a special optimization in the code, which only works when
auto_vacuum is enabled, that sometimes allows SQLite to skip
over the intermediate pages of a big BLOB and go right to the page at
the end that contains your ID.  Key points are that this optimization
does not work every time and will fall back to walking the list if it
fails, and it never works unless auto_vacuum is turned on.
-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Questions on big blobs and blob I/O

2017-03-29 Thread David Raymond
Remember to make the blob field the very last field in your table schema, and 
avoid "select * from", otherwise you can create performance problems. The data 
for a record is stored in the same order as the fields are defined, and 
overflow pages are a linked list. So if you have your 100 MB blob first, and 
your ID second, then you will have to read through all 100 MB of the blob to 
get to your ID.

For the freeing up of space check out the incremental version of auto vacuum. 
Rather than freeing it all up at once it might be good to run it 
"incrementally" to avoid 1 big performance hit. Maybe have something run every 
so often to free up a set amount of pages, similar to a scheduled WAL 
checkpointer.
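
The column-ordering advice above can be sketched as follows (Python's built-in sqlite3 module; names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Blob column last: reading id/name never walks the blob's overflow chain.
con.execute("CREATE TABLE attachment(id INTEGER PRIMARY KEY, name TEXT, body BLOB)")
con.execute("INSERT INTO attachment(name, body) VALUES (?, ?)",
            ("thumb.jpg", b"\x00" * 1_000_000))
# Name the cheap columns explicitly instead of using SELECT *.
print(con.execute("SELECT id, name FROM attachment").fetchone())  # (1, 'thumb.jpg')
```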


-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Jens Alfke
Sent: Wednesday, March 29, 2017 1:14 PM
To: SQLite mailing list
Subject: [sqlite] Questions on big blobs and blob I/O

I’m thinking of transitioning from external to internal storage of large blobs. 
That is, currently I store these as individual files in a directory next to the 
SQLite database. But it sounds like today’s new improved SQLite is better at 
managing big blobs, and supports streaming reads; and it would simplify my code 
to not have to deal with all those files.

(These blobs are created by a “document attachments” API of the library I work 
on. What’s in them is up to the developer. They’re often fairly small, e.g. a 
thumbnail JPEG, but they could be large media files like movies, in the tens or 
hundreds of megabytes. There’s no upper limit.)

I have two questions:

* Do big blobs cause any performance problems for the db as a whole, like 
fragmentation? In particular, is the blob data stored contiguously in the file?

* After a big blob is deleted, my understanding is that the db file won’t 
shrink, so the space isn’t reclaimed to the filesystem. Instead the freed pages 
in the db file will be reused later on. If I urgently need to shrink the file, 
do I have to run a full (expensive) vacuum? (The problem with vacuum is that it 
requires lots of free space to work, but blob deletions may be happening on 
behalf of the user, to free up storage when the [mobile] device is running out 
of free space, creating a catch-22…)

* Is there any way to avoid a sqlite3_blob handle being invalidated when the 
blob’s row is deleted (via a DELETE on another db handle?) I’d prefer to have 
behavior like file deletion in Unix, where open file handles keep working even 
after a file is deleted. I’m wondering whether I can accomplish this by opening 
a read transaction before opening the blob, then leaving it open until the blob 
is closed. Since I’m using WAL, will this db handle keep a snapshot of the time 
before the blob’s row was deleted, allowing the blob handle to keep working?

Thanks!

—Jens


Re: [sqlite] Questions on big blobs and blob I/O

2017-03-29 Thread Dan Kennedy

On 03/30/2017 12:52 AM, Olivier Mascia wrote:
>> On 29 Mar 2017, at 19:40, Simon Slavin wrote:
>>
>>> Since I’m using WAL, will this db handle keep a snapshot of the time before the
>>> blob’s row was deleted, allowing the blob handle to keep working?
>
> As Simon said wisely, a word from the developers would clarify this better.
>
> Yet, since you're using WAL it should work.  Else the implementation of the
> blob API would not obey the transactional/isolation rules and I suppose it
> would qualify as a (horrible) bug.


Transactions are isolated in SQLite, and in WAL mode readers do not 
block writers. So it will work.


Separate database connections for the reader and writer of course.
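
The snapshot behaviour can be demonstrated with two connections; a sketch using Python's built-in sqlite3 module (a plain SELECT stands in for the blob handle here, since both live inside the same kind of read transaction; whether an open sqlite3_blob handle itself survives the delete is the separate question answered above):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "demo.db")
w = sqlite3.connect(path, isolation_level=None)   # autocommit; explicit BEGIN below
w.execute("PRAGMA journal_mode=WAL")
w.execute("CREATE TABLE b(id INTEGER PRIMARY KEY, data BLOB)")
w.execute("INSERT INTO b VALUES (1, zeroblob(1024))")

r = sqlite3.connect(path, isolation_level=None)
r.execute("BEGIN")                                # deferred read transaction
size = r.execute("SELECT length(data) FROM b").fetchone()[0]  # snapshot taken here
w.execute("DELETE FROM b WHERE id = 1")           # writer deletes and commits
still_there = r.execute("SELECT count(*) FROM b").fetchone()[0]
r.execute("COMMIT")                               # end the read transaction
gone = r.execute("SELECT count(*) FROM b").fetchone()[0]
print(size, still_there, gone)  # 1024 1 0
```

The reader keeps seeing the deleted row until its transaction ends, and the writer is never blocked.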

Dan.




Re: [sqlite] Questions on big blobs and blob I/O

2017-03-29 Thread Olivier Mascia
> On 29 Mar 2017, at 19:40, Simon Slavin wrote:
> 
>> Since I’m using WAL, will this db handle keep a snapshot of the time before 
>> the blob’s row was deleted, allowing the blob handle to keep working?

As Simon said wisely, a word from the developers would clarify this better.

Yet, since you're using WAL it should work.  Else the implementation of the 
blob API would not obey the transactional/isolation rules and I suppose it 
would qualify as a (horrible) bug.

-- 
Best Regards, Meilleures salutations, Met vriendelijke groeten,
Olivier Mascia, http://integral.software




Re: [sqlite] Questions on big blobs and blob I/O

2017-03-29 Thread Simon Slavin

On 29 Mar 2017, at 6:14pm, Jens Alfke  wrote:

> * Do big blobs cause any performance problems for the db as a whole, like 
> fragmentation? In particular, is the blob data stored contiguously in the 
> file?

Blobs are stored in the same pages that other values are stored in.  If a blob 
is bigger than a page it needs to be split, with the continuation on another 
page.  So yes, this can lead to internal fragmentation of the database.

However, with modern storage systems fragmentation causes very little decrease 
in access time.  Access to 'sectors' of solid state storage is not related to 
the order in which the sectors are stored.  And even on rotating storage so much 
use is made of virtual sectors for wear-balancing purposes that logically 
consecutive sectors are often not physically consecutive.  Any advice to users 
to run defragmentation software I have to consider an obsolete hang-over from 
obsolete storage systems, merely causing additional pointless wear.

> * After a big blob is deleted, my understanding is that the db file won’t 
> shrink, so the space isn’t reclaimed to the filesystem. Instead the freed 
> pages in the db file will be reused later on. If I urgently need to shrink 
> the file, do I have to run a full (expensive) vacuum?

All the above is correct, as far as I know.  Pages and parts of pages are not 
released from filespace, they’re just listed as 'free' and will be used when 
SQLite needs more storage space for that database.  In your situation you 
should take a look at



use FULL mode or perhaps use INCREMENTAL mode and perform the incremental 
vacuums as your app quits.  Note that both of these modes are misnamed.  Read 
the descriptions, don’t depend on the names.
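
A sketch of the incremental flavour in practice (Python's built-in sqlite3 module; the mode must be set before the database's first table is created, and page counts below depend on the default page size):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "vac.db")
con = sqlite3.connect(path, isolation_level=None)
con.execute("PRAGMA auto_vacuum=INCREMENTAL")        # must precede first table
con.execute("CREATE TABLE t(x BLOB)")
con.execute("INSERT INTO t VALUES (zeroblob(1000000))")
con.execute("DELETE FROM t")                         # pages go on the freelist...
free_before = con.execute("PRAGMA freelist_count").fetchone()[0]
con.execute("PRAGMA incremental_vacuum(50)").fetchall()  # ...50 returned to the OS
free_after = con.execute("PRAGMA freelist_count").fetchone()[0]
print(free_before, free_after)
```

Spreading such small vacuums over time avoids the one big pause of a full VACUUM while still shrinking the file.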

> * Is there any way to avoid a sqlite3_blob handle being invalidated when the 
> blob’s row is deleted (via a DELETE on another db handle?) I’d prefer to have 
> behavior like file deletion in Unix, where open file handles keep working 
> even after a file is deleted. I’m wondering whether I can accomplish this by 
> opening a read transaction before opening the blob, then leaving it open 
> until the blob is closed. Since I’m using WAL, will this db handle keep a 
> snapshot of the time before the blob’s row was deleted, allowing the blob 
> handle to keep working?

I think that the answer here is negative (unless you play C-type pointer games 
with memory blocks) but I have to let the devs answer this one.
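On the WAL half of the question: for ordinary queries, a read transaction in
WAL mode does keep a pre-delete snapshot, as this sketch shows (Python stdlib
sqlite3; the file and table names are invented). Whether an open sqlite3_blob
handle survives the deletion is the part that needs a developer's answer.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "wal.db")

writer = sqlite3.connect(path, isolation_level=None)  # autocommit mode
writer.execute("PRAGMA journal_mode = WAL")
writer.execute("CREATE TABLE att(id INTEGER PRIMARY KEY, body BLOB)")
writer.execute("INSERT INTO att VALUES (1, zeroblob(100))")

reader = sqlite3.connect(path, isolation_level=None)
reader.execute("BEGIN")  # deferred: the snapshot is taken on first read
n = reader.execute("SELECT count(*) FROM att").fetchone()[0]  # snapshot here

writer.execute("DELETE FROM att WHERE id = 1")  # commits immediately

# The reader's snapshot still contains the row the writer just deleted.
still_there = reader.execute(
    "SELECT length(body) FROM att WHERE id = 1").fetchone()
reader.execute("COMMIT")

# A fresh read transaction sees the deletion.
gone = reader.execute("SELECT count(*) FROM att").fetchone()[0]
print(n, still_there, gone)
```

Note the snapshot guarantee here is only demonstrated for SELECTs on a second
connection; it does not by itself prove anything about blob handles.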

Simon.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Questions on big blobs and blob I/O

2017-03-29 Thread Jens Alfke
I’m thinking of transitioning from external to internal storage of large blobs. 
That is, currently I store these as individual files in a directory next to the 
SQLite database. But it sounds like today’s new improved SQLite is better at 
managing big blobs, and supports streaming reads; and it would simplify my code 
to not have to deal with all those files.

(These blobs are created by a “document attachments” API of the library I work 
on. What’s in them is up to the developer. They’re often fairly small, e.g. a 
thumbnail JPEG, but they could be large media files like movies, in the tens or 
hundreds of megabytes. There’s no upper limit.)

I have two questions:

* Do big blobs cause any performance problems for the db as a whole, like 
fragmentation? In particular, is the blob data stored contiguously in the file?

* After a big blob is deleted, my understanding is that the db file won’t 
shrink, so the space isn’t reclaimed to the filesystem. Instead the freed pages 
in the db file will be reused later on. If I urgently need to shrink the file, 
do I have to run a full (expensive) vacuum? (The problem with vacuum is that it 
requires lots of free space to work, but blob deletions may be happening on 
behalf of the user, to free up storage when the [mobile] device is running out 
of free space, creating a catch-22…)

* Is there any way to avoid a sqlite3_blob handle being invalidated when the 
blob’s row is deleted (via a DELETE on another db handle?) I’d prefer to have 
behavior like file deletion in Unix, where open file handles keep working even 
after a file is deleted. I’m wondering whether I can accomplish this by opening 
a read transaction before opening the blob, then leaving it open until the blob 
is closed. Since I’m using WAL, will this db handle keep a snapshot of the time 
before the blob’s row was deleted, allowing the blob handle to keep working?

Thanks!

—Jens


[sqlite] Questions about SQLITE_CONFIG_SCRATCH and SQLITE_CONFIG_PAGECACHE

2015-01-29 Thread Dominique Pellé
Hi

1) Question about SQLITE_CONFIG_SCRATCH

In SQLite documentation about SQLITE_CONFIG_SCRATCH,
I read:

=== BEGIN QUOTE https://sqlite.org/c3ref/c_config_getmalloc.html ===
   SQLite will never require a scratch buffer that is more
   than 6 times the database page size. If SQLite needs
   additional scratch memory beyond what is provided by this
   configuration option, then sqlite3_malloc() will be used to
   obtain the memory needed.
=== END QUOTE ===

I stumbled upon code where the scratch buffer size is configured
to only 10 KB, yet some DBs have 16 KB page sizes:

const int KSize = 10*1024;
const int KBufferCount = 8;
static uint64_t sqliteScratchBuffer[KSize*KBufferCount/sizeof(uint64_t)];

status = sqlite3_config(
   SQLITE_CONFIG_SCRATCH,
   &sqliteScratchBuffer[0],
   KSize,
   KBufferCount);

Is it safe to have only 10KB of scratch buffer when the DB page size
can be 16KB?  Is it ideal?  I can't find guidelines about configuring
SQLITE_CONFIG_SCRATCH.

What happens if the scratch buffer is bigger than 6*page size?
Would memory just be wasted?  (since the doc says it never allocates
more than 6*page size)


2) Question about SQLITE_CONFIG_PAGECACHE

In order to reduce the number of malloc calls, I consider
configuring SQLITE_CONFIG_PAGECACHE with
a static buffer. However, the application opens multiple
databases with various page sizes (1KB, 4KB, 8KB, 16KB).

So what happens if I do, for example:

   // Max DB page size is 16KB. SQLite doc says to add 40 bytes for page header.
   const int KPageSize = 16*1024 + 40;
   const int KPageCount= 512;
   static uint64_t sqlitePageCache[KPageSize*KPageCount/sizeof(uint64_t)];

   status = sqlite3_config(
  SQLITE_CONFIG_PAGECACHE,
  &sqlitePageCache[0],
  KPageSize,
  KPageCount);

Will SQLite use 16KB (=KPageSize) of that buffer to store each page,
even for the DBs whose page size is only 1KB, 4KB or 8KB?
If so, it will waste memory for the smaller pages, and
SQLITE_CONFIG_PAGECACHE does not look like a good
idea in such a case.

Regards
Dominique


Re: [sqlite] Questions from a novice - basic browsing of records ina listview.

2014-07-12 Thread RSmith


On 2014/07/12 14:26, - wrote:

Hello Ryan,

Thanks for your response.   I was writing a lengthy reply when I realized
that most of what I said in it were repetitions of what I have already said
earlier, so I deleted it.

To be honest, it's quite possible that I currently just can't wrap my head
around the non-strict way SQLite seems to work (it's not really what I'm
accustomed to)  ...


Hi Rudy,

The response is a pleasure and, as to wrapping your head around it, we'll gladly assist. I do not mean to reiterate things 
either, but you added some inaccurate statements, so please allow me to be (once more) very clear - SQLite is in no way "non-strict". Maybe 
you meant non-constrictive, or unconstrained? The limits are wide and are meant to accommodate as many use-cases as possible, but they are 
very strict: you cannot slip one extra column through above the limit, as an example.


I understand the impulse toward compartmentalized absolutes, but it isn't viable. You have no problem understanding the road laws; 
they have limits too, but there are exceptions everywhere. You must drive in a certain lane and your car is not allowed to be wider 
than a lane, except, for instance, when a truck is delivering a cooling tower that's 3 lanes wide; then we use special escorts and 
traffic-control interventions to move the load. There is, however, no point in making those exceptions part of standard 
road-rule studies when teaching a teenager to drive - a good thing too, because the list of possible and even plausible exceptions 
would dwarf the Library of Congress.


SQL is the road-system and you are the town-planner. It is your job to figure out the use-case and model the limits to suit it. It 
is SQLite's task to attempt supplying SQL data interfaces for whatever that mold transpires to be.


If you are used to anything less, then please consider this new horizon a step 
in the right direction. :)




Re: [sqlite] Questions from a novice - basic browsing of records ina listview.

2014-07-12 Thread -
Hello Ryan,

Thanks for your response.   I was writing a lengthy reply when I realized
that most of what I said in it were repetitions of what I have already said
earlier, so I deleted it.

To be honest, it's quite possible that I currently just can't wrap my head
around the non-strict way SQLite seems to work (it's not really what I'm
accustomed to)  ...

Regards,
Rudy Wieser


- Original Message -
From: RSmith <rsm...@rsweb.co.za>
To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
Sent: Friday, July 11, 2014 2:24 AM
Subject: Re: [sqlite] Questions from a novice - basic browsing of records
ina listview.


>
> >
> >> What if the table has 10^16 or more items?
> > Is that number within the limits as set by SQLite3 ?  Then it's my
> > intention to handle it.
>
> No, I just made that up out of thin air. SQLite's maximum row limit is
2^63-1 I believe. It is unreachable on current physical media.
>
> > Oh, there are some there, like a SQLITE_MAX_SQL_LENGTH of (max) a gig,
and a SQLITE_MAX_COLUMN of (max) 32767. Together that would
> > mean a maximum column-name length of about 32768 chars//
>
> No, it does not mean that at all. Your inference is not only wrong but
also unneeded, meaning that you are imagining relationships
> where there are none. The limits for max sql length and max column are
very clear, and in no way and by no virtue does it imply that
> the one informs the other. Why do you imagine that this is necessarily so?
>
> To be clear - when you go inside an elevator - you might see a weight
limit stated as "500Kg / 13 Persons". This does not mean the
> limit per person is 500/13=38Kg at all, and there is no reason in the
known universe to imagine that it does. (Good thing too cause
> I won't ever get to use the elevator). It does mean that even if you find
an array of 20Kg children, you still cannot pack more than
> 13 in there, and if you have 5 really big (100Kg+) people hopping on ahead
of you, best to wait for the next one. The limit
> statement is not fuzzy.
>
> > ...//Also, the above SQLITE_MAX_COLUMN can be changed at *compile* time.
What am I supposed to do with my regular API (max 2000
> > columns) when encountering a database made with such an altered API ?
Will it even cause troubles at all ? *That* is what I mean
> > with "vague or fuzzy".
>
> If this was true it would be reason for concern - but the limit you refer
to is a SQL language construct limit, not a table-width
> limit so the worry is not justified - but In a query one might be able to
dream up a set of extracted columns that tops the query
> limit easily even if the underlying tables only sports a few columns. For
this you will need to plan. The paradigm here is to cater
> for what is relevant. Very few people make queries longer than a few
columns, but they might. It's a kind of bell curve, and if the
> users are those lying at the 6th+ standard deviation of column-count
requirements, chances are they will have compiled their own
> sqlite version by now, and if they did not, ask them politely to adhere to
whichever limit you picked. There is no SQLite-imposed
> hard limit (other than the physical); instead, it supports the wide gamut
of needs that cover the 99% bulk, and people with special
> needs roll their own (using the various compiler directives and the like).
>
> You don't even need to check this, SQLite will do it for you. Send a query
with 101 columns, it will return a
> DUDE_THATS_TOO_MANY_COLUMNS result instead, the (real) codes being
available in the documentation. But if you like there is nothing
> wrong with setting your own maximum columns and doing your own checking,
but the ability doesn't mean SQLite is soft in the head -
> just that it isn't restrictive and you have some freedom of design.
>
> >> most all of them can be adjusted to suit folks like yourself
> >> who wish to push the limits.
> > I'm afraid you misunderstood. Its not my intention to push those limits,
but
> > I'll be damned if I let my intended basic database-browser refuse to
display
> > a legal table because *I* could not find a way to handle it.
>
> Not misunderstood, just a bit tongue-in-cheek, but the nuance probably
misplaced, I apologise.
>
> > Maybe in the end I still have to concede defeat and use a few arbitrary
> > limits, taking the risk the program cannot handle every table. But not
> > before I tried my d*mn best not to let that happen. :-)
>
> There is no such risk. SQLite will open each and every valid table in
existence, and you can query it so long as the query itself
> conforms. In this regard you are quite justified to fear a table with 200
> columns and you have a 99-col query limit, so you won't be
> able to ask for every column by name, thou

Re: [sqlite] Questions from a novice - basic browsing of records in a listview.

2014-07-10 Thread RSmith





What if the table has 10^16 or more items?

Is that number within the limits as set by SQLite3 ?  Then it's my intention
to handle it.


No, I just made that up out of thin air. SQLite's maximum row limit is 2^63-1 I 
believe. It is unreachable on current physical media.

Oh, there are some there, like a SQLITE_MAX_SQL_LENGTH of (max) a gig, and a SQLITE_MAX_COLUMN of (max) 32767. Together that would 
mean a maximum column-name length of about 32768 chars//


No, it does not mean that at all. Your inference is not only wrong but also unneeded, meaning that you are imagining relationships 
where there are none. The limits for max sql length and max column are very clear, and in no way and by no virtue does it imply that 
the one informs the other. Why do you imagine that this is necessarily so?


To be clear - when you go inside an elevator - you might see a weight limit stated as "500Kg / 13 Persons". This does not mean the 
limit per person is 500/13=38Kg at all, and there is no reason in the known universe to imagine that it does. (Good thing too cause 
I won't ever get to use the elevator). It does mean that even if you find an array of 20Kg children, you still cannot pack more than 
13 in there, and if you have 5 really big (100Kg+) people hopping on ahead of you, best to wait for the next one. The limit 
statement is not fuzzy.


...//Also, the above SQLITE_MAX_COLUMN can be changed at *compile* time. What am I supposed to do with my regular API (max 2000 
columns) when encountering a database made with such an altered API ? Will it even cause troubles at all ? *That* is what I mean 
with "vague or fuzzy".


If this was true it would be reason for concern - but the limit you refer to is a SQL language construct limit, not a table-width 
limit so the worry is not justified - but In a query one might be able to dream up a set of extracted columns that tops the query 
limit easily even if the underlying tables only sport a few columns. For this you will need to plan. The paradigm here is to cater 
for what is relevant. Very few people make queries longer than a few columns, but they might. It's a kind of bell curve, and if the 
users are those lying at the 6th+ standard deviation of column-count requirements, chances are they will have compiled their own 
sqlite version by now, and if they did not, ask them politely to adhere to whichever limit you picked. There is no SQLite-imposed 
hard limit (other than the physical); instead, it supports the wide gamut of needs that cover the 99% bulk, and people with special 
needs roll their own (using the various compiler directives and the like).


You don't even need to check this, SQLite will do it for you. Send a query with 101 columns, it will return a 
DUDE_THATS_TOO_MANY_COLUMNS result instead, the (real) codes being available in the documentation. But if you like there is nothing 
wrong with setting your own maximum columns and doing your own checking, but the ability doesn't mean SQLite is soft in the head - 
just that it isn't restrictive and you have some freedom of design.



most all of them can be adjusted to suit folks like yourself
who wish to push the limits.

I'm afraid you misunderstood. Its not my intention to push those limits, but
I'll be damned if I let my intended basic database-browser refuse to display
a legal table because *I* could not find a way to handle it.


Not misunderstood, just a bit tongue-in-cheek, but the nuance probably 
misplaced, I apologise.


Maybe in the end I still have to concede defeat and use a few arbitrary
limits, taking the risk the program cannot handle every table. But not
before I tried my d*mn best not to let that happen. :-)


There is no such risk. SQLite will open each and every valid table in existence, and you can query it so long as the query itself 
conforms. In this regard you are quite justified to fear a table with 200 columns and you have a 99-col query limit, so you won't be 
able to ask for every column by name, though * will still work.  One might say that 99 columns is more than any user might want to 
or be able to really look at... but if you disagree (and even I disagree, I think probably 200 is closer to a sensible human limit), 
then you might simply decide what it is that you feel would be the most anyone can usefully observe in a query, say 500 if you like, 
or 1000, and make that your limit. And then, if ever a query needs more than those, split it into 2 queries. SQL DBAs are not as 
dumb as you might think, they are very used to working within limits. (Or maybe your intended user is of a different mindset, but 
that is up to you to figure out).



Well, I wanted to start with browsing. Adding selections to limit the output (possibly also hiding interesting entries!) could 
come later. The idea behind that is that while browsing you might find stuff that looks interesting, something that could easily 
get hidden when limiting the output (using a LIKE clause) 


Ok, I'm 

Re: [sqlite] Questions from a novice - basic browsing of records in a listview.

2014-07-10 Thread Clemens Ladisch
- wrote:
>> Are you using a list view, or paging?  The scrolling
>> cursor method is appropriate only for the latter.
>
> I'm using a report-style listview.  And to be honest, I have no idea what a
> "paging" component looks like (I'm using standard windows components).

This would be a list without scroll bar but with previous/next page
buttons; mostly used in HTML pages.

(Without prev/next, you do not have the previous row whose values you
could use as basis for fetching the next rows.)

>> The easiest way to handle a list view would be to
>> read the primary key of _all_ records in the correct
>> order into your application.
>
> True.  But as its my intention to create a generic SQLite3 viewer I have no
> idea if the computer its used on will have enough memory to store such a
> list, as the size of the database is unknown 

When you have a table with millions of rows, and the user happens to
scroll to the 1234567th row, how do you get that row without knowing its
rowid or primary key?

And for huge tables, "browsing" does not make sense.  Even if there were
some interesting value in some row, you wouldn't be able to find it.

> What I was thinking about was something in the line of "continue/start from
> rowID {ID}".

This is possible (if the table was not declared with WITHOUT ROWID, and
if there are no other columns named rowid, _rowid_, or oid).  However,
in a list view, you are not guaranteed to know the rowid to start from.
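For what it's worth, the "continue from rowid" idea looks like this in
practice - a minimal keyset-pagination sketch using Python's stdlib sqlite3
(the table contents and page size are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO t(name) VALUES (?)",
                 [("row%03d" % i,) for i in range(95)])

PAGE_SIZE = 10
last_id = 0          # the "continue from rowid" cursor
pages = []
while True:
    rows = conn.execute(
        "SELECT id, name FROM t WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, PAGE_SIZE)).fetchall()
    if not rows:
        break
    pages.append(rows)
    last_id = rows[-1][0]        # remember where this page ended

print(len(pages))  # 95 rows in pages of 10 -> 10 pages, the last one short
```

Unlike OFFSET, each page costs one indexed seek regardless of how deep into
the table you are, which is why it scales to very large tables - but, as
noted above, it only works while you know the rowid the previous page ended
on.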


Regards,
Clemens


Re: [sqlite] Questions from a novice - basic browsing of records in a listview.

2014-07-10 Thread -
n to do it exactly like
> that in a next version,

Which is why I'm attempting to do it "the right way".  ... Which then brought
me into collision with vague limits.

> I hope this helps to alleviate your headaches slightly.

Not really, but I'm going to try to digest it.

Thanks for your help.

> Have a great day!

And the same to you.

Regards,
Rudy Wieser


- Original Message -
From: RSmith <rsm...@rsweb.co.za>
To: <sqlite-users@sqlite.org>
Sent: Thursday, July 10, 2014 5:31 PM
Subject: Re: [sqlite] Questions from a novice - basic browsing of recordsina
listview.


>
> On 2014/07/10 16:04, - wrote:
> >> You could set a very big maximum (e.g. 5000 rows) on
> >> the assumption that users will never actually read or scroll
> >> through that many rows.
> > :-)  In that case I would simply use a simple listview (a listview can
> > handle upto 32000 records), and see if I can use its lazy data retrieval
> > method (never used that method, but there is a first time for
everything).
> >
> > Apart from the problems I would need to solve by *not* imposing some
> > arbitrary limit (and learn from that of course), it was/is my intention to be
> > able to fully browse a table no matter how long (up to the limits set by
> > SQLite3, although those seem to be quite vague).
>
> Hi Rudy,
>
> Firstly, while I understand the notion of not wanting arbitrary limits -
this is just a silly notion when placed under scrutiny.
> What is "enough"? How long is a piece of string?
> The notion of "no matter how long" is just not feasible. What if the table
has 10^16 or more items? (This is more than the amount of
> stars in the known universe, so it's probably not likely, but what if it
is?) You would need a cray or something to even access a
> list like that. Limits are inherent and the best practice is to start out
with a very specific limit-universe in mind.
>
> Further to this - the limits are very specific in SQLite and not vague or
fuzzy at all - though most all of them can be adjusted to
> suit folks like yourself who wish to push the limits.
>
> Lastly, more pertinent to the question - yes, if you have any kind of
dataset/table which is larger than what is comfortable for the
> intended devices in a full-load scenario, the best (but not only) other
solution is lazy-retrieval, which you already seem familiar
> with so I would advise to go with that. I have an example system if you
like to see which can display insane amounts of data at
> lightning speed using just this sort of convention, but it still is
limited to 2^63-1 items, a limit which approaches the total
> number of atoms on earth (~ 1.3 x 10^51) and as such is unlikely to ever
span a dataset small enough to fit in physical memory of
> any size which are made from atoms available on earth - but it still is a
limit.  It still gets a bit sticky after about a billion
> items and more importantly, as Simon alluded to, it is silly to "display"
any list which is so long that it cannot possibly be read
> by a human - what would be the purpose of that?  And as other posters
alluded to, human readability diminishes very long before the
> listing abilities of even mediocre modern systems.
>
> > Shucks! That means that I need to use the column names in their full
glory, no matter how long they are. And as far as I can tell
> > they can be *long* (upto 2 or 4 gigs?), and could, with a few columns
easily exhaust the specified buffer size for a query (IIRC,
> > 110 KByte).
>
> It's ~2 giga-characters for a Unicode-enabled string. Anyone who makes
column names that long has issues that can only be solved by
> a medical doctor, not an SQL engine. (and as such SQLite or any other
engine does not really go to lengths to specifically cater for
> handling those length column names efficiently - the same reason they do
not care to cater for saving 256-bit integers natively,
> even though those numbers are sometimes used by people).
>
> Do not confuse system design allowances with actual usage conventions. If
you make an app that counts how many hamburgers one eats
> in a day, you do not need to make the display counter wide enough to fit
2^32 digits, even if that is the system design limit, I
> think we can safely assume no physical human of the kind that traverse the
Earth will top a 4-digit number, even in America.
>
> Moral of the story: Pick a limit and build the system according to that.
>
>
> > Yeah, thats another funny thing. To be *sure* about the order of the
columns, how the full record is sorted and from which record
> > the "rolling cursor" should continue you're sending the same order of
columns 3 times in one query ...
>
> How is that a funny thing? The SQL Engine is under obligat

Re: [sqlite] Questions from a novice - basic browsing of records ina listview.

2014-07-10 Thread RSmith

Quick typo/fact check:

...// What if the table has 10^16 or more items? (This is more than the amount 
of stars in the known universe//...

should of course read:

...// What if the table has 10^24 or more items? (This is more than the amount 
of stars in the observable universe//...





Re: [sqlite] Questions from a novice - basic browsing of records ina listview.

2014-07-10 Thread RSmith


On 2014/07/10 16:04, - wrote:

You could set a very big maximum (e.g. 5000 rows) on
the assumption that users will never actually read or scroll
through that many rows.

:-)  In that case I would simply use a simple listview (a listview can
handle up to 32000 records), and see if I can use its lazy data retrieval
method (never used that method, but there is a first time for everything).

Apart from the problems I would need to solve by *not* imposing some
arbitrary limit (and learn from that of course), it was/is my intention to be
able to fully browse a table no matter how long (up to the limits set by
SQLite3, although those seem to be quite vague).


Hi Rudy,

Firstly, while I understand the notion of not wanting arbitrary limits - this is just a silly notion when placed under scrutiny. 
What is "enough"? How long is a piece of string?
The notion of "no matter how long" is just not feasible. What if the table has 10^16 or more items? (This is more than the amount of 
stars in the known universe, so it's probably not likely, but what if it is?) You would need a cray or something to even access a 
list like that. Limits are inherent and the best practice is to start out with a very specific limit-universe in mind.


Further to this - the limits are very specific in SQLite and not vague or fuzzy at all - though most all of them can be adjusted to 
suit folks like yourself who wish to push the limits.


Lastly, more pertinent to the question - yes, if you have any kind of dataset/table which is larger than what is comfortable for the 
intended devices in a full-load scenario, the best (but not only) other solution is lazy-retrieval, which you already seem familiar 
with so I would advise to go with that. I have an example system if you like to see which can display insane amounts of data at 
lightning speed using just this sort of convention, but it still is limited to 2^63-1 items, a limit which approaches the total 
number of atoms on earth (~ 1.3 x 10^51) and as such is unlikely to ever span a dataset small enough to fit in physical memory of 
any size which are made from atoms available on earth - but it still is a limit.  It still gets a bit sticky after about a billion 
items and more importantly, as Simon alluded to, it is silly to "display" any list which is so long that it cannot possibly be read 
by a human - what would be the purpose of that?  And as other posters alluded to, human readability diminishes very long before the 
listing abilities of even mediocre modern systems.


Shucks! That means that I need to use the column names in their full glory, no matter how long they are. And as far as I can tell 
they can be *long* (upto 2 or 4 gigs?), and could, with a few columns easily exhaust the specified buffer size for a query (IIRC, 
110 KByte). 


It's ~2 giga-characters for a Unicode-enabled string. Anyone who makes column names that long has issues that can only be solved by 
a medical doctor, not an SQL engine. (and as such SQLite or any other engine does not really go to lengths to specifically cater for 
handling those length column names efficiently - the same reason they do not care to cater for saving 256-bit integers natively, 
even though those numbers are sometimes used by people).


Do not confuse system design allowances with actual usage conventions. If you make an app that counts how many hamburgers one eats 
in a day, you do not need to make the display counter wide enough to fit 2^32 digits, even if that is the system design limit, I 
think we can safely assume no physical human of the kind that traverse the Earth will top a 4-digit number, even in America.


Moral of the story: Pick a limit and build the system according to that.


Yeah, that's another funny thing. To be *sure* about the order of the columns, how the full record is sorted and from which record 
the "rolling cursor" should continue you're sending the same order of columns 3 times in one query ... 


How is that a funny thing? The SQL Engine is under obligation to return EXACTLY what you ASK for, or return all results when you do 
not ask for a specific set. It does this every time, without fail. It is under no obligation to return a specific form or format when 
you don't ask for any, and more importantly, this behaviour is not a silly design quirk or oversight of the SQL engine specification 
or the designers of any SQL engine, it is specifically required to avoid spending system resources on nonsense ordering and 
formatting and the like when in 99+ % of cases it is not needed at all by the reading construct. It has to tie up though, if you 
specified the columns in a specific order. This is WHY you specify the columns, and don't worry, it is all cached very nicely inside 
SQLite, there is negligible overhead for it.


Guess what: for a simple table query that's probably the most-used selection. :-( Currently I'm assuming that querying a table will 
keep the results in order of the columns in the table. Up until 

Re: [sqlite] Questions from a novice - basic browsing of records ina listview.

2014-07-10 Thread -
Hello Simon,

> You could set a very big maximum (e.g. 5000 rows) on
> the assumption that users will never actually read or scroll
> through that many rows.

:-)  In that case I would simply use a simple listview (a listview can
handle up to 32000 records), and see if I can use its lazy data retrieval
method (never used that method, but there is a first time for everything).

Apart from the problems I would need to solve by *not* imposing some
arbitrary limit (and learn from that of course), it was/is my intention to be
able to fully browse a table no matter how long (up to the limits set by
SQLite3, although those seem to be quite vague).

> It depends on what interface or shim you're using to access your database.

I'm programming directly against the SQLite3 DLL, mostly using
"sqlite3_prepare".

> ... there's no language in SQL to say things like 'column 3'
> and most APIs don't supply it.

Shucks!  That means that I need to use the column names in their full glory,
no matter how long they are.   And as far as I can tell they can be *long*
(up to 2 or 4 gigs?), and could, with a few columns, easily exhaust the
specified buffer size for a query (IIRC, 110 KByte).

> On the other hand, if you were referring to the results
> of a SELECT, then results are always returned in the
> order you asked for them

Yeah, that's another funny thing.  To be *sure* about the order of the
columns, how the full record is sorted and from which record the "rolling
cursor" should continue you're sending the same order of columns 3 times in
one query ...

> (apart from *)

Guess what: for a simple table query that's probably the most-used selection.
:-(

Currently I'm assuming that querying a table will keep the results in order
of the columns in the table.  Up until now that seems to be true.
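That assumption can be checked directly through the API: each prepared
statement reports its own column names in result order (sqlite3_column_name()
in C; cursor.description in the Python sketch below, which uses an invented
table), and SELECT * expands to the table's declared column order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(a INT, b INT, c INT)")
conn.execute("INSERT INTO t VALUES (1, 2, 3)")

star = conn.execute("SELECT * FROM t")
named = conn.execute("SELECT c, a FROM t")

# Column names come back in the order the statement produces them.
print([d[0] for d in star.description])   # ['a', 'b', 'c']  (declared order)
print([d[0] for d in named.description])  # ['c', 'a']       (requested order)
```

So a generic browser never has to guess: it can read the result's own column
list instead of assuming anything about the table.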

Man, trying to understand the reasons the designers of the SQL language did
certain things in a certain way gives me a headache. :-\

Regards,
Rudy Wieser


- Original Message -
From: Simon Slavin <slav...@bigfraud.org>
To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
Sent: Thursday, July 10, 2014 2:17 PM
Subject: Re: [sqlite] Questions from a novice - basic browsing of records
ina listview.


>
> On 10 Jul 2014, at 12:54pm, - <mz2n6u7c.temp...@xs4all.nl> wrote:
>
> > But it might cause another problem:  the database could get/be so large that
> > the amount of memory needed to store all the rowIDs in could well exceed
> > the amount of memory available to the program.
> > I could of course use (or at that moment switch over to) a local file
> > (database!) to store them in, but somehow that feels a bit odd.
>
> You could set a very big maximum (e.g. 5000 rows) on the assumption that
users will never actually read or scroll through that many rows.  Use LIMIT
5000 and if you actually get 5000 rows returned put up a messages telling
them if the row they want doesn't appear they should be more specific in
their query.  Or some other cop-out.
>
> >> At this point you care only about column values and you
> >> never need to use SQL to scroll around in a table,
> >
> > It also restores the use of the listview's own slider to move about in the
> > list (do away with the "page up", "page down" buttons).  I must say I like
> > that.
>
> We're all still adapting to the changes that the excellent GUIs and
> ridiculous speed of today's computers require.  I now have trivial little
> systems which reflect in realtime changes made by other users and in other
> windows, just because I needed to write those library routines for my 'big'
> systems.
>
> > Lastly, the whole of the above ignores systems where the
> >> user (or another user !) may insert or delete a row that was
> >> in your foundset in another window, while it's being shown
> >
> > I also thought of that, but wasn't prepared to think about the consequences
> > (good or bad) before the preceding problems were solved.   My intended
> > approach to it was to add a "reload" button/key (F5) for it.
>
> You can deal with cases where the rowid no longer exists (as long as you
> do correctly test for it).  But yes, spotting new rows is harder.
>
> > Thanks for the full (explanation, hints) reply.
>
> The clarity of your question suggested that a long answer would be read
> and understood.
>
> > Remark: I've not seen any reference in your reply to my first question where
> > I wondered if it would be possible to refer (in queries) to columns other
> > than by their full names. Must I assume it's not possible ?
>
> Just that I didn't understand the question well enough to answer it and
> was hoping someone else did.
>
> It depends on what interface or shim you're using to access your database.
Alt

Re: [sqlite] Questions from a novice - basic browsing of records in a listview.

2014-07-10 Thread -
Hello Clemens,

> Are you using a list view, or paging?  The scrolling
> cursor method is appropriate only for the latter.

I'm using a report-style listview.  And to be honest, I have no idea what a
"paging" component looks like (I'm using standard windows components).

> The easiest way to handle a list view would be to
> read the primary key of _all_ records in the correct
> order into your application.

True.  But as it's my intention to create a generic SQLite3 viewer, I have no
idea if the computer it's used on will have enough memory to store such a
list, as the size of the database is unknown ...

> Only data in those columns that you are using for sorting.
> (But those must be a unique key for the records.)

:-) As I have no control over that in a generic viewer, it means I need to
send *all* columns back, *in full* (and yes, I see blobs creating a problem
there). :-\
Otherwise I can either get stuck (>=) or skip records (>) when the WHERE
field contains more than a page's worth of the same data.   Already ran into
that ...
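
The stuck/skip problem above disappears once the ORDER BY gets a unique
tie-breaker such as rowid.  A minimal sketch in Python (the table, column
names and data are invented for illustration; it assumes non-NULL TEXT
values, and any unique key would work in place of rowid):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (name TEXT)")
con.executemany("INSERT INTO people (name) VALUES (?)",
                [("Ann",), ("Ann",), ("Ann",), ("Bob",), ("Cid",)])

def next_page(last_name, last_rowid, page_size=2):
    # Order by the sort column *plus* rowid, and compare on the pair:
    # duplicates in the sort column can then never make the cursor
    # get stuck (>=) or skip rows (>).
    return con.execute(
        """SELECT rowid, name FROM people
           WHERE name > ? OR (name = ? AND rowid > ?)
           ORDER BY name, rowid
           LIMIT ?""",
        (last_name, last_name, last_rowid, page_size)).fetchall()

page1 = next_page("", 0)    # seed with values that sort below any real row
page2 = next_page(page1[-1][1], page1[-1][0])
print(page1, page2)
```

The key point is that the WHERE clause and the ORDER BY use the same
column pair, so the "continue after this exact row" position is unambiguous
even when the sort column repeats for a whole page.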

> > 1) Is it possible to refer to the columns in a kind of shorthand
> > (index perhaps) ?
>
> No.

That's (too) bad.

> Compiled statements can be reused (and the SQLite database
> drivers of many languages have a statement cache).

How do I refer to a previously executed (and finalized) statement ?   If
that is not possible, how is that cache of use to whoever needs to repeat a
query ?

> However, this is unlikely to be a bottleneck.

It's not a bottleneck I'm worried about; it is having to cope with a
system/method/environment which demands that I do/send the same thing every
time I need something from it, or that I return data I just got from it.
It just bellows inefficiency to me.

> No.  But SQLite has no client/server communication overhead.

I'm sorry, but I have no idea why you mention that overhead.

The "overhead" I was thinking of is the one where the database has to
re-find a record it has just found and send me the contents of.  Again,
inefficiency.   Another "overhead" is my program having to keep track of
(possibly large amounts of) data, only so I can send it back (a standard
listview only accepts up to, IIRC, 260 chars and discards the rest).

What I was thinking about was something along the lines of "continue/start
from rowID {ID}".

Regards,
Rudy Wieser


- Original Message -
From: Clemens Ladisch <clem...@ladisch.de>
To: <sqlite-users@sqlite.org>
Sent: Wednesday, July 09, 2014 4:15 PM
Subject: Re: [sqlite] Questions from a novice - basic browsing of records
in a listview.


> - wrote:
> > After having used the OFFSET and LIMIT 1 method (in conjunction with a
> > userdata listview) and finding a past post into this forum describing it
as
> > a rookie mistake I'm now trying to implement the "scrolling cursor"
method
> > in that same post.
>
> Are you using a list view, or paging?  The scrolling cursor method
> is appropriate only for the latter.
>
> The easiest way to handle a list view would be to read the primary key of
> _all_ records in the correct order into your application.
>
> If the amount of data isn't too large, OFFSET/LIMIT works just fine.
>
> > For the above method to work for any database it means I need, for
> > each-and-every next/previous page request, to send *all* the bottom/top
> > records data back to the SQLite engine so it knows where to continue.
>
> Only data in those columns that you are using for sorting.  (But those
> must be a unique key for the records.)
>
> > 1) Is it possible to refer to the columns in a kind of shorthand (index
> > perhaps) ?
>
> No.
>
> > 2) Is it possible to have the SQLite engine initialize and remember
certain
> > WHERE and ORDER clauses (without creating another database please :-) ),
so
> > they can be used again-and-again (for the duration of a connection).
>
> Compiled statements can be reused (and the SQLite database drivers of
> many languages have a statement cache).
>
> However, this is unlikely to be a bottleneck.
>
> > 3) Is it possible, for the above 'scrolling cursor' method, to refer to
a
> > starting record other than by sending the exact data of such a record
back
> > to the SQLite engine ?
>
> No.  But SQLite has no client/server communication overhead.
>
>
> Regards,
> Clemens
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] Questions from a novice - basic browsing of records in a listview.

2014-07-10 Thread Teg
The OP is on Windows. Windows can send you display cache hints that tell
you what page it intends to display next. I use these notifications to
load up a page's worth of data at a time and keep it in an internal cache.

>> the amount of memory needed to store all the rowIDs in could well exceed
>> the amount of memory available to the program.

You're talking about 2 gigs in 32-bit Windows and virtually unlimited
in 64-bit Windows. I'd say this is a non-issue.

You get the list of rowids, then "SetItemCount" to tell the list
control how big it is. Then the user just scrolls/pages up/down and
resizes at will. Windows handles the rest: it sends you
notifications about what data it needs to display and cache hints
telling you what data it intends to display in the future.

You can use the same technique in non-Windows environments like
Android. The details change but the technique works there too.

SS> You could set a very big maximum (e.g. 5000 rows)

Using the technique discussed here, keeping a list of rowids to seed
the virtual control, I've had virtual list controls with millions of
lines of records. It's slowish when I get over about 500K but
functional. It's not really practical to scroll this data but it
works, and the RAM usage isn't really over the top. I don't consider
using a couple hundred megs for a list control to be unreasonable if
that's what the application calls for. Most PCs have more RAM than
they can ever use.

I think you hit it on the head when you suggest that most programmers are
mired in the memory-limited days. Even Android phones can handle
1000's of records in a list control.

If it's a list control with only 5000 records, you might be better off
loading the entire thing into memory. For my usage, that's a really
small list.


SS> On 10 Jul 2014, at 12:54pm, -  wrote:

>> But it might cause another problem:  the database could get/be so large that
>> the amount of memory needed to store all the rowIDs in could well exceed
>> the amount of memory available to the program.
>> I could of course use (or at that moment switch over to) a local file
>> (database!) to store them in, but somehow that feels a bit odd.

SS> You could set a very big maximum (e.g. 5000 rows) on the
SS> assumption that users will never actually read or scroll through
SS> that many rows.  Use LIMIT 5000 and if you actually get 5000 rows
SS> returned put up a message telling them if the row they want
SS> doesn't appear they should be more specific in their query.  Or some other 
cop-out.

>>> At this point you care only about column values and you
>>> never need to use SQL to scroll around in a table,
>> 
>> It also restores the use of the listview's own slider to move about in the
>> list (do away with the "page up", "page down" buttons).  I must say I like
>> that.

SS> We're all still adapting to the changes that the excellent GUIs
SS> and ridiculous speed of today's computers require.  I now have
SS> trivial little systems which reflect in realtime changes made by
SS> other users and in other windows, just because I needed to write
SS> those library routines for my 'big' systems.

>> Lastly, the whole of the above ignores systems where the
>>> user (or another user !) may insert or delete a row that was
>>> in your foundset in another window, while it's being shown
>> 
>> I also thought of that, but wasn't prepared to think about the consequences
>> (good or bad) before the preceding problems were solved.   My intended
>> approach to it was to add a "reload" button/key (F5) for it.

SS> You can deal with cases where the rowid no longer exists (as long
SS> as you do correctly test for it).  But yes, spotting new rows is harder.

>> Thanks for the full (explanation, hints) reply.

SS> The clarity of your question suggested that a long answer would be read and 
understood.

>> Remark: I've not seen any reference in your reply to my first question where
>> I wondered if it would be possible to refer (in queries) to columns other
>> than by their full names. Must I assume it's not possible ?

SS> Just that I didn't understand the question well enough to answer
SS> it and was hoping someone else did.

SS> It depends on what interface or shim you're using to access your
SS> database.  Although several things about SQL syntax betray the
SS> fact that columns have an order (for example, you can do INSERT
SS> without specifying columns and the third value gets put in the
SS> third column) there's no language in SQL to say things like
SS> 'column 3' and most APIs don't supply it.

SS> On the other hand, if you were referring to the results of a
SS> SELECT, then results are always returned in the order you asked
SS> for them (apart from *), and you have to go to extra effort to
SS> find the names of the columns of the values that were returned. 
SS> So all you have to do is remember what you asked for.

SS> Simon.

Re: [sqlite] Questions from a novice - basic browsing of records in a listview.

2014-07-10 Thread Simon Slavin

On 10 Jul 2014, at 12:54pm, -  wrote:

> But it might cause another problem:  the database could get/be so large that
> the amount of memory needed to store all the rowIDs in could well exceed
> the amount of memory available to the program.
> I could of course use (or at that moment switch over to) a local file
> (database!) to store them in, but somehow that feels a bit odd.

You could set a very big maximum (e.g. 5000 rows) on the assumption that users 
will never actually read or scroll through that many rows.  Use LIMIT 5000 and 
if you actually get 5000 rows returned put up a message telling them if the 
row they want doesn't appear they should be more specific in their query.  Or 
some other cop-out.

>> At this point you care only about column values and you
>> never need to use SQL to scroll around in a table,
> 
> It also restores the use of the listview's own slider to move about in the
> list (do away with the "page up", "page down" buttons).  I must say I like
> that.

We're all still adapting to the changes that the excellent GUIs and ridiculous 
speed of today's computers require.  I now have trivial little systems which 
reflect in realtime changes made by other users and in other windows, just 
because I needed to write those library routines for my 'big' systems.

> Lastly, the whole of the above ignores systems where the
>> user (or another user !) may insert or delete a row that was
>> in your foundset in another window, while it's being shown
> 
> I also thought of that, but wasn't prepared to think about the consequences
> (good or bad) before the preceding problems were solved.   My intended
> approach to it was to add a "reload" button/key (F5) for it.

You can deal with cases where the rowid no longer exists (as long as you do 
correctly test for it).  But yes, spotting new rows is harder.

> Thanks for the full (explanation, hints) reply.

The clarity of your question suggested that a long answer would be read and 
understood.

> Remark: I've not seen any reference in your reply to my first question where
> I wondered if it would be possible to refer (in queries) to columns other
> than by their full names. Must I assume it's not possible ?

Just that I didn't understand the question well enough to answer it and was 
hoping someone else did.

It depends on what interface or shim you're using to access your database.  
Although several things about SQL syntax betray the fact that columns have an 
order (for example, you can do INSERT without specifying columns and the third 
value gets put in the third column) there's no language in SQL to say things 
like 'column 3' and most APIs don't supply it.

On the other hand, if you were referring to the results of a SELECT, then 
results are always returned in the order you asked for them (apart from *), and 
you have to go to extra effort to find the names of the columns of the values 
that were returned.  So all you have to do is remember what you asked for.

Simon.


Re: [sqlite] Questions from a novice - basic browsing of records in a listview.

2014-07-10 Thread -
Hello Philip,

> How about using prepared statements in conjunction with bind?

I also considered that, and although it would certainly make the amount of
data sent after the preparation smaller, it could still mean shi(t/p)loads
of data being shuttled to and fro.

And although I find the "to" quite acceptable (I need it to be able to
display something :-) ), I do not think the same about the "fro" part
(having to send the just-received data back again) ...

Regards,
Rudy Wieser


- Original Message -
From: Philip Bennefall <phi...@blastbay.com>
To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
Sent: Wednesday, July 09, 2014 3:11 PM
Subject: Re: [sqlite] Questions from a novice - basic browsing of records
in a listview.


> How about using prepared statements in conjunction with bind?
>
> http://www.sqlite.org/c3ref/bind_blob.html
>
> Kind regards,
>
> Philip Bennefall
> On 2014-07-09 15:03, - wrote:
> > Hello all,
> >
> > I'm quite new at SQLite3, and have a bit of a problem with grasping the
> > handling of a  database.
> >
> > After having used the OFFSET and LIMIT 1 method (in conjunction with a
> > userdata listview) and finding a past post into this forum describing it
as
> > a rookie mistake I'm now trying to implement the "scrolling cursor"
method
> > in that same post.  It leads to a few questions though.
> >
> > For the above method to work for any database it means I need, for
> > each-and-every next/previous page request, to send *all* the bottom/top
> > records data back to the SQLite engine so it knows where to continue.
Even
> > when assuming the default maximum of columns the accumulated column
names
> > and related data for the "WHERE" clause could get quite big.  Add to
that a
> > possible the "SORT BY" clause and I'm looking at quite a large query,
which
> > has to be created and transferred for every "scroll".  Which is
something I
> > do not really like ...
> >
> > 1) Is it possible to refer to the columns in a kind of shorthand (index
> > perhaps) ?
> >
> > 2) Is it possible to have the SQLite engine initialize and remember
certain
> > WHERE and ORDER clauses (without creating another database please :-) ),
so
> > they can be used again-and-again (for the duration of a connection).
> >
> > 3) Is it possible, for the above 'scrolling cursor' method, to refer to
a
> > starting record other than by sending the exact data of such a record
back
> > to the SQLite engine ?
> >
> > Of course, feel (very) free to include other things that I've not thought
> > about and could be useful. :-)
> >
> > Regards,
> > Rudy Wieser
> >
> >
> >
> > .
> >
>



Re: [sqlite] Questions from a novice - basic browsing of records in a listview.

2014-07-10 Thread -
Hello Simon,

> You've just listed a lot of the concerns involved in the use of
> scroll-page-by-page.  And there are no good answers to them.

Thanks for the above.  It means that I did my homework right. :-)

> Nor is it possible to tell SQLite to ...   Sorry about that.

I already got that feeling, but had to make sure (novice and all that). And
nothing to be sorry about (although it would have been useful in this
case), just something I have to learn to work with (or rather, without).

> However, the whole question is almost obsolete.  Users now
> scroll up and down displays so quickly and frequently that
> grabbing just one screen worth of data from a database is
> pointless.

Yes, that was also a concern of mine.  But although I already had several
possible approaches to it (like a bit of caching and buttons scrolling more
than a single page, possibly related to the size of the database) I did not
want to concern myself and this forum with all of that at the same time.
One step at a time keeps things simple.

> Similarly, users will frequently start a query with a small
> window, then make the window larger (fullscreen ?)
> which means it shows more rows.

Already considered that, and found LVM_GETCOUNTPERPAGE to work quite well
for it.

> So rather than the old-style page-by-page listing, 

I was also thinking in that direction: get all rowIDs first and use them.
In that regard, thanks for the "rowid IN (line1rid,line2rid,line3rid,...)"
hint; that takes care of one of my concerns about having to send a query for
each-and-every record in a page.

But it might cause another problem:  the database could get/be so large that
the amount of memory needed to store all the rowIDs in could well exceed
the amount of memory available to the program.
I could of course use (or at that moment switch over to) a local file
(database!) to store them in, but somehow that feels a bit odd.

> At this point you care only about column values and you
> never need to use SQL to scroll around in a table,

It also restores the use of the listview's own slider to move about in the
list (doing away with the "page up", "page down" buttons).  I must say I like
that.

> Lastly, the whole of the above ignores systems where the
> user (or another user !) may insert or delete a row that was
> in your foundset in another window, while it's being shown

I also thought of that, but wasn't prepared to think about the consequences
(good or bad) before the preceding problems were solved.   My intended
approach to it was to add a "reload" button/key (F5) for it.

Thanks for the full (explanation, hints) reply.

Remark: I've not seen any reference in your reply to my first question, where
I wondered if it would be possible to refer (in queries) to columns other
than by their full names. Must I assume it's not possible ?

Regards,
Rudy Wieser


- Original Message -
From: Simon Slavin <slav...@bigfraud.org>
To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
Sent: Wednesday, July 09, 2014 4:07 PM
Subject: Re: [sqlite] Questions from a novice - basic browsing of records
in a listview.


>
> On 9 Jul 2014, at 2:03pm, - <mz2n6u7c.temp...@xs4all.nl> wrote:
>
> > 1) Is it possible to refer to the columns in a kind of shorthand (index
> > perhaps) ?
> >
> > 2) Is it possible to have the SQLite engine initialize and remember
certain
> > WHERE and ORDER clauses (without creating another database please :-) ),
so
> > they can be used again-and-again (for the duration of a connection).
> >
> > 3) Is it possible, for the above 'scrolling cursor' method, to refer to
a
> > starting record other than by sending the exact data of such a record
back
> > to the SQLite engine ?
>
> You've just listed a lot of the concerns involved in the use of
scroll-page-by-page.  And there are no good answers to them.
>
> The convenience functions which would give you "Where am I currently in
this index ?" don't exist.  If you want to do it you have to roll your own.
Nor is it possible to tell SQLite to preserve the temporary index it made up
from your query terms (WHERE and ORDER) so you can reuse it.  Sorry about
that.
>
> However, the whole question is almost obsolete.  Users now scroll up and
down displays so quickly and frequently that grabbing just one screen worth
of data from a database is pointless.  Similarly, users will frequently
start a query with a small window, then make the window larger (fullscreen
?) which means it shows more rows.
>
> So rather than the old-style page-by-page listing, with the programming
which goes into scrolling, modern systems tend to use a different style
which doesn't have some of the concerns you list.  This involves storing and
refetching different things as follows.  For my example I will use the
follo

Re: [sqlite] Questions from a novice - basic browsing of records in a listview.

2014-07-09 Thread Clemens Ladisch
- wrote:
> After having used the OFFSET and LIMIT 1 method (in conjunction with a
> userdata listview) and finding a past post into this forum describing it as
> a rookie mistake I'm now trying to implement the "scrolling cursor" method
> in that same post.

Are you using a list view, or paging?  The scrolling cursor method
is appropriate only for the latter.

The easiest way to handle a list view would be to read the primary key of
_all_ records in the correct order into your application.

If the amount of data isn't too large, OFFSET/LIMIT works just fine.
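
For a modest result set, the OFFSET/LIMIT approach can be sketched like
this (table and data are invented for illustration; note that OFFSET still
walks over the skipped rows internally, which is why it only stays cheap
while the table, or at least the offset, is small):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (v TEXT)")
con.executemany("INSERT INTO t (v) VALUES (?)", [(c,) for c in "abcdefg"])

def page(n, size=3):
    # Page n is simply "skip n*size rows, return the next size rows".
    return [r[0] for r in con.execute(
        "SELECT v FROM t ORDER BY v LIMIT ? OFFSET ?", (size, n * size))]

print(page(0), page(1), page(2))  # ['a', 'b', 'c'] ['d', 'e', 'f'] ['g']
```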

> For the above method to work for any database it means I need, for
> each-and-every next/previous page request, to send *all* the bottom/top
> records data back to the SQLite engine so it knows where to continue.

Only data in those columns that you are using for sorting.  (But those
must be a unique key for the records.)

> 1) Is it possible to refer to the columns in a kind of shorthand (index
> perhaps) ?

No.

> 2) Is it possible to have the SQLite engine initialize and remember certain
> WHERE and ORDER clauses (without creating another database please :-) ), so
> they can be used again-and-again (for the duration of a connection).

Compiled statements can be reused (and the SQLite database drivers of
many languages have a statement cache).

However, this is unlikely to be a bottleneck.
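
As a rough illustration of statement reuse: Python's sqlite3 module keeps a
per-connection statement cache keyed by the SQL text, mirroring the
prepare/bind/step/reset cycle of the C API.  The table and values here are
invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (k INTEGER PRIMARY KEY, v TEXT)")
con.executemany("INSERT INTO t (v) VALUES (?)", [("x",), ("y",), ("z",)])

# The SQL string stays constant; only the bound parameter changes, so the
# statement is compiled once and re-executed with each new binding.
lookup = "SELECT v FROM t WHERE k = ?"
values = [con.execute(lookup, (k,)).fetchone()[0] for k in (1, 2, 3)]
print(values)  # ['x', 'y', 'z']
```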

> 3) Is it possible, for the above 'scrolling cursor' method, to refer to a
> starting record other than by sending the exact data of such a record back
> to the SQLite engine ?

No.  But SQLite has no client/server communication overhead.


Regards,
Clemens


Re: [sqlite] Questions from a novice - basic browsing of records in a listview.

2014-07-09 Thread Simon Slavin

On 9 Jul 2014, at 2:03pm, -  wrote:

> 1) Is it possible to refer to the columns in a kind of shorthand (index
> perhaps) ?
> 
> 2) Is it possible to have the SQLite engine initialize and remember certain
> WHERE and ORDER clauses (without creating another database please :-) ), so
> they can be used again-and-again (for the duration of a connection).
> 
> 3) Is it possible, for the above 'scrolling cursor' method, to refer to a
> starting record other than by sending the exact data of such a record back
> to the SQLite engine ?

You've just listed a lot of the concerns involved in the use of 
scroll-page-by-page.  And there are no good answers to them.

The convenience functions which would give you "Where am I currently in this 
index ?" don't exist.  If you want to do it you have to roll your own.  Nor is 
it possible to tell SQLite to preserve the temporary index it made up from your 
query terms (WHERE and ORDER) so you can reuse it.  Sorry about that.

However, the whole question is almost obsolete.  Users now scroll up and down 
displays so quickly and frequently that grabbing just one screen worth of data 
from a database is pointless.  Similarly, users will frequently start a query 
with a small window, then make the window larger (fullscreen ?) which means it 
shows more rows.

So rather than the old-style page-by-page listing, with the programming which 
goes into scrolling, modern systems tend to use a different style which doesn't 
have some of the concerns you list.  This involves storing and refetching 
different things as follows.  For my example I will use the following example

SELECT name,phonenumber FROM people WHERE phonenumber LIKE '01707%' ORDER BY 
name

1) When you need to open the window, collect which rows are returned.  Execute

SELECT rowid FROM people WHERE phonenumber LIKE '01707%' ORDER BY name

and store the array of resulting rowids, even if there are thousands of them.  
At this point you don't care about column values at all.

2) When you need to display some rows, use your rowid array to figure out which 
records you need.  Once you know which rows you want execute something like one 
of the following, depending on how your code works and what your user is trying 
to do.

SELECT rowid,name,phonenumber FROM people WHERE rowid BETWEEN this AND that

or

SELECT rowid,name,phonenumber FROM people WHERE rowid IN 
(line1rid,line2rid,line3rid,...)

At this point you care only about column values and you never need to use SQL 
to scroll around in a table, which means you don't care about preserving 
indexes or index points or any of the tricky stuff.  You dealt with that all in 
step (1) and don't need it any more.
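
The two steps above can be sketched as follows, reusing the same example
query (data invented; note that IN (...) does not promise any particular
order, so the fetched rows are put back into the stored rowid order):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (name TEXT, phonenumber TEXT)")
con.executemany("INSERT INTO people VALUES (?, ?)",
                [("Ann", "01707 1"), ("Bob", "01707 2"),
                 ("Cid", "020 3"),   ("Dee", "01707 4")])

# Step 1: run the query once and keep only the rowids of the foundset.
rowids = [r[0] for r in con.execute(
    "SELECT rowid FROM people WHERE phonenumber LIKE '01707%' ORDER BY name")]

# Step 2: whenever the list view needs lines i..j, fetch just those rows.
def fetch_lines(i, j):
    wanted = rowids[i:j]
    marks = ",".join("?" * len(wanted))
    rows = con.execute(
        f"SELECT rowid, name, phonenumber FROM people WHERE rowid IN ({marks})",
        wanted).fetchall()
    by_id = {r[0]: r for r in rows}
    return [by_id[rid] for rid in wanted]   # restore the foundset order

print(fetch_lines(0, 2))
```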

Lastly, the whole of the above ignores systems where the user (or another user 
!) may insert or delete a row that was in your foundset in another window, 
while it's being shown.

Simon.


Re: [sqlite] Questions from a novice - basic browsing of records in a listview.

2014-07-09 Thread Philip Bennefall

How about using prepared statements in conjunction with bind?

http://www.sqlite.org/c3ref/bind_blob.html

Kind regards,

Philip Bennefall
On 2014-07-09 15:03, - wrote:

Hello all,

I'm quite new at SQLite3, and have a bit of a problem with grasping the
handling of a database.

After having used the OFFSET and LIMIT 1 method (in conjunction with a
userdata listview) and finding a past post into this forum describing it as
a rookie mistake I'm now trying to implement the "scrolling cursor" method
in that same post.  It leads to a few questions though.

For the above method to work for any database it means I need, for
each-and-every next/previous page request, to send *all* the bottom/top
records data back to the SQLite engine so it knows where to continue.  Even
when assuming the default maximum of columns, the accumulated column names
and related data for the "WHERE" clause could get quite big.  Add to that a
possible "ORDER BY" clause and I'm looking at quite a large query, which
has to be created and transferred for every "scroll".  Which is something I
do not really like ...

1) Is it possible to refer to the columns in a kind of shorthand (index
perhaps) ?

2) Is it possible to have the SQLite engine initialize and remember certain
WHERE and ORDER clauses (without creating another database please :-) ), so
they can be used again-and-again (for the duration of a connection).

3) Is it possible, for the above 'scrolling cursor' method, to refer to a
starting record other than by sending the exact data of such a record back
to the SQLite engine ?

Of course, feel (very) free to include other things that I've not thought
about and could be useful. :-)

Regards,
Rudy Wieser



.





[sqlite] Questions from a novice - basic browsing of records in a listview.

2014-07-09 Thread -
Hello all,

I'm quite new at SQLite3, and have a bit of a problem with grasping the
handling of a database.

After having used the OFFSET and LIMIT 1 method (in conjunction with a
userdata listview) and finding a past post into this forum describing it as
a rookie mistake I'm now trying to implement the "scrolling cursor" method
in that same post.  It leads to a few questions though.

For the above method to work for any database it means I need, for
each-and-every next/previous page request, to send *all* the bottom/top
records data back to the SQLite engine so it knows where to continue.  Even
when assuming the default maximum of columns, the accumulated column names
and related data for the "WHERE" clause could get quite big.  Add to that a
possible "ORDER BY" clause and I'm looking at quite a large query, which
has to be created and transferred for every "scroll".  Which is something I
do not really like ...

1) Is it possible to refer to the columns in a kind of shorthand (index
perhaps) ?

2) Is it possible to have the SQLite engine initialize and remember certain
WHERE and ORDER clauses (without creating another database please :-) ), so
they can be used again-and-again (for the duration of a connection).

3) Is it possible, for the above 'scrolling cursor' method, to refer to a
starting record other than by sending the exact data of such a record back
to the SQLite engine ?

Of course, feel (very) free to include other things that I've not thought
about and could be useful. :-)

Regards,
Rudy Wieser





Re: [sqlite] Questions regarding using test_onefile as VFS

2014-05-28 Thread Luca Sturaro
Hi, no: you have to compile your VFS together with the SQLite3 amalgamation
and its shell in order to use it.
After compiling and linking, you will be able to run your VFS. Remember to
register your VFS so that it is available.

Hope this helps,
Regards,
Luca
On 28 May 2014 14:41, "김병준"  wrote:

> The documentation seems to state that in order to use test_onefile,
> instead of providing an option when compiling sqlite3, test_onefile must be
> set as vfs with the -vfs command option in the shell. Is my understanding
> correct?
>
> The documentation does not provide examples using vfs demo files such as
> test_onefile or test_demovfs.  In order to use those vfs demo files, do I
> need to edit the shell source code myself?
>
> Are there any existing documentations on the usage of files such as
> test_demovfs or test_onefile? Or are there any tips or books on this matter
> that you can recommend me?


[sqlite] Questions regarding using test_onefile as VFS

2014-05-28 Thread 김병준
The documentation seems to state that in order to use test_onefile, instead of 
providing an option when compiling sqlite3, test_onefile must be set as vfs 
with the -vfs command option in the shell. Is my understanding correct? 
 
The documentation does not provide examples using vfs demo files such as 
test_onefile or test_demovfs.  In order to use those vfs demo files, do I need 
to edit the shell source code myself?
 
Is there any existing documentation on the usage of files such as 
test_demovfs or test_onefile? Or are there any tips or books on this matter 
that you can recommend?


Re: [sqlite] Questions about "INTEGER PRIMARY KEY AUTOINCREMENT" and "UPDATE"

2013-11-22 Thread James K. Lowden
On Thu, 21 Nov 2013 07:29:49 -0600
John McKown  wrote:

> To get more to your question, what I would do is have another,
> boolean, column in my table. I would call it something like
> "being_edited". When a user wants to edit a car, I would start a
> transaction (BEGIN TRANSACTION). I would then SELECT all the columns
> for the given car_id. My C (or whatever) code would check
> "being_edited". If it is TRUE, then tell the user that the car cannot
> be modified at this time and END TRANSACTION. Otherwise, UPDATE the
> "being_edited" to TRUE and UPDATE the row. Then END TRANSACTION.

That's pretty good, but I think I can do better.  

0.  Add a column "status", with a domain representing the entities that
can update the database, perhaps process ids or user ids.  
Let 0 be "no one".  

For each update, 

1.  update T set status = $uid where status in (0, $uid)
2.  check row count and notify user if busy, etc.  
3.  select ... from T where status = $uid
4.  [user putzes with data]
5.  update T set ... , status = 0 where status = $uid
6.  check row count and notify user, etc.  
7.  provide for resetting status of abandoned edits.  

For each delete, 

1.  delete from T where status = 0

In this scenario, the first one to reach #1 wins, provided everyone
plays by the same rules.  The record is "locked" according to those
rules, but not according to the DBMS.  

You begin with an UPDATE to acquire the "lock" atomically.  If that
fails, the user will have go the the next cubicle and bang on the
door.  

Then select the row(s) at your leisure, because you've claimed them
with your id.  

When you apply the new values with the second UPDATE, check the status
again, just to make sure.  Measure twice, cut once.  

The above describes pessimistic locking.  Large systems, such as airline
reservations, use optimistic locking.  The row has a counter; every
select captures it, and every update both checks and increments it.  If
two readers read the same row, the first update succeeds (because the
counter still matches) and second one fails.  Inconvenient for the
user, but half the I/O, and greater concurrency.  

--jkl
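The pessimistic scheme in steps 1-6 above can be sketched like this in Python's stdlib sqlite3; the table, column names, and uid values are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT,"
            " status INTEGER NOT NULL DEFAULT 0)")   # 0 = "no one"
con.execute("INSERT INTO t (val) VALUES ('original')")

def claim(uid):
    # Step 1: acquire the "lock" atomically by writing our id into status.
    cur = con.execute("UPDATE t SET status = ? WHERE id = 1"
                      " AND status IN (0, ?)", (uid, uid))
    # Step 2: the row count says whether we won the row.
    return cur.rowcount == 1

won = claim(42)        # first editor claims the row
blocked = claim(99)    # second editor is turned away
# Step 5: apply the new values and release in one statement,
# re-checking status just to be sure ("measure twice, cut once").
con.execute("UPDATE t SET val = 'edited', status = 0"
            " WHERE id = 1 AND status = 42")
```

The first UPDATE is the whole trick: because it both tests and sets `status` in one statement, two editors can never both see row count 1.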


Re: [sqlite] Questions about exclusive transaction

2013-06-26 Thread Dan Kennedy

On 06/26/2013 02:08 PM, Woody Wu wrote:

On Tue, Jun 25, 2013 at 10:21:51PM -0400, Igor Tandetnik wrote:

On 6/25/2013 10:13 PM, Woody Wu wrote:

1. When an exclusive transaction is started and not yet committed, I found that if I
open another connection to the same database and try to access it (using
sqlite3_step), I get the error code SQLITE_MISUSE.  It sounds a little
strange because I thought the error code should be SQLITE_BUSY.  Is that
normal?

You are doing something wrong. Some call failed and you are passing
an invalid handle to a subsequent call, or something like that. That
is what causes SQLITE_MISUSE: the existence of an exclusive
transaction is perhaps the proximate cause of the original failure,
but it's not the direct cause of SQLITE_MISUSE.

Thanks for the analysis.  After checking, I found it's the prepare_v2
statement, prior to the step statement, that returned SQLITE_BUSY.  That
means a prepare_v2 can even fail if the statement being prepared is an
operation on an exclusively locked database.  Sounds very strange! Is it true?


It is. prepare_v2() sometimes has to read the schema from
the database file. And it cannot if another process has
an exclusive lock.

Dan.
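This is easy to reproduce. A minimal sketch in Python's stdlib sqlite3, using a fresh second connection so that its first statement has to read the schema while the lock is held (file name is invented):

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)  # autocommit mode
writer.execute("CREATE TABLE t (x)")
writer.execute("BEGIN EXCLUSIVE")          # take the exclusive lock now

reader = sqlite3.connect(path, timeout=0)  # fail at once instead of waiting
try:
    # The reader's first statement must read the schema, which it cannot
    # do while the writer holds an exclusive lock.  Through the C API the
    # SQLITE_BUSY can come from sqlite3_prepare_v2() itself, exactly as
    # described above; Python reports it as "database is locked".
    reader.execute("SELECT * FROM t")
    locked = False
except sqlite3.OperationalError as exc:
    locked = "locked" in str(exc)
writer.execute("COMMIT")
```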



Re: [sqlite] Questions about exclusive transaction

2013-06-26 Thread Simon Slavin

On 26 Jun 2013, at 8:08am, Woody Wu  wrote:

> Thanks for the analysis.  After checking, I found it's the prepare_v2
> statement, prior to the step statement, that returned SQLITE_BUSY.  That
> means a prepare_v2 can even fail if the statement being prepared is an
> operation on an exclusively locked database.  Sounds very strange! Is it true?

How, precisely, do you obtain your exclusive lock ?  Are you checking the value 
returned by every SQLite3 call you use to obtain that lock ?

Always check the values returned for all the stages: _exec() or all three of 
_prepare(), _step(), and _finalize().  Whether you’re using assert() or some 
other mechanism, don’t just assume that, for instance, _finalize() never fails.

Simon.


Re: [sqlite] Questions about exclusive transaction

2013-06-25 Thread Igor Tandetnik

On 6/25/2013 10:13 PM, Woody Wu wrote:

1. When an exclusive transaction is started and not yet committed, I found that if I
open another connection to the same database and try to access it (using
sqlite3_step), I get the error code SQLITE_MISUSE.  It sounds a little
strange because I thought the error code should be SQLITE_BUSY.  Is that
normal?


You are doing something wrong. Some call failed and you are passing an 
invalid handle to a subsequent call, or something like that. That is 
what causes SQLITE_MISUSE: the existence of an exclusive transaction is 
perhaps the proximate cause of the original failure, but it's not the 
direct cause of SQLITE_MISUSE.



2. After starting an exclusive transaction successfully, can I expect that
all the operations that follow will run successfully?


Yes, barring catastrophic failure (e.g. hard drive gone bad) and 
programming errors on your part (e.g. trying to prepare a syntactically 
incorrect statement).



In particular, is it possible that the 'commit' could get an error of 
SQLITE_BUSY?


No, it is not.
--
Igor Tandetnik



[sqlite] Questions about exclusive transaction

2013-06-25 Thread Woody Wu
Hi,

I have serveral questions about exclusive transaction. (version 3.7.x)

1. When an exclusive transaction is started and not yet committed, I found that if I
open another connection to the same database and try to access it (using
sqlite3_step), I get the error code SQLITE_MISUSE.  It sounds a little
strange because I thought the error code should be SQLITE_BUSY.  Is that
normal?

2. After starting an exclusive transaction successfully, can I expect that
all the operations that follow will run successfully? Those
operations could be insert, update, or delete, ending with a 'commit
transaction'.  In particular, is it possible that the 'commit' could get
an error of SQLITE_BUSY? The reason I want to use an exclusive
transaction is to simplify the programming.

Thanks in advance.

-
woody


Re: [sqlite] Questions about Timeline

2013-04-19 Thread Richard Hipp
On Thu, Apr 18, 2013 at 12:51 PM, Aimard Janvier  wrote:

>
>
> Hello,
>
> I would like to know how to access more than 200 entries (previous months
> or years) on your Timeline.
>

Look at the URL and find the place where it says "n=200".  Change the 200
to whatever number you want.

-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] Questions about Timeline

2013-04-19 Thread Aimard Janvier


Hello,

I would like to know how to access more than 200 entries (previous months or 
years) on your Timeline.

Thanks


Re: [sqlite] questions regarding WAL and Shared Cache Mode

2012-04-30 Thread Roger Binns

On 30/04/12 14:52, Sean Cui wrote:
> 1.  The main purpose of Shared Cache Mode is to enable multiple
> connections to a database file WITHIN a process.

No, it is to save some memory.  You can already open up multiple
connections to the same or different databases within a process.  Shared
cache mode just means that if they are to the same file then they'll share
a cache instead of having a unique cache per connection.

For regular SQLite usage (ie not limited embedded devices) I strongly
recommend *not* using shared cache mode.

  http://apidoc.apsw.googlecode.com/hg/tips.html#shared-cache-mode

> 2.  With Shared Cache Mode, Table level locking can be used to
> improve concurrency.

I believe things are still serialised.  There is no magical switch or
setting that makes SQLite go a lot faster.

> 3.  WAL mode increases concurrency by ...

The documentation for WAL clearly explains the advantages and
disadvantages, including how it works.

  http://www.sqlite.org/wal.html

Roger


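For completeness: the one place shared cache is commonly useful (and the documented way to share a single in-memory database between connections in one process) can be sketched with Python's stdlib sqlite3. The database name in the URI is invented:

```python
import sqlite3

# Two connections to the same named in-memory database, sharing one cache.
uri = "file:sharedmem?mode=memory&cache=shared"
a = sqlite3.connect(uri, uri=True)
b = sqlite3.connect(uri, uri=True)   # without cache=shared this would be a
                                     # separate, empty in-memory database
a.execute("CREATE TABLE t (x)")
a.execute("INSERT INTO t VALUES (1)")
a.commit()
rows = b.execute("SELECT x FROM t").fetchall()
```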


[sqlite] questions regarding WAL and Shared Cache Mode

2012-04-30 Thread Sean Cui
Hello, this is my first question in the Sqlite mailing list:
I am currently investigating concurrency in SQLite. I found two related topics: 
WAL and Shared Cache Mode. Below is a list of my understandings regarding them:

1.  The main purpose of Shared Cache Mode is to enable multiple connections 
to a database file WITHIN a process.



2.  With Shared Cache Mode, Table level locking can be used to improve 
concurrency.



3.  WAL mode increases concurrency by letting writes and reads not block 
each other.  Writes and reads can come from one process or from multiple 
processes.
Am I right with the above points?
Thanks,
Sean


Re: [sqlite] Newbie Sqlite questions

2012-03-21 Thread Yan Seiner

Igor Tandetnik wrote:

On 3/21/2012 9:33 PM, Yan Seiner wrote:

I need to write some C code to interface with the sqlite API. I've been
through the docs, and I'm hoping someone can point me to some very basic
example code that I could build on.


http://www.sqlite.org/quickstart.html


Thanks!  Perfect!


--
Honoring our vets.

http://www.bataanmarch.com/



Re: [sqlite] Newbie Sqlite questions

2012-03-21 Thread Igor Tandetnik

On 3/21/2012 9:33 PM, Yan Seiner wrote:

I need to write some C code to interface with the sqlite API. I've been
through the docs, and I'm hoping someone can point me to some very basic
example code that I could build on.


http://www.sqlite.org/quickstart.html
--
Igor Tandetnik
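The quickstart page shows the open/exec/close flow in C; for comparison, the same flow through Python's stdlib sqlite3 wrapper looks like this (table name borrowed from the question, data invented):

```python
import sqlite3

# Open (or create) a database, run SQL, read the results, close --
# the same flow as the C quickstart's sqlite3_open()/sqlite3_exec().
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE geodata (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO geodata (name) VALUES ('Bataan')")
rows = con.execute("SELECT id, name FROM geodata").fetchall()
con.close()
```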



[sqlite] Newbie Sqlite questions

2012-03-21 Thread Yan Seiner
I just started playing around with sqlite3 in earnest.  I have a fairly 
large database (several hundred thousand records) on an embedded box and 
sqlite does an incredible job of finding what I need very fast.


My database is as simple as it gets:

CREATE VIRTUAL TABLE latlong USING rtree( id, latmin, latmax, longmin, 
longmax);
CREATE TABLE geodata ( id INTEGER PRIMARY KEY, name TEXT, class TEXT, 
state TEXT, county TEXT);


The test runs I've done show that sqlite is incredibly fast pulling out 
the records I need.


I need to write some C code to interface with the sqlite API.  I've been 
through the docs, and I'm hoping someone can point me to some very basic 
example code that I could build on.


Thanks!

--
Honoring our vets.

http://www.bataanmarch.com/



Re: [sqlite] Questions about BLOB and page size

2011-09-15 Thread Pavel Ivanov
> 1. Does BLOB type field cause any performance issues compared to using TEXT
> when doing inserts/updates. In other words, if they use the same number of
> bytes, would there be any difference in performance?
I believe it's a slightly odd choice, as TEXT fields have some
additional features that you don't want with binary data. But to
answer your question: I think performance would depend on the encoding
of your database (UTF-8 vs UTF-16) and the functions you use to bind
the TEXT value.

> 2. When you store multiple tables, does each table get stored on a page by
> itself, so the larger the page size, the more likely you would see
> contiguous items for a table?

Yes, every table has its own set of pages; their data doesn't mix into
the same pages.

> 3. How does the sqlite page size correlate with the file system block size?
> In other words, if sqlite uses a page size of 1K and the file system uses a
> block size of 4K, would you still end up writing 4K even if you changed only
> 1K?

Yes, at the OS level (i.e. SQLite doesn't do anything for that to
happen) you'll end up reading 4K (if it's not already in cache) and
writing 4K back with 1K of them changed.


Pavel
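A related detail worth knowing: the page size is a per-file property that must be set before the database is first written (or changed afterwards with VACUUM). A small sketch in Python's stdlib sqlite3, matching the page size to a typical 4K filesystem block (file name invented):

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "pages.db")
con = sqlite3.connect(path)
# The page size must be set before the database file is first written
# (or changed later with VACUUM); after that it is fixed for the file.
con.execute("PRAGMA page_size = 4096")
con.execute("CREATE TABLE t (x)")
con.commit()
size = con.execute("PRAGMA page_size").fetchone()[0]
count = con.execute("PRAGMA page_count").fetchone()[0]  # schema page + t's root
```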


On Thu, Sep 15, 2011 at 10:04 PM, Andy Stec  wrote:
> We are trying to determine how BLOB type and page size impact performance.
> I have a few questions related to that:
>
> 1. Does BLOB type field cause any performance issues compared to using TEXT
> when doing inserts/updates. In other words, if they use the same number of
> bytes, would there be any difference in performance?
>
> 2. When you store multiple tables, does each table get stored on a page by
> itself, so the larger the page size, the more likely you would see
> contiguous items for a table?
>
> 3. How does the sqlite page size correlate with the file system block size?
> In other words, if sqlite uses a page size of 1K and the file system uses a
> block size of 4K, would you still end up writing 4K even if you changed only
> 1K?


[sqlite] Questions about BLOB and page size

2011-09-15 Thread Andy Stec
We are trying to determine how BLOB type and page size impact performance.
I have a few questions related to that:

1. Does BLOB type field cause any performance issues compared to using TEXT
when doing inserts/updates. In other words, if they use the same number of
bytes, would there be any difference in performance?

2. When you store multiple tables, does each table get stored on a page by
itself, so the larger the page size, the more likely you would see
contiguous items for a table?

3. How does the sqlite page size correlate with the file system block size?
In other words, if sqlite uses a page size of 1K and the file system uses a
block size of 4K, would you still end up writing 4K even if you changed only
1K?


Re: [sqlite] Questions about table optimization

2011-02-12 Thread Simon Slavin

On 12 Feb 2011, at 1:25pm, Gabriele Favrin wrote:

> First of all a BIG thanks to the dev team of SQLite, it's an amazing 
> library, it helped me to enter in the world of SQL and I'm using it from 
> PHP on some small-medium sites. Thanks also to who takes time to explain 
> thing to people who aren't really expert on db (like me, I know a bit 
> but I'm so much to learn).

In case anyone's wondering, it really is nice to be thanked occasionally.

> ip VARCHAR( 16 ),
> msgVARCHAR( 4096 ),

Please note that these types don't exist in SQLite.  They're both interpreted 
as 'TEXT' and the number of characters is not limited.  Don't worry: this does 
not make using TEXT columns slow, and the columns don't waste file space on 
characters you don't use.  For more details see

http://www.sqlite.org/datatype3.html

> pubINTEGER  DEFAULT ( 0 ),
> newINTEGER  DEFAULT ( 1 ),

Do you really need both fields ?  Are all four combinations possible ?  It 
seems to me that your real-life situation is that a new message has status

new == awaiting moderation == not public

and at some point the message is seen by a moderator and is either deleted 
(removed from the database) or becomes

no longer new == moderator-approved == public

So the two columns really only need to be one column. Saving a column will 
speed things up generally. Another way to look at this might be to use many 
different values, since the column is already INTEGER not BOOL.  So you might 
use something like

 0 = new
 1 = awaiting moderation (might be the same as 'new' in your system)
 2 = moderated okay for normal post
 3 = sticky
 4 = important
-1 = moderator did not approve (but saved in case of overruling, or as a record 
of the submitter's history)

In that situation moderator-tasks can look for status 1, the public fora show 
posts of status >= 2, etc..  I just did this off the top of my head and you 
should make up your own numbers to reflect the way you want your fora to work.

Merging these four columns into one (by including the sticky and important 
columns too) would save you database space and processing overhead every time 
you make or delete a row.  And just one index on (category,status) should be 
all you need to make all the searches fast.

Simon.
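Simon's single-status-column idea can be sketched like this in Python's stdlib sqlite3; the status values follow his list, while the table contents are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# One status column instead of four booleans:
#  1 = awaiting moderation, 2 = normal post, 3 = sticky, 4 = important,
# -1 = rejected but kept on record.
con.execute("CREATE TABLE bacheca (id INTEGER PRIMARY KEY,"
            " category INTEGER, status INTEGER, msg TEXT)")
con.execute("CREATE INDEX bacheca_cs ON bacheca (category, status)")
con.executemany(
    "INSERT INTO bacheca (category, status, msg) VALUES (?, ?, ?)",
    [(1, 1, "pending"), (1, 2, "ok"), (1, 4, "notice"), (2, -1, "rejected")])

# Moderator queue: how many posts still need approval?
queue = con.execute("SELECT count(*) FROM bacheca"
                    " WHERE status = 1").fetchone()[0]
# Public forum page: approved posts, important ones first.
public = con.execute("SELECT msg FROM bacheca"
                     " WHERE category = 1 AND status >= 2"
                     " ORDER BY status DESC").fetchall()
```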


Re: [sqlite] Questions about table optimization

2011-02-12 Thread Igor Tandetnik
Gabriele Favrin  wrote:
> The board is moderated, so any new message should be approved from admin.
> I use the columns pub to determine messages that can be shown and new to
> determine new messages (which by default have pub set to 0). This is
> because changing a message from new=1 to new=0 gets some points to the
> user who sent it.

I'd have a separate table for new messages, and move them over to the main 
table as they become approved.

> I've started to think: an index on pub and or new would speed up
> counting and display, right?

It wouldn't help much at best, and may actually hurt performance at worst. An 
index works best on column(s) with lots of distinct values, and for queries 
that need to quickly pick a small subset of a large set of rows (e.g., select 
all messages posted on a given date).

> But doesn't it also slow down too much
> insertion and update

Measure it, but I doubt you'll see a noticeable difference.

> Another question related to this table: is there any way to have a
> select and collect both main messages and replies to them in an
> heirchical way?

Not really. Add a column ThreadId (an ID of the root post might do), assign it 
to all new posts in that thread. On display, retrieve all posts by thread ID, 
reconstruct the hierarchy in your application's code.
-- 
Igor Tandetnik
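Igor's suggestion (fetch a thread flat, rebuild the tree in application code) can be sketched in Python's stdlib sqlite3; the thread_id and replyto columns and the sample posts are invented to match the schema in the question:

```python
import sqlite3
from collections import defaultdict

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bacheca (id INTEGER PRIMARY KEY,"
            " thread_id INTEGER, replyto INTEGER, msg TEXT)")
# thread_id is the id of the root post; roots have replyto = NULL.
con.executemany("INSERT INTO bacheca VALUES (?, ?, ?, ?)",
                [(1, 1, None, "root A"), (2, 1, 1, "re: A"),
                 (3, 1, 1, "re: A again"), (5, 5, None, "root B")])

# One flat query per thread ...
rows = con.execute("SELECT id, replyto FROM bacheca"
                   " WHERE thread_id = ? ORDER BY id", (1,)).fetchall()

# ... then rebuild the parent -> children map in application code.
children = defaultdict(list)
for post_id, parent in rows:
    children[parent].append(post_id)

def flatten(parent=None):
    # Depth-first walk: each post followed immediately by its replies.
    out = []
    for pid in children.get(parent, []):
        out.append(pid)
        out.extend(flatten(pid))
    return out

order = flatten()
```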



[sqlite] Questions about table optimization

2011-02-12 Thread Gabriele Favrin
Hi all, I'm new on this list.
First of all a BIG thanks to the dev team of SQLite; it's an amazing 
library, it helped me enter the world of SQL, and I'm using it from 
PHP on some small-to-medium sites. Thanks also to those who take time to explain 
things to people who aren't really experts on databases (like me: I know a bit, 
but I have so much to learn).

I'm using SQLite in web development since 2009.
Recently I moved my development from PHP 5.2.x (which contained the old 
SQLite version 3.3.7) to the latest 5.3.x (with 3.7.3 and FTS3) and I've got 
a very good increase of speed in database-related operations.
By reading this list I've started wondering if I can achieve even more 
by improving indexing in my tables, so here I am.

I have this table, which is used for a board on a safe site for kids.
It's also going to become a forum with some more columns and another 
table for topics:

CREATE TABLE bacheca (
 id INTEGER  PRIMARY KEY AUTOINCREMENT,
 idnick INTEGER  REFERENCES utenti ( id ) ON DELETE CASCADE,
 ip VARCHAR( 16 ),
 msgVARCHAR( 4096 ),
 msg_date   DATE DEFAULT ( CURRENT_TIMESTAMP ),
 pubINTEGER  DEFAULT ( 0 ),
 sticky INTEGER  DEFAULT ( 0 ),
 important  INTEGER  DEFAULT ( 0 ),
 newINTEGER  DEFAULT ( 1 ),
 category   INTEGER  DEFAULT ( 1 ),
 replyto    INTEGER  REFERENCES bacheca ( id ) ON DELETE CASCADE
);

The board is moderated, so any new message must be approved by the admin.
I use the column pub to mark messages that can be shown and new to 
mark new messages (which by default have pub set to 0). This is 
because changing a message from new=1 to new=0 awards some points to the 
user who sent it.

When I show messages to users I use the condition 'where pub=1 AND new=0' 
(just to be safe). On the site control panel's main page I collect new 
activities to handle, with a 'select count(1) from bacheca where new=1' to 
show whether, and how many, new messages need to be approved.

Currently I have over 3600 messages on the board, shown with pagination 
(15 per page). It is still fast, but obviously slower than the other 
tables I have (for polls, a user-contributed calendar and so on).

I've started to think: an index on pub and/or new would speed up 
counting and display, right? But doesn't it also slow down insertion and 
update too much (which, as I understand it, means that during 
that time no one can access any page which reads from the database)? FYI, 
insertion happens inside a declared transaction since I have to update 
other tables as well.

Another question related to this table: is there any way to have a 
select collect both main messages and the replies to them in a 
hierarchical way?
Example: currently I don't have topics (I will add them soon, as an 
integer referencing another table for their names) but I do have answers to a 
specific message. How can I get, with a single query, ALL messages and, 
when one of them has answers, all of those as well? E.g.:

message id 1 has replies with id 2 3 4 6
A select should give me:
1
2
3
4
6
and then go on with message 5, which has no replies, and so on...

Or would such a select be complicated, and is it best to count (so an index 
for replyto is required, and it's already there) and do a 2nd select?

Thanks in advance and sorry for my BAD english.


-- 
Saluti da Gabriele Favrin
http://www.favrin.net


Re: [sqlite] Questions regarding Lemon

2010-02-22 Thread Wilson, Ronald
> > %type course_plot { std::vector* }
> >
> > course_plot(V) ::= COURSE_PLOT_BEG course_plot_sector(A) .
> > {
> >
> The issue is more what V is when the vector isn't created.
> 
> 
>  Igmar

lemon is not going to initialize anything for you.  You need to design
your parser such that you can control the creation and deletion of your
own objects.

*** untested pseudo-C code ***

%type assignments { struct llist * }
main ::= zero_or_more_statements.
zero_or_more_statements ::= .
zero_or_more_statements ::= zero_or_more_statements one_statement.
one_statement ::= assignments(L). { dostuff(*L); free(L); }
assignments(A) ::= ASSIGNMENT_BEGIN assignment(B). { A = malloc(...); A->add(B); }
assignments(A) ::= assignments(L) assignment(B). { A = L; A->add(B); }

RW

Ron Wilson, Engineering Project Lead
(o) 434.455.6453, (m) 434.851.1612, www.harris.com

HARRIS CORPORATION   |   RF Communications Division
assuredcommunications(tm)


Re: [sqlite] Questions regarding Lemon

2010-02-22 Thread Igmar Palsenberg

> %type course_plot { std::vector* }
>
> course_plot(V) ::= COURSE_PLOT_BEG course_plot_sector(A) .
>   {
>
The issue is more what V is when the vector isn't created.


 Igmar


Re: [sqlite] Questions regarding Lemon

2010-02-22 Thread Wilson, Ronald
You have to initialize the variable yourself, e.g.

%type course_plot { std::vector* } 

course_plot(V) ::= COURSE_PLOT_BEG course_plot_sector(A) .
{
V = new std::vector;
V->push_back(A);
}
course_plot(V) ::= course_plot(L) COURSE_PLOT_GT course_plot_sector(A) .
{
V = L;
V->push_back(A);
}

And yes, I parse TradeWars 2002 with lemon.  /bow

Ron Wilson, Engineering Project Lead
(o) 434.455.6453, (m) 434.851.1612, www.harris.com

HARRIS CORPORATION   |   RF Communications Division 
assuredcommunications(tm)


> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Igmar Palsenberg
> Sent: Monday, February 22, 2010 7:39 AM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] Questions regarding Lemon
> 
> 
> Hi,
> 
> After a decade or so I've begun to program in C again. I'm writing a
> parser, and Lemon seems like the right tool for the job. SQLite itself
> is an excellent read when starting with lemon.
> However, a few questions remain unanswered :
> 
> - How do variables get initialized ? For example :
> 
> %type assignments { struct llist *}
> 
> assignments(A) ::= assignments assignment(B). { /* . */ }
> assignments(A) ::= assignment(B). {/*  */ }
> 
> Can I assume that A, when the rule is executed the first time, is NULL ?
> 
> - What is the meaning of @ in an assignment ? From the sqlite parse.y
> source :
> 
> transtype(A) ::= DEFERRED(X).  {A = @X;}
> 
> Thanx in advance for all the hints :)
> 
> 
> Regards,
> 
> 
>  Igmar
> 


[sqlite] Questions regarding Lemon

2010-02-22 Thread Igmar Palsenberg
Hi,

After a decade or so I've begun to program in C again. I'm writing a 
parser, and Lemon seems like the right tool for the job. SQLite itself 
is an excellent read when starting with lemon.
However, a few questions remain unanswered :

- How do variables get initialized ? For example :

%type assignments { struct llist *}

assignments(A) ::= assignments assignment(B). { /* . */ }
assignments(A) ::= assignment(B). {/*  */ }

Can I assume that A, when the rule is executed the first time, is NULL ?

- What is the meaning of @ in an assignment ? From the sqlite parse.y 
source :

transtype(A) ::= DEFERRED(X).  {A = @X;}

Thanx in advance for all the hints :)


Regards,


 Igmar



Re: [sqlite] Questions regaring SQLite support on N97

2010-01-19 Thread Pedro Brigatto
Well, I would like to thank you for the responses!! Thank you so much!

In terms of app development, well, I've heard things like that before, but
in this case the app is developed in Java ME, and I was just wondering if
there is any API that makes it possible to establish a conversation between
SQLite databases and the app itself. Do you have any information that could
help?

I've been searching, but got nothing concrete/useful 'til now...

regards!
//pedro

On Tue, Jan 19, 2010 at 2:05 AM, Simon Slavin  wrote:

>
> On 19 Jan 2010, at 2:33am, Bill King wrote:
>
> > Things are getting better, qt and qtcreator for s60's making things
> > noticeably easier, and there's a QtSql wrapper for the sqlite server
> > engine access in the works, or native sqlite is included in qt/s60 by
> > default.
>
> That's all good news.  I tried developing for S60 about five years ago and
> gave up in disgust.  (I'm an experienced developer on both large and
> embedded, high-level and low-level systems, I didn't just wimp out at the
> first hurdle.)  Everything you mentioned there sounds like a huge
> improvement.
>
> Simon.
>



-- 
Pedro Eugênio Cunha Brigatto
- Software Engineering Specialist -
- Sun Certified Java Programmer -
- SAP Netweaver Certified Java Consultant -
- Java ME Designer -


Re: [sqlite] Questions regaring SQLite support on N97

2010-01-18 Thread Simon Slavin

On 19 Jan 2010, at 2:33am, Bill King wrote:

> Things are getting better, qt and qtcreator for s60's making things
> noticeably easier, and there's a QtSql wrapper for the sqlite server
> engine access in the works, or native sqlite is included in qt/s60 by
> default.

That's all good news.  I tried developing for S60 about five years ago and gave 
up in disgust.  (I'm an experienced developer on both large and embedded, 
high-level and low-level systems, I didn't just wimp out at the first hurdle.)  
Everything you mentioned there sounds like a huge improvement.

Simon.


Re: [sqlite] Questions regaring SQLite support on N97

2010-01-18 Thread Bill King
On 01/19/2010 12:11 PM, ext Simon Slavin wrote:
> On 19 Jan 2010, at 2:02am, Pedro Brigatto wrote:
>
>   
>> I would like to know if there is any way to use SQLite on Nokia N97 (I've
>> been surfing on the web but got no answers to any of my questions regarding
>> it yet).
>> If the answer is *yes*, how could I get more info to get started on
>> developing my application using this DB?
>> 
> The Nokia N97 is an Nseries phone that runs Symbian Series 60.  This
>
> 
>
> or something like it should do what you want.  Google on 'symbian S60 sqlite'.
>
> In terms of starting to develop for Symbian, my advice is to get just enough 
> experience with it to persuade yourself that it's a world of frustration and 
> pain.  You can get started by looking at Nokia's Developer site and 
> downloading the Symbian development tools.  After a month or two you can move 
> to the iPhone or Android and regrow some hair.
>
> Simon.
>   
Things are getting better: Qt and Qt Creator for S60 are making things
noticeably easier, and there's a QtSql wrapper for sqlite server
engine access in the works; alternatively, native sqlite is included in
Qt/S60 by default.

-- 
Bill King, Software Engineer
Qt Development Frameworks, Nokia Pty Ltd
Brisbane Office



Re: [sqlite] Questions regarding SQLite support on N97

2010-01-18 Thread Simon Slavin

On 19 Jan 2010, at 2:02am, Pedro Brigatto wrote:

> I would like to know if there is any way to use SQLite on Nokia N97 (I've
> been surfing on the web but got no answers to any of my questions regarding
> it yet).
> If the answer is *yes*, how could I get more info to get started on
> developing my application using this DB?

The Nokia N97 is an Nseries phone that runs Symbian Series 60.  This



or something like it should do what you want.  Google on 'symbian S60 sqlite'.

In terms of starting to develop for Symbian, my advice is to get just enough 
experience with it to persuade yourself that it's a world of frustration and 
pain.  You can get started by looking at Nokia's Developer site and downloading 
the Symbian development tools.  After a month or two you can move to the iPhone 
or Android and regrow some hair.

Simon.


[sqlite] Questions regarding SQLite support on N97

2010-01-18 Thread Pedro Brigatto
Hi, all!
This is my first post on this list. Thank you in advance for your time on
reading it.

I would like to know if there is any way to use SQLite on Nokia N97 (I've
been surfing on the web but got no answers to any of my questions regarding
it yet).
If the answer is *yes*, how could I get more info to get started on
developing my application using this DB?
If the answer is *no*, is there any 'relational' database to be used on this
device? Right now, the only solution to me sounds to be RMS, but it would be
amazing to be able to use a database properly said, since the tasks and the
how-to-do would be pretty much closer to the way I work on DB for
stand-alone and web applications.

Well, I thank you again and in advance, and would be glad if I get any
answer to this post.

Best regards,
-- 
*Pedro Eugênio Cunha Brigatto*
- Software Engineering Specialist -
- Sun Certified Java Programmer -
- SAP Netweaver Certified Java Consultant -
- Java ME Designer -


Re: [sqlite] Questions about sqlite3_result_text*

2009-10-20 Thread Roger Binns
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Jean-Christophe Deschamps wrote:
> 
>> You are trying really hard to overthink things :-)
> 
> I simply found version c-1) in a widely spread extension and was 
> surprised by this way of doing the return, unduly complicated and 
> inefficient in my poor understanding, hence the question.

The best place to look for best practises and well written code is the
SQLite core implementation itself :-)

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAkreoKEACgkQmOOfHg372QTngQCgoz1NeQCORlTQQNqLKFs9Avow
E2wAoMMxSNILusw6Z7rHU8Z2LJaB2950
=I1ZW
-END PGP SIGNATURE-


Re: [sqlite] Questions about sqlite3_result_text*

2009-10-20 Thread Jean-Christophe Deschamps


>You are trying really hard to overthink things :-)

I simply found version c-1) in a widely used extension and was 
surprised by this way of doing the return, which seemed unduly complicated 
and inefficient in my limited understanding; hence the question.





Re: [sqlite] Questions about sqlite3_result_text*

2009-10-20 Thread Roger Binns
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Jean-Christophe Deschamps wrote:
> a) is it allowable for a scalar function to modify (in place) an input 
> argument

No.

> b) same question for unmodified arguments: is it possible to avoid 
> duplication
> of an argument if the function determines it is to be returned 
> verbatim?
> As I read it the function to use is sqlite3_result_value but it 
> duplicates.

Yes use sqlite3_result_value - see ifnullFunc in the sqlite source for an
example.  There is no issue with it duplicating (the code is not performance
critical, C is not garbage collected or reference counted etc).  If you
disagree with this then make a patched version and  run a benchmark showing
the difference.

> c) is it true that the following return methods are both acceptable, 
> with method
> 2 being better by not forcing an immediate dupplication of the 
> returned value
> and a byte count of the string?
> 
>1)  [result r obtained from sqlite3_malloc]
>...
>sqlite3_result_text(context, r, -1, SQLITE_TRANSIENT);
>sqlite3_free(r);
>  }
> 
>2)  [result r obtained from sqlite3_malloc]
>...
>sqlite3_result_text(context, r, effective_byte_length, 
> sqlite3_free);
>  }

Correct: 1 requires a copy, while 2 takes ownership of the string.  You are
trying really hard to overthink things :-)

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.9 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAkrenJEACgkQmOOfHg372QRj/ACfVAQG1HDfFLtL9JcHklyU0K82
2OkAoLLDdaTAxdgeWYfxM4SzDNTtNRIB
=B7Vq
-END PGP SIGNATURE-


[sqlite] Questions about sqlite3_result_text*

2009-10-20 Thread Jean-Christophe Deschamps
Hi,

The following details about text/blobs returns from scalar functions 
are unclear to me and I prefer doing things right.

a) is it allowable for a scalar function to modify (in place) an input 
argument
and return a pointer to modified argument with 
sqlite3_result_text?  If yes,
what should be the last parameter of sqlite3_result_text?
(Surely, SQLITE_TRANSIENT would work)

b) same question for unmodified arguments: is it possible to avoid 
duplication
of an argument if the function determines it is to be returned 
verbatim?
As I read it the function to use is sqlite3_result_value but it 
duplicates.

c) is it true that the following return methods are both acceptable, 
with method
 2 being better by not forcing an immediate duplication of the 
returned value
and a byte count of the string?

   1)  [result r obtained from sqlite3_malloc]
   ...
   sqlite3_result_text(context, r, -1, SQLITE_TRANSIENT);
   sqlite3_free(r);
 }

   2)  [result r obtained from sqlite3_malloc]
   ...
   sqlite3_result_text(context, r, effective_byte_length, 
sqlite3_free);
 }


I currently avoid a) & b) and use c-2).  Is this the best way?
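For contrast: these copy/ownership questions exist only at the C level. Higher-level wrappers such as Python's stdlib sqlite3 module make the sqlite3_result_* call themselves, so a scalar function simply returns a value and the wrapper chooses how to copy it. A minimal sketch (the function name "passthru" is invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Question b) at this level: returning an argument verbatim.  The wrapper
# performs the sqlite3_result_* call and the copy itself, so there is no
# SQLITE_TRANSIENT / sqlite3_free decision for the function author to make.
con.create_function("passthru", 1, lambda value: value)

echoed = con.execute("SELECT passthru('hello')").fetchone()[0]
```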

TIA





Re: [sqlite] Questions from a new user

2009-06-18 Thread Dan

On Jun 18, 2009, at 9:50 PM, Shaun Seckman (Firaxis) wrote:

> Hello everyone,
>
>I'm in the process of writing some C++ wrappers to
> SQLite in order to shoehorn it into some legacy code.  I'm curious if
> there is any performance impact to having several prepared statements
> active at any given time.  I was thinking about using a prepared
> statement per instance of a Query class.

>  Also, what is the lifetime of
> the text received by sqlite3_column_text?

Buffer is generally valid until the next call to sqlite3_step(),  
sqlite3_reset()
or sqlite3_finalize() on the statement handle.

You can also invalidate it by calling sqlite3_column_text16(), or  
bytes16().
In this case the buffer that contains the utf-8 representation (a  
pointer
to which is returned by sqlite3_column_text()) is discarded to make room
for the utf-16 representation of the same text.

Dan.


> -Shaun
>
>
>
> Shaun Seckman
>
> Firaxis Games
> Programmer
>
>
>



Re: [sqlite] Questions from a new user

2009-06-18 Thread John Elrick
Shaun Seckman (Firaxis) wrote:
> Hello everyone,
>
> I'm in the process of writing some C++ wrappers to
> SQLite in order to shoehorn it into some legacy code.  I'm curious if
> there is any performance impact to having several prepared statements
> active at any given time.  

That depends.  There is a time hit for preparing a query, but it is 
minimal.  That being said, I wrote a performance analyzer for our Delphi 
wrappers which tracked the amount of time in each phase and eliminated 
60% of our application's overall query time cost by caching prepared 
queries.


> I was thinking about using a prepared
> statement per instance of a Query class.  Also, what is the lifetime of
> the text received by sqlite3_column_text?
>   
This one caught me.  There is a parameter argument you can pass which 
transfers the management of the string to the sqlite engine itself:

Note that this code is in Delphi, but you get the idea:

procedure TSqlite3PreparedStatement.bind(const oneBasedIndex: integer; 
const value: ansistring);
begin
  logParam(itsParamName, oneBasedIndex, value);
  checkSqliteReturnResult(
_sqlite3_bind_text(itsSqlitePreparedStatement, oneBasedIndex, 
pAnsiChar(value), length(value), pointer(SQLITE_TRANSIENT)),
bindingError('string', value, oneBasedIndex));
end;

SQLITE_TRANSIENT is the value you are looking for:

http://www.sqlite.org/c3ref/bind_blob.html

"The fifth argument to sqlite3_bind_blob(), sqlite3_bind_text(), and 
sqlite3_bind_text16() is a destructor used to dispose of the BLOB or 
string after SQLite has finished with it. If the fifth argument is the 
special value SQLITE_STATIC, then SQLite assumes that the information is 
in static, unmanaged space and does not need to be freed. If the fifth 
argument has the value SQLITE_TRANSIENT, then SQLite makes its own 
private copy of the data immediately, before the sqlite3_bind_*() 
routine returns."


HTH,


John
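For what it's worth, other wrappers behave the same way: in Python's stdlib sqlite3 module every bound value is copied at bind time (the moral equivalent of SQLITE_TRANSIENT), so the caller's variable can be rebound or discarded immediately. A small sketch, with table and value names invented for the demo:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE notes (body TEXT)")

# The wrapper copies the bound string when the statement is executed,
# so changing `body` afterwards cannot affect what was stored.
body = "hello"
con.execute("INSERT INTO notes (body) VALUES (?)", (body,))
body = "changed after binding"

stored = con.execute("SELECT body FROM notes").fetchone()[0]
```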




[sqlite] Questions from a new user

2009-06-18 Thread Shaun Seckman (Firaxis)
Hello everyone,

I'm in the process of writing some C++ wrappers to
SQLite in order to shoehorn it into some legacy code.  I'm curious if
there is any performance impact to having several prepared statements
active at any given time.  I was thinking about using a prepared
statement per instance of a Query class.  Also, what is the lifetime of
the text received by sqlite3_column_text?

 

-Shaun

 

Shaun Seckman

Firaxis Games
Programmer

 



Re: [sqlite] Questions about "analyze"

2008-10-19 Thread Dan

On Oct 20, 2008, at 2:57 AM, Clodo wrote:

> Thanks for you answer MikeW.
>> Could you put the update records into their own separate table
> I already solved my problem using the newest "INDEXED BY".
> I'm trying to understand if is a SqLite limit or bug.

It's an unfortunate edge case alright. SQLite's query optimizer is very
simple. Running ANALYZE sometimes improves the decisions it makes, and
sometimes it does not.

If you must use ANALYZE, overriding its decision using INDEXED BY is probably
your best solution at this point. As you are aware, this is brand new in 3.6.4,
so your queries will be syntax errors in earlier versions.

> And i'm trying to understand your answer about the hashes :(

SQLite doesn't use hash indexes. And I'm not sure that it would make
any difference to this kind of case if it did. (it's not a crazy  
suggestion
though - a lot of database systems do use hash indexes).


> For the moment, here you can found the comparison between SqLite,  
> MySql
> and Oracle9 with the minimal full test-case.
> Only sqlite do a full-table-scan after an "analyze". But i'm not 100%
> sure about the right syntax for MySql and Oracle.
> ---
> SqLite (from new empty database)
>
> SQL> CREATE TABLE mytest
> (
> mycode int NOT NULL,
> mymark int NOT NULL
> );
> SQL> insert into mytest select 0,0;
> SQL> insert into mytest select * from mytest;
> -- rerun the last statement 21 times, until I reach 2 million records.
> SQL> create index myindex on mytest (mymark);
> SQL> explain query plan select mycode from mytest where mymark=1;
> detail: TABLE mytest WITH INDEX myindex
> SQL> analyze
> SQL> explain query plan select mycode from mytest where mymark=1;
> detail: TABLE mytest
> ---
> Oracle:
>
> SQL> CREATE TABLE mytest
> (
> mycode int NOT NULL,
> mymark int NOT NULL
> );
> SQL> insert into mytest select 0,0 from dual;
> SQL> insert into mytest select * from mytest;
> -- rerun the last statement 21 times, until I reach 2 million records.
> SQL> create index myindex on mytest (mymark) tablespace users;
> -- with sqlite-autotrace enabled
> SQL> select mycode from mytest where mymark=1;
> Execution Plan
>   0   SELECT STATEMENT Optimizer=CHOOSE
>   1 0    TABLE ACCESS (BY INDEX ROWID) OF 'MYTEST'
>   2 1      INDEX (RANGE SCAN) OF 'MYINDEX' (NON-UNIQUE)
> SQL> analyze table mytest compute statistics;
> SQL> explain query plan select mycode from mytest where mymark=1;
> Execution Plan
>   0   SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=1 Bytes=2)
>   1 0    TABLE ACCESS (BY INDEX ROWID) OF 'MYTEST' (Cost=4 Card=1 Bytes=2)
>   2 1      INDEX (RANGE SCAN) OF 'MYINDEX' (NON-UNIQUE) (Cost=3 Card=1)
>
> ---
> MySql:
>
> SQL> CREATE TABLE mytest
> (
> mycode int NOT NULL,
> mymark int NOT NULL
> );
> SQL> insert into mytest select 0,0;
> SQL> insert into mytest select * from mytest;
> -- rerun the last statement 21 times, until I reach 2 million records.
> SQL> create index myindex on mytest (mymark);
> SQL> explain select mycode from mytest where mymark=1;
> id  select_type  table   type  possible_keys  key      key_len  ref    rows  Extra
> 1   SIMPLE       mytest  ref   myindex        myindex  4        const  1
> SQL> analyze table mytest;
> SQL> explain select mycode from mytest where mymark=1;
> id  select_type  table   type  possible_keys  key      key_len  ref    rows  Extra
> 1   SIMPLE       mytest  ref   myindex        myindex  4        const  1
>
>
>> Don't have the source to hand, but it's possible that SQLite does  
>> indexes
>> by hash, in which case it wouldn't "know" that the field values are  
>> all
>> the same, just that they had the same hash-value, hence it will have
>> to do a linear search, which it will always have to do if hashes  
>> are the same.
>>
>> Could you put the update records into their own separate table,
>> then move them into the main one later - that would save all the  
>> slow accesses.
>>
>> Regards,
>> MikeW
>>
>>



Re: [sqlite] Questions about "analyze"

2008-10-19 Thread Clodo

> Hi Clodo,
>
> I believe Sqlite uses B-trees, unless one links in the R-tree module or
> uses the FTS feature (Full Text Search).
>
>   
I know almost nothing about the internal implementation of a 
database engine; I need to study to understand your reply :)
> I believe you wrote earlier that the new "INDEXED BY" feature solved
> your problem, but you saw something in the documentation that dissuaded
> you.  Do you mind saying just what it was that seemed problematic?
>
>   
Sorry, but I have some difficulty understanding your English... if I 
understand your message correctly:
I actually use "INDEXED BY" to force the use of the index,
but this page: http://www.sqlite.org/lang_indexedby.html   says:
"Some SQL database engines provide non-standard "hint" mechanisms which 
can be used to give the query optimizer clues about what indices it 
should use for a particular statement. The INDEX BY clause of SQLite is 
/not/ a hinting mechanism and it should not be used as such."
[...]
"The INDEXED BY clause is /not/ intended for use in tuning the 
performance of a query. The intent of the INDEXED BY clause is to raise 
a run-time error if a schema change".

For that reason I'm looking for another, better solution.
> To mention a variation of  MikeW's suggestion, since you tend to have
> millions of records, with often only a few requiring an update, you
> might want to keep a small table containing only the record numbers of
> the rows that need updating.  The increase in disk space sounds like it
> would be minimal.
>   
Yes,
- an additional table is a solution;
- using a timestamp "last_check" instead of a boolean is another solution, 
because with a "variation" of values the index is still used after an 
analyze;
- and maybe other solutions exist.

But... I'm optimizing/tuning my software, and I discovered this problem 
at random during hundreds of "explain query" runs... I'm trying to understand 
the "logic" (if any) behind this problem, to see whether other 
queries with a similar problem can also be affected. And also to give 
feedback, for improving SQLite, -IF- this is a bug :)

Thanks for you reply! Bye


Re: [sqlite] Questions about "analyze"

2008-10-19 Thread Griggs, Donald
Hi Clodo,

I believe Sqlite uses B-trees, unless one links in the R-tree module or
uses the FTS feature (Full Text Search).

I believe you wrote earlier that the new "INDEXED BY" feature solved
your problem, but you saw something in the documentation that dissuaded
you.  Do you mind saying just what it was that seemed problematic?

To mention a variation of  MikeW's suggestion, since you tend to have
millions of records, with often only a few requiring an update, you
might want to keep a small table containing only the record numbers of
the rows that need updating.  The increase in disk space sounds like it
would be minimal.



Re: [sqlite] Questions about "analyze"

2008-10-19 Thread Clodo
Thanks for your answer, MikeW.
 >Could you put the update records into their own separate table
I already solved my problem using the newest "INDEXED BY".
I'm trying to understand whether this is a SQLite limit or a bug.
And I'm trying to understand your answer about the hashes :(

For the moment, here you can find the comparison between SQLite, MySQL 
and Oracle9 with the minimal full test-case.
Only SQLite does a full-table-scan after an "analyze". But I'm not 100% 
sure about the right syntax for MySQL and Oracle.
---
SqLite (from new empty database)

SQL> CREATE TABLE mytest
(
mycode int NOT NULL,
mymark int NOT NULL
);
SQL> insert into mytest select 0,0;
SQL> insert into mytest select * from mytest;
-- rerun the last statement 21 times, until I reach 2 million records.
SQL> create index myindex on mytest (mymark);
SQL> explain query plan select mycode from mytest where mymark=1;
detail: TABLE mytest WITH INDEX myindex
SQL> analyze
SQL> explain query plan select mycode from mytest where mymark=1;
detail: TABLE mytest
---
Oracle:

SQL> CREATE TABLE mytest
(
mycode int NOT NULL,
mymark int NOT NULL
);
SQL> insert into mytest select 0,0 from dual;
SQL> insert into mytest select * from mytest;
-- rerun the last statement 21 times, until I reach 2 million records.
SQL> create index myindex on mytest (mymark) tablespace users;
-- with sqlite-autotrace enabled
SQL> select mycode from mytest where mymark=1;
Execution Plan
   0   SELECT STATEMENT Optimizer=CHOOSE
   1 0    TABLE ACCESS (BY INDEX ROWID) OF 'MYTEST'
   2 1      INDEX (RANGE SCAN) OF 'MYINDEX' (NON-UNIQUE)
SQL> analyze table mytest compute statistics;
SQL> explain query plan select mycode from mytest where mymark=1;
Execution Plan
   0   SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=1 Bytes=2)
   1 0    TABLE ACCESS (BY INDEX ROWID) OF 'MYTEST' (Cost=4 Card=1 Bytes=2)
   2 1      INDEX (RANGE SCAN) OF 'MYINDEX' (NON-UNIQUE) (Cost=3 Card=1)
  
---
MySql:

SQL> CREATE TABLE mytest
(
mycode int NOT NULL,
mymark int NOT NULL
);
SQL> insert into mytest select 0,0;
SQL> insert into mytest select * from mytest;
-- rerun the last statement 21 times, until I reach 2 million records.
SQL> create index myindex on mytest (mymark);
SQL> explain select mycode from mytest where mymark=1;
id  select_type  table   type  possible_keys  key      key_len  ref    rows  Extra
1   SIMPLE       mytest  ref   myindex        myindex  4        const  1
SQL> analyze table mytest;
SQL> explain select mycode from mytest where mymark=1;
id  select_type  table   type  possible_keys  key      key_len  ref    rows  Extra
1   SIMPLE       mytest  ref   myindex        myindex  4        const  1


> Don't have the source to hand, but it's possible that SQLite does indexes
> by hash, in which case it wouldn't "know" that the field values are all
> the same, just that they had the same hash-value, hence it will have
> to do a linear search, which it will always have to do if hashes are the same.
>
> Could you put the update records into their own separate table,
> then move them into the main one later - that would save all the slow 
> accesses.
>
> Regards,
> MikeW
>
>



Re: [sqlite] Questions about "analyze"

2008-10-19 Thread MikeW
Fabrizio Carimati <[EMAIL PROTECTED]> writes:

> 
> Hi to all,
> I have a table with many record, that have a field normally with value '0'.
> Occasionally i update some records by setting the field to '1' to mark 
> it, and after in a background job, i localized them for working on it. 
> For that, i have an index on that field.
> My problem: if i run an "analyze" when all records have the field equal 
> to '0', the localization (select .. where field != 0) don't use the 
> index, and do a full-table-scan.
> If at least one record is set to '1', the index are used.
> 
> It's seem that "analyze" discover that all records have same field 
> value, so mark the index "useless".
> 
> Solutions that i know:
> - run "analyze" after updating to '1' some records. :(
> - maintain at least one fake record with '1' :(
> - never launch "analyze" :(
> 
> Or... (please help!)
> - exists some options to control this behiavour of analyze?
> - it's possible to force the use of the index?
> 
> Thanks for feedback!
> 

Don't have the source to hand, but it's possible that SQLite does indexes
by hash, in which case it wouldn't "know" that the field values are all
the same, just that they had the same hash-value, hence it will have
to do a linear search, which it will always have to do if hashes are the same.

Could you put the update records into their own separate table,
then move them into the main one later - that would save all the slow accesses.

Regards,
MikeW




Re: [sqlite] Questions about "analyze"

2008-10-17 Thread Paul Smith
Clodo wrote:
> Many thanks, it's a good news that resolve my problem.
>
> But still remain "a trick", i think the behaviour descripted in my 
> original feedback is "strange".. i understand, if all fields have the 
> same value, an index on that have a zero "height" in computing the best 
> indexes to use, but not use index at all and do a full-table-scan, for 
> what i understand about sqlite, imho is strange...
>   
If an index is useless for the query, then a full table scan will
generally be quicker than an indexed scan.

An indexed scan requires accessing two things (the index plus the data)
and also traversing the index which is more 'random access' than
sequentially scanning through the table.

If the analyse has analysed the right data, then letting it do the table
scan is probably the best thing. If it hasn't, then put the right data
in the table before doing the analyse...

(PostgreSQL does the same thing, if the index is useless, or the
database clustering is highly correlated with the index, then it won't
use the index at all).
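The effect described in this thread is easy to reproduce from any wrapper. Below is a small sketch using Python's stdlib sqlite3 module (table and index names follow the thread); note that the plans reported depend on the SQLite version the interpreter links in, so a modern release may decide differently:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE os_test (code INT NOT NULL, mark INT NOT NULL)")
con.execute("CREATE INDEX os_test_index_mark ON os_test (mark)")
# Every row has mark = 0 -- the degenerate distribution from the thread.
con.executemany("INSERT INTO os_test VALUES (?, 0)", [(i,) for i in range(1000)])

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row describes the strategy.
    return [row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT code FROM os_test WHERE mark = 1"
before = plan(query)    # typically a search using os_test_index_mark
con.execute("ANALYZE")  # gathers statistics showing the index is unselective
after = plan(query)     # the planner may now prefer a full table scan
```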


Re: [sqlite] Questions about "analyze"

2008-10-17 Thread Griggs, Donald
Greetings, Clodo,

Regarding:   "[is it]possible to force the use of the index?"

The very latest release of sqlite, 3.6.4, implements precisely that, in
the form of an "INDEXED BY" clause.

See:
 http://www.sqlite.org/releaselog/3_6_4.html
Specifically:
 http://www.sqlite.org/lang_indexedby.html

There was a flurry of discussion regarding "INDEXED BY" just before its
release if you want to search the archives of this list, but the link
above may be all you need at this point.

I can't speak to other parts of your question.

Regards,
  Donald Griggs
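For anyone wanting to try it, the following sketch exercises INDEXED BY from Python's stdlib sqlite3 module (assuming the interpreter links SQLite 3.6.4 or later; the table and index names are the thread's). It also shows the run-time error the documentation promises when the named index is missing:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytest (mycode INT NOT NULL, mymark INT NOT NULL)")
con.execute("CREATE INDEX myindex ON mytest (mymark)")
con.executemany("INSERT INTO mytest VALUES (?, ?)", [(0, 0)] * 100 + [(1, 1)])

# INDEXED BY forces the named index; results are the same as without it.
rows = con.execute(
    "SELECT mycode FROM mytest INDEXED BY myindex WHERE mymark = 1"
).fetchall()

# Naming an index that does not exist makes the statement fail to prepare,
# which is the schema-change safety net the documentation describes.
try:
    con.execute("SELECT mycode FROM mytest INDEXED BY nosuchindex WHERE mymark = 1")
    missing_index_detected = False
except sqlite3.OperationalError:
    missing_index_detected = True
```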



[sqlite] Questions about "analyze"

2008-10-17 Thread Fabrizio Carimati
Hi to all,
I have a table with many records that have a field normally with value '0'.
Occasionally I update some records by setting the field to '1' to mark 
them, and later, in a background job, I locate them to work on them. 
For that, I have an index on that field.
My problem: if I run an "analyze" when all records have the field equal 
to '0', the lookup (select .. where field != 0) doesn't use the 
index, and does a full-table-scan.
If at least one record is set to '1', the index is used.

It seems that "analyze" discovers that all records have the same field 
value, and so marks the index "useless".

Solutions that I know of:
- run "analyze" after updating some records to '1'. :(
- maintain at least one fake record with '1' :(
- never run "analyze" :(

Or... (please help!)
- do any options exist to control this behaviour of analyze?
- is it possible to force the use of the index?

Thanks for feedback!



-- Test case:

-- Creating test table
CREATE TABLE os_test
(
  code int NOT NULL,
  mark int NOT NULL,
  PRIMARY KEY (code)
);

-- Creating an index
CREATE INDEX os_test_index_mark ON os_test (mark);

-- Filling with some data
insert into os_test values ('a',0);
insert into os_test values ('b',0);
insert into os_test values ('c',0);

-- Testing indexes
explain query plan
select code from os_test where mark == 1

-- Result:   "TABLE os_test WITH INDEX os_test_index_mark"
-- Right.

-- Launch "analyze"
analyze

-- Re-Testing indexes
explain query plan
select code from os_test where mark == 1

-- Result:   "TABLE os_test"
-- Wrong.


-- Updating one record
update os_test set mark=1 where code='a'

-- Re-Testing indexes
explain query plan
select code from os_test where mark == 1

-- Result:   "TABLE os_test"
-- Wrong.

-- Re-Analyze
analyze

-- Re-Testing indexes
explain query plan
select code from os_test where mark == 1

-- Result:   "TABLE os_test WITH INDEX os_test_index_mark"
-- Right.






[sqlite] Questions about "analyze"

2008-10-16 Thread Clodo
Hi to all,

I have a table with many records that have a field normally with value '0'.
Occasionally I update some records by setting the field to '1' to mark 
them, and later, in a background job, I locate them to work on them. 
For that, I have an index on that field.
My problem: if I run an "analyze" when all records have the field equal 
to '0', the lookup (select .. where field != 0) doesn't use the 
index, and does a full-table-scan.
If at least one record is set to '1', the index is used.

It seems that "analyze" discovers that all records have the same field 
value, and so marks the index "useless".

Solutions that I know of:
- run "analyze" after updating some records to '1'. :(
- maintain at least one fake record with '1' :(
- never run "analyze" :(

Or... (please help!)
- do any options exist to control this behaviour of analyze?
- is it possible to force the use of the index?

Thanks for feedback!



-- Test case:

-- Creating test table
CREATE TABLE os_test
(
 code int NOT NULL,
 mark int NOT NULL,
 PRIMARY KEY (code)
);

-- Creating an index
CREATE INDEX os_test_index_mark ON os_test (mark);

-- Filling with some data
insert into os_test values ('a',0);
insert into os_test values ('b',0);
insert into os_test values ('c',0);

-- Testing indexes
explain query plan
select code from os_test where mark == 1

-- Result:   "TABLE os_test WITH INDEX os_test_index_mark"
-- Right.

-- Launch "analyze"
analyze

-- Re-Testing indexes
explain query plan
select code from os_test where mark == 1

-- Result:   "TABLE os_test"
-- Wrong.


-- Updating one record
update os_test set mark=1 where code='a'

-- Re-Testing indexes
explain query plan
select code from os_test where mark == 1

-- Result:   "TABLE os_test"
-- Wrong.

-- Re-Analyze
analyze

-- Re-Testing indexes
explain query plan
select code from os_test where mark == 1

-- Result:   "TABLE os_test WITH INDEX os_test_index_mark"
-- Right.







Re: [sqlite] Re: Performance tuning, and other (silly?) SQLite questions.

2007-11-19 Thread James Steward
On Mon, 2007-11-19 at 18:14 -0500, Igor Tandetnik wrote:
> James Steward 
> wrote:
> > db eval {CREATE TABLE position(\
> > position_id INTEGER PRIMARY_KEY, \
> > odo INTEGER, \
> > time CURRENT_TIMESTAMP);}
> 
> You probably meant
> 
> -- note no underscore between PRIMARY and KEY
> position_id INTEGER PRIMARY KEY

Oops!  Fixed now.  Thanks.

> -- note DEFAULT keyword
> time DEFAULT CURRENT_TIMESTAMP

I discovered that just before I saw your reply ;-)

> > #A more complicated query...runs quite slowly.  How can this be sped
> > up?
> > db eval {SELECT position.odo, data.x, data.y from position, data WHERE
> > position.odo BETWEEN 1 AND 10020;}
> 
> First, you want an index on position.odo. Second, you don't specify any 
> relation between position and data tables, so you generate a full 
> cross-product. You want
> 
> SELECT position.odo, data.x, data.y
> FROM position JOIN data ON (position.position_id = data.position_id)
> WHERE position.odo BETWEEN 1 AND 10020;

Ah.  I have been reading a PHP/MySQL book that I thought said a MySQL
server would see the common column names and automagically join the two.
Either I misremember what the book said (it's not with me here), or this
is a feature of MySQL not present in SQLite.  Anyway, what you suggest
works just fine.

> And for that to work efficiently, you want another index on 
> data.position_id

I'm guessing that is done with:
CREATE INDEX odo_index ON data (position_id ASC);

If the data is streaming in, and insertions are being made on the fly,
will an index need to be regenerated periodically, i.e. REINDEX?

Cheers,
James.
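Putting the corrections from this thread together (the PRIMARY KEY spelling, an explicit join condition, and indexes on the joined and filtered columns), a minimal sketch in Python's stdlib sqlite3 module might look like this; cursor.lastrowid stands in for SELECT last_insert_rowid(), and the sample odometer values are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE position (position_id INTEGER PRIMARY KEY, odo INTEGER)")
con.execute("""CREATE TABLE data (
    data_id INTEGER PRIMARY KEY,
    position_id INTEGER,
    x INTEGER, y REAL)""")
# Index the filter and join columns so the query below avoids full scans.
con.execute("CREATE INDEX odo_index ON position (odo)")
con.execute("CREATE INDEX data_pos_index ON data (position_id)")

with con:  # one transaction around the bulk insert, like BEGIN/COMMIT
    for odo in (10000, 10010, 10020):
        cur = con.execute("INSERT INTO position (odo) VALUES (?)", (odo,))
        pos_id = cur.lastrowid  # same value as SELECT last_insert_rowid()
        con.execute("INSERT INTO data (position_id, x, y) VALUES (?, ?, ?)",
                    (pos_id, odo * 2, odo / 10.0))

# Explicit join condition: without it the query is a full cross-product.
rows = con.execute("""
    SELECT position.odo, data.x, data.y
    FROM position JOIN data ON position.position_id = data.position_id
    WHERE position.odo BETWEEN 10000 AND 10020
    ORDER BY position.odo""").fetchall()
```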



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Performance tuning, and other (silly?) SQLite questions.

2007-11-19 Thread Kees Nuyt
[Default] On Tue, 20 Nov 2007 09:56:11 +1100, James Steward
<[EMAIL PROTECTED]> wrote:

>Hi,
>
>I am new to SQL and SQLite, so please excuse me if I appear thick at
>times.
>
>I have an application that generates data from a moving vehicle.  At a
>position there may be 10s of pairs of data.  Position is given as an
>integer, and the data pairs are an integer and a float (real).  So I set
>up a database like this (in Tcl);
>
>db eval {CREATE TABLE position(\
>   position_id INTEGER PRIMARY_KEY, \
>   odo INTEGER, \
>   time CURRENT_TIMESTAMP);}
>db eval {CREATE TABLE data(\
>   data_id INTEGER PRIMARY_KEY, \
>   position_id INTEGER, \
>   x INTEGER, \
>   y REAL);}

Shouldn't that be PRIMARY KEY (without the underscore)?


>So far so good.  Now I have a heap of stored data in a binary file, that
>I use to insert into the tables, with statements like;
>
>db eval {BEGIN;}
>
># loop reading in file...
>while {..} {
>
>#when a new odo value is read...
>db eval {INSERT INTO position(odo) values($odo);} 
>set pos_id [db eval {SELECT last_insert_rowid();}]
>
>#for every data pair
>db eval {INSERT INTO data(position_id, x, y) values($pos_id, $x, $y);}
>
>}
>
>db eval {COMMIT;}
>
>There are no errors returned.  Now I try a couple of queries, which
>return data, but not quite what I expect.
>
>#This kinda works as expected, but position_id and time columns are not
>#populated.  Why not?

That will have to do with the wrong spelling of PRIMARY KEY.

>db eval {SELECT * FROM position WHERE ROWID BETWEEN 100 AND 101;}
>
>#This does not work, returns nothing.  I thought (after reading the
>#SQLite doco) that position_id would be populated with ROWID...
>db eval {SELECT * FROM position WHERE position_id BETWEEN 100 AND 101;}
>
>#This kinda works, but data_id is not populated...
>db eval {SELECT * FROM data where position_id BETWEEN 100 AND 101;}
>
>#A more complicated query...runs quite slowly.  How can this be sped up?
>db eval {SELECT position.odo, data.x, data.y from position, data WHERE
>position.odo BETWEEN 1 AND 10020;}

An index on position.odo might help.
Try that and compare the 'EXPLAIN SELECT ...' and 'EXPLAIN QUERY
PLAN SELECT ...'  output with and without index.
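To make that comparison concrete, here is a sketch using Python's sqlite3 module (the exact plan wording varies between SQLite versions, so treat the details as illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE position(position_id INTEGER PRIMARY KEY, odo INTEGER)")

q = "EXPLAIN QUERY PLAN SELECT * FROM position WHERE odo BETWEEN 1 AND 10020"

before = con.execute(q).fetchone()[-1]  # without an index: a full table scan
con.execute("CREATE INDEX position_odo ON position(odo)")
after = con.execute(q).fetchone()[-1]   # with the index: a range search
```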

>Is SQLite going to be able to handle, say, 2,000,000 data pairs, and say
>60,000 positions, efficiently and quickly?

In general, yes.

>How can I help SQLite perform queries like the last above, as fast as
>possible?

Indexes and pragmas. More specifically, the page_size and
cache_size pragmas.
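For illustration, both pragmas are issued per connection; the values below are arbitrary, and page_size only takes effect on an empty database (or after a VACUUM rebuilds the file). A sketch in Python's sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# page_size must be set before the database contains any tables.
con.execute("PRAGMA page_size = 4096")
# cache_size is counted in pages: 10000 pages * 4 KiB = ~40 MiB of cache.
con.execute("PRAGMA cache_size = 10000")

page_size = con.execute("PRAGMA page_size").fetchone()[0]
cache_size = con.execute("PRAGMA cache_size").fetchone()[0]
```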

>Regards,
>James.

HTH
-- 
  (  Kees Nuyt
  )
c[_]




[sqlite] Re: Performance tuning, and other (silly?) SQLite questions.

2007-11-19 Thread Igor Tandetnik

James Steward 
wrote:

db eval {CREATE TABLE position(\
position_id INTEGER PRIMARY_KEY, \
odo INTEGER, \
time CURRENT_TIMESTAMP);}


You probably meant

-- note no underscore between PRIMARY and KEY
position_id INTEGER PRIMARY KEY

-- note DEFAULT keyword
time DEFAULT CURRENT_TIMESTAMP


db eval {CREATE TABLE data(\
data_id INTEGER PRIMARY_KEY, \
position_id INTEGER, \
x INTEGER, \
y REAL);}


Same here - it's PRIMARY KEY with no underscore.


#A more complicated query...runs quite slowly.  How can this be sped
up?
db eval {SELECT position.odo, data.x, data.y from position, data WHERE
position.odo BETWEEN 1 AND 10020;}


First, you want an index on position.odo. Second, you don't specify any 
relation between position and data tables, so you generate a full 
cross-product. You want


SELECT position.odo, data.x, data.y
FROM position JOIN data ON (position.position_id = data.position_id)
WHERE position.odo BETWEEN 1 AND 10020;

And for that to work efficiently, you want another index on 
data.position_id
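Putting the two suggestions together, a runnable sketch using Python's sqlite3 module (the index names are invented for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE position(position_id INTEGER PRIMARY KEY, odo INTEGER);
    CREATE TABLE data(data_id INTEGER PRIMARY KEY, position_id INTEGER,
                      x INTEGER, y REAL);
    -- the two recommended indexes
    CREATE INDEX position_odo ON position(odo);
    CREATE INDEX data_position ON data(position_id);
    INSERT INTO position VALUES (1, 10005), (2, 20000);
    INSERT INTO data VALUES (1, 1, 7, 0.5), (2, 2, 8, 1.5);
""")

# The explicit join condition avoids the accidental cross-product.
rows = con.execute("""
    SELECT position.odo, data.x, data.y
    FROM position JOIN data ON (position.position_id = data.position_id)
    WHERE position.odo BETWEEN 1 AND 10020
""").fetchall()
```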


Igor Tandetnik 






Re: [sqlite] Questions on views

2007-03-27 Thread drh
"Dennis Volodomanov" <[EMAIL PROTECTED]> wrote:
> Ok, thank you for clarifying those points to me.
> 
> At the moment I'm struggling to understand why running a
> 
> SELECT ROWID FROM TableA WHERE colA='a'
> 
> Takes almost twice as long as
> 
> SELECT ROWID FROM TableA WHERE colA='b'
> 
> Running on the same table with the same data with colA indexed.
> Unfortunately, the real queries are not as simple, although still the
> same between each other, just different data. I'm still in the process
> of singling out the query that is different (using EXPLAIN), so as soon
> as I know which it is I'll be able to study it and see how can I change
> it.
> 

Run ANALYZE then look at the lines of sqlite_stat1 that
pertain to TableA.  I'm guessing you will find that
colA is not very selective.  That is to say, a large
fraction (perhaps half) of the rows in tableA have 'a' 
as their value for colA.  If requesting more than just
ROWID in the return set, this can result in a significant
slowdown.

Running ANALYZE might give the query optimizer enough
clues to figure that out, by the way. So try rerunning
the query after you run ANALYZE and see if it helps.
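A minimal sketch of that workflow, using Python's sqlite3 module and made-up skewed data (half the rows share one value):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TableA(colA TEXT)")
con.execute("CREATE INDEX idx_colA ON TableA(colA)")

# Skewed data: 500 rows with 'a', 500 spread over two dozen other values.
con.executemany("INSERT INTO TableA VALUES (?)",
                [("a",)] * 500 + [(chr(98 + i % 24),) for i in range(500)])

# ANALYZE fills sqlite_stat1; the stat column reads "<total rows> <avg
# rows per distinct index value> ...", which shows how selective colA is.
con.execute("ANALYZE")
stat = con.execute(
    "SELECT idx, stat FROM sqlite_stat1 WHERE tbl = 'TableA'"
).fetchone()
```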
--
D. Richard Hipp  <[EMAIL PROTECTED]>




