[sqlite] CVE-2019-16168

2020-01-25 Thread Naumowicz, Ken E
Hello,

I need to know if there is a security patch for this CVE on Windows Server 2012:

Java SE Vulnerability CVE-2019-16168 Related to JavaFX (SQLite)   <<<=== 
https://www.symantec.com/security-center/vulnerabilities/writeup/111496
   > NO UPDATE/PATCH FOUND at SQLite - SQLite Homepage (https://www.sqlite.org/)

Thanks...

Ken Naumowicz
Sr. IT Application Consultant - EMS/SCADA Application Design and Engineering
WEC Energy Group - WEC Business Services (WBS)
office: 262-544-7239
email: ken.naumow...@wecenergygroup.com


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] sqlite3_exec() on prepared/bound statement

2017-04-04 Thread Labar, Ken
Hello all,

* Short story:
To support better security through parameter binding, we would like to request 
a new C/C++ API function similar to sqlite3_exec(), except that it takes a 
prepared statement instead of an SQL string.

* Long story:
We finally allowed user string input directly into our database (embedded C, 
ARM7, UCOS-II port).
We immediately found SQL injection errors in testing.
To solve the SQL injection we used sqlite3_prepare_v2() with sqlite3_bind_text().
When looking to refactor the code, it was easier to create the following 
function directly in sqlite3.c.
It appears this function could be called from sqlite3_exec()  to save code 
space and avoid duplication.

We feel this would make sense in the mainline code, so we are offering it here, 
hopefully it might be useful to others:

#define ADD_SQLITE3_EXEC_PREPARED 1
#if ADD_SQLITE3_EXEC_PREPARED
/*
** Execute a prepared sqlite3 statement.  Borrowed from sqlite3_exec().
** Return one of the SQLITE_ success/failure codes.  Also write an error
** message into memory obtained from malloc() and make *pzErrMsg point to
** that message.
**
** If the SQL is a query, then for each row in the query result
** the xCallback() function is called.  pArg becomes the first
** argument to xCallback().  If xCallback==NULL then no callback
** is invoked, even for queries.
*/
/* @note: added for the ViperFish project by klabar 2017-03-24 */
SQLITE_API int sqlite3_exec_prepared(
  sqlite3 *db,                /* The database on which the SQL executes */
  sqlite3_stmt *pStmt,        /* The prepared SQL to be executed */
  sqlite3_callback xCallback, /* Invoke this callback routine */
  void *pArg,                 /* First argument to xCallback() */
  char **pzErrMsg             /* Write error messages here */
){
  int rc = SQLITE_OK;         /* Return code */
  const char *zLeftover;      /* Tail of unprocessed SQL */
  char **azCols = 0;          /* Names of result columns */
  int callbackIsInit;         /* True if callback data is initialized */

  if( !sqlite3SafetyCheckOk(db) ) return SQLITE_MISUSE_BKPT;

  sqlite3_mutex_enter(db->mutex);
  sqlite3Error(db, SQLITE_OK, 0);

  while( rc==SQLITE_OK && pStmt ){
    int nCol;
    char **azVals = 0;

    callbackIsInit = 0;
    nCol = sqlite3_column_count(pStmt);

    while( 1 ){
      int i;
      rc = sqlite3_step(pStmt);

      /* Invoke the callback function if required */
      if( xCallback && (SQLITE_ROW==rc ||
          (SQLITE_DONE==rc && !callbackIsInit
                           && db->flags&SQLITE_NullCallback)) ){
        if( !callbackIsInit ){
          azCols = sqlite3DbMallocZero(db, 2*nCol*sizeof(const char*) + 1);
          if( azCols==0 ){
            goto exec_prep_out;
          }
          for(i=0; i<nCol; i++){
            azCols[i] = (char *)sqlite3_column_name(pStmt, i);
            /* sqlite3VdbeSetColName() installs column names as UTF8
            ** strings so there is no way for sqlite3_column_name() to fail. */
            assert( azCols[i]!=0 );
          }
          callbackIsInit = 1;
        }
        if( rc==SQLITE_ROW ){
          azVals = &azCols[nCol];
          for(i=0; i<nCol; i++){
            azVals[i] = (char *)sqlite3_column_text(pStmt, i);
            if( !azVals[i] && sqlite3_column_type(pStmt, i)!=SQLITE_NULL ){
              db->mallocFailed = 1;
              goto exec_prep_out;
            }
          }
        }
        if( xCallback(pArg, nCol, azVals, azCols) ){
          rc = SQLITE_ABORT;
          sqlite3VdbeFinalize((Vdbe *)pStmt);
          pStmt = 0;
          sqlite3Error(db, SQLITE_ABORT, 0);
          goto exec_prep_out;
        }
      }

      if( rc!=SQLITE_ROW ){
        rc = sqlite3VdbeFinalize((Vdbe *)pStmt);
        pStmt = 0;
        break;
      }
    }

    sqlite3DbFree(db, azCols);
    azCols = 0;
  }

exec_prep_out:
  if( pStmt ) sqlite3VdbeFinalize((Vdbe *)pStmt);
  sqlite3DbFree(db, azCols);

  rc = sqlite3ApiExit(db, rc);
  if( rc!=SQLITE_OK && ALWAYS(rc==sqlite3_errcode(db)) && pzErrMsg ){
    int nErrMsg = 1 + sqlite3Strlen30(sqlite3_errmsg(db));
    *pzErrMsg = sqlite3Malloc(nErrMsg);
    if( *pzErrMsg ){
      memcpy(*pzErrMsg, sqlite3_errmsg(db), nErrMsg);
    }else{
      rc = SQLITE_NOMEM;
      sqlite3Error(db, SQLITE_NOMEM, 0);
    }
  }else if( pzErrMsg ){
    *pzErrMsg = 0;
  }

  assert( (rc&db->errMask)==rc );
  sqlite3_mutex_leave(db->mutex);
  return rc;
}
#endif /* ADD_SQLITE3_EXEC_PREPARED */
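For readers coming from higher-level bindings: the prepare/bind/execute-with-callback pattern this function wraps is what wrappers such as Python's sqlite3 module do internally. A rough, hypothetical analog of the proposed function (table name, schema, and helper are invented for illustration):

```python
import sqlite3

def exec_prepared(db, sql, params, callback):
    """Rough Python analog of the proposed sqlite3_exec_prepared():
    run a parameterized statement and invoke callback(cols, row) for
    each result row; a truthy return aborts, like SQLITE_ABORT."""
    cur = db.execute(sql, params)   # prepare + bind + step, all internal
    cols = [d[0] for d in cur.description] if cur.description else []
    for row in cur:
        if callback(cols, row):
            return "aborted"
    return "done"

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, name TEXT)")
hostile = "Robert'); DROP TABLE t;--"    # injection attempt, safely bound
db.execute("INSERT INTO t(name) VALUES (?)", (hostile,))

rows = []
status = exec_prepared(db, "SELECT id, name FROM t WHERE name = ?",
                       (hostile,), lambda cols, row: rows.append(row))
```

Because the hostile string travels as a bound parameter, it is stored and matched verbatim and never parsed as SQL, which is the point of the proposed API.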


Here is an example of how we used it:
/**
   \brief
   \return rc // the SQLite return code
**/
int HLFF_DB_logfile_get_byLognameSource(char *logname, int source, void *relay_args)
{
    int rc;
    sqlite3_stmt *pStmt;

    if (!logname) { return(-1); }
    if (!logname[0]) { return(-1); }

    // build the query
    Str_Copy(
        sql,
        "SELECT id, filename, logname, size, "
        "source, start_time, stop_time, "
        "modified_timestamp FROM `logfile` WHERE logname=?1 AND source=?2");

    // prepare the query
    rc = sqlite3_prepare_v2( _pHLFF_DB, sql, SQL_CMD_BUFF_SIZE, &pStmt, NULL );

    // bind the parameters
    if (!rc)
    {
        rc |= sqlite3_bind_text( pStmt, 1, logname, Str_Len( logname ),
                                 SQLITE_STATIC );
        rc |= sqlite3_bind_int( pStmt, 2, source );
    }

    // Execute the statement.
    rc = sqlite3_exec_prepared(
        _pHLFF_DB,
        pStmt,
        hlff_

Re: [sqlite] Foreign key error...

2017-01-11 Thread Ken Wagner

Keith,

Good point. Did not know this exists.

Ken


On 01/10/2017 09:48 PM, Simon Slavin wrote:

On 11 Jan 2017, at 1:02am, Keith Medcalf  wrote:


You are correct, however, if there were a unique constraint placed on 
tracks.name, then a given track could only appear once (in the first case), or 
in multiple places (in the second case).

_The Power of Love_ was recorded by Frankie Goes to Hollywood, Jennifer Rush, 
and Huey Lewis and The News.  Not only are these different recordings, they’re 
different songs.

Nor can you place a UNIQUE requirement on the combination of (title,authors).  
There are three different /studio/ edits of Bowie’s _Heroes_, and two of them 
are different enough that someone might like one and not the other.  Not to 
mention numerous different live versions which appear on concert albums and 
DVDs.

Simon.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] About ticket c92ecff2ec5f1784 LEFT JOIN problem

2017-01-11 Thread Ken Wagner

Domingo,

Thanks for the email, but I don't think I am your intended recipient.

Ken


On 01/10/2017 12:11 PM, Domingo Alvarez Duarte wrote:

Hello Richard !

Now that you are dealing with this ticket 
http://www.sqlite.org/src/info/c92ecff2ec5f1784, it could be a good 
moment to acknowledge the problem reported earlier (see below). I'm not 
sure why you answered that way without acknowledging any further action 
on it. I was expecting a bug ticket for it, or at minimum documentation 
of this odd behavior of the sqlite3 planner, so other people could be 
aware of it.


Cheers !

===

On 1/5/17, Domingo Alvarez Duarte wrote:


Hello !

Today I found this unexpected behavior when using sqlite3 trunk:

When using views with joins, sqlite3 chooses the expected plans except
for "LEFT JOIN"; below is a snippet that shows the unexpected behavior.


===

create table if not exists a(id integer primary key, val text);
create table if not exists b(id integer primary key, a_id integer not
null, val text);
create view if not exists b_view as select b.*, a.* from b left join a
on b.a_id=a.id;
create table if not exists c(id integer primary key, b_id integer not
null, val text);

select 'bad unexpected plan';
explain query plan select c.*, b_view.* from c left join b_view on
c.b_id=b_view.id;


Can you rewrite your query as:

  SELECT *
    FROM c LEFT JOIN b ON c.b_id=b.id
           LEFT JOIN a ON b.a_id=a.id;
-- D. Richard Hipp

===

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] Foreign key error...

2017-01-09 Thread Ken Wagner

Keith,

"this does not allow the same track on multiple albums" with the same trackno, 
but a different trackno seems to work. Thus results cannot be guaranteed valid?

Ken


On 01/08/2017 06:57 AM, Keith Medcalf wrote:

On Sunday, 8 January, 2017 05:05, Ken Wagner  wrote:


Keith,

Ahh, better to be specific and avoid simplistic assumptions.

For foreign keys, which is better: 'trackartist' or 'artistid' in the
track table? Does it matter? Is it personal preference?

It is a matter of personal preference.  Personally, I use the same name for the 
same data, and do not add useless prefaces, and usually do not preface the id 
with the table name.  For example:

create table artists (
  id integer primary key,
  name text collate nocase unique
);
create table albums (
  id integer primary key,
  name text collate nocase unique,
  artistid integer references artists
);
create table tracks (
  id integer primary key,
  seq integer,
  name text collate nocase,
  artistid integer references artists,
  albumid integer references albums,
  unique (albumid, seq)
);

select albums.name as albumname,
       albumartists.name as albumartist,
       tracks.seq as trackno,
       tracks.name as trackname,
       trackartists.name as trackartist
  from albums, tracks, artists as albumartists, artists as trackartists
 where tracks.artistid = trackartists.id
   and tracks.albumid = albums.id
   and albums.artistid = albumartists.id;
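A quick sketch (sample data invented) of how the unique (albumid, seq) constraint in the schema above behaves: a track name may repeat on the same album under a different seq, but a second row with the same (albumid, seq) pair is rejected:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
create table artists (id integer primary key, name text collate nocase unique);
create table albums (id integer primary key, name text collate nocase unique,
                     artistid integer references artists);
create table tracks (id integer primary key, seq integer,
                     name text collate nocase,
                     artistid integer references artists,
                     albumid integer references albums,
                     unique (albumid, seq));
insert into artists values (1, 'Artist');
insert into albums values (1, 'Album', 1);
insert into tracks values (1, 1, 'Opener', 1, 1);
insert into tracks values (2, 2, 'Opener', 1, 1); -- same name, new seq: fine
""")

try:
    # same album, same seq as track 2 -> violates unique (albumid, seq)
    db.execute("insert into tracks values (3, 2, 'Clash', 1, 1)")
    duplicate_seq_allowed = True
except sqlite3.IntegrityError:
    duplicate_seq_allowed = False
```

This is exactly the behavior questioned in the reply above: nothing in this schema forbids the same track *name* appearing twice; only the (albumid, seq) pair is constrained.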

Of course, this does not allow the same track on multiple albums.  For that you 
need another table to do the N:M mapping:

create table artists (
  id integer primary key,
  name text collate nocase unique
);
create table albums (
  id integer primary key,
  name text collate nocase unique,
  artistid integer references artists
);
create table tracks (
  id integer primary key,
  name text collate nocase,
  artistid integer references artists
);
create table albumtracks (
  id integer primary key,
  albumid integer references albums,
  trackid integer references tracks,
  seq integer,
  unique (albumid, seq),
  unique (albumid, trackid),
  unique (trackid, albumid)
);

select albums.name as albumname,
       albumartists.name as albumartist,
       albumtracks.seq as trackno,
       tracks.name as trackname,
       trackartists.name as trackartist
  from albums, albumtracks, tracks,
       artists as albumartists, artists as trackartists
 where tracks.artistid = trackartists.id
   and albumtracks.albumid = albums.id
   and albumtracks.trackid = tracks.id
   and albums.artistid = albumartists.id;
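The N:M mapping can be exercised with a small script (sample data invented) showing the same track appearing on two different albums via the albumtracks table:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
create table artists (id integer primary key, name text collate nocase unique);
create table albums (id integer primary key, name text collate nocase unique,
                     artistid integer references artists);
create table tracks (id integer primary key, name text collate nocase,
                     artistid integer references artists);
create table albumtracks (id integer primary key,
                          albumid integer references albums,
                          trackid integer references tracks,
                          seq integer,
                          unique (albumid, seq),
                          unique (albumid, trackid));
insert into artists values (1, 'Band'), (2, 'Compiler');
insert into albums values (1, 'Studio LP', 1), (2, 'Greatest Hits', 2);
insert into tracks values (1, 'Hit Single', 1);
-- the same track row is referenced from both albums:
insert into albumtracks values (1, 1, 1, 1), (2, 2, 1, 1);
""")

rows = db.execute("""
select albums.name, albumtracks.seq, tracks.name
  from albums, albumtracks, tracks
 where albumtracks.albumid = albums.id
   and albumtracks.trackid = tracks.id
 order by albums.id
""").fetchall()
```

One track row, two album appearances: the join table carries the per-album seq, which is what the flat schema could not express.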


On 01/08/2017 05:46 AM, Keith Medcalf wrote:

... join ... using (column) has nothing whatever to do with foreign keys.

"FROM a JOIN b USING (c)" is "syntactic sugar" (meaning that it is
sweetening and full of calories, but has no nutritional value) for the
expression "FROM a, b WHERE a.c = b.c".

This is so, for example, if you use really long stupid names it saves
considerable space and typing:

SELECT * FROM TheFirstTableToBeJoined JOIN TheSecondTableToBeJoined
USING (TheCommonColumnNameBetweenTheFirstAndTheSecondTable);
-vs-
SELECT * FROM TheFirstTableToBeJoined, TheSecondTableToBeJoined WHERE
TheFirstTableToBeJoined.TheCommonColumnNameBetweenTheFirstAndTheSecondTable =
TheSecondTableToBeJoined.TheCommonColumnNameBetweenTheFirstAndTheSecondTable;



-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
On Behalf Of Ken Wagner
Sent: Sunday, 8 January, 2017 04:04
To: SQLite mailing list
Subject: Re: [sqlite] Foreign key error...

Thanks, Kees,

The message is helpful as a warning.

select artistname, trackname from artist inner join track on
trackartist = artistid;  works just fine.

But isn't the efficiency of 'using (artistid)' more desirable?

Is 'trackartist' used as the foreign key name because it is
more informative?

I.e., wherever it is seen it shows the track-artist link? But is more
demanding when coding:

   'on trackartist = artistid' vs 'using (artistid)'

Which foreign key reference style is best or preferred SQLite3
practice: 'trackartist' or 'artistid'?

Thanks,
Ken


On 01/08/2017 04:47 AM, Kees Nuyt wrote:

On Sun, 8 Jan 2017 04:21:16 -0600, Ken Wagner
 wrote:


Hello SQLusers,

The error below occurs even though the

CREATE TABLE track(

 trackid INTEGER,
 trackname   TEXT,
 trackartist INTEGER,
 *FOREIGN KEY(trackartist) REFERENCES artist(artistid)*
);

statement at https://sqlite.org/foreignkeys.html was observed.

It appears that 'trackartist' should be named 'artistid'.

SQLite3 CLI results using version SQLite 3.15.2 2016-11-28 19:13:37
bbd85d235f7037c6a033a9690534391ffeacecc8

sqlite> select artistname, trackname from artist inner join track using
(trackartist);
Error: cannot join using column trackartist - column not present in both tables

Re: [sqlite] Foreign key error...

2017-01-09 Thread Ken Wagner

Yes, thanks.

The 'left join on' or 'inner join on' form removes the chance of an 
erroneous key linkage.


It also makes sense to pay close attention to which table is on the 
left and which is on the right.


Ken

On 01/09/2017 06:46 AM, Dominique Devienne wrote:

On Sun, Jan 8, 2017 at 12:46 PM, Keith Medcalf  wrote:


... join ... using (column) has nothing whatever to do with foreign keys.

"FROM a JOIN b USING (c) is "syntactic sugar" ([...]) for the expression
"FROM a, b WHERE a.c = b.c"


Or "FROM a JOIN b ON a.c = b.c".
Or "FROM a INNER JOIN b ON a.c = b.c".

Syntax does matter (to some?) for readability and comprehension of a query
IMHO.

I prefer to keep my WHERE clauses for single-table "filtering",
and rely on JOIN-ON for how two tables "connect" during a join.
(which columns to "thread the needle through" to form/constitute
a "multi-table row" is my personal mental image of a join).

My $0.02. --DD

PS: I myself consider "FROM a, b WHERE a.c = b.c" to be the "legacy"
syntax, best avoided :)
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] Foreign key error...

2017-01-08 Thread Ken Wagner

Keith,

Ahh, better to be specific and avoid simplistic assumptions.

For foreign keys, which is better: 'trackartist' or 'artistid' in the 
track table? Does it matter? Is it personal preference?


Ken


On 01/08/2017 05:46 AM, Keith Medcalf wrote:

... join ... using (column) has nothing whatever to do with foreign keys.

"FROM a JOIN b USING (c) is "syntactic sugar" (meaning that it is sweetening and full of 
calories, but has no nutritional value) for the expression "FROM a, b WHERE a.c = b.c"

This is so, for example, if you use really long stupid names it saves 
considerable space and typing:

SELECT * FROM TheFirstTableToBeJoined JOIN TheSecondTableToBeJoined USING 
(TheCommonColumnNameBetweenTheFirstAndTheSecondTable);
-vs-
SELECT * FROM TheFirstTableToBeJoined, TheSecondTableToBeJoined WHERE 
TheFirstTableToBeJoined.TheCommonColumnNameBetweenTheFirstAndTheSecondTable = 
TheSecondTableToBeJoined.TheCommonColumnNameBetweenTheFirstAndTheSecondTable;
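The sugar claim is easy to verify mechanically; a minimal sketch with invented tables a and b sharing column c:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
create table a(c integer, x text);
create table b(c integer, y text);
insert into a values (1, 'a1'), (2, 'a2');
insert into b values (1, 'b1'), (3, 'b3');
""")

# USING (c) form: c appears once in the output, unqualified
using_form = db.execute(
    "select c, x, y from a join b using (c)").fetchall()

# equivalent comma-join-plus-WHERE form
where_form = db.execute(
    "select a.c, x, y from a, b where a.c = b.c").fetchall()
```

For a plain inner join the two spellings produce identical rows; only rows with a matching c on both sides survive.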



-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
On Behalf Of Ken Wagner
Sent: Sunday, 8 January, 2017 04:04
To: SQLite mailing list
Subject: Re: [sqlite] Foreign key error...

Thanks, Kees,

The message is helpful as a warning.

select artistname, trackname from artist inner join track on
trackartist = artistid;  works just fine.

But isn't the efficiency of 'using (artistid)' more desirable?

Is 'trackartist' used as the foreign key name because it is
more informative?

I.e., wherever it is seen it shows the track-artist link? But is more
demanding when coding:

  'on trackartist = artistid' vs 'using (artistid)'

Which foreign key reference style is best or preferred SQLite3
practice: 'trackartist' or 'artistid'?

Thanks,
Ken


On 01/08/2017 04:47 AM, Kees Nuyt wrote:

On Sun, 8 Jan 2017 04:21:16 -0600, Ken Wagner
 wrote:


Hello SQLusers,

The error below occurs even though the

CREATE TABLE track(

trackid INTEGER,
trackname   TEXT,
trackartist INTEGER,
*FOREIGN KEY(trackartist) REFERENCES artist(artistid)*
);

statement at https://sqlite.org/foreignkeys.html was observed.

It appears that 'trackartist' should be named 'artistid'.

SQLite3 CLI results using version SQLite 3.15.2 2016-11-28 19:13:37
bbd85d235f7037c6a033a9690534391ffeacecc8

sqlite> select artistname, trackname from artist inner join track using
(trackartist);
Error: cannot join using column trackartist - column not present in
both tables
sqlite> .tables track
track
sqlite> .schema track
CREATE TABLE track(
  trackid     INTEGER,
  trackname   TEXT,
  trackartist INTEGER,
  FOREIGN KEY(trackartist) REFERENCES artist(artistid)
);
sqlite> .schema artist
CREATE TABLE artist(
  artistid    INTEGER PRIMARY KEY,
  artistname  TEXT
);

Am I missing something important here?

The error message is quite informative: the artist table does
not have a column trackartist.

Try:
select artistname, trackname from artist inner join track on
trackartist = artistid;

HTH


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users




Re: [sqlite] Foreign key error...

2017-01-08 Thread Ken Wagner

Thanks, Kees,

The message is helpful as a warning.

select artistname, trackname from artist inner join track on
trackartist = artistid;  works just fine.

But isn't the efficiency of 'using (artistid)' more desirable?

Is 'trackartist' used as the foreign key name because it is 
more informative?


I.e., wherever it is seen it shows the track-artist link? But is more 
demanding when coding:


'on trackartist = artistid' vs 'using (artistid)'

Which foreign key reference style is best or preferred SQLite3 
practice: 'trackartist' or 'artistid'?


Thanks,
Ken


On 01/08/2017 04:47 AM, Kees Nuyt wrote:

On Sun, 8 Jan 2017 04:21:16 -0600, Ken Wagner
 wrote:


Hello SQLusers,

The error below occurs even though the

CREATE TABLE track(

   trackid INTEGER,
   trackname   TEXT,
   trackartist INTEGER,
   *FOREIGN KEY(trackartist) REFERENCES artist(artistid)*
);

statement at https://sqlite.org/foreignkeys.html was observed.

It appears that 'trackartist' should be named 'artistid'.

SQLite3 CLI results using version SQLite 3.15.2 2016-11-28 19:13:37
bbd85d235f7037c6a033a9690534391ffeacecc8

sqlite> select artistname, trackname from artist inner join track using
(trackartist);
Error: cannot join using column trackartist - column not present in
both tables
sqlite> .tables track
track
sqlite> .schema track
CREATE TABLE track(
   trackid INTEGER,
   trackname   TEXT,
   trackartist INTEGER,
   FOREIGN KEY(trackartist) REFERENCES artist(artistid)
);
sqlite> .schema artist
CREATE TABLE artist(
   artistid    INTEGER PRIMARY KEY,
   artistname  TEXT
);

Am I missing something important here?

The error message is quite informative: the artist table does
not have a column trackartist.

Try:
select artistname, trackname from artist inner join track on
trackartist = artistid;

HTH
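The behavior described above reproduces in a few lines (schema copied from the thread, sample rows invented): USING (trackartist) fails because the column exists only in track, while the ON form succeeds:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE artist(artistid INTEGER PRIMARY KEY, artistname TEXT);
CREATE TABLE track(trackid INTEGER, trackname TEXT, trackartist INTEGER,
                   FOREIGN KEY(trackartist) REFERENCES artist(artistid));
INSERT INTO artist VALUES (1, 'Dylan');
INSERT INTO track VALUES (1, 'Hurricane', 1);
""")

try:
    # USING requires the named column in BOTH tables; artist has no
    # 'trackartist', so this raises the error quoted in the thread.
    db.execute("select artistname, trackname from artist "
               "inner join track using (trackartist)")
    using_ok = True
except sqlite3.OperationalError:
    using_ok = False

# The explicit ON form joins the differently named columns just fine.
on_rows = db.execute("select artistname, trackname from artist "
                     "inner join track on trackartist = artistid").fetchall()
```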



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Foreign key error...

2017-01-08 Thread Ken Wagner

Hello SQLusers,

The error below occurs even though the

CREATE TABLE track(

  trackid INTEGER,
  trackname   TEXT,
  trackartist INTEGER,
  *FOREIGN KEY(trackartist) REFERENCES artist(artistid)*
);

statement at https://sqlite.org/foreignkeys.html was observed.

It appears that 'trackartist' should be named 'artistid'.

SQLite3 CLI results using version SQLite 3.15.2 2016-11-28 19:13:37 
bbd85d235f7037c6a033a9690534391ffeacecc8


sqlite> select artistname, trackname from artist inner join track using 
(trackartist);
Error: cannot join using column trackartist - column not present in 
both tables

sqlite> .tables track
track
sqlite> .schema track
CREATE TABLE track(
  trackid INTEGER,
  trackname   TEXT,
  trackartist INTEGER,
  FOREIGN KEY(trackartist) REFERENCES artist(artistid)
);
sqlite> .schema artist
CREATE TABLE artist(
  artistid    INTEGER PRIMARY KEY,
  artistname  TEXT
);

Am I missing something important here?

Thanks,

Ken


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite3 Tutorial error

2017-01-05 Thread Ken Wagner

David,

Yes. That would be a big assist. I am new to using SQLite3 and found the 
GLOB function erratic in practice -- not on SQLite3 but on other web 
sites using SQLite. They yielded completely opposite results.


Second the motion.

Ken


On 01/05/2017 05:23 PM, dandl wrote:

From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of Simon Slavin


They’re probably using the external function interface to override the internal 
globbing function.  And by the look of the results at least one of the 
programmers involved thinks that GLOB and REGEX do the same thing.

I think you're right. One of the contributing problems is that the behaviour of 
GLOB is not defined in the documentation. Here is all it says:

"The GLOB operator is similar to LIKE but uses the Unix file globbing syntax for its 
wildcards. Also, GLOB is case sensitive, unlike LIKE. Both GLOB and LIKE may be preceded 
by the NOT keyword to invert the sense of the test. The infix GLOB operator is 
implemented by calling the function glob(Y,X) and can be modified by overriding that 
function."

Unix globbing for Linux is defined here: 
http://man7.org/linux/man-pages/man7/glob.7.html. AFAICT Sqlite does not 
implement this behaviour.
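The two differences the quoted documentation does commit to (glob-style wildcards, case sensitivity versus LIKE) are easy to check; a minimal sketch:

```python
import sqlite3

db = sqlite3.connect(":memory:")

# LIKE is case-insensitive for ASCII by default; GLOB is case-sensitive.
like_hit  = db.execute("select 'abc' LIKE 'ABC'").fetchone()[0]
glob_hit  = db.execute("select 'abc' GLOB 'ABC'").fetchone()[0]

# GLOB uses '*' (not '%') as the any-sequence wildcard, and NOT inverts.
glob_star = db.execute("select 'abc' GLOB 'a*'").fetchone()[0]
not_glob  = db.execute("select 'abc' NOT GLOB 'a*'").fetchone()[0]
```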

Perhaps some accurate documentation for GLOB in Sqlite would help to clarify 
things?

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org





___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] SQLite3 Tutorial error

2017-01-05 Thread Ken Wagner

Danap,

I thought so, too. But it is not the case.

I am cross-checking with the Unix/SQLite results, using Unix/SQLite as 
the base reference.


Ken


On 01/05/2017 01:30 PM, dmp wrote:

Message: 21
Date: Wed, 4 Jan 2017 22:10:59 -0600
From: Ken Wagner 
To: SQLite mailing list 
Subject: Re: [sqlite] SQLite3 Tutorial error
Yes, I am beginning to understand that. SQLite3 is its own GLOB standard.
I will abide by that.
It is just very confusing when 5 other apps using SQLite as their DB
engine all report the opposite.
  The SQLite versions they use are 3.9.2, 3.10.1, 3.11.0 and 3.13.0.
Example: the SQLite Manager in FireFox 50 uses SQLite 3.13.0 and
faithfully removes the names with digits in them.

I tried the example from R. Smith and got the exact same results
in my own SQLite Interface, Ajqvue, using the SQLITE JDBC.

I don't know what those other apps may be doing, but they should
just pass the query created by the user "DIRECTLY" to the SQLite DB
engine without any modification. Likewise the result should also
not be mucked with before presentation.

danap.

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] SQLite3 Tutorial error

2017-01-05 Thread Ken Wagner

Ryan,

Thanks. I have saved the Unix GLOB reference.

When I inferred that other versions of SQLite gave different results, 
the inference came through the other SQLite GUI tools and the SQLite 
versions they used.


Thanks for helping to make clear what was going on.

Ken



On 01/05/2017 02:53 AM, R Smith wrote:


On 2017/01/05 9:04 AM, Ken Wagner wrote:

Keith,

It appears that the folks at SQLiteTutorial.net have a coding anomaly.

They are not following the UNIX / SQLite3 GLOB patterns. And so, too, 
do the other guys.


I am adjusting my usage accordingly. I will advise users of the other 
products to NOT use the GLOB "*[^1-9]*" pattern. SQLiteTutorial.net 
has already been advised. (No reply, thus far in 2 days.)




What the 3rd party people do is likely override the GLOB and REGEXP 
functions, either when compiling their SQLite versions or at runtime 
using the user-function API. People sometimes add their own GLOB purely 
because they add a REGEXP function (which is not defined by default in 
SQLite), and the new REGEXP comes with a GLOB sibling that gets 
installed along with it, as happens in SQLite Tutorial, DB Browser, 
etc. These alternate globbing methodologies may even be preferred by 
you or your customers.


The similarity between the different tools suggests to me they probably 
use the same added code / library which someone made to "improve" on 
the standard Unix file globbing. I wouldn't want or expect the SQLite 
tutorial people to change how they implement the globbing, but they 
should at least notify learners of the difference.


What you and your customers need to know is:
  - SQLite uses Unix file globbing exactly
  - 3rd party tools which implement SQLite engine may override this 
(and seemingly mostly do so)
  - You can expect different GLOB returns via different tools, and 
these need to be understood during use. (Hopefully most tools are 
consistent in their alteration)
  - SQLite engine as used in the SQLite CLI will always return the 
exact same Unix file globbing results in queries.
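The override mechanism described above is visible from any binding that exposes sqlite3_create_function(); registering a two-argument function named "glob" replaces the built-in, since SQLite evaluates X GLOB Y as glob(Y, X). A sketch (the always-match override is deliberately silly, to make the effect obvious):

```python
import sqlite3

db = sqlite3.connect(":memory:")
before = db.execute("select 'abc' GLOB 'zzz'").fetchone()[0]  # built-in: no match

# Re-register "glob" with arity 2; SQLite calls it as glob(pattern, value).
# This one claims everything matches, which the built-in never would.
db.create_function("glob", 2, lambda pattern, value: 1)

after = db.execute("select 'abc' GLOB 'zzz'").fetchone()[0]
```

Any tool that ships its own "improved" glob this way will silently change what the GLOB operator returns, which is the discrepancy reported in this thread.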


The big question here was on the last point, where you suggested you 
saw different results using one version of the SQLite CLI than another. 
I think Keith is correct in assuming it's simply a mistake; however, if 
you DO find a difference in any version, please let us know - that 
would be a bug (or a concern at the very least).


Thanks,
Ryan

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



Re: [sqlite] SQLite3 Tutorial error

2017-01-04 Thread Ken Wagner

Keith,

It appears that the folks at SQLiteTutorial.net have a coding anomaly.

They are not following the UNIX / SQLite3 GLOB patterns. And so, too, do 
the other guys.


I am adjusting my usage accordingly. I will advise users of the other 
products to NOT use the GLOB "*[^1-9]*" pattern. SQLiteTutorial.net has 
already been advised. (No reply, thus far in 2 days.)


Yeah, I guess it must be the Dim Sum!!

Many thanks,

Ken


On 01/05/2017 12:46 AM, Keith Medcalf wrote:

 From the (current) source code:

/*
** Compare two UTF-8 strings for equality where the first string is
** a GLOB or LIKE expression.  Return values:
**
**SQLITE_MATCH:Match
**SQLITE_NOMATCH:  No match
**SQLITE_NOWILDCARDMATCH:  No match in spite of having * or % wildcards.
**
** Globbing rules:
**
**  '*'   Matches any sequence of zero or more characters.
**
**  '?'   Matches exactly one character.
**
** [...]  Matches one character from the enclosed list of
**characters.
**
** [^...] Matches one character not in the enclosed list.
**
** With the [...] and [^...] matching, a ']' character can be included
** in the list by making it the first character after '[' or '^'.  A
** range of characters can be specified using '-'.  Example:
** "[a-z]" matches any single lower-case letter.  To match a '-', make
** it the last character in the list.
**
** Like matching rules:
**
**  '%'   Matches any sequence of zero or more characters
**
**  '_'   Matches any one character
**
**  EcWhere E is the "esc" character and c is any other
**character, including '%', '_', and esc, match exactly c.
**
** The comments within this routine usually assume glob matching.
**
** This routine is usually quick, but can be N**2 in the worst case.
*/


-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
On Behalf Of dandl
Sent: Wednesday, 4 January, 2017 23:18
To: 'SQLite mailing list'
Subject: Re: [sqlite] SQLite3 Tutorial error

A question: this appears to be the Posix standard for globbing ie Patterns
Used for Filename Expansion:
http://pubs.opengroup.org/onlinepubs/007908799/xcu/chap2.html#tag_001_013_003

And this is Linux:
https://linux.die.net/man/7/glob

Is this what Sqlite intends to conform to? Because the convention here is
[!0-9] and not [^0-9].

Regards
David M Bennett FACS

Andl - A New Database Language - andl.org
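On the [!0-9]-vs-[^0-9] question: going by the globbing rules quoted from the SQLite source earlier in the thread, the built-in GLOB recognizes only the caret form for set negation, and a '!' inside a set appears to be treated as an ordinary member. A quick check (sketch under that assumption):

```python
import sqlite3

db = sqlite3.connect(":memory:")

# '^' negates the set: a letter matches, a digit does not.
caret_letter = db.execute("select 'a' GLOB '[^0-9]'").fetchone()[0]
caret_digit  = db.execute("select '5' GLOB '[^0-9]'").fetchone()[0]

# '!' is not special to SQLite's GLOB: '[!0-9]' is the set {'!', '0'..'9'},
# so a digit still matches it.
bang_digit   = db.execute("select '5' GLOB '[!0-9]'").fetchone()[0]
```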


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users




Re: [sqlite] SQLite3 Tutorial error

2017-01-04 Thread Ken Wagner

Keith,

This has really helped.

SQLite Tutorial site is using: 3.12.1  2016-04-08 15:09:49 
fe7d3b75fe1bde41511b323925af8ae1b910bc4d


I get the same results as you did below. So, there is some sort of error 
or different code being used. It may have to do with encoding. I will 
test this further tomorrow.


Many thanks,

Ken


On 01/05/2017 12:29 AM, Keith Medcalf wrote:

sqlite3s < globtest.sql

select sqlite_version(), sqlite_source_id();
3.11.0|2016-02-15 17:29:24 3d862f207e3adc00f78066799ac5a8c282430a5f
create table x(x text);
insert into x values('ABC');
insert into x values('5BC');
insert into x values('A5C');
insert into x values('AB5');
insert into x values('1BC');
insert into x values('5B1');
insert into x values('A51');
insert into x values('1B5');
insert into x values('123');

select x from x where x glob '*[^1-9]*';
ABC
5BC
A5C
AB5
1BC
5B1
A51
1B5
select x from x where x glob '*[1-9]*';
5BC
A5C
AB5
1BC
5B1
A51
1B5
123
select x from x where not x glob '*[^1-9]*';
123
select x from x where not x glob '*[1-9]*';
ABC

The GLOB works exactly the same in all cases, including on 3.11.0.

The pattern *[1-9]* matches "any string containing at least one character 
1 thru 9".
The pattern *[^1-9]* matches "any string containing at least one character 
that is not 1 thru 9" (that is, strings containing at least one character 
outside 1 thru 9 are included; strings composed entirely of characters 
1 thru 9 are excluded).


I do not understand the part where you say "also 3.15.0. Worked as in #1 above in 
both cases in the SQLite Tutorial, but oppositely in the SQLite3 CLI." because using 
python to call the sqlite3 also produces the same results as the CLI:


import apsw
import apswrow
db=apsw.Connection('globtest.db')

for row in db.cursor().execute("select x from x where x glob '*[^1-9]*';"):

...  print row
...
Row(x=u'ABC')
Row(x=u'5BC')
Row(x=u'A5C')
Row(x=u'AB5')
Row(x=u'1BC')
Row(x=u'5B1')
Row(x=u'A51')
Row(x=u'1B5')
This would suggest to me that either (a) the internal GLOB function was 
overridden or (b) that even though you thought you changed the underlying 
version of SQLite, you did not.  Did you select sqlite_version() and 
sqlite_source_id() to see what version of sqlite was being used?   In the 
Python above it is currently:


for row in db.cursor().execute("select sqlite_version(), sqlite_source_id();"):

...  print row
...
Row(sqlite_version=u'3.17.0', sqlite_source_id=u'2017-01-04 04:18:00 
80ad317f89c46db0d0d252aefdc036a34a61183d')



-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
On Behalf Of Ken Wagner
Sent: Wednesday, 4 January, 2017 22:36
To: SQLite mailing list
Subject: Re: [sqlite] SQLite3 Tutorial error

Keith,

Thanks for the further explanation and the ll alias.

What is so hard to digest is that:

  1. SQLiteMan - a Linux GUI does the *[^1-9]* pattern and omits rows
with digits 1-9 in them. Using 3.11.0.
  2. DBBrowser - Linux GUI does the same using 3.9.2 and 3.11.0 in
different versions, one older.
  3. SQLite Tutorial @ www.sqlitetutorial.net - Has GLOB lesson page.
Uses version @ https://www.sqlite.org/download.html. I downloaded
sqlite-tools-linux-x86-3160100.zip
<https://www.sqlite.org/2017/sqlite-tools-linux-x86-3160100.zip>(1.77
MiB), also 3.15.0. Worked as in #1 above in both cases in the SQLite
Tutorial, but oppositely in the SQLite3 CLI.
  4. The FireFox 50 SQLite Manager add-on. This one is quite good.
Good layouts, easy to use and detailed. Works as in #1 above. Uses
SQLite 3.13.0.

Yet the SQLite3 CLI does the OPPOSITE and _includes_ any names with
digits 1-9 in them with vv 3.15.0 and 3.16.0.

Do you have any idea why? Is it really the dim sum??

This will have to be explained to my intended customers because some
will use the SQLite3 CLI. Most will use a GUI as they are not very
computer literate. I will be offering them training and also directing
them to training at a good web SQLite tutorial.

All of the above are using the 'chinook.db'. My system is Linux/Ubuntu
16.04, SQLite3 3.15.0 and 3.16.0 CLIs and the above programs. I use the
CLI in both the Terminator and Gnome-Terminal. Some minor differences
with encoding.


Ken


On 01/04/2017 10:49 PM, Keith Medcalf wrote:


Yes.  The GLOB was invented on Unix.  I posted an example of the Unix filename 
globbing (which has not changed, to my knowledge, since the 60's), which works 
exactly the same as the GLOB operator in SQLite 3.9.0 through the current head 
of trunk.  Perhaps there were minor changes, but nothing that affects the output 
of the *[1-9]* or *[^1-9]* patterns when applied to the same data used in the 
Linux demonstration.  However, I did not try and build every single version of 
SQLite between 3.9.0 and 3.17.0 to see if one of them happened to be broken; the 
two ends and a sampling from the middle all worked the same.

Re: [sqlite] SQLite3 Tutorial error

2017-01-04 Thread Ken Wagner

Keith,

Thanks for the further explanation and the ll alias.

What is so hard to digest is that:

1. SQLiteMan - a Linux GUI does the *[^1-9]* pattern and omits rows 
with digits 1-9 in them. Using 3.11.0.
2. DBBrowser - Linux GUI does the same using 3.9.2 and 3.11.0 in 
different versions, one older.
3. SQLite Tutorial @ www.sqlitetutorial.net - Has GLOB lesson page. 
Uses version @ https://www.sqlite.org/download.html. I downloaded 
sqlite-tools-linux-x86-3160100.zip 
<https://www.sqlite.org/2017/sqlite-tools-linux-x86-3160100.zip>(1.77 
MiB), also 3.15.0. Worked as in #1 above in both cases in the SQLite 
Tutorial, but oppositely in the SQLite3 CLI.
4. The FireFox 50 SQLite Manager add-on. This one is quite good. 
Good layouts, easy to use and detailed. Works as in #1 above. Uses 
SQLite 3.13.0.


Yet the SQLite3 CLI does the OPPOSITE and _includes_ any names with 
digits 1-9 in them with vv 3.15.0 and 3.16.0.


Do you have any idea why? Is it really the dim sum??

This will have to be explained to my intended customers because some 
will use the SQLite3 CLI. Most will use a GUI as they are not very 
computer literate. I will be offering them training and also directing 
them to training at a good web SQLite tutorial.


All of the above are using the 'chinook.db'. My system is Linux/Ubuntu 
16.04, SQLite3 3.15.0 and 3.16.0 CLIs and the above programs. I use the 
CLI in both the Terminator and Gnome-Terminal. Some minor differences 
with encoding.



Ken


On 01/04/2017 10:49 PM, Keith Medcalf wrote:


Yes.  The GLOB was invented on Unix.  I posted an example of the Unix filename 
globbing (which has not changed, to my knowledge, since the 60's), which works 
exactly the same as the GLOB operator in SQLite 3.9.0 through the current head 
of trunk.  Perhaps there were minor changes, but nothing that affects the 
output of the *[1-9]* or *[^1-9]* patterns when applied to the same data used 
in the Linux demonstration.  However, I did not try and build every single 
version of SQLite between 3.9.0 to 3.17.0 to see if one of them happened to be 
broken.  The two ends and a sampling from the middle all worked the same.

And by the way, GLOB predates REGEX by about 15 years.  REGEX borrowed (and 
modified) GLOB syntax.

(in case you have never used a Linux/Unix system with an ll command alias, the 
command to create it is:  alias ll='ls -l')

Are you ABSOLUTELY SURE that the authors of the third-party tools have not 
provided their own GLOB function that works differently, perhaps in accordance 
with their Dim Sum because their little hearts did not desire the built in one?


-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
On Behalf Of Ken Wagner
Sent: Wednesday, 4 January, 2017 21:24
To: SQLite mailing list
Subject: Re: [sqlite] SQLite3 Tutorial error

Aha! GLOB is an implementation of the UNIX Glob function. It just
borrows the regex character notation of [^1-9].

I have 3.15 and 3.16 CLIs installed. Trying any other CLI versions at
this point won't help for the customer. They will be using a SQLite3
GUI. I will explain the difference between the CLI and the GUI versions.

Where is this UNIX Glob notation spelled out? My system is Linux. I
expect the UNIX version will be somewhat different.

The Linux GLOB is used to find patterns in pathnames (not databases.) Is
the Unix version the same?

Thanks,

Ken

On 01/04/2017 11:51 AM, R Smith wrote:


On 2017/01/04 7:01 PM, Jens Alfke wrote:

On Jan 4, 2017, at 5:57 AM, R Smith  wrote:

As I have it (and as is implemented by SQLite) the GLOB operator
implements a REGEXP that matches against a regexp pattern

No, these are NOT regular expressions in the usual sense of the word.
GLOB's syntax is incompatible with what are commonly called “regular
expressions”, and its feature set is a lot more limited. (It may
technically implement a type of regular expression in the underlying
algorithmic sense, but I think using the term is misleading.)

Quite correct, I meant REGEXP as an internal function of the
Regular-expression type, not the official "regular expression" syntax
- So a "misleading term" then in your words. Allow me to be more clear
then: GLOB in SQLite specifically matches Unix file globbing syntax
(which is very different to official RegEx). 3rd party utilities can
override both the internal REGEXP and GLOB functions with custom
versions.
https://sqlite.org/lang_expr.html#glob

The bit I don't know for sure is whether Unix file globbing will
regard 'AB5' as matching '*[^1-9]*' or not? I know in SQLite it
matches (and I believe this to be correct, but I could be mistaken and
I don't have a Unix box handy to test).

Either way, the concern is more towards consistency than specific
operation.
The SQLite scripts I sent previously will reveal any difference
between versions if they exist. Have you tried it on different versions of the CLI?

Re: [sqlite] SQLite3 Tutorial error

2017-01-04 Thread Ken Wagner
Aha! GLOB is an implementation of the UNIX Glob function. It just 
borrows the regex character notation of [^1-9].


I have 3.15 and 3.16 CLIs installed. Trying any other CLI versions at 
this point won't help for the customer. They will be using a SQLite3 
GUI. I will explain the difference between the CLI and the GUI versions.


Where is this UNIX Glob notation spelled out? My system is Linux. I 
expect the UNIX version will be somewhat different.


The Linux GLOB is used to find patterns in pathnames (not databases.) Is 
the Unix version the same?


Thanks,

Ken

On 01/04/2017 11:51 AM, R Smith wrote:



On 2017/01/04 7:01 PM, Jens Alfke wrote:

On Jan 4, 2017, at 5:57 AM, R Smith  wrote:

As I have it (and as is implemented by SQLite) the GLOB operator 
implements a REGEXP that matches against a regexp pattern
No, these are NOT regular expressions in the usual sense of the word. 
GLOB's syntax is incompatible with what are commonly called “regular 
expressions”, and its feature set is a lot more limited. (It may 
technically implement a type of regular expression in the underlying 
algorithmic sense, but I think using the term is misleading.)


Quite correct, I meant REGEXP as an internal function of the 
Regular-expression type, not the official "regular expression" syntax 
- So a "misleading term" then in your words. Allow me to be more clear 
then: GLOB in SQLite specifically matches Unix file globbing syntax 
(which is very different to official RegEx). 3rd party utilities can 
override both the internal REGEXP and GLOB functions with custom 
versions.

https://sqlite.org/lang_expr.html#glob

The bit I don't know for sure is whether Unix file globbing will 
regard 'AB5' as matching '*[^1-9]*' or not? I know in SQLite it 
matches (and I believe this to be correct, but I could be mistaken and 
I don't have a Unix box handy to test).


Either way, the concern is more towards consistency than specific 
operation.
The SQLite scripts I sent previously will reveal any difference 
between versions if they exist. Have you tried it on different 
versions of the CLI?



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users




Re: [sqlite] SQLite3 Tutorial error

2017-01-04 Thread Ken Wagner

Yes, I am beginning to understand that. SQLite3 is its own GLOB standard.

I will abide by that.

It is just very confusing when 5 other apps using SQLite as their DB 
engine all report the opposite.

The SQLite versions they use are 3.9.2, 3.10.1, 3.11.0 and 3.13.0.

Example: the SQLite Manager in FireFox 50 uses SQLite 3.13.0 and 
faithfully removes the names with digits in them.


I am developing an app for contractors which will include a SQLite3 GUI 
so they can browse their files at will. The CLI is not viable for them 
as they are not very computer literate. They do grasp the 'Select * from 
parts where ;' syntax.


And there's the rub: GLOB "*[^1-9]*" works in the GUI tools that use 
SQLite but not in the SQLite CLI.


This introduces an additional support issue because some users will 
migrate to the CLI.


If you go to www.sqlitetutorial.net they, too, use and also teach the 
GLOB "*[^1-9]*" usage.


My issue is: Why the discrepancy. The other 5 apps all honor the GLOB 
"*[^1-9]*" usage. Only SQLite3 CLI doesn't. If this was deprecated it 
would be understandable. But, no such proviso is shown.


Do you not think that's a bit odd?

Rest assured, I will abide by the SQLite 3.15.0 and future upgrades. 
It's just extra work and support, places a blot on the escutcheon of 
efficiency. It's awkward mixing the GLOB and Regex metaphors.


Ken


On 01/04/2017 11:01 AM, Jens Alfke wrote:

On Jan 4, 2017, at 5:57 AM, R Smith  wrote:

As I have it (and as is implemented by SQLite) the GLOB operator implements a 
REGEXP that matches against a regexp pattern

No, these are NOT regular expressions in the usual sense of the word. GLOB's 
syntax is incompatible with what are commonly called “regular expressions”, and 
its feature set is a lot more limited. (It may technically implement a type of 
regular expression in the underlying algorithmic sense, but I think using the 
term is misleading.)

Case in point: the string "*[^1-9]*" is illegal in every regex syntax I know 
of, because “*” is a postfix operator in regex and can’t appear at the start of the 
string.

Thanks to Dr. Hipp for quoting the exact definition. It looks like this is 
basically the same syntax as the Unix glob(3) function, which is familiar to 
anyone who’s used a Unix shell.

—Jens


Re: [sqlite] SQLite3 Tutorial error

2017-01-04 Thread Ken Wagner

Richard,

Why, thank you, for your very prompt reply and resolution for this 'nit'.

If I can be of any assistance in testing (or taxing) the system, please 
do inform me.


You have a jewel of a product and I enjoy using it immensely. Thanks for 
all the perseverance and patience.


You must have a considerable reservoir of both. Much appreciated.

Thank you for the artifact link. Very helpful.

I shall hold out for much smoother 3.16.1 rollout. My deepest empathies.

Best regards, Ken

On 01/04/2017 08:38 AM, Richard Hipp wrote:

On 1/4/17, Simon Slavin  wrote:

On 4 Jan 2017, at 1:43pm, Ken Wagner  wrote:


There is yet another product "DB Browser for SQLite" using SQLite v 3.9.2.
It, too, omits any row where name contains any char 1 thru 9. It appears
SQLite at one point did this as 'GLOB '*[^1-9]*'.

But it does not do so now. Does SQLite3 provide a detailed syntax
description of the GLOB permutations honored (and, perhaps, those
deprecated?)

No.  Not only is there no documentation for GLOB ...

Ugh.  Ok, I will fix that.  Meanwhile, you have
https://www.sqlite.org/src/artifact/c67273e1ec08abbd?ln=618-633

FWIW: The 3.16.1 release is not going well.  A potentially serious
problem has been discovered by Firefox.  And NDS also reports a lesser
concern.  So I am busy with other things right now, and I don't really
have time to deal with GLOB documentation right this moment.  Y'all
seem to be doing a good job of working this out.  Please continue...




Re: [sqlite] SQLite3 Tutorial error

2017-01-04 Thread Ken Wagner

Ryan,

Both 'AB6' or '5AB' fail the '[^1-9]' test. So, too do 'New Vol 4' and 
'#1'.


Ken


On 01/04/2017 07:57 AM, R Smith wrote:



On 2017/01/04 3:43 PM, Ken Wagner wrote:
Yes, I changed the query to NOT GLOB '*[1-9]*' and then it omitted 
the 1-9 char-containing entries.


However the logic of 'zero or any chars, then any single char NOT 1 
thru 9, then zero or any chars' should OMIT any name with a 1 thru 9 
in it regardless if it is 'Vol. 3', for example, or 'Vol. 33' because 
as soon as any single 1 thru 9 char is encountered the expression is 
false, i.e., it contains at least one char of 1 thru 9.



This is contrary to my understanding of the GLOB phrase (and I could 
well be wrong about it). As I have it (and as is implemented by 
SQLite) the GLOB operator implements a REGEXP that matches against a 
regexp pattern such that the program '*[^1-9]*' expanded means:

  * : ANY or none characters,
 Followed by
  [ : A character which is -
^ : NOT
1-9 : A character between 1 and 9
  ]
Followed by
  * : ANY or none characters

In English one can simply ask: "Does it have zero or more characters 
of any kind, followed by something that isn't a number 1 through 9, 
and then again zero or more characters of any kind?"


This answer seems to me to be YES (i.e. MATCH) all of the following:
'A', 'ABC', 'AB6' or '5AB'
but not match:
'', '1979' or '5'

Sure, one can construe the value '5AB' to mean Any characters () + 5 + 
Any characters (AB) to cause a non-match, but one can also construe 
the term '5AB' to mean Any characters (5) + NOT a number (A) + Any 
characters (B) and so it will match. Wildcards are tricky, and 
wildcards implemented in the negative are even worse.
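The match list above can be verified directly against SQLite's built-in operator; a small standalone sketch using Python's stdlib sqlite3 (the `matches_glob` helper name is mine, for illustration):

```python
import sqlite3

def matches_glob(value, pattern):
    """True if SQLite's built-in GLOB operator matches value against pattern."""
    con = sqlite3.connect(":memory:")
    result = con.execute("SELECT ? GLOB ?", (value, pattern)).fetchone()[0]
    con.close()
    return bool(result)

# Strings with at least one character outside 1-9 match '*[^1-9]*' ...
for v in ("A", "ABC", "AB6", "5AB"):
    assert matches_glob(v, "*[^1-9]*")
# ... while the empty string and strings built only from 1-9 do not.
for v in ("", "1979", "5"):
    assert not matches_glob(v, "*[^1-9]*")
```

In particular, 'AB6' and '5AB' do match, agreeing with the stock SQLite behaviour described in this thread.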


I personally agree with how it is done in off-the-shelf SQLite (though 
my opinion is not important, what does the standard say? Is there a 
standard?). If something else has a different implementation it is 
very easy for any 3rd party item to override the GLOB and REGEXP 
functions to a custom implementation, which is probably what you are 
seeing.





There is yet another product "DB Browser for SQLite" using SQLite v 
3.9.2. It, too, omits any row where name contains any char 1 thru 9. 
It appears SQLite at one point did this as 'GLOB '*[^1-9]*'.


But it does not do so now. Does SQLite3 provide a detailed syntax 
description of the GLOB permutations honored (and, perhaps, those 
deprecated?)


Because it may or may not override the GLOB and/or REGEXP functions.

SQLite claims to use the INFIX implementation of LIKE and GLOB 
natively. Perhaps that is a starting point for research?


Hope that helps!
Ryan



Re: [sqlite] SQLite3 Tutorial error

2017-01-04 Thread Ken Wagner

Ryan,

The Regex description of '[^0-9]' states NOT any 0 thru 9 char in any 
SINGLE char position. It can be amended to 1-9 or 3-7 or 1-4 as the user 
sees fit.


Tested it using Ruby and Rubular, a Regex Tester.

HTH,

Ken


On 01/04/2017 07:57 AM, R Smith wrote:



On 2017/01/04 3:43 PM, Ken Wagner wrote:
Yes, I changed the query to NOT GLOB '*[1-9]*' and then it omitted 
the 1-9 char-containing entries.


However the logic of 'zero or any chars, then any single char NOT 1 
thru 9, then zero or any chars' should OMIT any name with a 1 thru 9 
in it regardless if it is 'Vol. 3', for example, or 'Vol. 33' because 
as soon as any single 1 thru 9 char is encountered the expression is 
false, i.e., it contains at least one char of 1 thru 9.



This is contrary to my understanding of the GLOB phrase (and I could 
well be wrong about it). As I have it (and as is implemented by 
SQLite) the GLOB operator implements a REGEXP that matches against a 
regexp pattern such that the program '*[^1-9]*' expanded means:

  * : ANY or none characters,
 Followed by
  [ : A character which is -
^ : NOT
1-9 : A character between 1 and 9
  ]
Followed by
  * : ANY or none characters

In English one can simply ask: "Does it have zero or more characters 
of any kind, followed by something that isn't a number 1 through 9, 
and then again zero or more characters of any kind?"


This answer seems to me to be YES (i.e. MATCH) all of the following:
'A', 'ABC', 'AB6' or '5AB'
but not match:
'', '1979' or '5'

Sure, one can construe the value '5AB' to mean Any characters () + 5 + 
Any characters (AB) to cause a non-match, but one can also construe 
the term '5AB' to mean Any characters (5) + NOT a number (A) + Any 
characters (B) and so it will match. Wildcards are tricky, and 
wildcards implemented in the negative are even worse.


I personally agree with how it is done in off-the-shelf SQLite (though 
my opinion is not important, what does the standard say? Is there a 
standard?). If something else has a different implementation it is 
very easy for any 3rd party item to override the GLOB and REGEXP 
functions to a custom implementation, which is probably what you are 
seeing.





There is yet another product "DB Browser for SQLite" using SQLite v 
3.9.2. It, too, omits any row where name contains any char 1 thru 9. 
It appears SQLite at one point did this as 'GLOB '*[^1-9]*'.


But it does not do so now. Does SQLite3 provide a detailed syntax 
description of the GLOB permutations honored (and, perhaps, those 
deprecated?)


Because it may or may not override the GLOB and/or REGEXP functions.

SQLite claims to use the INFIX implementation of LIKE and GLOB 
natively. Perhaps that is a starting point for research?


Hope that helps!
Ryan



Re: [sqlite] SQLite3 Tutorial error

2017-01-04 Thread Ken Wagner

Ryan,

I will use the SQLite3 CLI as the acid test.

But I still don't grasp why v 3.11.0 honors the GLOB by omitting any row 
where name contains a 1 thru 9 char but 3.15.0 and 3.16.0 do not.


I expect I have missed a deprecation. It's just confusing and extra 
nit-picking and testing.


Thanks for the info.

Ken


On 01/04/2017 07:57 AM, R Smith wrote:



On 2017/01/04 3:43 PM, Ken Wagner wrote:
Yes, I changed the query to NOT GLOB '*[1-9]*' and then it omitted 
the 1-9 char-containing entries.


However the logic of 'zero or any chars, then any single char NOT 1 
thru 9, then zero or any chars' should OMIT any name with a 1 thru 9 
in it regardless if it is 'Vol. 3', for example, or 'Vol. 33' because 
as soon as any single 1 thru 9 char is encountered the expression is 
false, i.e., it contains at least one char of 1 thru 9.



This is contrary to my understanding of the GLOB phrase (and I could 
well be wrong about it). As I have it (and as is implemented by 
SQLite) the GLOB operator implements a REGEXP that matches against a 
regexp pattern such that the program '*[^1-9]*' expanded means:

  * : ANY or none characters,
 Followed by
  [ : A character which is -
^ : NOT
1-9 : A character between 1 and 9
  ]
Followed by
  * : ANY or none characters

In English one can simply ask: "Does it have zero or more characters 
of any kind, followed by something that isn't a number 1 through 9, 
and then again zero or more characters of any kind?"


This answer seems to me to be YES (i.e. MATCH) all of the following:
'A', 'ABC', 'AB6' or '5AB'
but not match:
'', '1979' or '5'

Sure, one can construe the value '5AB' to mean Any characters () + 5 + 
Any characters (AB) to cause a non-match, but one can also construe 
the term '5AB' to mean Any characters (5) + NOT a number (A) + Any 
characters (B) and so it will match. Wildcards are tricky, and 
wildcards implemented in the negative are even worse.


I personally agree with how it is done in off-the-shelf SQLite (though 
my opinion is not important, what does the standard say? Is there a 
standard?). If something else has a different implementation it is 
very easy for any 3rd party item to override the GLOB and REGEXP 
functions to a custom implementation, which is probably what you are 
seeing.





There is yet another product "DB Browser for SQLite" using SQLite v 
3.9.2. It, too, omits any row where name contains any char 1 thru 9. 
It appears SQLite at one point did this as 'GLOB '*[^1-9]*'.


But it does not do so now. Does SQLite3 provide a detailed syntax 
description of the GLOB permutations honored (and, perhaps, those 
deprecated?)


Because it may or may not override the GLOB and/or REGEXP functions.

SQLite claims to use the INFIX implementation of LIKE and GLOB 
natively. Perhaps that is a starting point for research?


Hope that helps!
Ryan



Re: [sqlite] SQLite3 Tutorial error

2017-01-04 Thread Ken Wagner

Simon,

Yes, I am using the SQLite3 CLI. That's how I discovered the different 
GLOB outcomes.


I now understand that the SQLite3 GLOB function is not fully documented 
so it's best to just experiment with the GLOB function at the CLI for 
SQLite3. And avoid using SQLite3 versions prior to 3.15.


It's doable. Just adds extra work requiring checking.

Thanks,

Ken

On 01/04/2017 07:54 AM, Simon Slavin wrote:

On 4 Jan 2017, at 1:43pm, Ken Wagner  wrote:


There is yet another product "DB Browser for SQLite" using SQLite v 3.9.2. It, 
too, omits any row where name contains any char 1 thru 9. It appears SQLite at one point 
did this as 'GLOB '*[^1-9]*'.

But it does not do so now. Does SQLite3 provide a detailed syntax description 
of the GLOB permutations honored (and, perhaps, those deprecated?)

No.  Not only is there no documentation for GLOB but a programmer can replace 
SQLite’s GLOB function with their own one, using the external function 
interface.  You should not produce important production code which relies on 
the implementation of GLOB unless you control every link in the programming 
chain.  And if you’re using a 3rd Party browser then you are obviously not 
doing that.

If you want to test how SQLite itself handles GLOB, please use the SQLIte 
command-line tool.  This can be downloaded as one of the 'precompiled binaries' 
for your platform, and is documented here:

<https://sqlite.org/cli.html>

This tool is written and supported by the SQLite development team and conforms 
in all ways to how SQLite is meant to be used.  If you find a bug in that tool, 
it will be fixed.

If you have problems with any other program which uses SQLite you are going to 
get better results by contacting whoever developed that program.

Simon.


Re: [sqlite] SQLite3 Tutorial error

2017-01-04 Thread Ken Wagner

Ryan,

FWIW -- I upgraded the DB Browser from v. 3.9 to v 3.11.0. This too 
honors the GLOB '*[^1-9]*'  by omitting any entry with a 1 thru 9 in it 
in any char position.


This sqlitebrowser is on GitHub. It's a nice SQLite tool. (So far...)

Ken


On 01/04/2017 07:13 AM, R Smith wrote:
Just one more point of clarity, in case my previous explanation did 
not high-light this:


The query:
select trackid, name from tracks where name GLOB '*[^1-9]*';

is completely different to the query:
select trackid, name from tracks where name NOT GLOB '*[1-9]*';

The prior excludes only names which are completely non-zero-containing 
integers (such as "1979"), the latter excludes any value that contains 
a character between 1 and 9 at any position (such as "Opus No. 1" or 
"20 Flight Rock"). The two queries merely intersect in one specific 
point in the Chinook DB (at "1979", that is) but they have nothing 
else in common.
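The difference is easy to see on a few representative names from the thread (a standalone sketch with Python's stdlib sqlite3; the four-row table is mine, not the full Chinook schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tracks(name TEXT)")
con.executemany("INSERT INTO tracks VALUES (?)",
                [("1979",), ("Opus No. 1",), ("20 Flight Rock",), ("Hallelujah",)])

# GLOB '*[^1-9]*' drops only strings built entirely from 1 thru 9, e.g. '1979'.
q1 = [r[0] for r in con.execute(
    "SELECT name FROM tracks WHERE name GLOB '*[^1-9]*'")]

# NOT GLOB '*[1-9]*' drops anything containing a 1 thru 9 anywhere.
q2 = [r[0] for r in con.execute(
    "SELECT name FROM tracks WHERE name NOT GLOB '*[1-9]*'")]

print(q1)  # ['Opus No. 1', '20 Flight Rock', 'Hallelujah']
print(q2)  # ['Hallelujah']
```

Only "Hallelujah", which contains no digit at all, survives both queries.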


Cheers,
Ryan


On 2017/01/04 2:53 PM, R Smith wrote:



On 2017/01/04 7:17 AM, Ken Wagner wrote:

About 2/3 the way down the page at:

http://www.sqlitetutorial.net/sqlite-glob/   Get names without 
[1-9].


select trackid, name from tracks where name GLOB '*[^1-9]*';

Works properly in SQLiteMan and the SQLite Tutorial. ( Two 
different products, not officially part of sqlite.org, I think, but 
use the sqlite app.)


But not in sqlite3 3.15.1 and 3.16.1.



That should not "work" in ANY version of SQLite (depending on what 
you mean by "work"). The phrase '*[^1-9]*' essentially asks to match 
'Anything, followed by something that is NOT a character 1 through 9, 
followed by Anything again' which is really ANYTHING that isn't 
specifically an integer which doesn't contain zeroes. i.e. it will 
exclude a name like "1996" but not a name like "2016" or "123.456", 
so in the Chinook database (which the tutorial uses) the query will 
list ALL thousands of names except one (namely a song titled: "1979") 
so it would be hard for you to differentiate between the query that 
"works" and one that doesn't.


Perhaps suggest to the Tutorial creators to use phrases like 
'*[^0-9]*' instead of 1-9 to start with, and then maybe an example 
that has more than one single exclusion across the entire DB (which 
make the results seem superfluous and not very educational).


Another confusing bit is your claim that it "works" in an earlier 
version and not in the newer versions - I get the same results in 3.9 
and 3.16.

May I ask how it differs for you?




Re: [sqlite] SQLite3 Tutorial error

2017-01-04 Thread Ken Wagner
Yes, I changed the query to NOT GLOB '*[1-9]*' and then it omitted the 
1-9 char-containing entries.


However the logic of 'zero or any chars, then any single char NOT 1 thru 
9, then zero or any chars' should OMIT any name with a 1 thru 9 in it 
regardless if it is 'Vol. 3', for example, or 'Vol. 33' because as soon 
as any single 1 thru 9 char is encountered the expression is false, 
i.e., it contains at least one char of 1 thru 9.


There is yet another product "DB Browser for SQLite" using SQLite v 
3.9.2. It, too, omits any row where name contains any char 1 thru 9. It 
appears SQLite at one point did this as 'GLOB '*[^1-9]*'.


But it does not do so now. Does SQLite3 provide a detailed syntax 
description of the GLOB permutations honored (and, perhaps, those 
deprecated?)


Thanks.

Ken Wagner


On 01/04/2017 07:13 AM, R Smith wrote:
Just one more point of clarity, in case my previous explanation did 
not high-light this:


The query:
select trackid, name from tracks where name GLOB '*[^1-9]*';

is completely different to the query:
select trackid, name from tracks where name NOT GLOB '*[1-9]*';

The prior excludes only names which are completely non-zero-containing 
integers (such as "1979"), the latter excludes any value that contains 
a character between 1 and 9 at any position (such as "Opus No. 1" or 
"20 Flight Rock"). The two queries merely intersect in one specific 
point in the Chinook DB (at "1979", that is) but they have nothing 
else in common.


Cheers,
Ryan


On 2017/01/04 2:53 PM, R Smith wrote:



On 2017/01/04 7:17 AM, Ken Wagner wrote:

About 2/3 the way down the page at:

http://www.sqlitetutorial.net/sqlite-glob/   Get names without 
[1-9].


select trackid, name from tracks where name GLOB '*[^1-9]*';

Works properly in SQLiteMan and the SQLite Tutorial. ( Two 
different products, not officially part of sqlite.org, I think, but 
use the sqlite app.)


But not in sqlite3 3.15.1 and 3.16.1.



That should not "work" in ANY version of SQLite (depending on what 
you mean by "work"). The phrase '*[^1-9]*' essentially asks to match 
'Anything, followed by something that is NOT a character 1 through 9, 
followed by Anything again' which is really ANYTHING that isn't 
specifically an integer which doesn't contain zeroes. i.e. it will 
exclude a name like "1996" but not a name like "2016" or "123.456", 
so in the Chinook database (which the tutorial uses) the query will 
list ALL thousands of names except one (namely a song titled: "1979") 
so it would be hard for you to differentiate between the query that 
"works" and one that doesn't.


Perhaps suggest to the Tutorial creators to use phrases like 
'*[^0-9]*' instead of 1-9 to start with, and then maybe an example 
that has more than one single exclusion across the entire DB (which 
make the results seem superfluous and not very educational).


Another confusing bit is your claim that it "works" in an earlier 
version and not in the newer versions - I get the same results in 3.9 
and 3.16.

May I ask how it differs for you?




Re: [sqlite] SQLite3 Tutorial error

2017-01-04 Thread Ken Wagner
The "select trackid, name from tracks where name GLOB '*[^1-9]*';" query 
works as expected and does NOT include names with 1 thru 9 in them in 
SQLiteMan, a Linux/Ubuntu SQL manager. It uses SQLite as the DB backend.


The logic of any # chars but NOT 1 thru 9 plus any # chars means NOT 
1,2,3,4,5,6,7,8 or 9 in name col.


This also works the same in the 'sqlitetutorial.net' example given.

The issue is not which works how; the issue is why the difference? I.e., 
consistency.


I have notified the 'sqlitetutorial.net' folks of the discrepancy.

I added the SQLite Manager tool to FireFox v 50.1.0 and it, too, omits 
any # 1-9 in the query results using " GLOB '*[^1-9]*' ".


How to resolve??

Thanks.


On 01/04/2017 06:53 AM, R Smith wrote:



On 2017/01/04 7:17 AM, Ken Wagner wrote:

About 2/3 the way down the page at:

http://www.sqlitetutorial.net/sqlite-glob/   Get names without 
[1-9].


select trackid, name from tracks where name GLOB '*[^1-9]*';

Works properly in SQLiteMan and the SQLite Tutorial. (Two different 
products, not officially part of sqlite.org, I think, but both use the 
sqlite library.)


But not in sqlite3 3.15.1 and 3.16.1.



That should not "work" in ANY version of SQLite (depending on what you 
mean by "work"). The pattern '*[^1-9]*' asks to match 'anything, 
followed by one character that is NOT 1 through 9, followed by anything 
again', which matches ANY name except one made up entirely of the 
digits 1 through 9. I.e. it will exclude a name like "1996" but not a 
name like "2016" or "123.456", so in the Chinook database (which the 
tutorial uses) the query will list ALL of the thousands of names except 
one (namely a song titled "1979"), so it would be hard for you to tell 
the difference between a query that "works" and one that doesn't.
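The behaviour described above is easy to reproduce. Below is a minimal sketch using Python's built-in sqlite3 module; the tiny tracks table is a made-up stand-in for the Chinook data, not the real dataset:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tracks (trackid INTEGER PRIMARY KEY, name TEXT)")
db.executemany("INSERT INTO tracks (name) VALUES (?)",
               [("1979",), ("2016",), ("123.456",), ("Bohemian Rhapsody",)])

# '*[^1-9]*' matches any name containing at least one character outside 1-9,
# so it excludes only names made up entirely of the digits 1 through 9.
rows = db.execute(
    "SELECT name FROM tracks WHERE name GLOB '*[^1-9]*' ORDER BY name").fetchall()
print([r[0] for r in rows])   # every name except "1979"

# What the tutorial presumably intended: names containing no digits at all.
no_digits = db.execute(
    "SELECT name FROM tracks WHERE name NOT GLOB '*[0-9]*'").fetchall()
print([r[0] for r in no_digits])  # only "Bohemian Rhapsody"
```

The second query is the form that actually filters out every name containing a digit.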


Perhaps suggest to the Tutorial creators to use patterns like 
'*[^0-9]*' instead of 1-9 to start with, and then maybe an example 
that has more than one single exclusion across the entire DB (a 
single exclusion makes the results seem superfluous and not very 
educational).


Another confusing bit is your claim that it "works" in an earlier 
version and not in the newer versions - I get the same results in 3.9 
and 3.16.

May I ask how it differs for you?




Re: [sqlite] SQLite3 Tutorial error

2017-01-03 Thread Ken Wagner

Thanks, Jens.

I will do that.

- Ken


On 01/04/2017 12:29 AM, Jens Alfke wrote:

On Jan 3, 2017, at 9:17 PM, Ken Wagner  wrote:

About 2/3 the way down the page at:

http://www.sqlitetutorial.net/sqlite-glob/ 
<http://www.sqlitetutorial.net/sqlite-glob/>   Get names without [1-9].

You should probably report this to the people who run that website. It’s not 
associated with SQLite itself.

—Jens


Re: [sqlite] SQLite3 Tutorial error

2017-01-03 Thread Ken Wagner

About 2/3 the way down the page at:

http://www.sqlitetutorial.net/sqlite-glob/   Get names without [1-9].

select trackid, name from tracks where name GLOB '*[^1-9]*';

Works properly in SQLiteMan and the SQLite Tutorial. (Two different 
products, not officially part of sqlite.org, I think, but both use the 
sqlite library.)


But not in sqlite3 3.15.1 and 3.16.1.

This works as expected in sqlite3 (3.15.1 and 3.16.1):

select trackid, name from tracks where name not GLOB '*[1-9]*'; Gets 
names without [1-9].




On 01/03/2017 07:37 PM, Richard Hipp wrote:

On 1/3/17, Ken Wagner  wrote:

Hi SQLite,

In the SQLite3 Tutorial

What tutorial are you referring to?





[sqlite] SQLite3 Tutorial error

2017-01-03 Thread Ken Wagner

Hi SQLite,

In the SQLite3 Tutorial the following query is said to find all names 
WITHOUT numbers in them. But it fails to do so.


select trackid, name from tracks where name GLOB '*[^1-9]*';   per the 
tutorial fails to list names WITHOUT numbers in them.


However, this DOES work:

select trackid, name from tracks where name not GLOB '*[1-9]*';

Is this a tutorial error?  Using SQLiteMan the first query DOES work. 
But not in SQLite 3.15.1 or 3.16.1.


Is this a bug or a tutorial error? Can't tell which version of SQLite 
SQLiteMan is using.


System: Ubuntu 16.04, SQLite3 (3.15.1 and 3.16.1). Also tested using 
Ruby 2.3.3 with ruby-sqlite extension.


Thanks,

Ken Wagner


[sqlite] Cross-Platform ADO wrapper for iOS, Android and Win 8.1 WP 8.1?

2014-08-11 Thread Ken Wenyon
Is there ADO Support for SQLite using Windows 8.1 and Windows Phone 8.1?  I am 
looking for a Cross-Platform ADO wrapper for iOS, Android and Win 8.1 WP 8.1?


Ken Wenyon


[sqlite] ADO Support for SQLite using Windows 8.1 and Windows Phone 8.1?

2014-07-29 Thread Ken Wenyon
Is there ADO Support for SQLite using Windows 8.1 and Windows Phone 8.1?  I am 
looking for a Cross-Platform ADO wrapper for iOS, Android and Win 8.1 WP 8.1?


Re: [sqlite] ORDER BY issue v3.8.2 amalgamation

2014-02-03 Thread Labar, Ken
Hello Simon, Clemens, and Richard,

Thank you for your help.


> Simon: "What are you seeing from your SELECT that you weren't expecting ?"
0 | Batt
0 | ClockBatt
0 | Batt
0 | BP
0 | ORP
0 | Ref
0 | pH
0 | pH
0 | DO
...


> Simon: "Can you reduce your INSERTs to just two rows, and still get results 
> you consider to be incorrect from the SELECT ?"
Yes, I reduced the table to 2 rows by:
DELETE FROM userparameter WHERE hepi_parameter_id !=32961 AND hepi_parameter_id 
!= 32881;

select (rtTableSort < 1000) as selected, abbrString from userParameter order by 
abbrString;

results:

0 | ORP

0 | DO


> DRH: "Can you try recompiling with all compiler optimizations turned off and 
> see if you still get the error?"

I have turned off all optimizations (FYI: only C++ optimizations were on before)


> DRH: ", can you compile with -DSQLITE_DEBUG then run the query *after* first 
> running "PRAGMA vdbe_debug=ON" and then send us the output?"
I have been struggling to enable SQLITE_DEBUG with our uCOS-II VFS port.
(namely our code does not have stdout or printf as compiled now. I love 
embedded;-) I'm working on a solution for debugging, more soon).


> Clemens: "What is the EXPLAIN QUERY PLAN output for this query on the 
> handheld?"
0|0|0|SCAN TABLE userParameter
0|0|0|USE TEMP B-TREE FOR ORDER BY

Thank you,
Ken

ken labar | Embedded Firmware Engineer
Hach Hydromet | www.hachhydromet.com<http://www.hachhydromet.com/> | 
kla...@hach.com<mailto:kla...@hach.com>

Innovating technology behind better data




Please be advised that this email may contain confidential 
information.  If you are not the intended recipient, please notify us 
by email by replying to the sender and delete this message.  The 
sender disclaims that the content of this email constitutes an offer 
to enter into, or the acceptance of, any agreement; provided that the 
foregoing does not invalidate the binding effect of any digital or 
other electronic reproduction of a manual signature that is included 
in any attachment.



Re: [sqlite] ORDER BY issue v3.8.2 amalgamation

2014-02-02 Thread Labar, Ken
INSERT INTO "userParameter" 
VALUES(33041,17,1,1,1000,1000,1000,1000,1000,1000,287,288,245,'Chlorophyll 
a','CHL','V');
INSERT INTO "userParameter" 
VALUES(32913,9,1,1,1000,1000,1000,1000,1000,1000,14,299,259,'Barometric 
Pressure','BP','mmHg');
INSERT INTO "userParameter" 
VALUES(33057,18,1,1,1000,1000,1000,1000,1000,1000,331,332,334,'Rhodamine 
WT','RWT','ppb');
INSERT INTO "userParameter" 
VALUES(33073,19,1,1,1000,1000,1000,1000,1000,1000,331,332,245,'Rhodamine 
WT','RWT','V');
INSERT INTO "userParameter" 
VALUES(33089,20,1,1,1000,1000,1000,1000,1000,1000,343,344,345,'Phycocyanin','PCY','cells/mL');
INSERT INTO "userParameter" 
VALUES(33105,21,1,1,1000,1000,1000,1000,1000,1000,343,344,245,'Phycocyanin','PCY','V');
INSERT INTO "userParameter" 
VALUES(33121,22,1,1,1000,1000,1000,1000,1000,1000,352,353,345,'Phycoerythrin','PRY','cells/mL');
INSERT INTO "userParameter" 
VALUES(33137,23,1,1,1000,1000,1000,1000,1000,1000,352,353,245,'Phycoerythrin','PRY','V');
INSERT INTO "userParameter" 
VALUES(33265,31,1,1,1000,1000,1000,1000,1000,1000,543,373,245,'Battery 
Voltage','Batt','V');
INSERT INTO "userParameter" 
VALUES(33281,32,1,1,1000,1000,1000,1000,1000,1000,544,464,245,'Clock Battery 
Voltage','ClockBatt','V');
INSERT INTO "userParameter" 
VALUES(33297,33,1,1,1000,1000,1000,1000,1000,1000,399,399,400,'Density','Density','kg/m3');
INSERT INTO "userParameter" 
VALUES(33153,24,1,1,1000,1000,1000,1000,1000,1000,401,401,402,'NH4+','NH4+','mg/L-N');
INSERT INTO "userParameter" 
VALUES(33169,25,1,1,1000,1000,1000,1000,1000,1000,403,403,402,'NH4Tot','NH4Tot','mg/L-N');
INSERT INTO "userParameter" 
VALUES(33185,26,1,1,1000,1000,1000,1000,1000,1000,404,404,402,'NO3-','NO3-','mg/L-N');
INSERT INTO "userParameter" 
VALUES(33201,27,1,1,1000,1000,1000,1000,1000,1000,401,401,214,'NH4+','NH4+','mV');
INSERT INTO "userParameter" 
VALUES(33217,28,1,1,1000,1000,1000,1000,1000,1000,404,404,214,'NO3-','NO3-','mV');
INSERT INTO "userParameter" 
VALUES(33233,29,1,1,1000,1000,1000,1000,1000,1000,405,405,196,'Cl-','Cl-','mg/L');
INSERT INTO "userParameter" 
VALUES(33329,35,1,1,1000,1000,1000,1000,1000,1000,418,422,240,'Altitude','Alt','meters');
INSERT INTO "userParameter" 
VALUES(33345,36,1,1,1000,1000,1000,1000,1000,1000,419,420,423,'Latitude','Lat','degrees');
INSERT INTO "userParameter" 
VALUES(33266,31,2,1,1000,1000,1000,1000,1000,1000,548,373,556,'Battery 
Level','Batt','%');
INSERT INTO "userParameter" 
VALUES(0,35,2,1,1000,1000,1000,1000,1000,1000,418,422,241,'Altitude','Alt','feet');
INSERT INTO "userParameter" 
VALUES(33361,37,1,1,1000,1000,1000,1000,1000,1000,720,721,423,'Longitude','Lon','degrees');
INSERT INTO "userParameter" 
VALUES(32914,9,2,1,1000,1000,1000,1000,1000,1000,14,299,723,'Barometric 
Pressure','BP','psi');
INSERT INTO "userParameter" 
VALUES(32915,9,3,1,1000,1000,1000,1000,1000,1000,14,299,724,'Barometric 
Pressure','BP','mbar');

select (rtTableSort < 1000) as selected, abbrString from userParameter order by 
abbrString;


This used to work until we upgraded sqlite to v3.8.2. It still does work at the 
PC level.
I’m wondering if we are short on RAM, and the sort engine is failing quietly, 
returning the un-sorted query with SQLITE_OK.
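For reference, this is the behavior a conforming build should exhibit. A reduced sketch using Python's built-in sqlite3 module follows; the two-column subset of userParameter and its values are illustrative, not the full schema from the dump above:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE userParameter (rtTableSort INTEGER, abbrString TEXT)")
db.executemany("INSERT INTO userParameter VALUES (?, ?)",
               [(1000, "ORP"), (1000, "DO"), (1000, "BP"), (5, "pH")])

# With the default BINARY collation the rows should come back in byte
# order of abbrString: BP, DO, ORP, pH. An unsorted result here would
# indicate the ORDER BY step is being skipped or failing silently.
rows = db.execute(
    "SELECT (rtTableSort < 1000) AS selected, abbrString "
    "FROM userParameter ORDER BY abbrString").fetchall()
print(rows)
```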

Thank you,
Ken

ken labar | Embedded Firmware Engineer
Hach Hydromet | www.hachhydromet.com<http://www.hachhydromet.com/> | 
kla...@hach.com<mailto:kla...@hach.com>

Innovating technology behind better data




[sqlite] ORDER BY issue v3.8.2 amalgamation

2014-01-31 Thread Labar, Ken
Hello,

We have a small sqlite3 database compiled into our handheld unit (system 
details below).
After upgrading from 3.7.14.1 to 3.8.2, our previously sorted queries are no 
longer sorted.
Testing this same database with the Win7 binary of 3.8.2, the query is 
correctly sorted.

I have tried:

* adding specific indexes

* reindex immediately before the query

* removing all indexes

* collating binary

* collating nocase

* debugging the sqlite engine, but nothing jumped out at me, and this 
is out of my knowledge base.

Any ideas on why the amalgamation would behave differently compiled into our 
project?

System details:

-  LPC2468 (ARM7)

-  uCOS-II (RTOS)

-  IAR C compiler




Thank you,
Ken

ken labar | Embedded Firmware Engineer
Hach Hydromet | www.hachhydromet.com<http://www.hachhydromet.com/> | 
kla...@hach.com<mailto:kla...@hach.com>

Innovating technology behind better data






[sqlite] Q: FTS 3-4 for WinRT

2012-10-16 Thread Ken Grant
Q: any plans to have a FTS3/4 WinRT extension for Metro style Win8 apps and if 
so, when?

Sent from my iPhone


[sqlite] SQLite/LINQ?

2012-04-25 Thread Ken godee

Is there any way to use

SQLite/LINQ/C# - programmatically?

I'm new to the C# world and once I read
about LINQ it sure seemed the way to go.

Maybe I'm missing something but I've really
beat this to death trying to find a solution.

I'm not using Visual Studio.

DBLing is dead.

LingPad works great, not sure how. Builds it's own
connect.

I can use ADO.Net no problem, just seems can't find
a Linq provider.

So if my thinking is right buy Visual Studio
or use ADO.Net seems to be the choice or maybe ALing ($$)

Thought I'd give the list a shot before I give up
and just resort to ADO.

Ideas?











[sqlite] whole-file locking with flock()

2010-04-27 Thread Ken Dreyer
Background: I am trying to test whole-file locking with SQLite on
OpenAFS, since the *nix OpenAFS clients do not yet support byte-range
locking. I am using Solaris 10.

I was originally using SQLite 3.3.7, since this is what is bundled
with PHP 5.2. This SQLite version had the ability to set a fixed
locking style at compile-time (with SQLITE_FIXED_LOCKING_STYLE). I set
this option to flockLockingStyle, and it appeared to work on AFS
read-write volumes. However, whenever I opened SQLite files on AFS
read-only volumes, PHP would hang... even if the SQL operations were
read-only (SELECTs).

To narrow down the cause of the problem, I want to build SQLite apart
from PHP. However, SQLite commit f70d552bcd removed the
SQLITE_FIXED_LOCKING_STYLE option. How can I build newer versions of
SQLite to always use whole-file locking?

- Ken


Re: [sqlite] multiple threads with shared cache mode

2009-11-24 Thread Ken
--- On Tue, 11/24/09, Nicolas Rivera  wrote:

> From: Nicolas Rivera 
> Subject: [sqlite] multiple threads with shared cache mode
> To: sqlite-users@sqlite.org
> Date: Tuesday, November 24, 2009, 12:22 PM
> Hi,
> 
> It is my understanding, per 
> http://www.hwaci.com/sw/sqlite/sharedcache.html section
> 2.1, that only 
> one write transaction may exist while in shared cache
> mode.  Is that 
> correct?

Correct.

> 
> If so, then I am trying to figure out what is the advantage
> of having 
> table level locks in shared cache mode.  If only one
> write transaction 
> can be pending at a time, irrespective of the table, then
> what advantage 
> is there to having table level locks?
> 
> To put it another way, is there anything gained in terms of
> concurrency 
> by using shared cached mode versus not using it?  BTW,
> I am not 
> interested in reading uncommitted data.

Yes, only one write transaction may occur at a time. But it is possible for 
writes to be happening on two separate tables by two separate threads. Once 
one of the threads needs to spill its "writes" to disk, the locks are acquired 
and a journal is created. At that point, if the other thread attempts to 
write, it will encounter the SQLITE_BUSY error.

Given this, one must also take care not to introduce self-deadlocks.

At least that's my understanding of it!
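Shared-cache table locks aside, the single-writer rule itself is easy to observe. A sketch using Python's built-in sqlite3 module and two plain connections to the same file; the file path and timeout settings are illustrative choices:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
# isolation_level=None: autocommit, so we control transactions explicitly.
# timeout=0: surface SQLITE_BUSY immediately instead of retrying.
a = sqlite3.connect(path, timeout=0, isolation_level=None)
b = sqlite3.connect(path, timeout=0, isolation_level=None)
a.execute("CREATE TABLE t (x)")

a.execute("BEGIN IMMEDIATE")       # first writer acquires the write lock
err = None
try:
    b.execute("BEGIN IMMEDIATE")   # second writer cannot start: SQLITE_BUSY
except sqlite3.OperationalError as e:
    err = str(e)
print(err)                         # database is locked
a.execute("ROLLBACK")
```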


Re: [sqlite] Idea for improving page cache

2009-10-29 Thread Ken
Just for the sake of discussion I've attached a performance graph for various 
C++ data structures plus the Unrolled LL. The tests were run on a Dell Vostro 
1400 laptop. As you can see, the graphs show the ULL to be quite efficient for 
insert/delete from the front/back of the list. I believe this is mainly due to 
the fact that a new node is not allocated for each insert operation.

If there isn't a page cache on the processor then the ULL will likely not 
perform well.

As John and Pavel have pointed out, it's probably not ideal for the page cache, 
since the pages need to point back into the list. So this would be very 
problematic for page movement.

Ken




--- On Tue, 10/27/09, John Crenshaw  wrote:

> From: John Crenshaw 
> Subject: Re: [sqlite] Idea for improving page cache
> To: "General Discussion of SQLite Database" 
> Date: Tuesday, October 27, 2009, 8:29 PM
> I don't think ULL buys you as much
> extra cache space as you think, and
> it certainly isn't anywhere near as fast as LL. Consider
> the following
> problems that still remain:
> 1. If you need to point to an element (common with linked
> lists) it gets
> really messy. Sorted inserts and deletes will change the
> memory location
> of elements, as well as their offset in the node, so you
> can't point to
> them by memory location or node + offset. Instead, you have
> to point to
> the list, plus something static and uniquely identifiable
> about the
> element. Many times this is going to involve additional
> space once per
> node. Assuming you care about word alignment (which you
> should if you're
> talking memory locality and stuff), the additional space
> needs to be the
> size of a pointer. This would give a net savings of one
> pointer for
> double LL, and ZERO net savings for single LL, unless each
> element
> already stored something uniquely identifying.
> 2. Unless you never need to add OR remove elements in the
> middle, you
> need wasted space to avoid very expensive operations. You
> still have to
> store the data. If the data is a pointer to other data, the
> whole
> business of cache locality is largely academic anyway,
> since you've got
> plenty of chances for cache misses when you look at the
> other data.
> Let's say that the data stored is 16 bytes (small, but
> useful), and
> pointers are 4 bytes. The data in this case uses 66% of the
> space, so
> unless you can keep the wasted space per node below 33% you
> don't gain
> anything at all over a straight double LL with good
> locality. If you're
> replacing a single LL, you have to keep wasted space below
> 20% to see
> any gain. At larger and larger data, this number gets
> smaller and
> smaller.
> 3. If data is arbitrarily removed at approximately the same
> rate that
> data is added, the wasted space will naturally gravitate
> towards 50%,
> which is a no gain for footprint in most cases. Even for a
> single LL
> with pointer sized data this is a zero gain.
> 4. You CAN improve wasted space with balancing to
> consolidate adjacent
> nodes when possible, but this can quickly become expensive
> with lots of
> memory copy operations just to avoid a little wasted space
> in the cache.
> If we measure the value of that space in terms of the
> percentage of a
> cache miss that it might avoid times the cost of a cache
> miss in time,
> it is unclear if there is a real gain here.
> 5. Insert and delete operations will require, on average, a
> copy of half
> the elements in a node. Compared to the 2 operation
> insert/delete of a
> single LL and this is not cheap. Again, compare to the
> value of the
> cache you are trying to save by avoiding the pointers, and
> this doesn't
> look like a wise spend to me.
> 6. Linked lists are designed for use in cases where inserts
> and removals
> should be fast and simple, but ULL ignores this. Deletes
> are especially
> problematic, requiring a full scan to the element being
> deleted,
> followed by a memory copy and possibly node balancing. In a
> double
> linked list, if you have a pointer to the node you want to
> delete, the
> delete costs only 2 operations.
> 7. ULL will outperform LL when traversing nodes to find a
> sorted
> insertion point, but may likely lose that gain when it
> actually does the
> insert. In any case btree will outperform either when
> finding a sorted
> insertion point, still gets the superfast insert, and can
> get
> appropriate locality by pooling.
> 
> I guess my point is, avoiding cache misses is a fine goal,
> but I don't
> believe ULL is the tool to use. A Pool allocation scheme
> that uses the
> same pool size as 

Re: [sqlite] Idea for improving page cache

2009-10-27 Thread Ken


--- On Tue, 10/27/09, Pavel Ivanov  wrote:

> From: Pavel Ivanov 
> Subject: Re: [sqlite] Idea for improving page cache
> To: kennethinbox-sql...@yahoo.com, "General Discussion of SQLite Database" 
> 
> Date: Tuesday, October 27, 2009, 12:38 PM
> Are you sure that there will be
> improvement with ULL?
> If you're talking about improving due to CPU internal cache
> then first
> of all you have to store in the list pointers to pages, not
> pages
> themselves (you don't want to store several pages in one
> chunk of
> memory, do you?).

In general the ULL would store the elements in the node, but 2k or greater 
elements would be too large to gain any efficiencies. So yes a reference 
pointer to the page should be stored. 

 So you're getting one more pointer
> dereference every
> time you go to the list. 
Not really, because the list itself is composed of references to other nodes, 
correct? Traversal of N elements requires N dereferences on a standard LL, 
whereas a ULL only requires one node dereference per group of elements. 

Then you have to store additional
> information
> in the page to remember where in the list pointer to this
> page is
> stored. And each time list nodes are split or combined you
> have to
> change this information in each page.

Not really, just a reference to the ullNode that contains the page reference. 
This ullNode can be searched quite quickly to find the referenced page, once 
its on the CPU cache.


> And now the main argument: ULL is good when you want to
> save memory
> overhead (which is very questionable in case of page cache)
> and good
> in getting elements by index and traversal of the whole
> list. Last two
> operations are never executed in SQLite.

Are you sure the list is never traversed? I thought I saw some sorting and page 
cleanup routines that traversed the list.

> So looking at all this I don't see how performance can be
> improved
> (for me it seems that it's quite the opposite). Did I
> overlook
> something?

I'm not sure it can be improved either. Its just an idea. Implementation and 
testing would be the only definitive way to tell.

Agreed that it would degrade performance if the CPU does not have a processor 
cache. This alone is reason enough to avoid the ULL for sqlite.

> 
> Pavel
> 
> On Tue, Oct 27, 2009 at 1:07 PM, Ken 
> wrote:
> > Hi All,
> >
> > I have an idea that could improve the page cache
> performance.
> >
> > Instead of using a regular linked list to connect
> pages that are on the cache use an "unrolled linked list".
>  On some architectures due to the CPU caching the ULL is
> about 40 times faster.
> >
> > Still this is mostly insignificant to the speed of
> disk i/o but every bit helps...
> >
> > Just an idea, not sure if its been considered,
> feasible or even worthwhile.
> >
> > Ken
> 


Re: [sqlite] Idea for improving page cache

2009-10-27 Thread Ken


--- On Tue, 10/27/09, Kristoffer Danielsson  
wrote:

> From: Kristoffer Danielsson 
> Subject: Re: [sqlite] Idea for improving page cache
> To: sqlite-users@sqlite.org
> Date: Tuesday, October 27, 2009, 1:03 PM
> 
> In game development you seldom use linked list altogether
> due to the increased rate of cache-misses.
> 
>  
> 
> Why not use an array with some smart lookup-algorithm?
>  
> > From: paiva...@gmail.com
> > Date: Tue, 27 Oct 2009 13:38:27 -0400
> > To: kennethinbox-sql...@yahoo.com;
> sqlite-users@sqlite.org
> > Subject: Re: [sqlite] Idea for improving page cache
> > 
> > Are you sure that there will be improvement with ULL?
> > If you're talking about improving due to CPU internal
> cache then first
> > of all you have to store in the list pointers to
> pages, not pages
> > themselves (you don't want to store several pages in
> one chunk of
> > memory, do you?). So you're getting one more pointer
> dereference every
> > time you go to the list. Then you have to store
> additional information
> > in the page to remember where in the list pointer to
> this page is
> > stored. And each time list nodes are split or combined
> you have to
> > change this information in each page.
> > And now the main argument: ULL is good when you want
> to save memory
> > overhead (which is very questionable in case of page
> cache) and good
> > in getting elements by index and traversal of the
> whole list. Last two
> > operations are never executed in SQLite.
> > So looking at all this I don't see how performance can
> be improved
> > (for me it seems that it's quite the opposite). Did I
> overlook
> > something?
> > 
> > Pavel
> > 
> > On Tue, Oct 27, 2009 at 1:07 PM, Ken 
> wrote:
> > > Hi All,
> > >
> > > I have an idea that could improve the page cache
> performance.
> > >
> > > Instead of using a regular linked list to connect
> pages that are on the cache use an "unrolled linked
> list".  On some architectures due to the CPU caching
> the ULL is about 40 times faster.
> > >
> > > Still this is mostly insignificant to the speed
> of disk i/o but every bit helps...
> > >
> > > Just an idea, not sure if its been considered,
> feasible or even worthwhile.
> > >
> > > Ken

An unrolled linked list takes advantage of the processor cache. And it reduces 
the overhead of the list pointer elements. 

struct ullNode {
    int   highWaterMark;       /* number of element slots currently in use */
    void *array_of_elements;   /* elements stored contiguously in this node */
    struct ullNode *next;
    struct ullNode *prev;
};

This takes advantage of the processor cache by storing multiple items in a 
node. And allows the nodes to be linked for traversal of the list.




[sqlite] Idea for improving page cache

2009-10-27 Thread Ken
Hi All,

I have an idea that could improve the page cache performance.

Instead of using a regular linked list to connect pages that are on the cache 
use an "unrolled linked list".  On some architectures due to the CPU caching 
the ULL is about 40 times faster. 

Still this is mostly insignificant to the speed of disk i/o but every bit 
helps...

Just an idea, not sure if its been considered, feasible or even worthwhile. 

Ken


Re: [sqlite] SQLite: question

2009-10-27 Thread Ken

SQL uses a single quote (tick mark) to delimit strings. C, C++, and other 
languages use double quotes to delimit strings.

I think the column names may be double quoted to differentiate them from 
data.
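The quoting rules can be checked quickly with Python's built-in sqlite3 module; the table and value below are made-up examples:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (myColumn TEXT)")
# Single quotes delimit string data in SQL.
db.execute("INSERT INTO t (myColumn) VALUES ('STATUS')")

# Double quotes are for identifiers (column/table names), so the
# reliable way to compare against a string is single quotes:
rows = db.execute("SELECT myColumn FROM t WHERE myColumn = 'STATUS'").fetchall()
print(rows)  # [('STATUS',)]

# Safer still: bind the value and let the driver handle quoting entirely.
rows = db.execute("SELECT myColumn FROM t WHERE myColumn = ?",
                  ("STATUS",)).fetchall()
print(rows)  # [('STATUS',)]
```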

--- On Tue, 10/27/09, D. Richard Hipp  wrote:

> From: D. Richard Hipp 
> Subject: Re: [sqlite] SQLite: question
> To: "General Discussion of SQLite Database" 
> Cc: "Sergiu _" 
> Date: Tuesday, October 27, 2009, 7:56 AM
> Question forwarded to the
> sqlite-users mailing list.
> 
> Quick answer:  String are quoted in SQL using single
> quotes, not  
> double-quotes.  What you are seeing is not a
> bug.  You are misusing  
> the string quoting mechanism.
> 
> On Oct 27, 2009, at 8:51 AM, Sergiu _ wrote:
> 
> > Hello,
> >
> > I use SQLite in one of my projects for quite long time
> and it looked  
> > to be a very good product.
> > Though, I think I spotted a bug, but I am not sure.
> Please confirm.
> >
> > Scenario: Create a table having at least one column of
> TEXT type  
> > (let's say "myColumn"). Insert a row, having the value
> "STATUS" on  
> > that TEXT column; Then try to select the row by using
> WHERE myColumn  
> > = "STATUS". It does not work for me.
> >
> > Could you please tell me whether this is a bug or I am
> just using it  
> > wrong ?
> > Thank you.
> >
> > Sergiu
> 
> D. Richard Hipp
> d...@hwaci.com
> 
> 
> 
> 


Re: [sqlite] multiple prepare statements

2009-09-30 Thread Ken

--- On Wed, 9/30/09, Scott Hess  wrote:

> From: Scott Hess 
> Subject: Re: [sqlite] multiple prepare statements
> To: "General Discussion of SQLite Database" 
> Date: Wednesday, September 30, 2009, 9:02 AM
> Since SQLite is an embedded database,
> it generally does not pay to
> count statements, unless they add additional disk
> I/O.  You can code
> like this:
> 
>   BEGIN
>     SELECT ...
>     if (select results A)
>       INSERT ...
>     else
>       UPDATE ...
>   END
> 
> and it will be about as fast as either the INSERT or the
> UPDATE run
> independently.  This is because the INSERT or the
> UPDATE will have to
> read in all the pages the SELECT would have read in, so the
> SELECT is
> essentially free (just a small cost in CPU).  Well,
> assuming that your
> SELECT is selecting the rows you mean to UPDATE or INSERT
> ...
> 
> -scott
> 

Depending upon your system and your data. 
Say you have some type of Primary Key or unique index.

For the case where updates happen infrequently code this way.
Begin
Insert into ..
IF PK failure
   Update
Commit

If you tend to load up the data first and then do more updates:

Begin
 Update 
 IF No data Found (0 rows updated)
 Insert
Commit
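Both orderings can be sketched with Python's built-in sqlite3 module; the kv table and the IntegrityError handling below are illustrative assumptions, not code from this thread:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")

def upsert_insert_first(k, v):
    """Updates are rare: try the INSERT, fall back to UPDATE on PK failure."""
    try:
        db.execute("INSERT INTO kv (k, v) VALUES (?, ?)", (k, v))
    except sqlite3.IntegrityError:          # PRIMARY KEY violation
        db.execute("UPDATE kv SET v = ? WHERE k = ?", (v, k))

def upsert_update_first(k, v):
    """Updates are common: try the UPDATE, INSERT only if 0 rows changed."""
    cur = db.execute("UPDATE kv SET v = ? WHERE k = ?", (v, k))
    if cur.rowcount == 0:                   # no existing row to update
        db.execute("INSERT INTO kv (k, v) VALUES (?, ?)", (k, v))

upsert_insert_first("a", "1")
upsert_insert_first("a", "2")   # second call takes the UPDATE path
upsert_update_first("b", "3")   # no row yet, takes the INSERT path
print(db.execute("SELECT k, v FROM kv ORDER BY k").fetchall())
```

Which ordering wins depends on the ratio of inserts to updates, as described above.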




Re: [sqlite] SQLite database on a certain high-performance "SSD"

2009-09-23 Thread Ken
Interesting

Mind if we ask what the SSD device brand and model is?

Is it a disk-backed type of device with equal memory in front? I recall seeing 
devices like this about 7 years ago. I'm thinking that the sync call is causing 
the device to write its memory contents back out to disk (i.e., to be 
persisted). Just a thought...



--- On Wed, 9/23/09, Pavel Ivanov  wrote:

> From: Pavel Ivanov 
> Subject: Re: [sqlite] SQLite database on a certain high-performance "SSD"
> To: "General Discussion of SQLite Database" 
> Date: Wednesday, September 23, 2009, 11:21 AM
> > Is the only change the absence
> of a call to "fsync()" when turning
> > synchronous off? If so, I can conclusively say that
> fsync() is very slow
> > on this storage device.
> 
> Yes, the only action of synchronous = off is to turn off
> calls to
> fsync() which is called at least twice during each commit.
> 
> Pavel
> 
> On Wed, Sep 23, 2009 at 11:40 AM, Mark 
> wrote:
> >
> > On a RAID-5 array of 4x SAS disks, turning the sync
> off made it about 2x
> > faster, give or take.
> >
> > On the "SSD", it was about 150x faster.
> >
> > Is the only change the absence of a call to "fsync()"
> when turning
> > synchronous off? If so, I can conclusively say that
> fsync() is very slow
> > on this storage device.
> >
> > Thanks for the suggestion.
> >
> > Mark
> >
> >
> > Pavel Ivanov wrote:
> >> If you execute
> >>
> >> pragma synchronous = off;
> >>
> >> you'll be able to compare performance with syncs
> and without them. So
> >> if you make this comparison on standard spinning
> disk and on SSD
> >> you'll see if syncs on SSD indeed extra-ordinary
> slow.
> >>
> >> Pavel
> >>
> >> On Wed, Sep 23, 2009 at 10:09 AM, Mark 
> wrote:
> >>> It's very possible, but I don't know how to
> tell. Is there an easy way
> >>> to know if the sync() calls are taking
> inordinately long?
> >>>
> >>> Mark
> >>>
> >>>
> >>> Thomas Briggs wrote:
>     Is the sync necessary to commit a
> transaction slow?  Performance of
>  that sync depends on the OS, file system,
> hardwar, etc. IIRC, so IOs
>  may be fast but it's possible that the
> syncs are killing you.
> 
>     -T
> 
>  On Tue, Sep 22, 2009 at 5:14 PM, Mark
> 
> wrote:
> > Lothar Scholz wrote:
> >> Hello Mark,
> >>
> >> Tuesday, September 22, 2009,
> 3:53:48 AM, you wrote:
> >>
> >> M> I've currently got a loaner
> high-performance flash-based "SSD" (let's
> >> M> just say it doesn't connect
> to any disk controllers) that I'm testing
> >> M> for performance. I've run my
> application against it, and I believe that
> >> M> I should see numbers MUCH
> higher than I do. When I run my test app on a
> >> M> normal SATA 7200 RPM disk, I
> get a certain performance, and on the "SSD"
> >> M> I get about 1/10th that
> speed. On an array of SAS disks I get numbers
> >> M> that are about 5x faster
> than my SATA disk, so my software itself isn't
> >> M> (I believe) the bottleneck.
> >>
> >> M> I'm wondering if anyone has
> any tips for "optimizing" for this sort of
> >> M> storage solution.
> >>
> >> Throw it into the trash bin and
> buy a new one which has a 3rd
> >> generation controller and at least
> 64MB fast cache. The old JMicron
> >> controller that many low cost SSD
> still use was developed for Flash
> >> USB sticks.
> >>
> >> With modern SSD like the latest
> Samsung should give you at least the
> >> same performance as the SATA. If
> it gets better depends on file size
> >> and cache. Are you sure that the
> SAS RAID Controller is not keeping
> >> everything in the controller
> cache?
> > This isn't an "SSD". It's connected
> directly to the PCI Express bus, and
> > "low cost" it certainly is NOT. It's
> much more valuable than the server
> > it's plugged into.
> >
> > I've run benchmark tests (iometer),
> and the benchmarks show it's as fast
> > as the mfgr says it should be
> (~700MB/sec read and write bandwidth,
> >  >115,000 IOPS) but it performs
> quite poorly when I run my app on it. I
> > can't figure out why.
> >
> > Mark
> >
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> >
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
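The effect discussed in the thread above (synchronous = off suppressing the fsync() calls made at each commit) can be checked with a short Python sqlite3 sketch; the timing loop is illustrative only, and the numbers depend entirely on the device:

```python
import sqlite3, os, tempfile, time

path = os.path.join(tempfile.mkdtemp(), "bench.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x INTEGER)")

conn.execute("PRAGMA synchronous = OFF")    # no fsync() at commit time
# PRAGMA synchronous reports 0 = OFF, 1 = NORMAL, 2 = FULL
mode = conn.execute("PRAGMA synchronous").fetchone()[0]

# One commit per row is the worst case for fsync(); with
# synchronous = off the loop no longer waits on the disk.
start = time.monotonic()
for i in range(100):
    with conn:                              # commit each iteration
        conn.execute("INSERT INTO t VALUES (?)", (i,))
elapsed = time.monotonic() - start
```

Running the same loop with synchronous = FULL gives the comparison described above: the delta is almost entirely fsync() cost on the storage device.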


Re: [sqlite] SQLite database on a certain high-performance "SSD"

2009-09-22 Thread Ken
How is the SSD connected? Could it be an issue with the transport layer to the 
device?


--- On Mon, 9/21/09, Dave Toll  wrote:

> From: Dave Toll 
> Subject: Re: [sqlite] SQLite database on a certain high-performance "SSD"
> To: "General Discussion of SQLite Database" 
> Date: Monday, September 21, 2009, 5:31 PM
> Hi Mark
> 
> I've had a little experience working with flash-based
> filesystems - I'd
> recommend playing with the page_size and temp_store PRAGMAs
> (and of
> course make sure you are using transactions to minimise the
> number of
> file writes) to improve performance.
> 
> Cheers,
> Dave.
> 
> 
> -Original Message-
> From: Mark [mailto:godef...@gmail.com]
> 
> Sent: 21 September 2009 13:54
> To: sqlite-users@sqlite.org
> Subject: [sqlite] SQLite database on a certain
> high-performance "SSD"
> 
> I've currently got a loaner high-performance flash-based
> "SSD" (let's 
> just say it doesn't connect to any disk controllers) that
> I'm testing 
> for performance. I've run my application against it, and I
> believe that 
> I should see numbers MUCH higher than I do. When I run my
> test app on a 
> normal SATA 7200 RPM disk, I get a certain performance, and
> on the "SSD"
> 
> I get about 1/10th that speed. On an array of SAS disks I
> get numbers 
> that are about 5x faster than my SATA disk, so my software
> itself isn't 
> (I believe) the bottleneck.
> 
> I'm wondering if anyone has any tips for "optimizing" for
> this sort of 
> storage solution.
> 
> Also, if anyone has any quick-and-dirty test setups they'd
> like me to 
> run through on this rig, just let me know :)
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 


Re: [sqlite] Any way to speed up this SQL?

2009-09-12 Thread Ken
One additional thing.

One of the indices will not be required, since one table will be the driving
table and will require a full scan. The other table should have the index; I
would make that the smaller of the two tables.

Secondly, if at all possible, try not to make composite fields. A composite
field is one where you have one field but put two or more data elements in it.
For example, a "name" field used to store both first and last names, versus
creating two columns, first_name and last_name.
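A tiny Python sqlite3 sketch of why the split schema is preferable (table and column names invented): the separate column can be matched and indexed directly, while the composite field forces a substring/LIKE scan on every query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Composite field: both data elements packed into one column.
conn.execute("CREATE TABLE people_bad (name TEXT)")
conn.execute("INSERT INTO people_bad VALUES ('John Smith')")

# Separate columns: each element is its own, indexable field.
conn.execute("CREATE TABLE people (first_name TEXT, last_name TEXT)")
conn.execute("CREATE INDEX idx_last ON people(last_name)")
conn.execute("INSERT INTO people VALUES ('John', 'Smith')")

# Finding everyone named Smith: the composite column needs a
# LIKE scan; the split schema can use the index on last_name.
scan = conn.execute(
    "SELECT COUNT(*) FROM people_bad WHERE name LIKE '% Smith'").fetchone()[0]
indexed = conn.execute(
    "SELECT COUNT(*) FROM people WHERE last_name = 'Smith'").fetchone()[0]
```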



--- On Sat, 9/12/09, RB Smissaert  wrote:

> From: RB Smissaert 
> Subject: Re: [sqlite] Any way to speed up this SQL?
> To: "'General Discussion of SQLite Database'" 
> Date: Saturday, September 12, 2009, 6:47 AM
> Thanks for the tip and will have a
> look at that.
> I have in the meantime made this a lot faster by making the
> data in both
> tables upper case and making the small table smaller by
> taking out invalid
> records. I can then run a simpler join with glob, although
> I noticed it
> still doesn't use the index. Sounds like your solution will
> be faster still.
> 
> RBS
> 
> 
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org]
> On Behalf Of marbex
> Sent: 12 September 2009 12:25
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Any way to speed up this SQL?
> 
> 
> I had a similar issue. I wanted to find strings that
> started as another
> string in the same table and field. My solution was to
> create a temp table
> that consisted of the id, the string and the first word of
> the string which
> I then indexed. The table had 30 000 records and the total
> processing time
> went down from 15-20 minutes to 15 seconds!
> 
> Applying that solution to your case, not knowing the nature
> of your data (I
> had names), I guess you can do something like this:
> 
> - Get the length of the shortest string in table2.term.
> Lets say it's 3.
> 
> - Create a tmptable of table1
> Create temp tmptable1 as
> select term, lower(substr(term,1,3)) shortest
> from table1
> 
> - Create a tmptable of table2
> Create temp tmptable2 as
> select term, lower(substr(term,1,3)) shortest
> from table2
> 
> - Index the shortest fields
> Create index idx_tmptable1_shortest on tmptable1(shortest)
> Create index idx_tmptable2_shortest on tmptable2(shortest)
> 
> -Run this sql
> select
> count(a.rowid)
> from tmptable1 a inner join tmptable2 b on
> a.shortest=b.shortest
> where (lower(b.term) =
> lower(substr(a.term,1,length(b.term
> 
> 
> 
> RB Smissaert wrote:
> > 
> > Have 2 tables with both one text field called term and
> need to run a SQL
> > like this, to count the records in table1 where the
> start of term in
> > table1
> > equals a term in table2:
> > 
> > select
> > count(a.rowid)
> > from table1 a inner join table2 b on
> > (lower(b.term) =
> lower(substr(a.term,1,length(b.term
> > 
> > term is indexed in both tables, but not surprisingly,
> this query runs very
> > slow, taking some 40 minutes. Table1 is large, maybe
> some 1 million rows
> > and
> > table2 is small, maybe some 30.000 rows. All rows in
> table2 are unique,
> > but
> > table1 has many duplicates.
> > 
> > Any suggestions to speed this up?
> > I could also tackle this in code rather than in SQL.
> > 
> > RBS
> > 
> > 
> > 
> > 
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> > 
> > 
> 
> -- 
> View this message in context:
> http://www.nabble.com/Any-way-to-speed-up-this-SQL--tp25412299p25413614.html
> Sent from the SQLite mailing list archive at Nabble.com.
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
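The temp-table prefix-join rewrite suggested in the reply above can be reproduced in miniature with the Python sqlite3 module (sample data invented; a prefix length of 3 is assumed, as in the example). The equality join on the indexed prefix prunes the expensive substring comparison to a handful of candidate rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (term TEXT);
CREATE TABLE table2 (term TEXT);
INSERT INTO table1 VALUES ('Apple pie'), ('Applesauce'), ('Banana');
INSERT INTO table2 VALUES ('apple');

-- Build temp tables carrying an indexable prefix column.
CREATE TEMP TABLE tmptable1 AS
  SELECT term, lower(substr(term, 1, 3)) shortest FROM table1;
CREATE TEMP TABLE tmptable2 AS
  SELECT term, lower(substr(term, 1, 3)) shortest FROM table2;
CREATE INDEX idx_tmptable1_shortest ON tmptable1(shortest);
CREATE INDEX idx_tmptable2_shortest ON tmptable2(shortest);
""")

# The join narrows by indexed prefix first; the full
# substring test only runs on rows with a matching prefix.
count = conn.execute("""
  SELECT count(a.rowid)
  FROM tmptable1 a INNER JOIN tmptable2 b ON a.shortest = b.shortest
  WHERE lower(b.term) = lower(substr(a.term, 1, length(b.term)))
""").fetchone()[0]
```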


Re: [sqlite] Question on converting objects into relational db

2009-09-10 Thread Ken
The table might have two columns: an integer ID and a BLOB entity.

But it would be better if you could fully describe the "entity" internal types
instead of just a var-arg list...



--- On Thu, 9/10/09, Kavita Raghunathan  wrote:

> From: Kavita Raghunathan 
> Subject: [sqlite] Question on converting objects into relational db
> To: "sqlite-users" 
> Date: Thursday, September 10, 2009, 10:30 AM
> 
> I have a very high level question, not dealing with bits
> and bytes of sqlite: 
> 
> I currently have an object called entity_list, that has a
> list of all entities on the system. 
> While thinking about it, it sounds like a list of entities
> is like having a database. Would 
> it be possible to transfer the object entity-list into a
> sqlite database ? 
> 
> 
> How would my schema look ? 
> 
> 
> I have enclosed an idea of how my class looks. Any links to
> useful information is 
> appreciated! 
> 
> 
> Kavita 
> 
> 
> --- 
> 
> class entity_list 
> { 
> public: 
> entity_list(); 
> ~entity_list(); 
> entity *get_specific_entity(int entity_id); 
> entity *add_entity(int entity_id, void *attr_list,...); 
> remove_entity(int entity_id); 
> vector m_entity_list; 
> 
> 
> private: 
> }; 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 


Re: [sqlite] server process gives better concurrency - why?

2009-09-09 Thread Ken
The key to increased concurrency is MVCC (multi-version concurrency control).
Without MVCC, concurrency is limited to page locking, table locking, etc.

Google MVCC...

--- On Tue, 9/8/09, Iker Arizmendi  wrote:

> From: Iker Arizmendi 
> Subject: Re: [sqlite] server process gives better concurrency - why?
> To: sqlite-users@sqlite.org
> Date: Tuesday, September 8, 2009, 9:34 PM
> The question is whether a
> client-server design is /necessary/ to
> efficiently implement higher concurrency. It appears to be
> easier
> to do so with a client-server model, but is such a model
> required?
> Are there functions performed by a server process that
> cannot be
> carried out at all without it?
> 
> Iker
> 
> Simon Slavin wrote:
>  > If SQLite was to be
>  > designed to handle multiple processes 'properly', it
> would have to be
>  > rewritten as a client/server system.
>  >
>  > This would, of course, kill all the advantages of
> SQLite: it could no
>  > longer be tiny, fast, and ultra-portable.  So it
> would be a bad design
>  > choice for SQLite (bowing, of course, to DRH's right
> to do whatever he
>  > pleases with it).
>  >
>  > This is why I get uneasy when I see posts here that
> suggest spinning
>  > off threads especially to deal with locking issues,
> or do other things
>  > that solve concurrency or latency problems. 
> Often you find that
>  > making such a change in your program just leads to
> one of the threads
>  > immediately being blocked by another, defeating the
> point of threading
>  > in the first place.  Software has to be designed
> around what is
>  > possible with the tools you're using, not around some
> mythical idea of
>  > the perfect generic SQL engine.
>  >
>  > Simon.
> 
> -- 
> Iker Arizmendi
> AT&T Labs - Research
> Speech and Image Processing Lab
> e: i...@research.att.com
> w: http://research.att.com
> p: 973-360-8516
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 


Re: [sqlite] Big performance problem with inserting lots of data

2009-08-31 Thread Ken
> Only problem is
> > there seems to be 1-2s freeze on the moment I create a
> new file.
> 

If you are talking about creating a new database periodically with the same
tables: try creating a "template database" and copying that to the new working
version.
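A minimal sketch of the template-database trick in Python (paths invented): since a SQLite database is a single file, a fresh working database is just a file copy with the schema already in place, avoiding the CREATE TABLE pause at switch-over:

```python
import sqlite3, shutil, os, tempfile

workdir = tempfile.mkdtemp()
template = os.path.join(workdir, "template.db")
working = os.path.join(workdir, "working.db")

# Build the template once, with all tables and indexes.
conn = sqlite3.connect(template)
conn.executescript("""
    CREATE TABLE log (ts INTEGER, msg TEXT);
    CREATE INDEX idx_log_ts ON log(ts);
""")
conn.close()                       # nothing may hold the file open

# Creating a "new" database is now just a file copy.
shutil.copyfile(template, working)

conn = sqlite3.connect(working)
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
```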




[sqlite] Query performance on AIX

2009-08-27 Thread Ken
Hi, I've encountered some really different performance results for a query on
the AIX platform. I could use some help with this.

The query performs a union and an order by; it's large and complicated. But
here is a watered-down version:

select
 id, xl.srcn srcn, col1, ... coln
 from ktab, xlist xl
 where ( (op= 2 and (fb&32) and not (fb&64))
 or (op=11 and kdo.cdba = bdba)
 or (op= 5 and kdo.cmr = 5 and (lfb&4))
 or (op= 6 and kdo.cmr = 5 and (lfb&4))
 or (op= 8 and kdo.cmr = 5 and (lfb&4)))
 and ktab.xid = xl.xid
 union
select
 id, xl.srcn srcn, col1, ... coln
 from ltab, xlist xl
 where (ltab.fb&4) and ltab.xid = xl.xid
 order by srcn, id;

The table ktab contains 84000 rows, ltab contains 0 and xlist contains 3 rows. 
So there are 3 distinct srcn values.

The timing to execute this is about 14.7 seconds
query plan:
   0|0|TABLE ktab
   1|1|TABLE xlist AS xl USING PRIMARY KEY
   0|0|TABLE ltab
   1|1|TABLE xlist AS xl USING PRIMARY KEY

 
If I re-organize this into a create table as select, a create index, and a
select, then I get the following times:
   Create table: 2.2 s
   Create index: 1.3 s
   Select:   7.6 s

   Query plan for this final select
  0|0|TABLE ksrcn WITH INDEX ksrcn ORDER BY


About 11 seconds total (2.2 + 1.3 + 7.6) for the new approach vs 14.7 seconds for the original.

On linux this same query executes in 5.5 seconds. (After flushing the linux 
buffer cache).

Any ideas why this is so slow on AIX, where the disk drives are 10k RPM SCSI drives?

Thanks,
Ken
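The reorganization measured above (materialize the UNION with create-table-as-select, index it, then read it back in order) can be sketched generically; src1/src2 below stand in for the real ktab/ltab joins:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE src1 (id INTEGER, srcn INTEGER);
CREATE TABLE src2 (id INTEGER, srcn INTEGER);
INSERT INTO src1 VALUES (2, 1), (1, 2);
INSERT INTO src2 VALUES (3, 1);

-- Step 1: materialize the UNION once ("create table as select").
CREATE TABLE merged AS
  SELECT id, srcn FROM src1
  UNION
  SELECT id, srcn FROM src2;

-- Step 2: build an index that matches the ORDER BY columns.
CREATE INDEX idx_merged_srcn_id ON merged(srcn, id);
""")

# Step 3: the final select can now walk the index in (srcn, id)
# order instead of sorting the whole UNION result at the end.
rows = conn.execute(
    "SELECT id, srcn FROM merged ORDER BY srcn, id").fetchall()
```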


[sqlite] AIX performance

2009-08-21 Thread Ken
I'm looking for some ideas as to why a select statement takes roughly 5 times
longer on AIX than on a Linux system.

The disk drive on the AIX platform is a 10k RPM SCSI drive vs. the 7200 RPM
drive on Linux.

AIX:
sqlite> .timer on
sqlite> .output ken.out
sqlite> .read kdo.sql
CPU Time: user 26.321955 sys 6.498729
sqlite> 

Linux:
sqlite> .timer on
sqlite> .output ken.out
sqlite> .read kdo.sql
CPU Time: user 4.648290 sys 0.888056
sqlite>         

Thanks,
Ken

Adding pragma temp_store=2 does seem to help; it reduces the sys time from 6
to 0.5 seconds.



[sqlite] AIX test failures for 3.6.17

2009-08-18 Thread Ken
Running make test resulted in the following failures on AIX 5.3.
Note that CFLAGS=-DSQLITE_DISABLE_DIRSYNC was set for the make.

14 errors out of 40926 tests
Failures on these tests: backup2-10 io-4.1 io-4.2.3 nan-1.1.2 nan-1.1.3 
nan-1.1.4 nan-1.1.5 nan-1.1.5 nan-4.7 nan-4.8 nan-4.14 nan-4.15 nan-4.16 
nan-4.17

Any suggestions or reason why the io test would fail?
io-3.3... Ok
io-4.1...
Expected: [3]
 Got: [2]
io-4.2.1... Ok
io-4.2.2... Ok
io-4.2.3...
Expected: [3]
 Got: [2]
io-4.3.1... Ok
io-4.3.2... Ok

Thanks,
Ken



Re: [sqlite] 3.6.17 test failure

2009-08-17 Thread Ken


--- On Sat, 8/15/09, Dan Kennedy  wrote:

> From: Dan Kennedy 
> Subject: Re: [sqlite] 3.6.17 test failure
> To: "General Discussion of SQLite Database" 
> Date: Saturday, August 15, 2009, 12:36 AM
> 
> On Aug 15, 2009, at 2:14 AM, Ken wrote:
> 
> > I'm not sure if this an issue or not.  make test
> failed with the  
> > following:
> >
> > 2 errors out of 40872 tests
> > Failures on these tests: rollback-2.3 tkt3457-1.4
> > All memory allocations freed - no leaks
> > Memory used:         
> now          0  max 
>    102680  max-size 
>    
> > 2800336
> > Page-cache used:      now   
>       0  max     
>    13  max-size     
>   
> > 4096
> > Page-cache overflow:  now     
>     0  max    3071416
> > Scratch memory used:  now     
>     0  max         
> 0
> > Scratch overflow:     now 
>         0  max     
> 33296  max-size       
> > 33296
> > Maximum memory usage: 102680 bytes
> > Current memory usage: 0 bytes
> > Number of malloc()  : -1 calls
> > make: *** [test] Error 1
> 
> Both tests use Tcl to access a journal file in the
> file-system.
> What additional information is found in the body of the
> test
> log? (search for the strings "rollback-2.3" and
> "tkt3457-1.4").
> 
> Dan.
> 
> 

rollback-2.1... Ok
rollback-2.2... Ok
rollback-2.3...
Expected: [0]
 Got: [1]
rollback-2.4... Ok
Memory used:  now 16  max2775504  max-size 10
Page-cache used:  now  0  max 13  max-size   4096
Page-cache overflow:  now  0  max2643024
Scratch memory used:  now  0  max  0
Scratch overflow: now  0  max  33296  max-size  33296


tkt3457-1.1... Ok
tkt3457-1.2... Ok
tkt3457-1.3... Ok
tkt3457-1.4...
Expected: [1 {unable to open database file}]
 Got: [0 {1 2 3 4 5 6}]
tkt3457-1.5... Ok
Memory used:  now 16  max3874864  max-size102
Page-cache used:  now  0  max 13  max-size   4096
Page-cache overflow:  now  0  max3071416
Scratch memory used:  now  0  max  0
Scratch overflow: now  0  max  33296  max-size  33296

Hope that helps


[sqlite] 3.6.17 test failure

2009-08-14 Thread Ken
I'm not sure if this is an issue or not.  make test failed with the following:

2 errors out of 40872 tests
Failures on these tests: rollback-2.3 tkt3457-1.4
All memory allocations freed - no leaks
Memory used:  now  0  max 102680  max-size2800336
Page-cache used:  now  0  max 13  max-size   4096
Page-cache overflow:  now  0  max3071416
Scratch memory used:  now  0  max  0
Scratch overflow: now  0  max  33296  max-size  33296
Maximum memory usage: 102680 bytes
Current memory usage: 0 bytes
Number of malloc()  : -1 calls
make: *** [test] Error 1


Re: [sqlite] Cannot insert records into a table after dropping and recreating it

2009-08-11 Thread Ken


--- On Tue, 8/11/09, Radcon Entec  wrote:

> From: Radcon Entec 
> Subject: [sqlite] Cannot insert records into a table after dropping and 
> recreating it
> To: sqlite-users@sqlite.org
> Date: Tuesday, August 11, 2009, 10:50 AM
> Greetings!
> 
> I have an application that uses an SQLite database file
> that may or may not exist when the application starts. 
> At startup, the application creates three tables.  If
> the file previously existed, the create table queries
> fail.  My code checks the error message, and if it
> indicates that the table previously existed, it ignores the
> error.  
> 
> There is a fourth table that is handled differently. 
> This table will be created at startup, used, and then
> dropped.  So, my code drops the table, then creates it,
> and then adds data to it.  (The final drop is not yet
> implemented.)  Just in case the table got left behind
> for some reason, my code drops the table before creating
> it.  Any errors from the drop are ignored, although
> there is an exception handler there and I have verified that
> the hander is not being executed.
> 
> If the database did not exist before the application runs,
> the temporary table is created and populated as
> expected.  But if the database did exist before the
> application runs, the temporary table is dropped and created
> as expected, but I cannot write any data into it. 
> There are no errors generated, but there is no data in the
> table, either.  
> 
> I am inserting data into the table by building an insert
> query and calling sqlite3_exec().  
> 
> Am I leaving out a step that makes my newly created table
> unwritable?
> 
> Thanks very much!
> 
> RolbR
> 
> 

Have you verified that your drop-and-recreate isn't being called again (perhaps
after doing the inserts)?



Re: [sqlite] Disk IO ERROR on AIX

2009-08-07 Thread Ken


--- On Fri, 8/7/09, Dan Kennedy  wrote:

> From: Dan Kennedy 
> Subject: Re: [sqlite] Disk IO ERROR on AIX
> To: "General Discussion of SQLite Database" 
> Date: Friday, August 7, 2009, 8:26 AM
> 
> On Aug 4, 2009, at 5:07 AM, Kenneth Long wrote:
> 
> >
> > Hi,
> >
> > I'm getting a Disk I/O error when committing a
> transaction on an AIX  
> > system.
> >
> > The extended result code is 1290. Which i believe
> means that the  
> > extended code is a SQLITE_IOERR_DIR_FSYNC error.
> >
> > Any ideas why this is happening or how to track it
> down?
> 
> Compile with SQLITE_DISABLE_DIRSYNC for AIX.
> 
> Dan.
> 
> 
> > Thanks,
> > Ken
> >

Dan, Many thanks. That did the trick!!!


Re: [sqlite] Disk I/O error on AIX

2009-08-03 Thread Ken

Roger,

Thats too funny. I guess I forgot posting this last year.

Ken

--- On Mon, 8/3/09, Roger Binns  wrote:

> From: Roger Binns 
> Subject: Re: [sqlite] Disk I/O error on AIX
> To: "General Discussion of SQLite Database" 
> Date: Monday, August 3, 2009, 7:55 PM
> Ken wrote:
> > I'm getting a Disk I/O error when committing a
> transaction on an AIX system.
> > The file system is JFS.
> > The extended result code is 1290. Which i believe
> means that the extended code is a SQLITE_IOERR_DIR_FSYNC
> error.
> > 
> > Any ideas why this is happening or how to track it
> down?
> 
> This has come up before and you may recognise the poster:
> 
>    http://thread.gmane.org/gmane.comp.db.sqlite.general/39682/
> 
> Roger
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 


Re: [sqlite] Disk I/O error on AIX

2009-08-03 Thread Ken

Thanks Dan!

I'll check this in the morning.

Ken

--- On Tue, 8/4/09, Dan Kennedy  wrote:

> From: Dan Kennedy 
> Subject: Re: [sqlite] Disk I/O error on AIX
> To: "General Discussion of SQLite Database" 
> Date: Tuesday, August 4, 2009, 12:03 AM
> 
> On Aug 4, 2009, at 5:11 AM, Ken wrote:
> 
> >
> > Hi,
> >
> > I'm getting a Disk I/O error when committing a
> transaction on an AIX  
> > system.
> > The file system is JFS.
> >
> >
> > The extended result code is 1290. Which i believe
> means that the  
> > extended code is a SQLITE_IOERR_DIR_FSYNC error.
> >
> > Any ideas why this is happening or how to track it
> down?
> 
> AIX needs -DSQLITE_DISABLE_DIRSYNC.
> 
> Dan.
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 


[sqlite] Disk I/O error on AIX

2009-08-03 Thread Ken

Hi,

I'm getting a Disk I/O error when committing a transaction on an AIX system.
The file system is JFS.


The extended result code is 1290. Which i believe means that the extended code 
is a SQLITE_IOERR_DIR_FSYNC error.

Any ideas why this is happening or how to track it down?

Thanks,
Ken



Re: [sqlite] Multi-master replication with updated Versioning extension

2009-07-31 Thread Ken

Alex,

I've looked at your code and discussions on this list about the versioning. I 
have a few questions.

1. How are you moving the data around from one master to another?
2. How are you applying the changes once moved to the master?



--- On Fri, 7/31/09, Alexey Pechnikov  wrote:

> From: Alexey Pechnikov 
> Subject: Re: [sqlite] Multi-master replication with updated Versioning 
> extension
> To: sqlite-users@sqlite.org
> Cc: "D. Richard Hipp" 
> Date: Friday, July 31, 2009, 8:42 AM
> Hello!
> 
> I made some changes:
>     hash field in actions table has always
> name "checksum" (so versioning and replication logic doesn't
> influence of hash algorithm)
>     versioning() function without second
> argument now start "local" mode
>     history and actions tables are renamed
> 
> Updated files is here
> http://mobigroup.ru/files/sqlite-ext/versioning/
> 
> Now there are two problems in the realization:
>     the "replace" conflict resolution algorithm
> for SOURCE table may produce errors - tickets 3964, 3982
>     versioning_drop() function doesn't work -
> ticket 4001
> 
> Best regards, Alexey Pechnikov.
> http://pechnikov.tel/
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 


Re: [sqlite] Multiple Writers and Database is Locked Problems

2009-07-18 Thread Ken

I like to start each of my transactions with a "BEGIN IMMEDIATE"; that way the
database file is locked at that point, and it's relatively simple to test for
the DB being locked at that stage and handle waiting or returning an error.

HTH
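A minimal sketch of this pattern with the Python sqlite3 module (the retry policy is left to the caller): BEGIN IMMEDIATE takes the write lock up front, so a second writer finds out right away and can decide whether to wait or return an error:

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")
init = sqlite3.connect(path)
init.execute("CREATE TABLE t (x INTEGER)")
init.commit()
init.close()

# isolation_level=None: we issue BEGIN/COMMIT ourselves.
w1 = sqlite3.connect(path, isolation_level=None)
w2 = sqlite3.connect(path, isolation_level=None, timeout=0)

w1.execute("BEGIN IMMEDIATE")        # write lock taken right here
w1.execute("INSERT INTO t VALUES (1)")

locked = False
try:
    w2.execute("BEGIN IMMEDIATE")    # second writer fails fast...
except sqlite3.OperationalError:     # "database is locked"
    locked = True                    # ...and can wait or return an error

w1.execute("COMMIT")                 # lock released
w2.execute("BEGIN IMMEDIATE")        # now it succeeds
w2.execute("COMMIT")
```

With timeout=0 the busy handler is effectively disabled, so the lock conflict surfaces immediately instead of blocking; a real application would back off and retry.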

--- On Fri, 7/17/09, Cole  wrote:

> From: Cole 
> Subject: [sqlite] Multiple Writers and Database is Locked Problems
> To: sqlite-users@sqlite.org
> Date: Friday, July 17, 2009, 6:38 AM
> Hi.
> 
> Im hoping someone might be able to help me with the
> problems im having, or
> suggest a better method of doing what im trying to achieve.
> Currently im
> using sqlite3 3.6.10. I don't mind updating or downgrading
> it if needed.
> 
> I have a program that I run multiple instances of. When
> they start, they
> parse the config file, and open a connection to each
> database that is
> listed. Each database only has a single table in it. They
> then receive
> requests, do a select on the database, parse the returned
> data, modify the
> data, then update the data to the database. However, I am
> running into the
> "database is locked" error when trying to update the
> database. I fully
> understand that 2 or more instances might be trying to
> update the same
> table, but is this the only reason this might happen now?
> Or are there also
> other scenarios where this might happen?
> 
> Ive searched the mailing list and I see that you mention
> using random seed
> and time to handle the SQLITE_BUSY return value, and then
> trying to perform
> the update again. I was perhaps wondering if there are any
> other suggestions
> for dealing with this scenario where there might be
> multiple writers to the
> same database at the same time?
> 
> Regards
> /Cole
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 


Re: [sqlite] SQLite3 replication

2009-07-18 Thread Ken

Kelly,

The "journal" file for SQLite is not a redo journal but rather an undo
journal, so it is not really possible to use the journal to replicate.

However, if one were to hack the code to open a redo file along with the
journal file and write the source blocks out as well, that redo file could then
be copied and applied against a target database on a remote server.

Ultimately this would give you what rsync provides, without the rescans... You
could also look into some form of filesystem replication. I think that would be
more reliable.

Just my .02
hth, ken

--- On Sat, 7/18/09, Kelly Jones  wrote:

> From: Kelly Jones 
> Subject: [sqlite] SQLite3 replication
> To: sqlite-users@sqlite.org
> Date: Saturday, July 18, 2009, 12:18 PM
> Is there any way to real-time
> replicate SQLite3 dbs across servers?
> 
> I realize I could just rsync constantly, but this seems
> inefficient.
> 
> I know SQLite3 uses a journal when making changes: could I
> use this
> journal for replication, similar to how MySQL uses
> bin-logging for
> replication?
> 
> -- 
> We're just a Bunch Of Regular Guys, a collective group
> that's trying
> to understand and assimilate technology. We feel that
> resistance to
> new ideas and technology is unwise and ultimately futile.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 


Re: [sqlite] multi-thread access to a db

2009-07-09 Thread Ken

Batching the orders and writing more data as one transaction will certainly
yield better throughput, but at the risk of some data loss until the data is
committed to disk. It sounds like you are building some type of OLTP /
transaction-logging system.

Another good idea here is to implement, or at least think about, some form of
an archiving system. For instance, not only would you have one db per thread
(hashed), but maybe daily you could switch to a brand-new database file set. A
naming convention such as MMDDYY_HashID.db might also be useful.
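The naming convention above might be generated like this (the two-digit hash width is an assumption):

```python
import datetime

def archive_db_name(hash_id: int, day: datetime.date) -> str:
    # MMDDYY_HashID.db -- one file per hashed writer thread per day.
    return f"{day.strftime('%m%d%y')}_{hash_id:02d}.db"

name = archive_db_name(7, datetime.date(2009, 7, 9))
```

Because each day's file set is closed and never written again, old files can be compressed or shipped off-box without coordinating with the live writers.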



--- On Thu, 7/9/09, Rizzuto, Raymond  wrote:

> From: Rizzuto, Raymond 
> Subject: Re: [sqlite] multi-thread access to a db
> To: "General Discussion of SQLite Database" 
> Date: Thursday, July 9, 2009, 12:28 PM
> Right now I index off the order
> id.  I can look into options for indexing - you are
> correct that it is more likely that I'll need to read an
> order I recently wrote than one that is older. 
> However, since reading is ~.2% of the accesses to the db,
> all db work accounts for 2% of the cpu usage, it may not be
> worth optimizing in that area.
> 
> In my system, order codes are unique, and orders go to a
> specific thread based on a hash of that id.  Therefore,
> the only thing I need to do is create a unique db file,
> probably based on the thread id, and each thread's logic for
> writing/reading is unchanged - just which db is different.
> 
> Currently I archive orders individually, at the time I
> determine the order shouldn't be needed.  I could add
> logic to do that in a batch in the future, but that would
> require some extensive changes elsewhere in the logic, so
> I'll try some of the other suggestions first.
> 
> -Original Message-
> From: Simon Slavin [mailto:slav...@hearsay.demon.co.uk]
> Sent: Thursday, July 09, 2009 1:08 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] multi-thread access to a db
> 
> 
> On 9 Jul 2009, at 2:58pm, Rizzuto, Raymond wrote:
> 
> > I have 4 servers, all with 4 cores.  This is to
> handle a volume of
> > 10-20 million orders per day.
> >
> > Most of the work load (~90%) is unrelated to the
> database.  In fact,
> > I added the database just to allow me to offload
> orders out of
> > memory when they look done so that the app doesn't run
> out of
> > memory.  It is a 32-bit app, so it typically dies
> when it hits ~2.5
> > gig.  Moving to a 64-bit app was not an option.
> 
> Okay, you're way ahead of most of my recommendations and
> obviously
> understand what you're doing.
> 
> > In approx .2% of the orders, I guess wrong, and have
> to read the
> > order back in from the db to process additional
> activity.  I could
> > remove indices to improve write performance, however
> the hit on read
> > performance might outweigh the gain.  Since the
> processing is
> > supposed to be in near-realtime, the delay in that
> case might be
> > excessive.  Still, it may be worth trying that.
> 
> Since it takes very little extra coding to test the effect
> on speed it
> might be worth experimenting with unusual index
> methods.  If you read
> the database only for two operations, both of which need
> all records
> to do with a particular order, it might be worth
> experimenting with
> having no primary key, just one index on the 'order number'
> column.
> One DBMS I used to use was famously faster if you used
> 'DESC' in
> indices, since records you were normally looking for were
> more likely
> to be recent records than extremely old ones, and the
> search algorithm
> worked faster that way.
> 
> > I am using begin/commit around the writes since each
> order can
> > require between 1 and 5 writes.
> 
> If you're writing orders in big batches, and if your
> previous
> statements about crash-recovery are true, then it might be
> worth
> putting begin/commit just around each batch of orders
> instead of each
> individual order.  You might want to take it even
> further: by analogy
> with a non-SQL DBMS, I once wrote a logging program which
> did a COMMIT
> only just before a SELECT was needed, or when the logging
> program was
> quit.  However, I don't know how SQLite acts if you
> have thousands of
> uncommitted changes: it might get slower if you have that
> many.
> 
> > Ken made a suggestion that I create separate db's for
> each thread.
> > Since the threads don't share data, that approach
> would work for me,
> > and eliminate all contention.  Each db would have
> the same schema,
> > of course.
> 
> If you do

Re: [sqlite] multi-thread access to a db

2009-07-09 Thread Ken

Additional considerations:

1. Does the re-reading of data occur cross thread? If so you'll need some way 
to identify the DB that contains the data.

2. Consider using either a disk array or multiple disk drives, one for each db 
file. You probably should do some load testing at volume to determine optimal 
configurations for you write patterns. 

3. You indicate that the I/O is minimal. Why not create one thread that handles 
the database workload? All the other threads could pass their work to the 
DbWorker thread. This would also eliminate DB contention, but would cause 
contention at the OS mutex/semaphore layer, which should be faster than 
disk-based contention.

Implementation could of course be done via a shared memory segment (if 
Unix-based) and some locking mechanisms.
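Point 3 (a single DB-worker thread fed by the others) can be sketched in Python, whose stdlib sqlite3 and queue modules stand in for the C/shared-memory version; all names here are illustrative:

```python
import os
import queue
import sqlite3
import tempfile
import threading

def db_worker(path, work_q):
    # The only thread that touches the database: every other thread just
    # enqueues work, so there is no SQLite lock contention at all.
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS log (msg TEXT)")
    while True:
        item = work_q.get()
        if item is None:  # sentinel: shut down
            break
        with conn:
            conn.execute("INSERT INTO log VALUES (?)", (item,))
    conn.close()

path = os.path.join(tempfile.mkdtemp(), "work.db")
q = queue.Queue()
t = threading.Thread(target=db_worker, args=(path, q))
t.start()
for msg in ("a", "b", "c"):
    q.put(msg)
q.put(None)  # tell the worker to finish
t.join()
written = sqlite3.connect(path).execute(
    "SELECT COUNT(*) FROM log").fetchone()[0]
```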

For a really slick, high-performance linked-list implementation, consider using 
an unrolled linked list. They are incredibly fast and provide 3-5 times the 
performance of a simple linked list, especially on multicore CPUs with large 
L1/L2 cache lines.

HTH

--- On Thu, 7/9/09, Rizzuto, Raymond  wrote:

> From: Rizzuto, Raymond 
> Subject: Re: [sqlite] multi-thread access to a db
> To: "General Discussion of SQLite Database" 
> Date: Thursday, July 9, 2009, 8:58 AM
> I have 4 servers, all with 4
> cores.  This is to handle a volume of 10-20 million
> orders per day.
> 
> Most of the work load (~90%) is unrelated to the
> database.  In fact, I added the database just to allow
> me to offload orders out of memory when they look done so
> that the app doesn't run out of memory.  It is a 32-bit
> app, so it typically dies when it hits ~2.5 gig. 
> Moving to a 64-bit app was not an option.
> 
> In approx .2% of the orders, I guess wrong, and have to
> read the order back in from the db to process additional
> activity.  I could remove indices to improve write
> performance, however the hit on read performance might
> outweigh the gain.  Since the processing is supposed to
> be in near-realtime, the delay in that case might be
> excessive.  Still, it may be worth trying that.
> 
> I am using begin/commit around the writes since each order
> can require between 1 and 5 writes.
> 
> Ken made a suggestion that I create separate db's for each
> thread.  Since the threads don't share data, that
> approach would work for me, and eliminate all
> contention.  Each db would have the same schema, of
> course.
> 
> Ray
> 
> -Original Message-
> From: Simon Slavin [mailto:slav...@hearsay.demon.co.uk]
> Sent: Wednesday, July 08, 2009 5:12 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] multi-thread access to a db
> 
> 
> On 8 Jul 2009, at 9:28pm, Rizzuto, Raymond wrote:
> 
> > If I remove the locking_mode=exclusive, I don't get
> those errors.
> >
> > I'd appreciate any advice on how I can get the best
> performance
> > using multiple threads in my application, given that:
> >
> >
> > 1.  I need maximum performance.
> 
> Spend at least five grand on a fast water-cooled box. 
> Use a version
> of Unix/Linux compiled without support for anything you
> don't need,
> like printing.  Create your database file on a RAM
> disk.  Write your
> application as a command-line app, and don't run the GUI.
> 
> > That is also why I need multiple threads
> 
> May not help if they're all constantly accessing the
> database.  In
> fact contention for access can slow the process down: you
> have seven
> threads, five of which are perpetually blocked. 
> There's no one-size-
> fits-all solution to fast database access, it depends on
> the patterns
> of when data is available for writing, and how important
> the order the
> data was available is when you read.  Sometimes you
> pile up all your
> data to be written into a text file, and another process
> (on a
> different computer ?!) works through the text file and does
> the writing.
> 
> > 2.  All threads need to write to the same db
> > 3.  No other application needs access to the db
> > 4.  I don't care about durability, just fast
> insert times since
> > reads are much less frequent.
> 
> Use BEGIN TRANSACTION and COMMIT properly.  This may
> be more important
> than multi-threading.  It has a huge result.
> 
> If reads are /really/ rare, it might be worth removing all
> indices on
> your database, and only creating an index just before you
> need to
> read, or even just executing the SELECT without any
> indices.
> 
> Simon.
> 
> 
> IMPORTANT: The information contained in this email and/or
> its attachments is confidential. If you are not the intended
> recipient, please notify the 

Re: [sqlite] multi-thread access to a db

2009-07-08 Thread Ken

Ray,

Using multiple threads you will have locking contention on the database. Only 
one thread is allowed to write at a time. If you need concurrent writing then 
create multiple databases or maybe look into a different DB platform like 
mysql, postgress or oracle.



--- On Wed, 7/8/09, Rizzuto, Raymond  wrote:

> From: Rizzuto, Raymond 
> Subject: Re: [sqlite] multi-thread access to a db
> To: "sqlite-users@sqlite.org" 
> Date: Wednesday, July 8, 2009, 3:28 PM
> If I remove the
> locking_mode=exclusive, I don't get those errors.
> 
> I'd appreciate any advice on how I can get the best
> performance using multiple threads in my application, given
> that:
> 
> 
>  1.  I need maximum performance.  That is also
> why I need multiple threads
>  2.  All threads need to write to the same db
>  3.  No other application needs access to the db
>  4.  I don't care about durability, just fast insert
> times since reads are much less frequent.
> 
> Ray
> 
> 
> From: Rizzuto, Raymond
> Sent: Wednesday, July 08, 2009 3:27 PM
> To: 'sqlite-users@sqlite.org'
> Subject: multi-thread access to a db
> 
> I have an application where I have 7  threads. 
> Each thread opens its own db connection object, but the
> connections are to the same db.  I am seeing sporadic
> insert failures when a thread attempts to insert into the
> db.  sqlite3_errmsg returns this message:
> 
> database is locked
> 
> I am using sqlite3 version 3.6.1.  I use the following
> two pragmas to get the best insert performance (the db is
> used exclusively by this application, and I don't need to
> have the DB recover after an os crash or power fail):
> 
>         sqlite3_exec(result->db,
> "pragma synchronous=off;", 0, 0, &zErrMsg);
>         sqlite3_exec(result->db,
> "pragma locking_mode=exclusive;", 0, 0, &zErrMsg);
> 
> I am using threading mode "multi-thread".
> 
> Does mode=exclusive mean that the first thread that opens
> and writes to the db locks out all other threads?
> 
> Ray
> 
> 
> 
> Ray Rizzuto
> raymond.rizz...@sig.com
> Susquehanna International Group
> (610)747-2336 (W)
> (215)776-3780 (C)
> 
> 
> 
> 
> IMPORTANT: The information contained in this email and/or
> its attachments is confidential. If you are not the intended
> recipient, please notify the sender immediately by reply and
> immediately delete this message and all its attachments. Any
> review, use, reproduction, disclosure or dissemination of
> this message or any attachment by an unintended recipient is
> strictly prohibited. Neither this message nor any attachment
> is intended as or should be construed as an offer,
> solicitation or recommendation to buy or sell any security
> or other financial instrument. Neither the sender, his or
> her employer nor any of their respective affiliates makes
> any warranties as to the completeness or accuracy of any of
> the information contained herein or that this message or any
> of its attachments is free of viruses.


Re: [sqlite] 3 million rows, query speeds, and returning zero for rows that don't exist

2009-07-02 Thread Ken

Seems to me you might need master and detail tables:
one with the dates/timestamps and one with the data.

CREATE TABLE 'log_time' (
 id integer primary key,
 datetimestring VARCHAR COLLATE NOCASE,
 timestamp INTEGER,
 date INTEGER,
 hour INTEGER,
 min INTEGER,
 sec INTEGER
)

CREATE TABLE 'log_data' (
 id integer primary key,
 ip VARCHAR,
 uid VARCHAR COLLATE NOCASE,
 operation VARCHAR COLLATE NOCASE,
 target VARCHAR,
 response INTEGER,
 client VARCHAR COLLATE NOCASE,
 completion VARCHAR COLLATE NOCASE,
 port INTEGER,
 connections INTEGER
 )

Then your queries can be LEFT JOINs based upon "id".


Also, you might look into simplifying all of the datetimestring/timestamp and 
hour/min/sec columns down to just one or two fields, using the SQLite date and 
time functions:

http://www.sqlite.org/lang_datefunc.html
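The missing-seconds problem from the original post falls out naturally from the join idea: keep a one-row-per-second table and LEFT JOIN the log against it, so idle seconds come back with a count of 0. A small sketch with toy data (Python's stdlib sqlite3; the schema is a stand-in for the poster's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (timestamp INTEGER, port INTEGER)")
t0 = 1246266000
# Activity only at second 0 (one hit) and second 7 (two hits).
conn.executemany("INSERT INTO log VALUES (?, 8444)",
                 [(t0,), (t0 + 7,), (t0 + 7,)])

# One row per second of the window; the LEFT JOIN fills the gaps.
conn.execute("CREATE TABLE seconds (ts INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO seconds VALUES (?)",
                 [(t0 + i,) for i in range(10)])

rows = conn.execute("""
    SELECT s.ts, COUNT(l.timestamp)
    FROM seconds s
    LEFT JOIN log l ON l.timestamp = s.ts AND l.port = 8444
    GROUP BY s.ts ORDER BY s.ts""").fetchall()
counts = [n for _, n in rows]  # [1, 0, 0, 0, 0, 0, 0, 2, 0, 0]
```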




--- On Wed, 7/1/09, yaconsult  wrote:

> From: yaconsult 
> Subject: [sqlite]  3 million rows, query speeds, and returning zero for rows 
> that don't exist
> To: sqlite-users@sqlite.org
> Date: Wednesday, July 1, 2009, 8:00 PM
> 
> I'm using sqlite to do some analysis on very large web
> application log
> files - approaching 3 million lines per day.  And what
> a wonderful tool
> it is!  It has saved me from writing lots of custom
> scripts.
> 
> I have a perl script that parses an rsynced copy of this
> huge log file,
> munges, converts, and adds new entries to my database.
> 
> The schema currently looks something like this - some of
> the columns
> are there as a aid to development/debugging so I can see
> what's
> going on:
> 
> CREATE TABLE 'log' (
>  ip VARCHAR,
>  datetimestring VARCHAR COLLATE NOCASE,
>  uid VARCHAR COLLATE NOCASE,
>  timestamp INTEGER,
>  operation VARCHAR COLLATE NOCASE,
>  target VARCHAR,
>  response INTEGER,
>  client VARCHAR COLLATE NOCASE,
>  completion VARCHAR COLLATE NOCASE,
>  port INTEGER,
>  connections INTEGER,
>  date INTEGER,
>  hour INTEGER,
>  min INTEGER,
>  sec INTEGER
> )
> 
> I've been experimenting with indices as far as when the
> tradeoff in
> size and speed of adding data is offset sufficiently by how
> fast I
> need to get the results.  For testing purposes, I
> created indexes
> for all the columns used in the selects below.
> 
> Most of the queries I've done so far have been pretty
> straightforward
> and it's worked very well.  But, now I need to do one
> that's taking
> too long.  There's probably a better way than the one
> I'm using.
> 
> The problem is that I need to produce information for the
> timeslots
> that are missing as well as those that are there.
> 
> For example, I need to be able to evaluate the effect of
> different
> load balancers, so I want to compare the numbers of
> transactions on
> all the ports.   I need to do so on a second
> by second basis so I can
> see the most detail and not an average.
> 
> I thought one way do do this would be with a self-join
> because
> there are so many transactions that I'm pretty sure that
> all seconds
> will be present in the table.  I also considered
> creating a second
> table derived from the first whenever it's updated that
> would have a
> unix epoch entry for each second within the log file. 
> 
> Here's what I tried for the first approach:
> 
> select
> distinct t.timestamp,
> t.port,
> count(u.timestamp)
> from
> log t
> left join
> log u
> on
> t.timestamp = u.timestamp
> and
> t.timestamp >= 1246266000
> and u.uid != "-"
> and (u.response = 207
>      or u.response = 200)
> and u.port = 8444;
> 
> The purpose of the join is to give me all of the timestamps
> - one for
> each second - even those seconds that had no activity on
> that port -
> and then use those for the query.  But this query is
> taking a very,
> very, very long time.  I'm probably making some newbie
> mistake,
> because that's what I am!
> 
> Is this a valid approach?  Is there a better
> solution?
> 
> I am able to get results extremely quickly by using
> something like
> this:
> 
> select
> date,
> hour,
> min,
> sec,
> count(port)
> from log
> where
> uid != "-"
> and (response = 207
>      or response = 200)
> and port = 8444
> group by
> timestamp
> order by
> timestamp;
> 
> but the problem is that I also need to know when the ports
> are not
> busy and there are no log entries.  Here are the last
> few lines of the
> result: 
> 
> 2009-06-29|17|42|0|2
> 2009-06-29|17|42|7|1
> 2009-06-29|17|42|8|4
> 
> What I need to have in the results are entries for seconds
> 1-6 with a
> value of 0, like this:
> 
> 2009-06-29|17|42|0|2
> 2009-06-29|17|42|1|0
> 2009-06-29|17|42|2|0
> 2009-06-29|17|42|3|0
> 2009-06-29|17|42|4|0
> 2009-06-29|17|42|5|0
> 2009-06-29|17|42|6|0
> 2009-06-29|17|42|7|1
> 2009-06-29|17|42|8|4
> 
> Am I on the right path with the self-join?  I also
> considered adding a
> new table to the database that contained all the possible
> time values
> in epoch seconds - I could easily do so with the script
> that creates
> the database and adds the data.  I could then left
> join this table
> with the data tab

Re: [sqlite] SQLite3: Database is sometimes locked when a reader is attached with open_v2

2009-07-02 Thread Ken

This is by design.

The read-only transaction acquires a "read" lock on the database file, 
so if that lock has not been released your writing process will receive the 
busy ("database is locked") error code.

a. Use BEGIN EXCLUSIVE in your writing process and test for that error, using a 
loop and retry logic.

The read_uncommitted pragma only works for multi-threaded applications sharing 
a cache, not for multiple processes, so I don't think this will work for you.

The technical reason behind this is that SQLite does not implement 
multi-versioning of its internal pages. If it did, readers and writers could 
coexist without locking each other out.
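The retry approach in (a) looks roughly like this in Python (a C program would loop on the return code of sqlite3_step/sqlite3_exec instead; the function name and retry limits here are made up):

```python
import sqlite3
import time

def write_with_retry(conn, sql, params=(), retries=5, delay=0.05):
    # Retry while a reader still holds its lock; "database is locked"
    # is how Python's sqlite3 module surfaces the busy error.
    for attempt in range(retries):
        try:
            with conn:
                return conn.execute(sql, params)
        except sqlite3.OperationalError as e:
            if "locked" not in str(e) or attempt == retries - 1:
                raise
            time.sleep(delay)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
write_with_retry(conn, "INSERT INTO t VALUES (?)", (1,))
value = conn.execute("SELECT x FROM t").fetchone()[0]
```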



--- On Thu, 7/2/09, Tino Lange  wrote:

> From: Tino Lange 
> Subject: Re: [sqlite] SQLite3: Database is sometimes locked when a reader is 
> attached with open_v2
> To: sqlite-users@sqlite.org
> Date: Thursday, July 2, 2009, 9:45 AM
> Hi Marcus,
> 
> I have no problem when the reading application gets a lock
> error because the 
> writing application has a lock.
> 
> But the problem is the other way around:
> -> The writing application gets a lock error because
> someone reads!
> 
> This is what I can't understand and what I didn't expect. I
> would expect 
> that the writing application is undisturbed by any readers
> that open with 
> SQLITE_OPEN_READONLY.
> 
> Thanks
> 
> Tino
> 
> --
> 
> 
> Marcus Grimm wrote:
> 
> > I'm afraid this is by design of sqlite: Sqlite will
> lock
> > the database during a writing transaction, I think no
> matter
> > if you open a 2nd connection using the readonly flag.
> > 
> > the typical solutions are:
> > a) retry your read attempt after you receive a busy
> error code until
> >     it proceeds.
> > b) consider shared cache mode and pragma
> read_uncommitted = True;
> > 
> > hope this helps
> > 
> > Marcus
> 
> 


Re: [sqlite] very large SQLite tables

2009-06-25 Thread Ken

Along the same lines, each bucket could be created in its own unique SQLite 
database, thus improving concurrency as well!
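A sketch of the hash-bucket routing with one SQLite file per bucket (the bucket count, file layout, and the crc32 choice are all illustrative, not from the thread):

```python
import os
import sqlite3
import tempfile
import zlib

N_BUCKETS = 4
base = tempfile.mkdtemp()

def bucket_conn(key):
    # The same key always hashes to the same bucket file, so a lookup
    # only ever opens and reads one of the N smaller databases.
    b = zlib.crc32(key.encode()) % N_BUCKETS
    conn = sqlite3.connect(os.path.join(base, "bucket_%d.db" % b))
    conn.execute(
        "CREATE TABLE IF NOT EXISTS chunks (name TEXT PRIMARY KEY, pid INTEGER)")
    return conn

c = bucket_conn("somekey")
with c:
    c.execute("INSERT OR REPLACE INTO chunks VALUES (?, ?)", ("somekey", 7))
pid = bucket_conn("somekey").execute(
    "SELECT pid FROM chunks WHERE name = ?", ("somekey",)).fetchone()[0]
```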

--- On Thu, 6/25/09, Douglas E. Fajardo  wrote:

> From: Douglas E. Fajardo 
> Subject: Re: [sqlite] very large SQLite tables
> To: "sqlite-users@sqlite.org" 
> Date: Thursday, June 25, 2009, 11:24 AM
> One approach might be to split the
> big, monolithic table into some number of hash buckets,
> where each 'bucket' is separate table. When doing a search,
> the program calculates the hash and accesses reads only the
> bucket that is needed.
> 
> This approach also has the potential for allowing multiple
> databases, where tables would be spread across the different
> databases. The databases could be spread across multiple
> drives to improve performance.
> 
> *** Doug Fajardo
> 
> 
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org]
> On Behalf Of Matthew O'Keefe
> Sent: Wednesday, June 24, 2009 12:21 PM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] very large SQLite tables
> 
> 
> 
> We are using SQLite for indexing a huge number (i.e., 100
> million to 1
> billion) of key pairs
> that are represented by an 88-byte key. We are using a
> single table with a
> very large number of rows (one for each data chunk), and
> two columns.
> 
> The table has two columns.  One is of type "text"
> and the other is type
> "integer".
> > 
> > The table is created with:
> > 
> > CREATE TABLE chunks
> > (
> >   name text primary key,
> >   pid integer not null
> );
> 
> As expected, as the
> table grows, the underlying B-tree implementation for
> SQLite means that the
> number of
> disk accesses to (a) find, and (b) add a chunk, grows
> larger and larger.
> We've tested up
> to 20 million chunks represented in the table: as expected
> performance
> exponentially 
> decreases as the number of table entries grows.
> 
> We wanted to post to the mailing list to see if there are
> any obvious,
> first-order things
> we can try to improve performance for such a large table.
> 
> We really appreciate the efforts of the SQLite developer
> community!
> 
> Matt O'Keefe
> 
> sqlite-users@sqlite.org
> 


Re: [sqlite] sqlite3_step performance degradation

2009-06-15 Thread Ken

Also is there an index on the table B.ID field?
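Without an index on B's foreign-key column, each of those 900 queries scans all 180,000 rows of B. A quick way to check what the planner actually does (the schema here is a stand-in for the poster's tables):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE b (id INTEGER, data TEXT)")  # b.id refers to a.id
conn.execute("CREATE INDEX idx_b_id ON b(id)")

# With the index in place the plan reports an index search, not a
# full-table scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM b WHERE id = ?", (1,)).fetchall()
plan_text = " ".join(str(row) for row in plan)
```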

--- On Mon, 6/15/09, Mike Borland  wrote:

> From: Mike Borland 
> Subject: Re: [sqlite] sqlite3_step performance degredation
> To: "General Discussion of SQLite Database" 
> Date: Monday, June 15, 2009, 4:11 PM
> Nuno, unfortunately your psychic
> skills are a bit off on this one.  Sorry I wasn't more
> explicit.  I am not using any LIMIT or OFFSET to do any
> virtual scrolling.  Basically I have table A which has
> 900 rows.  Table B has 180,000 rows (900 * 200) which
> has a foreign key relationship back to table A.  So for
> each row in table A, there are 200 rows in table B.  My
> query is basically a "SELECT * FROM Table B WHERE ID =
> TableA.ID".  I'm executing this query 900 times, once
> for each row in table A.  
> 
> When I start the 900 read iterations (always in the same
> order), the first one generally reads in about 50ms and by
> the last read, it's taking roughly 1000ms.  Sometimes
> it slows down immediately, sometimes after the 100th
> iteration.  The only absolutely reproducible aspect is
> that it always slows down eventually and once it slows down,
> it never speeds back up.  I don't believe it's a
> locking issue since my timer doesn't start until the query
> is successfully executed.
> 
> Any ideas?  Would the occasional write operation in
> the midst of these reads cause any permanent slow down to
> the read time?  Thanks.
> 
> Mike Borland
> 
> -Original Message-
> From: Nuno Lucas [mailto:ntlu...@gmail.com]
> 
> Sent: Friday, June 12, 2009 7:16 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] sqlite3_step performance degredation
> 
> On Sat, Jun 13, 2009 at 1:52 AM, Mike
> Borland
> wrote:
> > I have a fairly complex program where I am seeing a
> performance
> > degradation of the sqlite3_step() function.
>  Basically I'm iterating
> > roughly 200 rows at a time, over and over.  The only
> work happening
> > during the iteration is I'm copying the record into an
> array.  At first,
> > sqlite3_step() takes less than a millisecond to run.
>  After 0-50
> > iterations, it's taking anywhere from 10-100ms.
> >
> > Does anybody have any insight into what's happening
> behind the scenes
> > with this function to help me track down the cause?
>  I appreciate it!
> 
> You should explicitly say what your SQL query is. Without
> that we can
> only guess.
> 
> My current "psychic" guess is that you are using LIMIT to
> obtain those
> 200 rows, one "page" at a time, and as you go advancing
> "pages" it
> becomes slower and slower.
> If this is true, then you should re-think your design as
> LIMIT just
> skips the rows, but it will  "generate" them before,
> meaning it
> becomes slower as you advance on the offset given.
> Look at http://www.sqlite.org/cvstrac/wiki?p=ScrollingCursor
> (and
> notice the "What not to do" at the end, talking about
> "LIMIT" and
> "OFFSET").
> 
> If my my psychic abilities are becoming weak, then please
> supply your
> exact query that is getting slower  (and maybe your
> database schema)
> and then someone can give you an exact answer.
> 
> 
> Regards,
> ~Nuno Lucas
> 
> >
> > Mike Borland
> >


Re: [sqlite] sqlite3_step performance degradation

2009-06-15 Thread Ken

Mike, 

Not 100% sure of the prior information, but a write could slow things down. It 
would basically take the lock on the DB, preventing the reads from happening. 
But it should not be permanent.

If you are reading all of the data, could you just execute one query instead of 
iterating over all of table A?

I think that would be faster overall and prevent any locking issues.
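The single-query version of the 900-iteration loop is just a join ordered by the parent key (toy schema standing in for the poster's tables A and B):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE b (a_id INTEGER, val INTEGER)")
conn.executemany("INSERT INTO a VALUES (?)", [(i,) for i in range(3)])
conn.executemany("INSERT INTO b VALUES (?, ?)",
                 [(i, v) for i in range(3) for v in range(2)])

# One ordered join makes a single pass over B instead of one query per
# row of A, and no writer can slip in between the per-row queries.
rows = conn.execute("""
    SELECT a.id, b.val FROM a JOIN b ON b.a_id = a.id
    ORDER BY a.id""").fetchall()
parents = {aid for aid, _ in rows}
```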



Re: [sqlite] corrupt database recovery

2009-05-28 Thread Ken

Gene,

I'm sure others have suggested it, but have you tried running your code through 
Valgrind? 
Can you remove the custom VFS?
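For reference, the check that is failing for Gene can be run from any SQLite client; a healthy database returns the single row "ok" (sketched in Python here for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")

# integrity_check walks every table and index b-tree page;
# quick_check is the same minus the slower index/table cross-checks.
status = conn.execute("PRAGMA integrity_check").fetchone()[0]
```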


--- On Wed, 5/27/09, Gene Allen  wrote:

> From: Gene Allen 
> Subject: Re: [sqlite] corrupt database recovery
> To: mgr...@medcom-online.de, "'General Discussion of SQLite Database'" 
> 
> Date: Wednesday, May 27, 2009, 4:51 PM
> I've reviewed the code over and over
> again, but am yet to find it.  But it
> is a good bit of very complicated code (blowfish and
> compression code).
> 
> That's why I was asking about the format of the
> database.  Since I'm able to
> successfully do a .dump but the integrity_check whines, I'm
> thinking that
> maybe if I focus on where what could cause that, it might
> give me a clue.  
> 
> For example, if the tree is stored at the end of file maybe
> I'm falling out
> the loop without writing the final bytes.  But if the
> tree is not localized
> to one area of the file, I'll have to rethink my approach.
> 
> 
> 
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org]
> On Behalf Of Marcus Grimm
> Sent: Wednesday, May 27, 2009 2:44 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] corrupt database recovery
> 
> so, if you think it is a coding error on your side it will
> be a hard work to find the source. I can only image side
> effects
> on an used API like sqlite3 with the classics:
> 
> - stack frame overload in a function that calls sqlite
> functions
> - using a local stack variable as a return pointer and
> reuse
>   this external.
> - memory overwriting on malloc'ed pointers or reusing of
> allready
>   free'd memory pages.
> 
> one and two might be found using a code review and
> identify
> local variables or particular arrays that are written:
> So wherever you call anything from sqlite check the stack
> declaration and review the usage of these variables...
> of course just don't do number two... ;)
> 
> memory errors might be detected using special debug
> versions of the
> malloc/free library, by code review, or manually by adding
> some
> test variables whereever you malloc or free a pointer.
> 
> I'm sure you allready did some of these... anyway, good
> luck
> 
> are you using threads? would be another beautiful issue to
> trace...;)
> 
> Marcus
> 
> 
> 
> > It is a server class machine running Windows 2003 with
> 4 GB.  No, it's a
> > local drive with 20GB free on it.
> >
> > I'm sure that it's a coding error on my part. 
> SQLite is very stable, in
> > my
> > opinion.  I'm just trying to get a rough idea on
> where I'm screwing up the
> > database.
> >
> >
> > -Original Message-
> > From: sqlite-users-boun...@sqlite.org
> > [mailto:sqlite-users-boun...@sqlite.org]
> On Behalf Of John Elrick
> > Sent: Wednesday, May 27, 2009 12:58 PM
> > To: General Discussion of SQLite Database
> > Subject: Re: [sqlite] corrupt database recovery
> >
> > What platform?  Any chance they are using a
> network drive?
> >
> >
> > John
> >
> > Gene wrote:
> >> My code is outside the database layer.  So I
> do all my database work,
> >> then
> >> compress and encrypt it.  No errors are
> returned anywhere.  I'm guessing
> >> that it's going to be an uninitialized variable or
> byte alignment
> >> problems
> >> somewhere.
> >>
> >> This code is running on hundreds of machines
> without a problem and I've
> >> never reproduced it but every now and again I get
> a support ticket
> >> showing
> > a
> >> corrupt database.  So I'm trying to figure
> out WHERE to look.
> >>
> >> -Original Message-
> >> From: sqlite-users-boun...@sqlite.org
> >> [mailto:sqlite-users-boun...@sqlite.org]
> On Behalf Of John Elrick
> >> Sent: Wednesday, May 27, 2009 10:59 AM
> >> To: General Discussion of SQLite Database
> >> Subject: Re: [sqlite] corrupt database recovery
> >>
> >> Gene Allen wrote:
> >>
> >>> Ok...it's happened again and I've decided that
> I need to track this
> >>> down
> >>> once and for all!
> >>>
> >>> Here is what I'm seeing: I get errors when I
> do a integrity_check (see
> >>> below), but I can .dump it to a text file and
> then .read it into
> >>> another
> >>> database ok.
> >>>
> >>> It seems to me that I'm screwing up an index
> or something.  Are indexes
> >>> stored at the end of the database file? 
> All I can think of is that my
> >>> compression/encryption routines are messing
> something up and I'm trying
> > to
> >>> figure out 'where' to look.
> >>>
> >>> I guess the real question is, what would I
> have to do to make an
> >>> integrity_check fail, but still let a dump
> work correctly?
> >>>
> >>> Many thanks for any advice on tracking down
> this ugliness.
> >>>
> >>>
> >>
> >> SNIP
> >>
> >> Personally, I'd refactor the code to allow me to
> verify the operation of
> >> the compression/encryption routines independently
> of the database
> >> operation.  How are you injecting the
> compression/encryption into the
> >> database layer?
> >>
> >>
> >> 

Re: [sqlite] SQLite spawns multiple processes?

2009-05-15 Thread Ken

not sure really... 

But threading in linux will show two processes when a thread is created.

Can you run strace and see if clone is called at thread creation?

Check your threading package and verify its internal operations. Newer versions 
do not show a duplicate process, but older versions do.



--- On Fri, 5/15/09, jkim...@one.net  wrote:

> From: jkim...@one.net 
> Subject: [sqlite] SQLite spawns multiple processes?
> To: sqlite-users@sqlite.org
> Date: Friday, May 15, 2009, 10:23 AM
> 
> I'm running a embedded SQL C application (SQLite v3.6.13)
> on an embedded
> platform (Coldfire) running Linux 2.6.25 kernel. Something
> odd I've
> noticed is that when my application reaches the point of
> calling dbopen it
> spawns a new process, with the same name as my application,
> so that the
> process list always shows two of my application running.
> 
> Why is this? Is this normal behaviour?
> 
> My application talks (via a socket) to a "backend" process
> that is
> multithreaded and also reads/writes to the database and
> each thread
> creates two processes.
> 
> Am I doing something wrong here or is this just something
> SQLite does
> normally? I'm having problems with my application that seem
> to jump around
> and are definately memory corruption related so I'm
> wondering if I'm doing
> something wrong with the DB and causing this.
> 
> Any help or suggestions would be much appreciated...
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite version 3.6.14 and async vfs

2009-05-07 Thread Ken

Obviously I had not looked at the code. I see now that the async code is 
registered as a VFS... And thanks to Dan there is only one background thread 
and queue.

One could implement a function with a mutex and condition variable so that the 
function would call sqlite3async_control() with SQLITEASYNC_HALT_IDLE and then 
block on the condition, awaiting the background thread's completion of the 
queued work. The background thread would then signal the condition, causing the 
"waiting" function to return.

 
--- On Thu, 5/7/09, Virgilio Alexandre Fornazin  
wrote:

> From: Virgilio Alexandre Fornazin 
> Subject: RE: [sqlite] SQLite version 3.6.14 and async vfs
> To: kennethinbox-sql...@yahoo.com, "'General Discussion of SQLite Database'" 
> , "'Pavel Ivanov'" 
> Date: Thursday, May 7, 2009, 12:04 PM
> This break purpose of VFS, all VFS
> should work in same way, you must not
> know if your VFS is asynchronous
> or not. VFS close method should wait for all file I/O on
> this database
> handle (not all databases) to 
> finalize before returning, providing compatibility with all
> other existing
> VFS implementations.
> 
> 
> -Original Message-----
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org]
> On Behalf Of Ken
> Sent: quinta-feira, 7 de maio de 2009 13:47
> To: General Discussion of SQLite Database; Pavel Ivanov
> Subject: Re: [sqlite] SQLite version 3.6.14 and async vfs
> 
> 
> I see the confusion with the word "Shutdown".
> 
> How about a call that would block until the async
> thread completes all
> operations that are enqueued. Effectively a Close of the
> async thread/queue
> and db. The call could be sqlite3Async_close.
> 
> Hope that clarifies my intent. 
> 
> 
> --- On Thu, 5/7/09, Pavel Ivanov 
> wrote:
> 
> > From: Pavel Ivanov 
> > Subject: Re: [sqlite] SQLite version 3.6.14 and async
> vfs
> > To: kennethinbox-sql...@yahoo.com,
> "General Discussion of SQLite Database"
> 
> > Date: Thursday, May 7, 2009, 11:10 AM
> > Shutdown is not an option at all. I
> > need vfs to continue working on
> > other databases but to be notified (or have
> possibility to
> > check) when
> > one particular database is no longer opened.
> > 
> > Pavel
> > 
> > On Thu, May 7, 2009 at 12:00 PM, Ken 
> > wrote:
> > >
> > > --- On Thu, 5/7/09, Virgilio Alexandre Fornazin
> 
> > wrote:
> > >
> > >> From: Virgilio Alexandre Fornazin 
> > >> Subject: Re: [sqlite] SQLite version 3.6.14
> and
> > async vfs
> > >> To: "'General Discussion of SQLite
> Database'"
> > 
> > >> Date: Thursday, May 7, 2009, 10:50 AM
> > >> Close should wait for all file
> > >> operations complete to meet that needs.
> > >> I think asynchronous VFS should take care of
> > waiting in
> > >> sqlite3_close()
> > >> call.
> > >>
> > >> -Original Message-
> > >> From: sqlite-users-boun...@sqlite.org
> > >> [mailto:sqlite-users-boun...@sqlite.org]
> > >> On Behalf Of Pavel Ivanov
> > >> Sent: quinta-feira, 7 de maio de 2009 12:33
> > >> To: General Discussion of SQLite Database
> > >> Subject: Re: [sqlite] SQLite version 3.6.14
> and
> > async vfs
> > >>
> > >> Hi!
> > >>
> > >> It's great to hear about performance
> improvements
> > and
> > >> especially about
> > >> asynchronous I/O extension. Thank you very
> much
> > for your
> > >> work!
> > >>
> > >> I have one question though: taking quick look
> at
> > the
> > >> sources of async
> > >> vfs I've noticed that even closing the file
> is
> > just a task
> > >> in the
> > >> async queue and thus after closing sqlite
> > connection file
> > >> remains
> > >> opened for some time. It sounds pretty
> reasonable,
> > but here
> > >> stands the
> > >> question: what if I want to do something with
> the
> > database
> > >> file after
> > >> I close sqlite connection to it (e.g. move to
> the
> > archive
> > >> directory,
> > >> zip it etc.)? With sync vfs I could be sure
> that
> > after
> > >> closing
> > >> connection file is closed and I can do with
> it
> > whatever I
> >>

Re: [sqlite] SQLite version 3.6.14 and async vfs

2009-05-07 Thread Ken

I see the confusion with the word "Shutdown".

How about a call that would block until the async thread completes all 
operations that are enqueued. Effectively a Close of the async thread/queue and 
db. The call could be sqlite3Async_close.

Hope that clarifies my intent. 


--- On Thu, 5/7/09, Pavel Ivanov  wrote:

> From: Pavel Ivanov 
> Subject: Re: [sqlite] SQLite version 3.6.14 and async vfs
> To: kennethinbox-sql...@yahoo.com, "General Discussion of SQLite Database" 
> 
> Date: Thursday, May 7, 2009, 11:10 AM
> Shutdown is not an option at all. I
> need vfs to continue working on
> other databases but to be notified (or have possibility to
> check) when
> one particular database is no longer opened.
> 
> Pavel
> 
> On Thu, May 7, 2009 at 12:00 PM, Ken 
> wrote:
> >
> > --- On Thu, 5/7/09, Virgilio Alexandre Fornazin 
> wrote:
> >
> >> From: Virgilio Alexandre Fornazin 
> >> Subject: Re: [sqlite] SQLite version 3.6.14 and
> async vfs
> >> To: "'General Discussion of SQLite Database'"
> 
> >> Date: Thursday, May 7, 2009, 10:50 AM
> >> Close should wait for all file
> >> operations complete to meet that needs.
> >> I think asynchronous VFS should take care of
> waiting in
> >> sqlite3_close()
> >> call.
> >>
> >> -Original Message-
> >> From: sqlite-users-boun...@sqlite.org
> >> [mailto:sqlite-users-boun...@sqlite.org]
> >> On Behalf Of Pavel Ivanov
> >> Sent: quinta-feira, 7 de maio de 2009 12:33
> >> To: General Discussion of SQLite Database
> >> Subject: Re: [sqlite] SQLite version 3.6.14 and
> async vfs
> >>
> >> Hi!
> >>
> >> It's great to hear about performance improvements
> and
> >> especially about
> >> asynchronous I/O extension. Thank you very much
> for your
> >> work!
> >>
> >> I have one question though: taking quick look at
> the
> >> sources of async
> >> vfs I've noticed that even closing the file is
> just a task
> >> in the
> >> async queue and thus after closing sqlite
> connection file
> >> remains
> >> opened for some time. It sounds pretty reasonable,
> but here
> >> stands the
> >> question: what if I want to do something with the
> database
> >> file after
> >> I close sqlite connection to it (e.g. move to the
> archive
> >> directory,
> >> zip it etc.)? With sync vfs I could be sure that
> after
> >> closing
> >> connection file is closed and I can do with it
> whatever I
> >> want. Is
> >> there a way to catch the moment of actual file
> closing with
> >> async vfs?
> >>
> >> And another question just to be sure that I
> understand it
> >> correctly:
> >> async vfs holds only one queue for all opened
> database
> >> files, right?
> >>
> >> Pavel
> >>
> >> On Wed, May 6, 2009 at 10:36 PM, D. Richard Hipp
> 
> >> wrote:
> >> > SQLite version 3.6.14 is now available on the
> SQLite
> >> website
> >> >
> >> >     http://www.sqlite.org/
> >> >
> >> > Version 3.6.14 contains performance enhancements
> in the
> >> btree and pager
> >> > subsystems.  In addition, the query
> optimizer now
> >> knows how to take
> >> > advantage of OR and IN operators on columns
> of a
> >> virtual table.
> >> >
> >> > A new optional extension is included that
> implements
> >> an asynchronous I/
> >> > O backend for SQLite on either windows or
> unix.  The
> >> asynchronous I/O
> >> > backend processes all writes using a
> background
> >> thread.  This gives
> >> > the appearance of faster response time at the
> cost of
> >> durability and
> >> > additional memory usage.  See http://www.sqlite.org/asyncvfs.html for
> >> > additional information.
> >> >
> >> > This release also includes many small bug
> fixes and
> >> documentation
> >> > improvements.
> >> >
> >> > As always, please let me know if you
> encounter any
> >> difficulties.
> >> >
> >> > D. Richard Hipp
> >> > d...@hwaci.com
> >> >
> >> >
> >
> > Without actually looking at the async code I think
> that instead of using the sqlite3_close to cause a block
> there should be a "shutdown" that would wait for the
> shutdown of the async thread to complete. So maybe a better
> name would be sqlite3Async_close or something similar.
> >
> > Ken
> >
> >
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> >
> 


Re: [sqlite] SQLite version 3.6.14 and async vfs

2009-05-07 Thread Ken

--- On Thu, 5/7/09, Virgilio Alexandre Fornazin  
wrote:

> From: Virgilio Alexandre Fornazin 
> Subject: Re: [sqlite] SQLite version 3.6.14 and async vfs
> To: "'General Discussion of SQLite Database'" 
> Date: Thursday, May 7, 2009, 10:50 AM
> Close should wait for all file
> operations complete to meet that needs.
> I think asynchronous VFS should take care of waiting in
> sqlite3_close()
> call.
> 
> -Original Message-
> From: sqlite-users-boun...@sqlite.org
> [mailto:sqlite-users-boun...@sqlite.org]
> On Behalf Of Pavel Ivanov
> Sent: quinta-feira, 7 de maio de 2009 12:33
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] SQLite version 3.6.14 and async vfs
> 
> Hi!
> 
> It's great to hear about performance improvements and
> especially about
> asynchronous I/O extension. Thank you very much for your
> work!
> 
> I have one question though: taking quick look at the
> sources of async
> vfs I've noticed that even closing the file is just a task
> in the
> async queue and thus after closing sqlite connection file
> remains
> opened for some time. It sounds pretty reasonable, but here
> stands the
> question: what if I want to do something with the database
> file after
> I close sqlite connection to it (e.g. move to the archive
> directory,
> zip it etc.)? With sync vfs I could be sure that after
> closing
> connection file is closed and I can do with it whatever I
> want. Is
> there a way to catch the moment of actual file closing with
> async vfs?
> 
> And another question just to be sure that I understand it
> correctly:
> async vfs holds only one queue for all opened database
> files, right?
> 
> Pavel
> 
> On Wed, May 6, 2009 at 10:36 PM, D. Richard Hipp 
> wrote:
> > SQLite version 3.6.14 is now available on the SQLite
> website
> >
> >     http://www.sqlite.org/
> >
> > Version 3.6.14 contains performance enhancements in the
> btree and pager
> > subsystems.  In addition, the query optimizer now
> knows how to take
> > advantage of OR and IN operators on columns of a
> virtual table.
> >
> > A new optional extension is included that implements
> an asynchronous I/
> > O backend for SQLite on either windows or unix.  The
> asynchronous I/O
> > backend processes all writes using a background
> thread.  This gives
> > the appearance of faster response time at the cost of
> durability and
> > additional memory usage.  See http://www.sqlite.org/asyncvfs.html for
> > additional information.
> >
> > This release also includes many small bug fixes and
> documentation
> > improvements.
> >
> > As always, please let me know if you encounter any
> difficulties.
> >
> > D. Richard Hipp
> > d...@hwaci.com
> >
> >

Without actually looking at the async code I think that instead of using the 
sqlite3_close to cause a block there should be a "shutdown" that would wait for 
the shutdown of the async thread to complete. So maybe a better name would be 
sqlite3Async_close or something similar.

Ken




Re: [sqlite] Large SQLite3 Database Memory Usage

2009-05-05 Thread Ken

Upgrade your RAM.

Are you sure it's sqlite using all the memory and not the web server?

Do you have an index on Events.NotificationTime ?





--- On Tue, 5/5/09, Kalyani Phadke  wrote:

> From: Kalyani Phadke 
> Subject: [sqlite] Large SQLite3 Database Memory Usage
> To: sqlite-users@sqlite.org
> Date: Tuesday, May 5, 2009, 11:36 AM
> I have 2259207 records in table using
> SQLite3 database. I am running the
> select query to retrieve records from the DB
>  
> SELECT ID, EventClassName, EventClassRef, TransitionTime,
> Message,
> MonitoredRef, EventRef,ToState,Priority,Acked from
> Events  WHERE
> Events.NotificationTime >= {ts '2009-05-04
> 14:44:10'}  Order By ID DESC
> LIMIT 100
>  
> If I run my application once the query returns results in
> 80 sec and if
> run it again , memory usage starts growing and reaches
> nearly 100%
> ..then I get more hard page faults , the application
> responds slow, or
> system just hangs. I am running my application on Windows
> web server
> 2008 with 512 MB RAM. How can I optimize the query ? I have
> indexes on
> ID and notificationtime (datatype timestamp)
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 


Re: [sqlite] OT: how best to convert sqlite3_int64 to and from string in a cross platform fashion?

2009-05-05 Thread Ken

Dan... Yes the varint!

--- On Mon, 5/4/09, Dan  wrote:

> From: Dan 
> Subject: Re: [sqlite] OT: how best to convert sqlite3_int64 to and from 
> string in a cross platform fashion?
> To: "General Discussion of SQLite Database" 
> Date: Monday, May 4, 2009, 11:52 PM
> 
> On May 4, 2009, at 10:14 PM, Ken wrote:
> 
> >
> > You could just pass the sqlite3_int64 value. It is
> portable between  
> > systems. Search through the sqlite3 code and there are
> routines that  
> > do the conversions from the sqlite3_int64 to a native
> int64 type.  
> > They basically perform bit shifting and will be much
> faster than the  
> > string conversions.
> 
> 
> You're thinking of the varint format used in the
> file-format:
> 
>    http://www.sqlite.org/fileformat.html#varint_format
> 
> sqlite3_int64 is just a typdef of int64_t on most
> platforms.
> 
> Dan.
> 
> 
> 
> 
> >
> >
> > HTH,
> > Ken
> >
> > --- On Sun, 5/3/09, Sam Carleton 
> wrote:
> >
> >> From: Sam Carleton 
> >> Subject: [sqlite] OT: how best to convert
> sqlite3_int64 to and from  
> >> string in a cross platform fashion?
> >> To: "General Discussion of SQLite Database" 
> >> Date: Sunday, May 3, 2009, 4:21 PM
> >> I am currently developing a system only
> >> on Windows, but I do plan to port it
> >> to OSX someday.  I am passing ID's as strings
> to keep
> >> maximum flexibility
> >> between databases and the existing system. 
> So how do
> >> I convert a
> >> sqlite3_int64 to a string and a string to a
> sqlite3_int64
> >> in a cross
> >> platform fashion?
> >>
> >> Sam
> >> ___
> >> sqlite-users mailing list
> >> sqlite-users@sqlite.org
> >> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> >>
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 


Re: [sqlite] OT: how best to convert sqlite3_int64 to and from string in a cross platform fashion?

2009-05-04 Thread Ken

Why would you want to convert something to a string that is already cross 
platform compatible?

If you need to store the value in a different DB, then converting it locally to 
a native int64 and then into whatever DB construct would be way more efficient 
than hauling around string conversions. 

I'm of course assuming you're going to be transmitting this over some network 
protocol or file format from system to system of varying endianness?





--- On Mon, 5/4/09, Sam Carleton  wrote:

> From: Sam Carleton 
> Subject: Re: [sqlite] OT: how best to convert sqlite3_int64 to and from 
> string in a cross platform fashion?
> To: kennethinbox-sql...@yahoo.com, "General Discussion of SQLite Database" 
> 
> Date: Monday, May 4, 2009, 10:29 AM
> Ken,
> 
> this is true, except that I might migrate the system to
> some other database someday that wants to use something else
> as PK other than an int or int64 (MS SQL is optimized for
> guid's not int), so passing around the id's as strings keeps
> all the middle and front end code neutral :)
> 
> Sam
> 
> Ken wrote:
> > You could just pass the sqlite3_int64 value. It is
> portable between systems. Search through the sqlite3 code
> and there are routines that do the conversions from the
> sqlite3_int64 to a native int64 type. They basically perform
> bit shifting and will be much faster than the string
> conversions.
> > 
> > HTH,
> > Ken
> 


Re: [sqlite] OT: how best to convert sqlite3_int64 to and from string in a cross platform fashion?

2009-05-04 Thread Ken

You could just pass the sqlite3_int64 value. It is portable between systems. 
Search through the sqlite3 code and there are routines that do the conversions 
from the sqlite3_int64 to a native int64 type. They basically perform bit 
shifting and will be much faster than the string conversions.

HTH,
Ken

--- On Sun, 5/3/09, Sam Carleton  wrote:

> From: Sam Carleton 
> Subject: [sqlite] OT: how best to convert sqlite3_int64 to and from string in 
> a cross platform fashion?
> To: "General Discussion of SQLite Database" 
> Date: Sunday, May 3, 2009, 4:21 PM
> I am currently developing a system only
> on Windows, but I do plan to port it
> to OSX someday.  I am passing ID's as strings to keep
> maximum flexibility
> between databases and the existing system.  So how do
> I convert a
> sqlite3_int64 to a string and a string to a sqlite3_int64
> in a cross
> platform fashion?
> 
> Sam
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 


Re: [sqlite] How to synchronize the SQLite db - SQLite db

2009-04-20 Thread Ken

A simple way to prevent data collisions is to build a unique name for each 
client into the tables. That way you know where the data comes from.



--- On Mon, 4/20/09, Ravi Thapliyal  wrote:

> From: Ravi Thapliyal 
> Subject: [sqlite] How to synchronize the SQLite db - SQLite db
> To: sqlite-users@sqlite.org
> Date: Monday, April 20, 2009, 2:34 AM
> I am looking at a design that will
> require synchronizing a disconnected
> SQLite DB file on client's machines to a central server.
> The version of the DB on the server will also be modified
> periodically, so
> there is a chance that new records will be created in
> either and also
> updated.
> Conflicts therefore are an issue. What I'm worried about is
> just the
> logistics of either
> 1)    Importing all user's data to a single
> DB somehow 
> 2)    Managing several DB files from clients
> automatically.
> 3)    Has anyone done this kind of syncing?
> I realize I'm somewhat light
> on details, but I'm not really even sure exactly what this
> system will need
> to do: it's more of a framework really.
>  At any rate, anyone have experience synchronizing SQLite
> DB files?
> Suggestions appreciated.
> 
> -Inline Attachment Follows-
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 


Re: [sqlite] How to secure standalone SQLite db

2009-04-20 Thread Ken

also ACL's might help

--- On Mon, 4/20/09, Ravi Thapliyal  wrote:

> From: Ravi Thapliyal 
> Subject: [sqlite] How to secure standalone SQLite db
> To: sqlite-users@sqlite.org
> Date: Monday, April 20, 2009, 2:36 AM
> I have a windows standalone
> application with SQLite as a database, so what
> is the procedure to secure this SQLite database, so that
> the application
> users cannot access the database directly, it should only
> be accessed by
> application.
> Thanks
> 
> 
> -Inline Attachment Follows-
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 


Re: [sqlite] How to secure standalone SQLite db

2009-04-20 Thread Ken

encryption


--- On Mon, 4/20/09, Ravi Thapliyal  wrote:

> From: Ravi Thapliyal 
> Subject: [sqlite] How to secure standalone SQLite db
> To: sqlite-users@sqlite.org
> Date: Monday, April 20, 2009, 2:36 AM
> I have a windows standalone
> application with SQLite as a database, so what
> is the procedure to secure this SQLite database, so that
> the application
> users cannot access the database directly, it should only
> be accessed by
> application.
> Thanks
> 
> 
> -Inline Attachment Follows-
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 


Re: [sqlite] Strange SAVEPOINT behaviour with locking_mode=exclusive

2009-04-19 Thread Ken

I think the problem is not in the locking mode but rather:

 PRAGMA journal_mode = off;

I'm not sure if rollbacks actually function with the journalling turned off.
Can you try it without the above line? 
The logic implies that the rows in question should not exist since they are 
rolled back.

hth

--- On Fri, 4/17/09, Ralf Junker  wrote:

> From: Ralf Junker 
> Subject: [sqlite] Strange SAVEPOINT behaviour with locking_mode=exclusive
> To: sqlite-users@sqlite.org
> Date: Friday, April 17, 2009, 3:07 PM
> I experience some strange behaviour
> with SAVEPOINT in combination with locking_mode=exclusive.
> Below is a script which I assembled from savepoint.text 13.1
> to 13.4. Those tests run OK in normal locking mode, but fail
> in exclusive locking more.
> 
> To reproduce, run the script below on a NEW and EMPTY
> database. The result difference is marked as follows:
> 
>   a  b
>   1  2 
>   3  4 
>   5  6 
>   7  8 
>   9  10 
>   11 12 
>   13 14 <- missing rows if run in exclusive mode 
>   15 16 <- on a new and empty database
> 
> 
> 
> PRAGMA locking_mode=exclusive;
> 
> DROP TABLE IF EXISTS t1;
> 
> BEGIN;
>   CREATE TABLE t1(a PRIMARY KEY, b);
>   INSERT INTO t1 VALUES(1, 2);
> COMMIT;
> 
> PRAGMA journal_mode = off;
> 
> BEGIN;
>   INSERT INTO t1 VALUES(3, 4);
>   INSERT INTO t1 SELECT a+4,b+4  FROM t1;
> COMMIT;
> 
> BEGIN;
>   INSERT INTO t1 VALUES(9, 10);
>   SAVEPOINT s1;
>     INSERT INTO t1 VALUES(11, 12);
> COMMIT;
> 
> BEGIN;
>   INSERT INTO t1 VALUES(13, 14);
>   SAVEPOINT s1;
>     INSERT INTO t1 VALUES(15, 16);
>     ROLLBACK TO s1;
> ROLLBACK;
> 
> SELECT * FROM t1;
> 
> 
> 
> I wonder if this behaviour is intended of if there is an
> error in the library?
> 
> Ralf
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 


Re: [sqlite] step() fails with SQLITE_BUSY after BEGINEXCLUSIVETRANSACTION - SOLVED

2009-04-15 Thread Ken

This may help: you can use the sqlite3_next_stmt() function to track all statements 
associated with a database connection.

http://sqlite.org/c3ref/next_stmt.html


--- On Wed, 4/15/09, m...@mwlabs.de  wrote:

> From: m...@mwlabs.de 
> Subject: Re: [sqlite] step() fails with SQLITE_BUSY after 
> BEGINEXCLUSIVETRANSACTION - SOLVED
> To: "'General Discussion of SQLite Database'" 
> Date: Wednesday, April 15, 2009, 4:01 AM
> 
> I've found the problem. It was caused by a pre-compiled
> statement in a third
> party library. Since the library was used by several
> threads now, the
> statement, compiled in the context of one thread, blocked
> the entire
> database when an attempt way made to use it from another
> thread.
> 
> The programmer has changed his code to not use pre-compiled
> statements and
> now it works, even if used by multiple threads. Another
> solution would have
> been to maintain pre-compiled statements per thread.
> 
> Since this was really hard to track down, I wonder if there
> would have been
> a way to get more info from SQLite, e.g. some debug log or
> anyhting?
> Something telling me "I'm blocking everything because I
> here have an
> statement which is in state "X" and hence no other thread
> can do anything
> right now"...
> 
> -- Mario
> 
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 


Re: [sqlite] creating unique data takes many hours, help

2009-03-30 Thread Ken



As others have indicated: 
   1. Create the index at the end after the data is loaded. 
   2. Wrap the inserts in a transaction, committing every N inserts.

Also:
   3. Instead of sqlite3_mprintf/exec, you should prepare, bind, and step.
   4. If this is single threaded then recompiling the sqlite library with 
threading disabled would likely help some.

  
   

--- On Mon, 3/30/09, mrobi...@cs.fiu.edu  wrote:

> From: mrobi...@cs.fiu.edu 
> Subject: Re: [sqlite] creating unique data takes many hours, help
> To: "General Discussion of SQLite Database" 
> Date: Monday, March 30, 2009, 7:19 AM
> Hi,
> 
> Yes, I am dealing with dna sequences, the data is huge,
> human genome is
> about 3.2 billion records, others are of similar size.
> 
Before I found sqlite, using native C, I extracted the strings, testing
> strings, testing
> for uniqueness. With sqlite I did the same and  it
> took about the same
> time, once I added the index to sqlite the time was
> reduced.
> 
> I don't need this data to be sorted, just unique, adding
> the index made it
> faster, but I am always looking for ways of reducing
> processing times
> 
> I will try your suggestions and report back.
> 
> More suggestions are very welcome
> 
> Thanks very much for your help
> 
> Michael
> 
> > Hi Michael,
> >
> >
> > Two thoughts -- and I hope others will chime in if I'm
> off-base here:
> >
> >
> > 1) Build just one index after the import:
> >
> > From page:
> > http://www.sqlite.org/lang_createtable.html
> > "The UNIQUE constraint causes an unique index to be
> created on the
> > specified columns."
> >
> > I think that if you then create your own index, it
> will be redundant,
> > and, with millions of records, time-consuming.
> >
> > However, you may well want to KEEP your own index (and
> add the UNIQUE
> > constrait to it), and *remove* the UNIQUE constraint
> on the table column
> > definition.   I hear that creating the
> index after the data has been
> > imported is faster (and less fragmented) than creating
> an implicit one
> > on-the-fly during import.
> >
> >
> >
> > 2) Hashing
> >
> > If Vinnie [thev...@yahoo.com]
> was correct in guessing your goal (to
> > determine if a given string has been examined before),
> and if
> > performance remains a problem, I wondered if the
> design might benefit
> > from using hashing to "pre-qualify" a string and/or
> substitute for a
> > direct match.
> >
> > A short hash (perhaps a 64-bit integer?) could be
> stored in a separate
> > table or even a separate database from the strings
> themselves.  If the
> > hash of a new string does not match any hash in the
> database, then you
> > know there is no need to search for the string
> itself.  If the entire
> > hash index can be kept in RAM, you might get a big
> benefit.
> >
> > You could decide to take this a step further by using
> a longer hash as a
> > proxy for the string itself.  The hash would need
> to be long enough to
> > make collisions extremely unlikely, yet not so long as
> to negate the
> > value of using a proxy for the original strings. 
> In practice, you'd
> > probably want to compute a single long hash for each
> string, then use
> > the first X bytes as the "pre-qualifying" hash, and
> the remainder as the
> > "confirming hash." If your average string length is
> short, then using
> > *two* hashes may not be much of a gain.
> >
> > (Of course, since this is not a cryptographic
> application, you don't
> > need to worry about whether your chosen hash algorithm
> is "secure" or
> > not.)
> >
> > Regards,
> >Donald
> >
> >
> >
> > -Original Message-
> > From: sqlite-users-boun...@sqlite.org
> > [mailto:sqlite-users-boun...@sqlite.org]
> On Behalf Of Vinnie
> > Sent: Sunday, March 29, 2009 6:14 PM
> > To: sqlite-users@sqlite.org
> > Subject: Re: [sqlite] creating unique data takes many
> hours, help
> >
> >
> > Michael:
> >
> > While my answer isn't strictly limited to sqlite, the
> performance nut in
> > me can't help myself. You have a table with only one
> column, a string.
> > I could be wrong here but it seems like you just want
> to keep a list of
> > values that you have already tried. After you insert a
> bunch of strings
> > into the table you want to be able to quickly look up
> a string to see if
> > it exists, so that you can tell if you already probed
> that sequence
> > (taking a guess here).
> >
> > If my guess is right, and the only thing you are doing
> is looking up
> > sorted single-column elements, you probably can get
> away with your own
> > quick disk-based binary tree implementation and avoid
> sqlite for this
> > particular circumstance altogether. The result would
> be several orders
> > of magnitude faster, even after you have followed the
> suggestions others
> > have given.
> >
> >
> >
> >
> >> Hi,
> >>
> >> I am new to sqlite, and I created a program that
> reads several million
> >
> >> records and puts them into a sqlite db using.
> >>
> >> The table has one column ONLY indexed and 

[sqlite] httpd server ???

2009-03-17 Thread Ken

This is off topic, but since sqlite list has such savvy folks I thought I'd try 
here.

Does anyone know of an embedded http server that can serve and/or create pages 
from a sqlite database?


Thanks,
Ken

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] control buffering of query results

2009-03-17 Thread Ken

Use the LIMIT clause, track your position, and re-run the query when you need the next chunk of
data.
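[Editor's note: a sketch of that approach with Python's sqlite3 module. The table name and chunk size are made up; remembering the last rowid seen is one way to "track" position that avoids a growing OFFSET.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(val TEXT)")
conn.executemany("INSERT INTO t(val) VALUES (?)",
                 [("row%d" % i,) for i in range(7)])

def chunks(conn, size=2):
    """Yield the table in fixed-size chunks, re-running the query each time."""
    last_rowid = 0
    while True:
        rows = conn.execute(
            "SELECT rowid, val FROM t WHERE rowid > ? ORDER BY rowid LIMIT ?",
            (last_rowid, size)).fetchall()
        if not rows:
            break
        yield [val for _, val in rows]
        last_rowid = rows[-1][0]   # resume just after the last row we saw

all_chunks = list(chunks(conn, 2))
```

Only one chunk of rows is materialized at a time, which is the memory behavior the poster was after.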


--- On Tue, 3/17/09, baxy77bax  wrote:

> From: baxy77bax 
> Subject: [sqlite]  control buffering of query results
> To: sqlite-users@sqlite.org
> Date: Tuesday, March 17, 2009, 6:44 AM
> hi 
> 
> i need help with this one.
> 
> i have this perl script that goes something like this:
> 
> my $fetchrow_stmt;
> 
> sub _fetchrow_stmt {
>   
>   my ($self,%arg) = @_;
>   my $stm = "select * from $arg{table}";
>   $fetchrow_stmt = $dbh->prepare($stm) || die
> $dbh->errstr;
>   $fetchrow_stmt->execute || die $dbh->errstr; 
>  
> }
> 
>  sub _return_row {
>   
> my ($self,%arg) = @_;
> return $fetchrow_stmt->fetchrow_arrayref();
> 
> 
>   }
>   
> sub _finish_stmt {
>   
>   my ($self,%arg) = @_;
>   
>  $fetchrow_stmt->finish();
>   
> }
> 
> The thing is that it's using my memory like crazy, and
> the source of this
> behaviour (I THINK/not sure) is in buffering the query
> results from sqlite.
> so is there a way to limit that, so that in query results
> there are only 2
> results max at a time (not the whole table)
> 
> thanx
> 
> -- 
> View this message in context:
> http://www.nabble.com/control-buffering-of-query-results-tp22557409p22557409.html
> Sent from the SQLite mailing list archive at Nabble.com.
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Retrieve results of a query into a bash script, and use them to iterate

2009-03-16 Thread Ken

You might want to do a search on named pipes.

Also, your query could then be simplified:

attach database 'db2.sl3' as usr;
select u.* from usr.user u, names n
where u.name = n.name;
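[Editor's note: a runnable sketch of the same ATTACH-plus-join idea with Python's sqlite3 module. The attached database is created in memory here purely for illustration; in the original post it would be the file 'db2.sl3'.]

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # plays the role of db1
conn.execute("CREATE TABLE names(name TEXT)")
conn.executemany("INSERT INTO names VALUES (?)", [("alice",), ("bob",)])

# Attach a second database; in the post this would be 'db2.sl3'.
conn.execute("ATTACH DATABASE ':memory:' AS usr")
conn.execute("CREATE TABLE usr.user(name TEXT)")
conn.executemany("INSERT INTO usr.user VALUES (?)", [("bob",), ("carol",)])

# One join replaces the nested shell loop from the question.
valid = [r[0] for r in conn.execute(
    "SELECT u.name FROM usr.user u JOIN names n ON u.name = n.name")]
```

Letting SQLite do the matching scales far better than comparing every pair of rows in bash.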




--- On Mon, 3/16/09, urschrei  wrote:

> From: urschrei 
> Subject: [sqlite] Retrieve results of a query into a bash script, and use 
> them to iterate
> To: sqlite-users@sqlite.org
> Date: Monday, March 16, 2009, 9:50 PM
> I'd like to use the results of a simple select query in
> a bash script, and
> iterate through them with a 'for' loop. I have
> something like the following
> in mind:
> 
> names = `sqlite3 db1.sl3 'select * from names;'`
> users = `sqlite3 db2.sl3 'select * from users;'`
> for n in names
> do
>  for u in users
>   do
>if $n == $u echo $u is a valid user!
>fi
>   done
> done
> 
> Am I on the right track, or should I be going about this
> differently?
> -- 
> View this message in context:
> http://www.nabble.com/Retrieve-results-of-a-query-into-a-bash-script%2C-and-use-them-to-iterate-tp22551722p22551722.html
> Sent from the SQLite mailing list archive at Nabble.com.
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Advices to get max performance with SQLITE and BLOBS

2009-03-13 Thread Ken

Pierre,

Have you considered storing the blob data into a file?

Just keep a record of the blob in the sqlite db. If you store all the blobs in a
single file then you'll need a filename, begin, and length; if one blob per file,
just the filename.

Just an idea. Not really sure which would be faster.
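[Editor's note: a minimal sketch of the single-file variant suggested above, assuming an append-only data file and a (file, begin, length) record per blob; all names are illustrative.]

```python
import os
import sqlite3
import tempfile

blob_path = os.path.join(tempfile.mkdtemp(), "blobs.bin")
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE blobs(id INTEGER PRIMARY KEY,"
             " file TEXT, begin INTEGER, length INTEGER)")

def store(blob_id, data):
    """Append the blob to the data file and record where it landed."""
    with open(blob_path, "ab") as f:
        begin = f.tell()          # append mode: current end of file
        f.write(data)
    conn.execute("INSERT INTO blobs VALUES (?, ?, ?, ?)",
                 (blob_id, blob_path, begin, len(data)))

def load(blob_id):
    """Look up the blob's location in SQLite, then read it from the file."""
    file, begin, length = conn.execute(
        "SELECT file, begin, length FROM blobs WHERE id = ?",
        (blob_id,)).fetchone()
    with open(file, "rb") as f:
        f.seek(begin)
        return f.read(length)

store(1, b"x" * 10)
store(2, b"y" * 5)
```

The database then holds only small fixed-size rows, while the bulk data stays in a flat file.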


--- On Fri, 3/13/09, Pierre Chatelier  wrote:

> From: Pierre Chatelier 
> Subject: [sqlite] Advices to get max performance with SQLITE and BLOBS
> To: sqlite-users@sqlite.org
> Date: Friday, March 13, 2009, 3:49 AM
> Hello,
> 
> I am using SQLITE to store and retrieve raw data blocks
> that are  
> basically ~300Ko. Each block has an int identifier, so that
> insert/ 
> select are easy. This is a very basic use : I do not use
> complex  
> queries. Only "INSERT/SELECT where index=..."
> 
> Now, I am thinking about performance, for writing a
> sequence of a few  
> hundreds 300k blocks, as fast as possible.
> Obviously, I use bind_blob(), blob_read() and blob_write()
> functions.  
> I have already tuned the PRAGMAs for
> journal/synchronous/page_size/ 
> cache, so that it's rather efficient.
> I do not DELETE any content and the whole database is
> dropped after  
> use: VACUUM is not important.
> 
> There are other ways to optimize, but I wonder if it is
> worth, or it  
> the gain would be only marginal regarding what I am doing.
> 1)recompile SQLite ? Which compile options would help in
> this case ?
> 2)using other memory allocators ? I am not sure that
> writing big data  
> blocks triggers many calls to malloc()
> 3)using compression ? zlib could help, but since my data
> does not  
> compress very well (Let's say an average 20% space can
> be saved per  
> block), I am not sure that the compression time will
> balance the  
> writing time.
> 
> Of course, I am only asking for advices regarding your
> experience,  
> there is certainly no exact answer, and it will always
> depend on my  
> data.
> 
> Regards,
> 
> Pierre Chatelier
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] how to divide a database?

2009-03-13 Thread Ken


On the lower half:
attach database 'yourdbname' as full;


insert into table1 select * from full.table1 where col1 < 'your mid point 
value';

On the upper half:
attach database 'yourdbname' as full;
insert into table1 select * from full.table1 where col1 >= 'your mid point 
value';


You'll need a unique index or primary key on the tables. In the above, col1 is
your unique column or primary key.

HTH
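[Editor's note: the recipe above, sketched end-to-end with Python's sqlite3 module. Table and column names follow the thread; the literal 3 stands in for "your mid point value".]

```python
import os
import sqlite3
import tempfile

# Build the "full" database on disk so the halves can ATTACH it.
path = os.path.join(tempfile.mkdtemp(), "full.db")
full = sqlite3.connect(path)
full.execute("CREATE TABLE table1(col1 INTEGER PRIMARY KEY)")
full.executemany("INSERT INTO table1 VALUES (?)", [(i,) for i in (1, 2, 3, 4)])
full.commit()
full.close()

def make_half(where_clause):
    """Create a new database holding the rows selected from the full one."""
    half = sqlite3.connect(":memory:", isolation_level=None)  # autocommit
    half.execute("CREATE TABLE table1(col1 INTEGER PRIMARY KEY)")
    half.execute("ATTACH DATABASE ? AS full", (path,))
    half.execute("INSERT INTO table1 SELECT * FROM full.table1 WHERE " + where_clause)
    half.execute("DETACH DATABASE full")
    return half

lower = make_half("col1 < 3")     # midpoint value 3, as in the post
upper = make_half("col1 >= 3")
```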



--- On Fri, 3/13/09, baxy77bax  wrote:

> From: baxy77bax 
> Subject: Re: [sqlite] how to divide a database?
> To: sqlite-users@sqlite.org
> Date: Friday, March 13, 2009, 3:21 AM
> ok,
>  and how to construct a query that says; select data from
> table until row
> number = 'middle point or so...'.
> 
> or 
> 
> generally how to say; select 30 rows then transfer it into
> new table , then
> select next 30 rows and do the same... and so on until you
> reach the end 
> 
> iteration is not such a problem but the problem is ::: 
> 
> 
> how to say: select 30 rows   ->  this is the actual
> question !
> 
> thnx 
> 
> 
> 
> 
> ken-33 wrote:
> > 
> > 
> > You'll need to know something about your data.
> You'll need the 
> > midpoint of each table so or at least something close.
> > 
> > Then create two db's attach the original. And
> insert the data using a
> > select statement with a where clause.
> > 
> > 
> > 
> > --- On Thu, 3/12/09, baxy77bax
>  wrote:
> > 
> >> From: baxy77bax 
> >> Subject: Re: [sqlite] how to divide a database?
> >> To: sqlite-users@sqlite.org
> >> Date: Thursday, March 12, 2009, 4:13 PM
> >> ok, i have one database with two tables and i want
> to create
> >> two databases ,
> >> each with two tables in which contain half of the
> initial
> >> data
> >> 
> >> input:
> >> 
> >> db
> >> table1  table2
> >> 1       1
> >> 2       2
> >> 3       3
> >> 4       4
> >> 
> >> 
> >> result:
> >> 
> >> db1              db2
> >> table1  table2   table1  table2
> >> 1       1        3       3
> >> 2       2        4       4
> >> 
> >> 
> >> Jim Dodgen wrote:
> >> > 
> >> > I am confused also.
> >> > 
> >> > Are you wanting to put half the rows from
> each table
> >> in each database?
> >> > 
> >> > On Thu, Mar 12, 2009 at 2:01 PM, baxy77bax
> >>  wrote:
> >> >>
> >> >> hi,
> >> >> let say that i have one db with 2 tables
> in it.
> >> and now i wish to  divide
> >> >> that database in 2 equal (half size of
> the
> >> original) databases. two
> >> >> tables
> >> >> in initial database have one column. how
> would i
> >> do that ?:confused:
> >> >> any suggestions
> >> >>
> >> >> thank you
> >> >> --
> >> >> View this message in context:
> >> >>
> >>
> http://www.nabble.com/how-to-divide-a-database--tp22485238p22485238.html
> >> >> Sent from the SQLite mailing list archive
> at
> >> Nabble.com.
> >> >>
> >> >>
> ___
> >> >> sqlite-users mailing list
> >> >> sqlite-users@sqlite.org
> >> >>
> >>
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> >> >>
> >> > 
> >> > 
> >> > 
> >> > -- 
> >> > Jim Dodgen
> >> > j...@dodgen.us
> >> >
> ___
> >> > sqlite-users mailing list
> >> > sqlite-users@sqlite.org
> >> >
> >>
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> >> > 
> >> > 
> >> 
> >> -- 
> >> View this message in context:
> >>
> http://www.nabble.com/how-to-divide-a-database--tp22485238p22485445.html
> >> Sent from the SQLite mailing list archive at
> Nabble.com.
> >> 
> >> ___
> >> sqlite-users mailing list
> >> sqlite-users@sqlite.org
> >>
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> >
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> > 
> > 
> 
> -- 
> View this message in context:
> http://www.nabble.com/how-to-divide-a-database--tp22485238p22492011.html
> Sent from the SQLite mailing list archive at Nabble.com.
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] how to divide a database?

2009-03-12 Thread Ken

You'll need to know something about your data. You'll need the 
midpoint of each table, or at least something close.

Then create two db's attach the original. And insert the data using a select 
statement with a where clause.



--- On Thu, 3/12/09, baxy77bax  wrote:

> From: baxy77bax 
> Subject: Re: [sqlite] how to divide a database?
> To: sqlite-users@sqlite.org
> Date: Thursday, March 12, 2009, 4:13 PM
> ok, i have one database with two tables and i want to create
> two databases ,
> each with two tables in which contain half of the initial
> data
> 
> input:
> 
> db
> table1  table2
> 1       1
> 2       2
> 3       3
> 4       4
> 
> 
> result:
> 
> db1              db2
> table1  table2   table1  table2
> 1       1        3       3
> 2       2        4       4
> 
> 
> Jim Dodgen wrote:
> > 
> > I am confused also.
> > 
> > Are you wanting to put half the rows from each table
> in each database?
> > 
> > On Thu, Mar 12, 2009 at 2:01 PM, baxy77bax
>  wrote:
> >>
> >> hi,
> >> let say that i have one db with 2 tables in it.
> and now i wish to  divide
> >> that database in 2 equal (half size of the
> original) databases. two
> >> tables
> >> in initial database have one column. how would i
> do that ?:confused:
> >> any suggestions
> >>
> >> thank you
> >> --
> >> View this message in context:
> >>
> http://www.nabble.com/how-to-divide-a-database--tp22485238p22485238.html
> >> Sent from the SQLite mailing list archive at
> Nabble.com.
> >>
> >> ___
> >> sqlite-users mailing list
> >> sqlite-users@sqlite.org
> >>
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> >>
> > 
> > 
> > 
> > -- 
> > Jim Dodgen
> > j...@dodgen.us
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> >
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> > 
> > 
> 
> -- 
> View this message in context:
> http://www.nabble.com/how-to-divide-a-database--tp22485238p22485445.html
> Sent from the SQLite mailing list archive at Nabble.com.
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite Transaction Rate and speed...

2009-03-12 Thread Ken


Calling dbh->do("BEGIN") / dbh->do("COMMIT") should get rid of the
automatic transactions. The transactions are still atomic.

I'm just guessing, but I'd suspect the DBI interface.

Can you rewrite it in C and call the SQLite API directly?
You'll get better performance by creating statement handles and preparing
those handles once. Then use reset/bind to rebind data to the handles.

Hope that helps.


--- On Thu, 3/12/09, VF  wrote:

> From: VF 
> Subject: Re: [sqlite] SQLite Transaction Rate and speed...
> To: sqlite-users@sqlite.org
> Date: Thursday, March 12, 2009, 12:57 AM
> Great suggestion!
> 
> I wonder however how should I implement it though... I am
> upserting tens of
> millions of rows in chunks of several hundreds thousands
> (naturally I can't
> do all this in memory). SQLite tutorial says if each
> update/insert is not
> resulting in transaction, it can make up to 50k updates a
> second. 
> 
> I tried calling:
> 
> my $dbh = DBI->connect_cached(  # connect to
> your database, create
> if needed
>   "dbi:SQLite:dbname=$dbpath", # DSN: dbi,
> driver,
> database file
>   "",  # no user
>   "",  # no password
>   { RaiseError => 1, AutoCommit => 0 },
> #
> complain if something goes wrong
>   ) or die $DBI::errstr;
>   
>   # set pragmas
>   $dbh->do('pragma synchronous=off;');
>   $dbh->do('PRAGMA default_cache_size = 1;');
>   $dbh->do('pragma page_size=8192;');
>   $dbh->do('pragma temp_store=memory;');
> 
> before the actual upserts, however it didn't seem to
> help. 
> 
> Now, if in my bulk_upser() function I do this:
>   $dbh->do("BEGIN");
>   foreach  {
>   $dbh-do("insert or ignore")
>   }
>   $dbh->do("COMMIT");
> 
> ...would that work? How do I get rid of these atomic
> transactions?
> 
> In general though - I am a little surprised. This is a
> seemingly simple task
> - hundreds of thousands of upserts in a non-shared DB -
> requiring such
> "gymnastics" and taking hours to complete - seem
> to be neither "SQL" nor
> "lite". Or is it a problem of Perl DBI interface?
> 
> Thanks a lot in advance, Bobby
> 
> > Is it possible for you to do the following?  The
> INSERT should fail
> > silently if you violate a uniqueness constraint; we
> also set the initial
> > counter to 0 as it will be immediately bumped to 1 by
> the UPDATE.  You
> > should be able to wrap the whole thing in a
> transaction.
> > 
> > ***
> > 
> > INSERT OR IGNORE INTO MAPPINGS_$idx
> >  (key, mapping, rank, counter, timeCreated,
> timeModified)
> >values (?, ?, 1, 0, CURRENT_TIMESTAMP,
> CURRENT_TIMESTAMP);
> > 
> > UPDATE MAPPINGS_$idx SET counter = counter + 1, 
> >   timeModified = CURRENT_TIMESTAMP WHERE key = ? AND
> mapping = ?;
> > 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite Transaction Rate and speed...

2009-03-06 Thread Ken


Look at the SQL syntax for INSERT OR REPLACE in SQLite.

Also, your goal of handling a million operations per minute is probably going to be dependent
upon your hardware.

For instance throughput greatly increases with disk striping. 

Also the faster the RPM of the drive the more transactions can be processed. 
Code it up and find out!
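[Editor's note: one way to collapse the exists-check / read / modify / write cycle from the question into a single statement is INSERT OR REPLACE; a sketch with made-up table and column names.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters(key TEXT PRIMARY KEY, n INTEGER)")

def bump(key):
    # If the row exists its count is carried forward, otherwise it starts at 0;
    # the REPLACE handles both cases in one round trip.
    conn.execute(
        """INSERT OR REPLACE INTO counters(key, n)
           VALUES (?, COALESCE((SELECT n FROM counters WHERE key = ?), 0) + 1)""",
        (key, key))

for _ in range(3):
    bump("a")
bump("b")
n_a = conn.execute("SELECT n FROM counters WHERE key = 'a'").fetchone()[0]
```

Wrapping many bump() calls in one transaction (see the BEGIN/COMMIT advice elsewhere in this thread) is what would get the rate anywhere near millions per minute.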



--- On Fri, 3/6/09, Nuzzi  wrote:

> From: Nuzzi 
> Subject: [sqlite]  SQLite Transaction Rate and speed...
> To: sqlite-users@sqlite.org
> Date: Friday, March 6, 2009, 2:27 PM
> I have a project where I have to be determining if a row
> exists, if so get
> the data, change it, and then write it back, if not, then
> just writing the
> data.  I have to be able to do millions of these per
> minute.  Is that pretty
> much impossible with SQLite or any other DB?
> 
> Thanks,
> 
> John
> -- 
> View this message in context:
> http://www.nabble.com/SQLite-Transaction-Rate-and-speed...-tp22379931p22379931.html
> Sent from the SQLite mailing list archive at Nabble.com.
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] shared cache and SQLITE_MISUSE on sqlite3_step()

2009-03-04 Thread Ken

Marcus,

I'm not sure if this will help or not...

But I noticed your sample code does not test the return value from the 
sqlite3_prepare_v2 call. Could the code be entering the do  loop when an error 
was returned from prepare?

Just an idea.

Have you tried the reset call as DRH had suggested? 
Your prior post indicated you had more code and another loop. Can you post the 
full code for both loops?

HTH



--- On Wed, 3/4/09, Marcus Grimm  wrote:

> From: Marcus Grimm 
> Subject: Re: [sqlite] shared cache and SQLITE_MISUSE on sqlite3_step()
> To: kennethinbox-sql...@yahoo.com, "General Discussion of SQLite Database" 
> 
> Date: Wednesday, March 4, 2009, 4:31 PM
> Ken,
> 
> you are of course right that it needs some checks
> for locks and busy states. I left that out to
> simplify the code given below. My original code
> checks that and it usually works quite well.
> that's basically the reason why I was puzzled
> by the randomly MISUSE results after I added the shared
> cache.
> 
> I'm not yet finished with my debugging since it is a
> multithreading/collision issue - ugly to trace.
> 
> My feeling is that it is related to the condition when
> one thread is attempting or holding an exclusive lock
> while another thread is just doing an sqlite_step (read
> only)
> on an allready created statement. Both threads use their
> own
> DB connections. For example: when I do a sqlite_reset
> right after sqlite_step returns SQLITE_MISUSE, as Richard
> suggest,
> I get immediately a SQLITE_LOCK return code from
> sqlite_reset in this case. Why I didn't get that before
> or
> from the sqlite_prepare ?
> 
> Anyway, I'm going to work around this. Also I
> would like
> to mention once more that it is only during an artificial
> stress
> test, mainly to verify my implementation. Under normal
> user conditions it is very unlikely to happen and sqlite
> works perfectly as expected.
> 
> Thanks
> 
> Marcus
> 
> >
> > Marcus,
> >
> > You might want to also add some checks in for
> sqlite_busy as on the result
> > of the prepare and the first call to sqlite_step.
> >
> >
> > On the inner loop test for the most common case first
> (SQLITE_ROW) then
> > test for errors... Slight performance improvement...
> >
> >
> > --- On Wed, 3/4/09, Marcus Grimm
>  wrote:
> >
> >> From: Marcus Grimm 
> >> Subject: Re: [sqlite] shared cache and
> SQLITE_MISUSE on sqlite3_step()
> >> To: "General Discussion of SQLite
> Database" 
> >> Date: Wednesday, March 4, 2009, 10:25 AM
> >> Richard, thanks again for the feedback.
> >>
> >> However, I don't see how it can happend that
> the
> >> statement
> >> is completed internally without returning
> SQLITE_DONE.
> >> In the particular code of the "reading
> thread" I
> >> do something like:
> >>
> >> --
> >> sqlite3_prepare_v2(db, "SELECT * FROM
> TableA",
> >> -1, &stmt, 0);
> >> /** step throu table result **/
> >> do
> >> {
> >>rc = sqlite3_step(stmt);
> >>if( rc == SQLITE_MISUSE )
> >>{  fprintf(stderr, "ERROR...\n");
> break; }
> >>else
> >>if( rc == SQLITE_ROW )
> >>  read_data_etc();
> >>else
> >>  break;
> >> }while( rc != SQLITE_DONE );
> >>
> >> sqlite3_finalize(stmt);
> >> --
> >>
> >> The prepare statement pointer is defined locally
> and no
> >> other
> >> thread can access it, except sqlite internal
> maybe.
> >> To me it looks that in case a parallel thread is
> inserting
> >> or
> >> updating data, the above loop is somehow affected
> and
> >> returns the
> >> MISUSE.
> >>
> >> Your reply so far indicates either a bug on my
> side or a
> >> missusage.
> >> I'll go on and try to find the reason why the
> magic
> >> number is resetted
> >> without knowing from calling functions.
> >>
> >> calling sqlite3_reset in these cases is difficult
> for me
> >> since it might
> >> not be the first step and previous data is
> allready in use
> >> in the upper loop.
> >> But maybe an option for the very first step if
> that
> >> fails... hm... I'll try that.
> >>
> >>
> >> Thanks again
> >>
> >> kind regards
> >>
> >> Marcus Grimm
> >>
> >>
> >>
> >> D

Re: [sqlite] shared cache and SQLITE_MISUSE on sqlite3_step()

2009-03-04 Thread Ken

Marcus,

You might also want to add some checks for SQLITE_BUSY on the result
of the prepare and the first call to sqlite3_step.


In the inner loop, test for the most common case first (SQLITE_ROW), then
test for errors. Slight performance improvement.


--- On Wed, 3/4/09, Marcus Grimm  wrote:

> From: Marcus Grimm 
> Subject: Re: [sqlite] shared cache and SQLITE_MISUSE on sqlite3_step()
> To: "General Discussion of SQLite Database" 
> Date: Wednesday, March 4, 2009, 10:25 AM
> Richard, thanks again for the feedback.
> 
> However, I don't see how it can happend that the
> statement
> is completed internally without returning SQLITE_DONE.
> In the particular code of the "reading thread" I
> do something like:
> 
> --
> sqlite3_prepare_v2(db, "SELECT * FROM TableA",
> -1, &stmt, 0);
> /** step throu table result **/
> do
> {
>rc = sqlite3_step(stmt);
>if( rc == SQLITE_MISUSE )
>{  fprintf(stderr, "ERROR...\n"); break; }
>else
>if( rc == SQLITE_ROW )
>  read_data_etc();
>else
>  break;
> }while( rc != SQLITE_DONE );
> 
> sqlite3_finalize(stmt);
> --
> 
> The prepare statement pointer is defined locally and no
> other
> thread can access it, except sqlite internal maybe.
> To me it looks that in case a parallel thread is inserting
> or
> updating data, the above loop is somehow affected and
> returns the
> MISUSE.
> 
> Your reply so far indicates either a bug on my side or a
> missusage.
> I'll go on and try to find the reason why the magic
> number is resetted
> without knowing from calling functions.
> 
> calling sqlite3_reset in these cases is difficult for me
> since it might
> not be the first step and previous data is allready in use
> in the upper loop.
> But maybe an option for the very first step if that
> fails... hm... I'll try that.
> 
> 
> Thanks again
> 
> kind regards
> 
> Marcus Grimm
> 
> 
> 
> D. Richard Hipp wrote:
> > On Mar 4, 2009, at 9:35 AM, Marcus Grimm wrote:
> > 
> >> hi,
> >>
> >> OK, the value of p->magic is 519C2973
> (VDBE_MAGIC_HALT)
> > 
> > That means the prepared statement has run to
> completion and needs to  
> > be reset using sqlite3_reset() before you continue.
> > 
> > D. Richard Hipp
> > d...@hwaci.com
> > 
> > 
> > 
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> >
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> > 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite vs. Oracle (parallelized)

2009-02-23 Thread Ken
I fully agree with DRH regarding SQLite and single-user performance. If you
need to replace fopen() then SQLite is a really, really great product. Even if
you have some mild concurrency, SQLite still does pretty darned well.

Oracle excels when you have many users that require concurrent database 
changes. 

Oracle performs row level locking and Multi-Versioning on data blocks to 
achieve concurrency.

Oracle can perform parallel queries, but the best use of parallel query is
for full table scans where the entire table is read and there are equally
sized segments for scanning by the query slaves. Otherwise parallel query
probably won't help much.



--- On Mon, 2/23/09, D. Richard Hipp  wrote:

> From: D. Richard Hipp 
> Subject: Re: [sqlite] SQLite vs. Oracle (parallelized)
> To: "General Discussion of SQLite Database" 
> Date: Monday, February 23, 2009, 3:28 PM
> On Feb 23, 2009, at 3:54 PM, pyt...@bdurham.com wrote:
> 
> > Dr. Hipp,
> >
> > When you say "SQLite is way faster than Oracle in
> a single-user
> > applications" do you mean that SQLite can be
> faster than Oracle even
> > when Oracle's parallel processing features are
> being used? For example
> > Oracle's support for parallelization can speed up
> table loading from  
> > an
> > external data source, certain SQL selects, and certain
> indexing
> > operations.
> 
> I don't run Oracle and have no way of verifying the
> following.  But I  
> conjecture that from a cold start, you can launch an
> application that  
> uses SQLite, have it do a dozen or so queries, print out
> the answer,  
> and shut down, all before the Oracle server has even booted
> up to the  
> point where it will accept connections.  Correct me if I am
> wrong.
> 
> Perhaps Oracle will run a gazillion more transactions per
> second,  
> given enough memory and CPUs, and once you get it up and
> going.  I  
> have no way of knowing.  But then again, that isn't
> really the point  
> of SQLite.
> 
> 
> >
> >
> > Are there any plans to enhance SQLite to support some
> of Oracle's
> > parallel processing or partitioning capabilities?
> 
> Remember:  SQLite is not trying to replace Oracle.  SQLite
> is trying  
> to replace fopen().
> 
> For people who are using Oracle as a replacement for
> fopen() (as  
> apparently Angela is) they will likely find that SQLite
> makes a far  
> superior replacement.  Or to put it another way, people who
> are using  
> Oracle for a single-user application (low concurrency) will
> likely  
> find that SQLite works much better for them.  It has been
> my  
> experience that old-time Oracle users are incredulous at
> this  
> statement, until they actually see a live demonstration. 
> So I won't  
> try to argue the point.  It is merely my observation.
> 
> On the other hand, nobody thinks that SQLite is a suitable
> database  
> when you have 1000 separate connections beating on the
> database all at  
> once.
> 
> 
> 
> >
> >
> > Thank you,
> > Malcolm
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> >
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> D. Richard Hipp
> d...@hwaci.com
> 
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Unable to close database due to unfinalized statements

2009-02-11 Thread Ken
Try sqlite3_finalize() at the end instead of sqlite3_reset().


--- On Wed, 2/11/09, krishnakumar...@luckymail.com 
 wrote:

> From: krishnakumar...@luckymail.com 
> Subject: [sqlite] Unable to close database due to unfinalized statements
> To: sqlite-users@sqlite.org
> Date: Wednesday, February 11, 2009, 2:28 AM
> Hi ,
> I am using sqlite with VC++. I tried to insert a row into the
> database with a prepared statement using bind operations. I
> reset the statement after using it, but when I try to close
> the DB it gives the error "Unable to close database due to
> unfinalized statements".
> I used a query like
> 
> 
> _T("INSERT INTO MY_TABLE VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?)")
> 
> "INSERT INTO MY_TABLE VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?)"
> 
> and bind operations on it.
> 
> Regards
> 
> 
> Krishna Kumar T M
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] out of memory with sqlite3 shell

2009-02-11 Thread Ken
Have you checked your environment variables, such as LD_PRELOAD?
Also, you can try running strace to trace system calls.




--- On Wed, 2/11/09, Roberto Lumbreras  wrote:

> From: Roberto Lumbreras 
> Subject: Re: [sqlite] out of memory with sqlite3 shell
> To: "General Discussion of SQLite Database" 
> Date: Wednesday, February 11, 2009, 7:15 AM
> I'm using the default allocator, and it is a uclinux
> system without dynamic
> libraries, everything must be statically linked, so ltrace
> can't  help
> (anyway, I think there is no ltrace available for this
> machine).
> 
> I'll try to use gdbserver to debug it, but I really
> don't understand why
> sqlite3 is doing different things (working/not working)
> depending on which
> filesystem the database file is located.
> 
> Salud,
> Roberto Lumbreras
> 
> 
> On Tue, Feb 10, 2009 at 9:42 PM, Roger Binns
>  wrote:
> 
> >
> > Roberto Lumbreras wrote:
> > > With strace the problem is clear: sqlite3 calls
> "old_mmap" with length=0.
> > > I'm running version 2.6.25-uc0 of uCLinux, so
> mmap returns EINVAL because
> > > length==0, then sqlite3 says "out of
> memory".
> >
> > The default SQLite 3 code never calls mmap.  Go ahead
> and grep the code
> > to see!  Are you using a custom allocator?  If not
> then your malloc
> > library is the problem.
> >
> > The ltrace command can help find the library routines
> responsible.
> >
> > Roger
> >
> >
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Multi-threading problem!

2009-01-28 Thread Ken
Create a connection for each thread.



--- On Wed, 1/28/09, Anatoly Stepanov  wrote:

> From: Anatoly Stepanov 
> Subject: [sqlite] Multi-threading problem!
> To: "sqlite-users@sqlite.org" 
> Date: Wednesday, January 28, 2009, 7:58 AM
> Hello!
> I use the latest (3.6.10) version of the SQLite library.
> I was trying to perform a "select" query on one
> table inside a single database connection from different
> threads (up to 8 threads).
> In summary I had 50% CPU usage (2-core CPU) - this is a
> problem! I tried to find the reason in Debug mode and
> I found that for most of the execution time one
> thread was executing while the other threads waited for it.
> So I have at least two questions:
> 1. What may be the reason for this lack of performance?
> 2. Is there a RIGHT way to use SQLite from multiple
> threads?
> 
> 
> Best regards!
> 
> 


Re: [sqlite] Transaction behavior with large numbers of inserts

2009-01-27 Thread Ken
Nope, you're not missing anything. That's how it works.

It acquires the lock when it must spill its dirty cache to disk!

What happens is that the rollback journal holds the original pages while the new 
writes (inserts) go to the database file. The writing thread/connection takes an 
exclusive lock, copies the original pages into the journal, and then writes the 
modified pages to the database file.
Upon commit the journal file is deleted; a rollback copies the pages from the 
journal back into the original database file.


You are probably going to have to commit more frequently to allow readers 
access.

SQLite does not do any kind of multi-versioning of database pages, so you cannot 
have one connection reading while another is writing.

HTH
Ken
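A sketch of the batched-commit approach suggested above, in Python's stdlib sqlite3 binding rather than the C API (the pattern is the same with sqlite3_exec("BEGIN")/("COMMIT") in C): instead of one giant transaction, commit every N inserts so the exclusive lock is released periodically and readers can get in between batches. The batch size, path, and table are illustrative.

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "batch.db")

# isolation_level=None gives manual transaction control (no implicit BEGIN).
conn = sqlite3.connect(db_path, isolation_level=None)
conn.execute("CREATE TABLE log(v INTEGER)")

BATCH = 1000   # hypothetical batch size; tune against memory/latency needs
TOTAL = 5000

conn.execute("BEGIN")
for i in range(TOTAL):
    conn.execute("INSERT INTO log VALUES (?)", (i,))
    # Commit every BATCH rows: the exclusive lock (and journal) is
    # released here, giving waiting readers a chance to run.
    if (i + 1) % BATCH == 0:
        conn.execute("COMMIT")
        conn.execute("BEGIN")
conn.execute("COMMIT")

(n,) = conn.execute("SELECT count(*) FROM log").fetchone()
conn.close()
```

The trade-off is that a crash mid-run leaves earlier batches committed, so this only suits workloads where partial progress is acceptable.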



--- On Tue, 1/27/09, Ian Frosst  wrote:

> From: Ian Frosst 
> Subject: [sqlite]  Transaction behavior with large numbers of inserts
> To: sqlite-users@sqlite.org
> Date: Tuesday, January 27, 2009, 6:49 PM
> I have an application which on a single thread writes data
> to a database.
> This writing can result in millions of writes to the
> database in a single
> transaction (started with BEGIN TRANSACTION.)  I also have
> other threads and
> processes which read the same database, but never write to
> it, and never
> start transactions  (they just issue select commands
> against it.)  In the
> writing application, I have one thread which originally
> opens the database,
> but then hands it off to the writer thread, and never
> touches it again.
> 
> The issue that I am seeing is that when the transaction is
> started,
> everything behaves as expected; the writer happily calls
> inserts within his
> connection, and they go to wherever the isolated
> modification data goes,
> which I presume is the journal file.  Initially, other
> processes and threads
> can freely read the database, and all is good.  Fast
> forward a bit, to when
> a large number of inserts have occurred.  It seems that
> when the page cache
> gets filled up, SQLite tries to find some free pages, which
> causes an
> exclusive lock to be obtained within the context of the
> transaction, that is
> not relinquished until the completion of the transaction. 
> This causes some
> pain, because my readers that were happily able to read
> from the original
> database are now stopped up until the potentially long
> running transaction
> completes (they either get database is locked errors, or in
> my case, their
> infintely waiting busy handlers do just that.)
> 
> My question is; am I assessing the situation correctly, and
> if so, is there
> anything I can do to avoid this rather nasty situation?  I
> would rather not
> take the periodic-commit approach, and increasing the page
> cache size would
> potentially cause the use of more memory than I can spare. 
> Disk usage,
> however, is not a problem; I can use as much as is
> necessary.
> 
> Thanks very much.

