Re: [sqlite] [EXTERNAL] Re: Patch: VTable Column Affinity Question and Change Request (Withdrawn)

2020-02-07 Thread Simon Slavin
On 8 Feb 2020, at 4:05am, Keith Medcalf  wrote:

> filename=FILENAME is the csv filename in quotes.

Please mention in documentation whether full or partial paths can be included.

> header=BOOL   will "parse" the first row to get the column names unless a 
> SCHEMA is specified.

Looks good until I see

> skip=N 

Perhaps it would be better to use "header=N" to nominate a header row.  The 
first row is row 1.  "header=0" means "there is no header row".

Having made this change, "skip=N" says how many lines after the header line 
should be skipped.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] Re: Patch: VTable Column Affinity Question and Change Request (Withdrawn)

2020-02-07 Thread Keith Medcalf

I don't think that patch to apply affinities is a good idea. It will usually be 
mostly useless and will negatively impact performance, and one can and should 
assume that the actual author of the VTable knows what they are doing. 
Currently SQLite3 does not enforce declared VTable column affinities, and it 
should stay that way.  That said, although the definitions of the column 
affinities are stored in the internal schema, it is not trivial to access them 
from a non-builtin extension.

Instead I have once again changed the vsv.c VTable plugin so that it does some 
more useful things.

The parameters below work exactly as they did in the csv.c extension, with a 
single exception: if you specify all of header, schema and columns, the header 
row will now still be skipped.  In csv.c, if you specified both schema and 
columns, the header row was not skipped even if you specified that it existed.

filename=FILENAME is the csv filename in quotes.
data=DATA provides the data inline.
header=BOOL   will "parse" the first row to get the column names unless a 
SCHEMA is specified.
schema=STRING allows the specification of the schema to use
columns=N number of columns in VTable


New parameters:


skip=N 

will skip the specified number of rows at the start of the CSV file, exclusive 
of the header row (ie, it skips data rows). If the header is specified to exist 
via the header option, it is always skipped.


fsep=SEP and rsep=SEP

allow you to specify the particular separators to be used.  The defaults are 
fsep=',' and rsep='\n', so the default behaviour is the same as the original.  
The separator can be any single character, a supported escape code (\t \v \n \f 
for horizontal-tab, vertical-tab, newline and form-feed), or a hexadecimal 
coded byte as in '\x1f'.  Multibyte separators cannot be specified.


affinity=AFF 

where the affinity can be NONE, BLOB, TEXT, INTEGER, REAL, NUMERIC with the 
default being NONE.  NONE does nothing different from the existing csv.c VTable 
(unless you also specify validatetext).  However, if you specify an affinity it 
applies to *every* returned value and makes the following changes:
 - BLOB returns all data as BLOB
 - TEXT returns all data as TEXT
 - INTEGER returns anything that looks like an integer as an INTEGER, 
otherwise as TEXT
 - REAL returns anything that looks like a number as a REAL (double), 
otherwise as TEXT
 - NUMERIC returns anything that looks like a number as either an INTEGER or a 
REAL, otherwise as TEXT
   - if the field "looks like an integer" then an INTEGER is returned
   - if the field "looks like a number" then a REAL is returned unless it can 
be converted to an integer
"looks like an integer" effectively means matching the regex 
"^ *[+-]?\d+ *$"
"looks like a number" effectively means matching the regex 
"^ *[+-]?(\d+\.?\d*|\d*\.?\d+)([eE][+-]?\d+)? *$"
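The two "looks like" tests can be sketched with those same regexes 
(hypothetical Python helpers; vsv.c does this in C via the runtime library):

```python
import re

# Regexes taken from the description above.
INT_RE = re.compile(r"^ *[+-]?\d+ *$")
NUM_RE = re.compile(r"^ *[+-]?(\d+\.?\d*|\d*\.?\d+)([eE][+-]?\d+)? *$")

def looks_like_integer(field: str) -> bool:
    return INT_RE.match(field) is not None

def looks_like_number(field: str) -> bool:
    return NUM_RE.match(field) is not None

def apply_numeric_affinity(field: str):
    """NUMERIC affinity: INTEGER if possible, else REAL, else TEXT."""
    if looks_like_integer(field):
        return int(field)
    if looks_like_number(field):
        return float(field)
    return field
```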

Processing is limited by the runtime C library, so the detection is not quite 
as good as the affinities built into the SQLite3 core.  This means that 
something that looks like an integer may fail to load as an integer because it 
is too big (918273745985736365575984857636253857564363 looks like an integer; 
the SQLite3 core will recognize this and load it as a double, but the C library 
is not so smart and will probably simply return MAX_INT).

Processing is also limited by the compiler.  Some compilers (wave at Microsoft) 
do not treat long double as more precise than a regular double (ie, 10 or 16 
bytes rather than 8) and treat it simply as a plain old double (some compilers, 
like Intel, need a compilation option before long doubles are actually longer 
than a regular double).  This case is detected 
(sizeof(long double) == sizeof(double)), so that only "integers" that can be 
entirely contained in the mantissa are converted to integer; the others remain 
as double.


validatetext=BOOL

will cause TEXT fields to be validated as containing a valid UTF-8 encoding 
sequence (no content check is performed; only the validity of the encoding is 
checked).  If this is turned on together with any type of affinity (ie, other 
than NONE), then improperly encoded UTF-8 text will be returned as a BLOB.  If 
no affinity is in effect (ie, affinity=NONE), an error will be thrown rather 
than letting a text field contain invalid data.  This means that fields 
containing embedded nulls (rather than encoded 0 bytes) will not be able to be 
stored as TEXT and will either be of BLOB type or throw an invalid encoding 
error.
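A sketch of the encoding-only check (hypothetical helper; note that in C an 
embedded NUL also truncates the string, whereas Python's decoder accepts it):

```python
def is_valid_utf8(raw: bytes) -> bool:
    """Check only that the bytes form a valid UTF-8 sequence;
    the decoded content itself is not inspected."""
    try:
        raw.decode("utf-8", errors="strict")
        return True
    except UnicodeDecodeError:
        return False
```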


nulls=BOOL

when enabled, empty fields will be returned as NULL rather than as empty 
strings.  An empty field means a field with no content (the separators are side 
by side).  Explicitly empty strings are not changed: the middle column in a,,b 
is a NULL column, but in a,"",b it is not.

-- 
The fact that there's a Highway to Hell but only a 

Re: [sqlite] loading extension csv.c

2020-02-07 Thread Richard Damon

On 2/7/20 3:14 PM, Jens Alfke wrote:



On Feb 7, 2020, at 9:11 AM, chiahui chen  wrote:

/usr/include/sqlite3ext.h:437:53: note: expanded from macro
'sqlite3_vsnprintf'

#define sqlite3_vsnprintf  sqlite3_api->vsnprintf

   ~~~  ^

/usr/include/secure/_stdio.h:75:3: note: expanded from macro 'vsnprintf'

  __builtin___vsnprintf_chk (str, len, 0, __darwin_obsz(str), format, ap)

  ^

This appears to be your problem. The system header <secure/_stdio.h> is 
defining `vsnprintf` as a macro that expands to a compiler builtin. This is 
conflicting with a struct field named `vsnprintf` in the SQLite extension API.

I've never heard of <secure/_stdio.h> before, although it does exist in the 
macOS SDK. Looking through the normal <stdio.h>, it does include that header at 
the end:

#if defined (__GNUC__) && _FORTIFY_SOURCE > 0 && !defined (__cplusplus)
/* Security checking functions.  */
#include <secure/_stdio.h>
#endif

So it looks like the trigger is that you're somehow building with 
_FORTIFY_SOURCE defined, and the others are not.

Anyway, I think you could work around the problem by editing csv.c and 
inserting something like this at the top:
#include <stdio.h>
#undef vsnprintf
Or else figuring out how to turn off _FORTIFY_SOURCE.

—Jens

PS: Your use of `gcc` in the command line confused me briefly — turns out `gcc` 
on macOS is simply an alias for `cc`, so it invokes Clang. If you really want 
GCC for some reason you'd have to install it yourself and put it in your $PATH 
before /usr/bin.

It looks like that header (secure/_stdio.h) is non-conforming. The C 
Standard does allow the stdio.h header to define a macro for the name 
vsnprintf, but that macro must be a *function-like* macro (7.1.4p1 in 
the C17 Standard), which this one appears not to be (a conforming 
function-like macro shouldn't cause a problem with the shown code).


--
Richard Damon



Re: [sqlite] UPSERT documentation question

2020-02-07 Thread nomad
I should perhaps point out that the issue has been solved - the page
has been adjusted. Thanks devs.

-- 
Mark Lawrence


Re: [sqlite] UPSERT documentation question

2020-02-07 Thread nomad
On Fri Feb 07, 2020 at 01:45:53PM +, David Raymond wrote:
> > CREATE TABLE vocabulary(word TEXT PRIMARY KEY, count INT DEFAULT
> > 1); INSERT INTO vocabulary(word) VALUES('jovial') ON CONFLICT(word)
> > DO UPDATE SET count=count+1;
> > 
> > Shouldn't that actually be written as "vocabulary.count+1"?
> 
> Nope. Unqualified names there refer to the one and only record that's
> getting updated.

Your edit of my email broke some context. I was actually referring to
the paragraph after the example SQL, where "vocabulary.count" was
given as being equivalent to "count+1".

> Similar to how in a blanket update statement you would do:
> update vocabulary set count = count + 1;
> ...and not:
> update vocabulary set vocabulary.count = vocabulary.count + 1;
> 
> I mean, it might still work, but it's not needed, no.

It is needed if you have a correlated subquery in the UPDATE
statement and want to refer to the original row.

-- 
Mark Lawrence


Re: [sqlite] loading extension csv.c

2020-02-07 Thread Jens Alfke


> On Feb 7, 2020, at 9:11 AM, chiahui chen  wrote:
> 
> /usr/include/sqlite3ext.h:437:53: note: expanded from macro
> 'sqlite3_vsnprintf'
> 
> #define sqlite3_vsnprintf  sqlite3_api->vsnprintf
> 
>   ~~~  ^
> 
> /usr/include/secure/_stdio.h:75:3: note: expanded from macro 'vsnprintf'
> 
>  __builtin___vsnprintf_chk (str, len, 0, __darwin_obsz(str), format, ap)
> 
>  ^

This appears to be your problem. The system header <secure/_stdio.h> is 
defining `vsnprintf` as a macro that expands to a compiler builtin. This is 
conflicting with a struct field named `vsnprintf` in the SQLite extension API.

I've never heard of <secure/_stdio.h> before, although it does exist in the 
macOS SDK. Looking through the normal <stdio.h>, it does include that header at 
the end:

#if defined (__GNUC__) && _FORTIFY_SOURCE > 0 && !defined (__cplusplus)
/* Security checking functions.  */
#include <secure/_stdio.h>
#endif

So it looks like the trigger is that you're somehow building with 
_FORTIFY_SOURCE defined, and the others are not.

Anyway, I think you could work around the problem by editing csv.c and 
inserting something like this at the top:
#include <stdio.h>
#undef vsnprintf
Or else figuring out how to turn off _FORTIFY_SOURCE.

—Jens

PS: Your use of `gcc` in the command line confused me briefly — turns out `gcc` 
on macOS is simply an alias for `cc`, so it invokes Clang. If you really want 
GCC for some reason you'd have to install it yourself and put it in your $PATH 
before /usr/bin.


Re: [sqlite] notify all processes of database modification

2020-02-07 Thread Jens Alfke


> On Feb 7, 2020, at 6:23 AM, Kees Nuyt  wrote:
> 
> Anyway, SQLite doesn't have such a mechanism by itself.
> Maybe inotify is useful to you :
> 
> https://en.wikipedia.org/wiki/Inotify 
> http://man7.org/linux/man-pages/man7/inotify.7.html 
> 

Or on Apple platforms, FSEvents.

On any platform, you'd need to monitor both the main database and the .wal file.

And the notification would trigger soon after a transaction began making 
changes, although the changes wouldn't be visible to you until the commit, so 
you'd probably need to start polling until you see the changes, with some 
heuristic about timing out if nothing happens for a while (e.g. if the 
transaction is aborted.)
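One way to implement that polling step, sketched here with Python's sqlite3 
module, is SQLite's PRAGMA data_version, whose value changes when a different 
connection commits to the same database file:

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file: a watcher and a writer.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
watcher = sqlite3.connect(path)
writer = sqlite3.connect(path)

def data_version(conn):
    # data_version changes when *another* connection modifies the file.
    return conn.execute("PRAGMA data_version").fetchone()[0]

before = data_version(watcher)

writer.execute("CREATE TABLE t(x)")
writer.commit()

after = data_version(watcher)  # differs from `before` after the commit
```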

—Jens


Re: [sqlite] VTable Column Affinity Question and Change Request

2020-02-07 Thread Jens Alfke


> On Feb 5, 2020, at 6:56 PM, Keith Medcalf  wrote:
> 
>> It probably doesn’t parse that declaration or
>> figure out from it what the declared types of the columns are.
> 
> Actually it does, and this is documented for the sqlite3_declare_vtab 
> function -- and in fact the column names and affinities are parsed and stored 
> in the internal Table schema.  

Sorry, I meant that the _CSV extension_ doesn't parse the 'CREATE TABLE…' 
declaration to figure out what column affinities the caller desires.

SQLite probably stores the column affinities so they can be returned from APIs 
that request them; IIRC isn't there a C API call that tells you the affinity of 
a column?

> The column names are used but the executed VDBE program does not "apply 
> affinity".

I suspect this is for the same reason that led to the immediate objections to 
your proposal: it could hurt performance.

I think your proposal makes sense given that it doesn't slow anything down if 
the extension didn't specify any column affinities. My virtual table doesn't.

—Jens


[sqlite] case-insensitivity of keywords are hardly documented

2020-02-07 Thread Aapo Rantalainen
Hi, I'm just doing some 'software archeology' and I found that:
on Nov 21 01:02:00 2004
FossilOrigin-Name: ac72a1d5518f7b505ae2a1bd3be3d71db461ae7e
git: f8565825622a1ed48bdaa835968a1137b2ffa593

This sentence has been dropped from the documentation:
"Keyword matching in SQLite is case-insensitive."

https://github.com/sqlite/sqlite/commit/f8565825622a1ed48bdaa835968a1137b2ffa593#diff-b43337792fa9656f4e2ae1351e18bee6L1556

Can anybody say what happened at that time? The sentence seems not to have
returned since. Just by using SQLite one can tell that keywords
are case-insensitive, but why is it no longer documented explicitly?

-Aapo Rantalainen


Re: [sqlite] loading extension csv.c

2020-02-07 Thread chiahui chen
Hi Dr. Hipp,

Thank you very much. I tried the suggested solution. sqlite3 was
downloaded  and compiled.
Then ran : gcc -g -l. -fPIC -dynamiclib ./ext/csv.c -o csv.dylib
the same errors occurred again.

./ext/csv.c:115:3: error: no member named '__builtin___vsnprintf_chk' in
'struct sqlite3_api_routines'

  sqlite3_vsnprintf(CSV_MXERR, p->zErr, zFormat, ap);

  ^

/usr/include/sqlite3ext.h:437:53: note: expanded from macro
'sqlite3_vsnprintf'

#define sqlite3_vsnprintf  sqlite3_api->vsnprintf

   ~~~  ^

/usr/include/secure/_stdio.h:75:3: note: expanded from macro 'vsnprintf'

  __builtin___vsnprintf_chk (str, len, 0, __darwin_obsz(str), format, ap)

  ^

./ext/csv.c:115:21: warning: incompatible integer to pointer conversion
passing 'int' to parameter of type

  'const void *' [-Wint-conversion]

  sqlite3_vsnprintf(CSV_MXERR, p->zErr, zFormat, ap);

^

./ext/csv.c:67:19: note: expanded from macro 'CSV_MXERR'

#define CSV_MXERR 200

  ^~~

/usr/include/secure/_stdio.h:75:57: note: expanded from macro 'vsnprintf'

  __builtin___vsnprintf_chk (str, len, 0, __darwin_obsz(str), format, ap)

^~~

/usr/include/secure/_common.h:39:54: note: expanded from macro
'__darwin_obsz'

#define __darwin_obsz(object) __builtin_object_size (object,
_USE_FORTIFY_LEVEL > 1 ? 1 : 0)

 ^~

./ext/csv.c:568:5: error: use of undeclared identifier 'sqlite3_str'

sqlite3_str *pStr = sqlite3_str_new(0);

^

./ext/csv.c:568:18: error: use of undeclared identifier 'pStr'

sqlite3_str *pStr = sqlite3_str_new(0);

 ^

./ext/csv.c:568:25: warning: implicit declaration of function
'sqlite3_str_new' is invalid in C99

  [-Wimplicit-function-declaration]

sqlite3_str *pStr = sqlite3_str_new(0);

^

./ext/csv.c:571:5: warning: implicit declaration of function
'sqlite3_str_appendf' is invalid in C99

  [-Wimplicit-function-declaration]

sqlite3_str_appendf(pStr, "CREATE TABLE x(");

^

./ext/csv.c:571:25: error: use of undeclared identifier 'pStr'

sqlite3_str_appendf(pStr, "CREATE TABLE x(");

^

./ext/csv.c:581:29: error: use of undeclared identifier 'pStr'

sqlite3_str_appendf(pStr, "%sc%d TEXT", zSep, iCol);

^

./ext/csv.c:588:31: error: use of undeclared identifier 'pStr'

  sqlite3_str_appendf(pStr,"%s\"%w\" TEXT", zSep, z);

  ^

./ext/csv.c:597:31: error: use of undeclared identifier 'pStr'

  sqlite3_str_appendf(pStr,"%sc%d TEXT", zSep, ++iCol);

  ^

./ext/csv.c:603:25: error: use of undeclared identifier 'pStr'

sqlite3_str_appendf(pStr, ")");

^

./ext/csv.c:604:18: warning: implicit declaration of function
'sqlite3_str_finish' is invalid in C99

  [-Wimplicit-function-declaration]

CSV_SCHEMA = sqlite3_str_finish(pStr);

 ^

./ext/csv.c:604:37: error: use of undeclared identifier 'pStr'

CSV_SCHEMA = sqlite3_str_finish(pStr);

^

./ext/csv.c:643:27: error: use of undeclared identifier
'SQLITE_VTAB_DIRECTONLY'

  sqlite3_vtab_config(db, SQLITE_VTAB_DIRECTONLY);

  ^

4 warnings and 10 errors generated.

Here is the Mac terminal output from compiling sqlite3, in case it sheds light
on the extension compile errors.

./configure && make

checking for a BSD-compatible install... /usr/bin/install -c

checking whether build environment is sane... yes

checking for a thread-safe mkdir -p... ./install-sh -c -d

checking for gawk... no

checking for mawk... no

checking for nawk... no

checking for awk... awk

checking whether make sets $(MAKE)... yes

checking whether make supports nested variables... yes

checking for style of include used by make... GNU

checking for gcc... gcc

checking whether the C compiler works... yes

checking for C compiler default output file name... a.out

checking for suffix of executables...

checking whether we are cross compiling... no

checking for suffix of object files... o

checking whether we are using the GNU C compiler... yes

checking whether gcc accepts -g... yes

checking for gcc option to accept ISO C89... none needed

checking whether gcc understands -c and -o together... yes

checking dependency style of gcc... gcc3

checking for special C compiler options needed for large files... no

checking for _FILE_OFFSET_BITS value needed for large files... no

checking for gcc... (cached) gcc

checking whether we are using the GNU C compiler... (cached) yes

checking whether gcc accepts -g... (cached) yes

checking for gcc option to accept ISO C89... (cached) none needed

checking whether gcc understands -c and -o together... (cached) yes

checking dependency style 

Re: [sqlite] Please increase the default for SQLITE_MAX_VARIABLE_NUMBER

2020-02-07 Thread Alex Bronstein
Thanks, everyone, for your responses on Tuesday. I wasn't on the mailing
list, so didn't receive them in my email, but I am now, and I can see the
responses on
https://www.mail-archive.com/sqlite-users@mailinglists.sqlite.org/msg118979.html.
Much appreciated!

> Can you show us some place where using 999 parameters is a reasonable way
to use SQLite ?

The use-case I have is that I'm a maintainer of Drupal
, a CMS written in PHP. In Drupal, we support
multiple database backends. Typically, a production site will operate on
MySQL or PostgreSQL, but we also support SQLite, which is convenient for
local testing. We have a database abstraction API that allows module and
site developers to write code like:

$all_results = $db->select('table_foo')
  ->fields('table_foo', ['field_1', 'field_2'])
  ->condition('field_3', $large_array, 'IN')
  ->execute()
  ->fetchAll();

The implementation of execute() above ends up calling a PHP PDO
 statement that's more or less
as follows:

$statement = $pdo->prepare("SELECT field_1, field_2 FROM table_foo WHERE
field_3 IN (?, ?, ... ?)");
return $statement->execute($large_array);


A similar use case is for insertion. For example, a module or site
developer can call:

$upsert = $connection->upsert('kv')->key('k')->fields(['k', 'v']);
foreach ($large_associative_array as $key => $value) {
  $upsert->values(['k' => $key, 'v' => $value]);
}
$upsert->execute();

The execute() above ends up calling a PHP PDO statement that's appropriate
for the database backend. On SQLite prior to 3.24 it's more or less:

$statement = $pdo->prepare("INSERT OR REPLACE INTO kv (k, v) VALUES (?, ?),
(?, ?), ... (?, ?)");
return $statement->execute($values);


In the case of the multi-row insert/upsert, we can change our
implementation to execute multiple PDO statements in succession in order to
keep each one below the 999 limit.

In the case of the IN (...), thank you for your suggestions to use a temp
table. Because there can be multiple calls to ->condition(), including
nested ones, it might be a bit tricky for us to get the temp tables
implemented correctly. We'd need to track multiple temp tables and then
drop them at the correct time, which would break the current encapsulation
of our API, but it's potentially doable with some refactoring. However,
your temp table idea inspired me to come up with an alternate solution of
using json_each(). So that first example could be implemented as:

$statement = $pdo->prepare("SELECT field_1, field_2 FROM table_foo WHERE
field_3 IN (select value from json_each(?))");
return $statement->execute([json_encode($large_array)]);
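The json_each() approach can be sketched directly with Python's sqlite3 module 
(assuming an SQLite build with the JSON1 functions, which recent builds include 
by default; the table and data here are stand-ins for Drupal's):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_foo(field_1, field_2, field_3)")
conn.executemany("INSERT INTO table_foo VALUES (?, ?, ?)",
                 [(1, "a", 10), (2, "b", 20), (3, "c", 30)])

# One bound parameter regardless of list length: the whole array is
# passed as a single JSON string and unpacked by json_each().
large_array = [10, 30]
rows = conn.execute(
    "SELECT field_1, field_2 FROM table_foo "
    "WHERE field_3 IN (SELECT value FROM json_each(?))",
    (json.dumps(large_array),),
).fetchall()
```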


If you're curious, I posted a proof of concept patch for both the upsert
and the large IN condition to
https://www.drupal.org/project/drupal/issues/2031261#comment-13454464.


In summary, yes, I think we can change things in Drupal to work within the
999 limit. I think there might be other PHP projects that could benefit
from a higher limit (for example, ones that don't have Drupal's abstraction
API around PDO), but I don't know enough specifics about those to argue on
their behalf.

Thanks again for your consideration and insights.

Alex.


On Tue, Feb 4, 2020 at 10:27 AM Alex Bronstein 
wrote:

> Prepared statements are good practice and recommended in places such as
> https://www.php.net/manual/en/pdo.prepared-statements.php. There are use
> cases for using them with many items in an IN() clause, or when inserting
> many rows in a single INSERT statement. In such cases, you can easily end
> up with more than 999 parameters.
>
> While sqlite can be compiled with a larger SQLITE_MAX_VARIABLE_NUMBER
> flag, there are situations where the application developer doesn't have
> control over how the system libraries are compiled. For example, a given
> PHP application could run on either a stock Debian/Ubuntu installation, a
> stock Fedora/RHEL/CentOS installation, or other systems. Debian compiles
> sqlite with SQLITE_MAX_VARIABLE_NUMBER=25
>  (issue
> ) whereas
> Fedora doesn't set the flag
>  (
> issue ), so gets
> sqlite's default.
>
> According to some of the answers on
> https://stackoverflow.com/questions/6581573/what-are-the-max-number-of-allowable-parameters-per-database-provider-type,
> PostgreSQL and some other databases support a 16 bit parameter count (64K
> parameters). Given current memory availability, can sqlite's default be
> raised to something similar to that?
>


Re: [sqlite] loading extension csv.c

2020-02-07 Thread Rolf Ade

> On 2/7/20, chiahui chen  wrote:
>>
>> I tried the suggested solution. sqlite3 was
>> downloaded  and compiled.
>> Then ran : gcc -g -l. -fPIC -dynamiclib ./ext/csv.c -o csv.dylib
 ^^^

Is this a typo? This should read

gcc -I. 

>> the same errors occurred again.
>>
>> Is there any suggestion? Just in case if my Mac operating system provides
>> some info (macOS High Sierra version 10.13.6)
>>


Re: [sqlite] loading extension csv.c

2020-02-07 Thread Richard Hipp
On 2/7/20, chiahui chen  wrote:
>
> I tried the suggested solution. sqlite3 was
> downloaded  and compiled.
> Then ran : gcc -g -l. -fPIC -dynamiclib ./ext/csv.c -o csv.dylib
> the same errors occurred again.
>
> Is there any suggestion? Just in case if my Mac operating system provides
> some info (macOS High Sierra version 10.13.6)
>

That command works fine for me on my iMac.  I have no idea why it is
failing for you.

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] notify all processes of database modification

2020-02-07 Thread Kees Nuyt
On Fri, 7 Feb 2020 00:13:00 -0700 (MST), you wrote:

> The documents are no longer available, can you kindly share it to my mail.
> I'll be extremely thankful to you. Mail is aliahmadqureshi...@gmail.com

You are probably following up a posting of about 10 years ago.
Most of us do not have that message anymore, so it is not totally clear what you
need.

Anyway, SQLite doesn't have such a mechanism by itself.
Maybe inotify is useful to you :

https://en.wikipedia.org/wiki/Inotify
http://man7.org/linux/man-pages/man7/inotify.7.html

-- 
Regards,
Kees Nuyt


Re: [sqlite] UPSERT documentation question

2020-02-07 Thread David Raymond
> CREATE TABLE vocabulary(word TEXT PRIMARY KEY, count INT DEFAULT 1);
> INSERT INTO vocabulary(word) VALUES('jovial')
> ON CONFLICT(word) DO UPDATE SET count=count+1;
> 
> Shouldn't that actually be written as "vocabulary.count+1"?

Nope. Unqualified names there refer to the one and only record that's getting 
updated.

Similar to how in a blanket update statement you would do:
update vocabulary set count = count + 1;
...and not:
update vocabulary set vocabulary.count = vocabulary.count + 1;

I mean, it might still work, but it's not needed, no.
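The documented behaviour can be confirmed with a short sketch (UPSERT needs 
SQLite 3.24 or later):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE vocabulary(word TEXT PRIMARY KEY, count INT DEFAULT 1)"
)

# Insert the same word twice; the second insert hits the conflict branch
# and increments the unqualified `count` of the conflicting row.
for _ in range(2):
    conn.execute(
        "INSERT INTO vocabulary(word) VALUES('jovial') "
        "ON CONFLICT(word) DO UPDATE SET count=count+1"
    )

count = conn.execute(
    "SELECT count FROM vocabulary WHERE word='jovial'"
).fetchone()[0]
```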



Re: [sqlite] notify all processes of database modification

2020-02-07 Thread Ali Ahmad Qureshi
The documents are no longer available, can you kindly share it to my mail.
I'll be extremely thankful to you. Mail is aliahmadqureshi...@gmail.com



--
Sent from: http://sqlite.1065341.n5.nabble.com/


Re: [sqlite] Performance problem with DELETE FROM/correlated subqueries

2020-02-07 Thread Jürgen Baier

Hi,

On 07.02.20 09:25, Clemens Ladisch wrote:

Jürgen Baier wrote:

   CREATE TABLE main ( ATT1 INT, ATT2 INT, PRIMARY KEY (ATT1,ATT2) );
   CREATE TABLE staging ( ATT1 INT, ATT2 INT );

Then I execute

   DELETE FROM main WHERE EXISTS (SELECT 1 FROM staging WHERE main.att1 = 
staging.att1 AND main.att2 = staging.att2)

which takes a very long time.


DELETE FROM main WHERE (att1, att2) IN (SELECT att1, att2 FROM staging);


Thank you very much.

I can confirm that this solves my problem and indeed scans the staging 
table and looks up the main table:


sqlite> EXPLAIN QUERY PLAN DELETE FROM main WHERE (att1, att2) IN 
(SELECT att1, att2 FROM staging);

QUERY PLAN
|--SEARCH TABLE main USING INDEX sqlite_autoindex_main_1 (ATT1=? AND ATT2=?)
`--LIST SUBQUERY
   `--SCAN TABLE staging

For reference: This syntax is not supported by Microsoft SQL Server 
(2017). But Microsoft SQL Server is relatively fast when using the 
original DELETE FROM query.
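The rewritten DELETE can be exercised end to end with Python's sqlite3 module 
(the row-value syntax needs SQLite 3.15 or later):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE main ( ATT1 INT, ATT2 INT, PRIMARY KEY (ATT1, ATT2) );
    CREATE TABLE staging ( ATT1 INT, ATT2 INT );
    INSERT INTO main VALUES (1, 1), (1, 2), (2, 1);
    INSERT INTO staging VALUES (1, 2), (2, 1);
""")

# Row-value IN: scans staging once and probes main's primary-key index.
conn.execute(
    "DELETE FROM main WHERE (att1, att2) IN (SELECT att1, att2 FROM staging)"
)
remaining = conn.execute("SELECT * FROM main").fetchall()
```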


Thanks,

Jürgen




Regards,
Clemens


Re: [sqlite] Performance problem with DELETE FROM/correlated subqueries

2020-02-07 Thread Rowan Worth
On Fri, 7 Feb 2020 at 16:25, Clemens Ladisch  wrote:

> Jürgen Baier wrote:
> >   CREATE TABLE main ( ATT1 INT, ATT2 INT, PRIMARY KEY (ATT1,ATT2) );
> >   CREATE TABLE staging ( ATT1 INT, ATT2 INT );
> >
> > Then I execute
> >
> >   DELETE FROM main WHERE EXISTS (SELECT 1 FROM staging WHERE main.att1 =
> staging.att1 AND main.att2 = staging.att2)
> >
> > which takes a very long time.
>
> DELETE FROM main WHERE (att1, att2) IN (SELECT att1, att2 FROM staging);
>

Note using row-values requires sqlite 3.15.0 or later -- which is three
years old at this point, but every version I have on hand still says
'Error: near ",": syntax error' so I thought I'd track down the details :)
-Rowan


Re: [sqlite] Performance problem with DELETE FROM/correlated subqueries

2020-02-07 Thread Clemens Ladisch
Jürgen Baier wrote:
>   CREATE TABLE main ( ATT1 INT, ATT2 INT, PRIMARY KEY (ATT1,ATT2) );
>   CREATE TABLE staging ( ATT1 INT, ATT2 INT );
>
> Then I execute
>
>   DELETE FROM main WHERE EXISTS (SELECT 1 FROM staging WHERE main.att1 = 
> staging.att1 AND main.att2 = staging.att2)
>
> which takes a very long time.

DELETE FROM main WHERE (att1, att2) IN (SELECT att1, att2 FROM staging);


Regards,
Clemens


[sqlite] Performance problem with DELETE FROM/correlated subqueries

2020-02-07 Thread Jürgen Baier

Hi,

I have a question regarding the performance of DELETE FROM (or maybe 
better: correlated subqueries).


I have a table "main" and a table "staging". In "staging" I have a 
subset of "main". I want to delete all rows from "main" which are also 
in "staging".


  CREATE TABLE main ( ATT1 INT, ATT2 INT, PRIMARY KEY (ATT1,ATT2) );
  CREATE TABLE staging ( ATT1 INT, ATT2 INT );

Then I execute

  DELETE FROM main WHERE EXISTS (SELECT 1 FROM staging WHERE main.att1 
= staging.att1 AND main.att2 = staging.att2)


which takes a very long time. As far as I understand the query plan 
SQLite scans the full staging table for each row in "main":


  sqlite> EXPLAIN QUERY PLAN DELETE FROM main WHERE EXISTS (SELECT 1 
FROM staging WHERE main.att1 = staging.att1 AND main.att2 = staging.att2)

     ...> ;
  QUERY PLAN
  |--SCAN TABLE main
  `--CORRELATED SCALAR SUBQUERY
     `--SCAN TABLE staging

How do I speed this up? The idea is that the database should scan 
"staging" and look up "main" (because an appropriate primary key index exists).


But I'm open to any alternative approach. I just have the situation that 
I have a very large "main" table and a "staging" table which contains 
also a large number of tuples which should be deleted from "main".


Any ideas?

Thanks,

Jürgen
