Re: [sqlite] sqlite3 shell in windows

2015-01-16 Thread Keith Medcalf

>I had the bright idea yesterday of trying to use an extension module in
>Windows.  I found myself a bit confused, and the messages and
>documentation were not as helpful as they might have been.  I suspect I
>had a 32/64 bit mismatch in one case, and that sqlite3 wasn't compiled
>with extension-module support, but I was unable to prove that to
>myself.

...

>1.  The architecture of an executable file, x86 or x64.

dumpbin -- comes with the dev kit
editbin -- same source, lets you edit some of the flags, such as 32-bit clean 
(LargeAddressAware)

>2.  The compilation options used when building binaries posted on the
>download page.  In particular, SQLITE_OMIT_LOAD_EXTENSION.

sqlite> pragma compile_options;
DEBUG
ENABLE_COLUMN_METADATA
ENABLE_FTS3
ENABLE_FTS3_PARENTHESIS
ENABLE_FTS4
ENABLE_LOAD_EXTENSION
ENABLE_MEMORY_MANAGEMENT
ENABLE_RTREE
ENABLE_STAT4
MAX_MMAP_SIZE=0x8000
MAX_SCHEMA_RETRY=50
SOUNDEX
TEMP_STORE=2
THREADSAFE=1
WIN32_MALLOC

Of course, that is for the shell I build, not the one on the download pages.  
Running "pragma compile_options" as an SQL statement will return the options 
which were used to build the engine being queried.
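For example, to check whether the build you are running mentions load-extension support at all, you can query the options directly (the table-valued `pragma_compile_options` form needs SQLite 3.16 or later):

```sql
-- Lists any compile option mentioning LOAD_EXTENSION.  A build compiled with
-- SQLITE_OMIT_LOAD_EXTENSION will show OMIT_LOAD_EXTENSION here; an empty
-- result on a default build means loading was compiled in.
SELECT compile_options
  FROM pragma_compile_options
 WHERE compile_options LIKE '%LOAD_EXTENSION%';
```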

>3.  The name of the module not found.

Presumably this is the one you tried to load.  However, not necessarily.  You 
need depends (the tool, not the diaper) to ensure that the module can be loaded 
together with all its dependencies since if some dependency is missing, the 
system will just fail to load the module without being able to tell you why.  
Depends will also show you the import/export and symbol tables of the modules 
(whether EXE or DLL) that you run it against (plus all the pre-loaded 
dependencies).

Importantly, make sure you are exporting "C" names.  Mangled names won't work, 
nor will ordinals.


>4.  The name of the procedure not found.

Presumably the procedure you tried to call.  I know, not necessarily helpful.  
Since depends can show you the import/export tables for your module, it will 
probably help here too, to ensure that the symbol you think should be exported 
actually is.

>Things I would like to verify here:
>
>5.  The 32-bit windows sqlite3 shell supports extensions?

See pragma compile_options;

>6.  The above messages come from the OS, and result from LoadLibrary
>failing?  IOW, I didn't build a proper extension, windowswise?

The error code comes from the OS; the message is either the default one or an 
internal one generated to describe the failure code.  The error codes the OS 
returns are, shall we say, not very diagnostic.

Errors such as "The service could not be started because the file was not 
found" are common in Windows.  Telling you the name of the service and the name 
of the file (and where it expected to find it) would be useful, but I think 
rule 1 prevails: "Everything Useful is Prohibited".  As a matter of fact, that 
particular error message can be completely bogus, because it is possible that 
the file was found and the service started, and then it crashed during module 
load initialization (ie, in the loader) before the loader was able to 
"complete" the load/link and relocate operations.

>7.  How does the shell behave if SQLITE_OMIT_LOAD_EXTENSION is used?
>Obviously the function would fail, but I wonder if ".load" is dropped
>from the help and parseable syntax?

>8.  Where is $HOME in windows, wrt .sqliterc?  I tried %appdata%; that
>didn't seem to be it.

I kinda doubt it.  Module loading will use the standard Windows load order:  
first the toilet (the windows\system directories), then the current directory, 
then the path.  If a path is specified in the load call, then only that 
location is checked (but dependencies will still follow the default Windows 
load order).  Some dependencies may be loaded from new-fangled magical 
locations, and only from those locations, even if the module is located in the 
toilet or the current directory.  This is a Windows Feature that can be really 
annoying, since you cannot just put the correct dependencies in the correct 
places; you have to recite the appropriate magical incantations to get Windows 
to recognize them -- though this usually applies to runtime libraries (ie, 
MSVCRTxx.DLL) rather than your own modules.

BTW, it is *much* easier to use the compiler in real (command line) mode rather 
than the hooey-gooey.  The compiler is called CL and it can give you its help 
with "cl -?".  There is a silly batch file somewhere in the VS install 
directories that sets all the environment variables properly so you can then 
use a real editor and compile from the command line.

---
Theory is when you know everything but nothing works.  Practice is when 
everything works but no one knows why.  Sometimes theory and practice are 
combined:  nothing works and no one knows why.




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite3 tool bug

2015-01-16 Thread Scott Robison
On Fri, Jan 16, 2015 at 3:48 PM, Keith Medcalf  wrote:

>
> >You have a system with a bunch of apps installed.  You then upgrade to
> >a new version of the operating system and a whole bunch of the apps
> >break.  Do you think people blame the apps or the operating system?
> >Do you think anyone takes the apps apart and blames them for using the
> >wrong apis despite documentation to the contrary?
>
> This is kind of begging the question.  There is lots of software that runs
> perfectly well on *all* 32-bit capable versions of Windows.  This is not
> particularly hard to do, you just have to not do stupid things.  If the
> application has problems that is an application problem, or an application
> design issue.  It has nothing whatsoever to do with the Operating System.
>
> Quite frankly, it is a Microsoft strategy to force application
> incompatibility, and something Microsoft sells as a "feature" to their
> ISVs.  Each new Microsoft "thing" is sold to ISVs with the promise that "if
> you use this new buzzword technology, which we have paid handsomely to have
> the pundit and magazine writers claim is the greatest thing since sliced
> bread" we can guarantee you that your customers will keep coming back and
> buying the same product over and over and over again because we will
> "embrace, extend, and make incompatible", deliberately, with each version
> and patch, in order to bolster your revenue stream.  It is also sold to
> Enterprises on the same basis, because that ensures that the IT Department
> will have to be continually staffed (big = more underlings = higher paid
> execs = bigger budgets = bigger bonuses) with throngs of people to keep
> running along the Microsoft-designed incompatibility treadmill.
>

While there is some truth to this, it is more nuanced. There are bugs &
misfeatures that Microsoft maintains specifically so as not to annoy paying
customers, and there are undocumented APIs that developers have discovered
and used (not just within Microsoft, though Microsoft certainly has the best
chance of knowing about them) that Microsoft maintains some level of
compatibility with, just to avoid breaking selected apps some customers
depend on.

Linux, BSD, whatever, caters to a technical crowd. If those developers
decide they want to change the kernel API / ABI at every release, it is not
the end of the world. They hold less than 10% of desktop systems (depending
on the source). Windows caters to a much larger audience, many of whom have
no technical expertise, much less the source code to recompile software when
operating system changes break applications. Even if the applications are
the real source of the problem (using undocumented APIs or depending on
bugs), that doesn't matter to customers.

Frankly, given the vast array of hardware that Windows tries to be
compatible with, and all the permutations of hardware & drivers & such, I
think it is a miracle that they succeed as often as they do. Do they make
stupid decisions / choices at times? Sure. Everybody does. But that doesn't
mean they are all incompetent or stupid. I dare say DRH would like to go
back in time and make changes that were set in stone years ago (and has
said as much, IIUC), but he specifically does not do it, to maintain
backward compatibility. Heck, if nothing else, I'm sure he'd love to go
back and eliminate every bug that has forced a patch update.

-- 
Scott Robison


Re: [sqlite] sqlite3 tool bug

2015-01-16 Thread Simon Slavin

On 16 Jan 2015, at 10:27pm, Keith Medcalf  wrote:

> [snip] The long and the short of it is that the interprocess pipe in Windows 
> connects to cooked channels because it never occurred to anyone at Microsoft 
> that this was undesirable and irrational.

Thanks for this long explanation which I found interesting and useful.  And 
annoying in that it decreases my appreciation of Windows still further.

Simon.


Re: [sqlite] sqlite3 tool bug

2015-01-16 Thread Roger Binns
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 01/16/2015 02:35 PM, James K. Lowden wrote:
> I'm skeptical of the notion that cmd.exe is diddling with your data
> en route to the pipe.

Almost certainly the reason is that stdout and stdin are in character
mode.  It requires extra code to put them into binary mode:

  http://support.microsoft.com/kb/58427

Roger

-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iEYEARECAAYFAlS5lz4ACgkQmOOfHg372QR6mACg4MmvKwP3pvZ0AJwLyKl08sGm
Z/0An050kV+FhIUtHnyZzKYGYEiK836s
=aG9W
-END PGP SIGNATURE-


Re: [sqlite] sqlite3 tool bug

2015-01-16 Thread Keith Medcalf

>You have a system with a bunch of apps installed.  You then upgrade to
>a new version of the operating system and a whole bunch of the apps
>break.  Do you think people blame the apps or the operating system?
>Do you think anyone takes the apps apart and blames them for using the
>wrong apis despite documentation to the contrary?

This is kind of begging the question.  There is lots of software that runs 
perfectly well on *all* 32-bit capable versions of Windows.  This is not 
particularly hard to do, you just have to not do stupid things.  If the 
application has problems that is an application problem, or an application 
design issue.  It has nothing whatsoever to do with the Operating System.

Quite frankly, it is a Microsoft strategy to force application incompatibility, 
and something Microsoft sells as a "feature" to their ISVs.  Each new Microsoft 
"thing" is sold to ISVs with the promise that "if you use this new buzzword 
technology, which we have paid handsomely to have the pundit and magazine 
writers claim is the greatest thing since sliced bread" we can guarantee you 
that your customers will keep coming back and buying the same product over and 
over and over again because we will "embrace, extend, and make incompatible", 
deliberately, with each version and patch, in order to bolster your revenue 
stream.  It is also sold to Enterprises on the same basis, because that ensures 
that the IT Department will have to be continually staffed (big = more 
underlings = higher paid execs = bigger budgets = bigger bonuses) with 
throngs of people to keep running along the Microsoft-designed incompatibility 
treadmill.

---
Theory is when you know everything but nothing works.  Practice is when 
everything works but no one knows why.  Sometimes theory and practice are 
combined:  nothing works and no one knows why.






Re: [sqlite] sqlite3 tool bug

2015-01-16 Thread James K. Lowden
On Fri, 16 Jan 2015 10:38:54 -0800
Dave Dyer  wrote:

> [$] sqlite3 po.sqlite .dump | sqlite3 po2.sqlite
> Error: incomplete SQL: INSERT INTO "imageblob" VALUES(1,'G:\share

Perhaps try -echo, to display the incomplete SQL?  

I'm skeptical of the notion that cmd.exe is diddling with your data en
route to the pipe.  I can't think of a time Windows munged my data in
that particular way despite more years using that lousy tool than I
care to remember.  Quotes and escapes, sure, don't get me started. 

Shot in the dark: maybe a string is being continued by ending the
line with a backslash.  If the output handle is opened as text with
fopen, the sequence would be 

5c 0d 0a

which the escape-reader wouldn't recognize, expecting only 

5c 0a

leading to a noncontinued, incomplete line.  

But that doesn't explain the difference between redirecting to a file
and redirecting to a pipe.  

HTH.  

--jkl




Re: [sqlite] sqlite3 shell in windows

2015-01-16 Thread Random Coder
On Fri, Jan 16, 2015 at 2:21 PM, James K. Lowden
 wrote:
> 5.  The 32-bit windows sqlite3 shell supports extensions?

Yes, it does.  The version on sqlite.org is compiled with that support.

> 6.  The above messages come from the OS, and result from LoadLibrary
> failing?  IOW, I didn't build a proper extension, windowswise?

Correct.

> 7.  How does the shell behave if SQLITE_OMIT_LOAD_EXTENSION is used?
> Obviously the function would fail, but I wonder if ".load" is dropped
> from the help and parseable syntax?

Yes, it is removed from the parser in do_meta_command() and from the help
shown when you do a .help.

> 8.  Where is $HOME in windows, wrt .sqliterc?  I tried %appdata%; that
> didn't seem to be it.

The home is %USERPROFILE%.

If you're seeing the "Error: The specified procedure could not be
found." error, and you're not specifying an entry point in the .load
command, then no doubt the sqlite3_load_extension symbol isn't
properly exported.  I'd verify that your DLL has this symbol exported
using a tool like depends (http://www.dependencywalker.com/).  I'm
guessing you're missing an entry in your .def file, or a similar
location.
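For reference, a minimal module-definition (.def) file might look like this (the library name here is just the csvfile example from the original post; for an SQLite extension, the shell looks for `sqlite3_extension_init`, or `sqlite3_<name>_init` derived from the file name, unless you give .load an explicit entry point):

```
LIBRARY csvfile
EXPORTS
    sqlite3_extension_init
```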


Re: [sqlite] sqlite3 tool bug

2015-01-16 Thread Keith Medcalf
On Friday, 16 January, 2015 14:05, Simon Slavin  said:
>On 16 Jan 2015, at 9:01pm, Keith Medcalf  wrote:

>>> Not, at least, when your database contains string data with unusual
>>> characters that Windows feels like it should translate for you...

>> I think that pretty much limits one to the 7-bit ASCII character set
>> ...

>Why on earth would an operating system programmer bother to put any
>translation into piping ? Piping is really just a demonstration of how to
>use stdin and stdout.  I'm mystified.

They wouldn't.  

However, we are not talking about a real Operating System designed by real 
System Programmers -- we are talking about Microsoft Windows, which contains a 
lot of disparate code sources all stuck together with LePages White Glue, and 
which maintains backwards compatibility with, oh, DOS. 

DOS implemented the pipe "|" operator in "a | b" as "a > tempfile" followed by 
"b < tempfile" -- because it could not actually run two processes at the same 
time.  This meant that the pipe was 8-bit clean.  This method of implementation 
of pipes stayed the same until OS/2, which was the first truly multitasking 
descendant of DOS (more or less).  It implemented the interprocess pipe as one 
would expect -- by opening an OS 8-bit clean anonymous pipe, then cloning the 
input/output handles onto stdout and stdin of the two processes respectively.

Then along came "OS/2 New Technology", in which Microsoft reneged on the Joint 
Development Agreement between IBM and Microsoft and renamed the product 
"Windows NT".  Under the termination agreement, the codebase became the 
property of both IBM and Microsoft.  However, under the JDA the codebase 
contained quite a bit of patented and proprietary IBM code and technology.  
Microsoft was given a few years to replace all the IBM technology with their 
own (so, for example, the "Microsoft" re-written versions of all the DDE and 
IPC mechanisms -- which were patented and proprietary IBM technology based on 
their experience with "real computing systems" -- were renamed COM/DCOM/ActiveX 
in various flavours and re-implemented by Microsoft).  This eventually became 
Windows NT 4.0, which is, I believe, the first version of "OS/2 New Technology" 
devoid of all the IBM code and partial workarounds.  Microsoft programmers, 
being the rather simple folks that they are, re-implemented many of those IBM 
bits with rather simplistic and crude implementations; others, which they 
couldn't figure out how to do at all, such as priority-based pre-emptive 
multitasking, they simply replaced with much simpler implementations derived 
from their DOS and Windows 3.1 GUI.  Without actually redesigning and fixing 
those broken design decisions (like the one where they decided that the 
screensaver -- when triggered -- should be the highest priority task in the 
system), current versions of "Microsoft Stuff" still operate from this old 
ill-conceived design basis.  They still haven't quite figured out how to do the 
"pre-emption" part of multitasking yet, but they are getting (very slightly) 
better.  If Microsoft eventually gets around to fixing their busted dispatcher, 
their busted I/O systems, and their busted process isolation (which they 
deliberately bust in order for their other software to work), then they will 
have a "real" Operating System -- probably somewhere around Windows 25.6 at 
their present rate of progress (although, of course, that Ballmer idiot really 
screwed things up, and their propensity for doing really asinine things makes 
it more likely that they will never be able to produce anything that would 
qualify as an "Operating System" in the true sense of that word).

Anyway ...

The pipe operator connects the "output" from one program to the "input" of the 
next.  On Windows both stdout (to the console) and stdin (from the keyboard) 
are "cooked" streams.  Using the "|" operator connects these two cooked streams 
together (meaning that Windows will perform the same output translation as if 
it were outputting to the console, and the same input cooking as if the input 
were coming from the keyboard).

Contrast this with the "file pipe" operators < and >, which cause a file to be 
opened and that file handle to be "cloned" over top of the stdout or stdin 
handle -- meaning that the "console cookery" is no longer in effect: you now 
have an 8-bit clean channel.

Real Operating Systems written by real System Programmers implement the pipe | 
operator by opening a pipe (which is an 8-bit clean communication channel, like 
a file) and then cloning the handles to the appropriate process stdin/stdout 
handles, thus the kitchen is disabled and the I/O is not cooked.

It would be nice if Windows had a "treat all I/O as 8-bit clean" switch so you 
could disable the cook/kitchen, but you still cannot.  Just as there is no 
switch to let you, in one go, put Windows into "Intelligent Person using a 
Computer to Compute" mode.

[sqlite] sqlite3 shell in windows

2015-01-16 Thread James K. Lowden
Hello all, 

I had the bright idea yesterday of trying to use an extension module in
Windows.  I found myself a bit confused, and the messages and
documentation were not as helpful as they might have been.  I suspect I
had a 32/64 bit mismatch in one case, and that sqlite3 wasn't compiled
with extension-module support, but I was unable to prove that to
myself.  

I downloaded and built my own code and the csvfile extension.  I was
using the windows binary downloaded from sqlite.org. The only messages
I managed to produce using the ".load" command were:

Error: The specified module could not be found
Error: The specified procedure could not be found

IIUC these are not SQLite messages; they are OS messages.  I certainly
hope so, because they have that patented Microsoft je ne sais quoi:
something went vaguely wrong.  Very vaguely.  

At first I thought I had a pathname problem, but eventually convinced
myself I was trying to load a 64-bit extension in a 32-bit process.  It
was the growing pile of broken pointers under my desk that clued me
in.  

When I managed to coerce Visual Studio to build a 32-bit DLL, it
elicited the second message, also from the OS.  At the time I thought
the shell as compiled might not support extensions. ISTR that being
mentioned on this very list once upon a time.  I might also have not
really produced a DLL; maybe it was just an executable named .dll. I
don't have the compile steps accessible as I write this, and it's easy
enough to verify when I'm back in the office.  

Things I would have liked to have been able to display on the screen:

1.  The architecture of an executable file, x86 or x64.  
2.  The compilation options used when building binaries posted on the
download page.  In particular, SQLITE_OMIT_LOAD_EXTENSION.  
3.  The name of the module not found. 
4.  The name of the procedure not found. 

Things I would like to verify here:

5.  The 32-bit windows sqlite3 shell supports extensions? 
6.  The above messages come from the OS, and result from LoadLibrary
failing?  IOW, I didn't build a proper extension, windowswise?  
7.  How does the shell behave if SQLITE_OMIT_LOAD_EXTENSION is used?
Obviously the function would fail, but I wonder if ".load" is dropped
from the help and parseable syntax?  
8.  Where is $HOME in windows, wrt .sqliterc?  I tried %appdata%; that
didn't seem to be it.  

Many thanks for clues, instructions, and pointers.  

--jkl


Re: [sqlite] sqlite3 tool bug

2015-01-16 Thread Random Coder
On Fri, Jan 16, 2015 at 2:02 PM, Dave Dyer  wrote:
> the input side of the pipe.  Perhaps there is some windows conditioning
> that ought to be done by sqlite, on STDIN, to make it into a binary data
> source ?

You should be able to do a freopen(NULL, "rb", stdin); to change stdin
to be in binary mode.  I can't test that it'll help in this case,
however.


Re: [sqlite] sqlite3 tool bug

2015-01-16 Thread Roger Binns
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 01/16/2015 01:05 PM, Simon Slavin wrote:
> Why on earth would an operating system programmer bother to put any
> translation into piping

You have a system with a bunch of apps installed.  You then upgrade to
a new version of the operating system and a whole bunch of the apps
break.  Do you think people blame the apps or the operating system?
Do you think anyone takes the apps apart and blames them for using the
wrong apis despite documentation to the contrary?

Microsoft puts a phenomenal amount of effort into backwards
compatibility.  So the question is actually why Windows behaves the
way it does.

Pipes are done differently on Windows for historical reasons.  DOS
actually did them by writing to a file and not by running the commands
simultaneously.  Operating systems have always done something with
I/O.  C libraries (fopen etc.) also do things.  Heck, a good question
might be why Unix does not have a separate text type of file.  BTW,
the SQLite shell uses fopen but claims binary mode.

Raymond Chen, a Microsoft employee who does a lot of work in this
area, has many good articles:

http://blogs.msdn.com/b/oldnewthing/archive/2003/12/24/45779.aspx
http://blogs.msdn.com/b/oldnewthing/archive/2007/07/23/4003873.aspx
http://blogs.msdn.com/b/oldnewthing/archive/2015/01/07/10584656.aspx
http://blogs.msdn.com/b/oldnewthing/archive/2010/03/11/9976571.aspx

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iEYEARECAAYFAlS5jEsACgkQmOOfHg372QQt8gCg4Lu0r7I5eg8B4vZUygPGczxt
SdwAniolznwWWxogG0NHFX3yHkfI3SoB
=VgJ4
-END PGP SIGNATURE-


Re: [sqlite] sqlite3 tool bug

2015-01-16 Thread Dave Dyer

>
>Not, at least, when your database contains string data with unusual
>characters that Windows feels like it should translate for you...

Who can guarantee what characters are used in all their text strings,
much less guarantee what unnamed transformations Windows is helpfully
doing to pipe data?

Saying "it probably will work" is not very satisfactory.

--

I did a little more experimentation, and found that the problem is on
the input side of the pipe.  Perhaps there is some windows conditioning
that ought to be done by sqlite, on STDIN, to make it into a binary data
source?

succeeds:
   cat < pipe.txt > pipe-out.txt
   sqlite3 database.sqlite
   .read pipe-out.txt

fails:

   sqlite3 database.sqlite < pipe.txt







Re: [sqlite] sqlite3 tool bug

2015-01-16 Thread Simon Slavin

On 16 Jan 2015, at 9:01pm, Keith Medcalf  wrote:

>> Not, at least, when your database contains string data with unusual
>> characters that Windows feels like it should translate for you...
> 
> I think that pretty much limits one to the 7-bit ASCII character set ...

Why on earth would an operating system programmer bother to put any translation 
into piping ? Piping is really just a demonstration of how to use stdin and 
stdout.  I'm mystified.

Simon.


Re: [sqlite] sqlite3 tool bug

2015-01-16 Thread Keith Medcalf
>>>I'm guessing this is a case of the windows command-line shell doing some
>>>character translations in the pipe, rather than just shipping the
>>>bytes through the pipe unaltered.

>> Ouch.  That basically means the "pipe" method shouldn't ever be
>> used on windows.

>Not, at least, when your database contains string data with unusual
>characters that Windows feels like it should translate for you...

I think that pretty much limits one to the 7-bit ASCII character set ...






Re: [sqlite] sqlite3 tool bug

2015-01-16 Thread Richard Hipp
On 1/16/15, Dave Dyer  wrote:
>
>>
>>The pipe method works fine for me on Linux.
>>
>>I'm guessing this is a case of the windows command-line shell doing some
>>character translations in the pipe, rather than just shipping the
>>bytes through the pipe unaltered.
>
> Ouch.  That basically means the "pipe" method shouldn't ever be
> used on windows.
>

Not, at least, when your database contains string data with unusual
characters that Windows feels like it should translate for you...

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] sqlite3 tool bug

2015-01-16 Thread Dave Dyer

>
>The pipe method works fine for me on Linux.
>
>I'm guessing this is a case of the windows command-line shell doing some
>character translations in the pipe, rather than just shipping the
>bytes through the pipe unaltered.

Ouch.  That basically means the "pipe" method shouldn't ever be 
used on windows.




Re: [sqlite] sqlite3 tool bug

2015-01-16 Thread Richard Hipp
On 1/16/15, Dave Dyer  wrote:
>
> I have a class of database for which using sqlite3 to create
> a copy via the "pipe" method fails.   Using an explicit intermediate
> file seems to work ok.
>

The pipe method works fine for me on Linux.

I'm guessing this is a case of the windows command-line shell doing some
character translations in the pipe, rather than just shipping the
bytes through the pipe unaltered.

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Increase page_size on existing db with data and WAL

2015-01-16 Thread Richard Hipp
On 1/16/15, Andy (KU7T)  wrote:
> Hi,
>
>
>
> When I read this: http://sqlite.org/pragma.html#pragma_page_size it seems
> to argue that I can never change the page_size on a non-empty db that is
> running in WAL mode.  Any tips how I can still do this?
>
>

Change out of WAL mode, convert the page size, then change back to WAL mode.
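That sequence can be sketched as SQL (8192 is just an example size; VACUUM is what actually rebuilds the file at the new page size):

```sql
PRAGMA journal_mode = DELETE;   -- leave WAL mode
PRAGMA page_size = 8192;        -- desired size (example value)
VACUUM;                         -- rebuild the database with the new page size
PRAGMA journal_mode = WAL;      -- return to WAL mode
```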

-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] Increase page_size on existing db with data and WAL

2015-01-16 Thread Andy (KU7T)
Hi,

 

When I read this: http://sqlite.org/pragma.html#pragma_page_size it seems to
argue that I can never change the page_size on a non-empty db that is
running in WAL mode.  Any tips how I can still do this?

 

Thanks

Andy



Re: [sqlite] sqlite3 tool bug

2015-01-16 Thread Dave Dyer

>
>Rather than the full database, can you show us the full schema of this
>database, including triggers?

It's a very simple database, no triggers or collations.  The
problem is most likely a buffer overrun because of a very long
literal string field.

I've sent a minimal sample to drh





Re: [sqlite] database is locked

2015-01-16 Thread Roman Fleysher
Wow, thank you Hick! I will try it.

Roman


From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Hick Gunter [h...@scigames.at]
Sent: Thursday, January 15, 2015 1:34 AM
To: 'General Discussion of SQLite Database'
Subject: Re: [sqlite] database is locked

Use the following code snippet (add error checking, set v_file to the full path 
name of your SQLite db file) to check.

It attempts to take the same locks as SQLite would, but prints the pid of the 
blocking process. It also prints the journal mode.

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#ifdef AIX64
/* AIX-specific header elided in the archive */
#endif

#include "sqliteInt.h"

static const char  *g_mode[] = {
    "[0/0]", "[del]", "[WAL]", "[?/?]"
};

char v_buff[256];
int  v_mode;

int main(void)
{
    const char *v_file = "/path/to/your/database.db"; /* set to the full path of your SQLite db file */

    struct flock v_pending;   /* = { F_WRLCK, SEEK_SET, PENDING_BYTE ,           1, 0}; */
    struct flock v_reserved;  /* = { F_WRLCK, SEEK_SET, RESERVED_BYTE,           1, 0}; */
    struct flock v_shared;    /* = { F_WRLCK, SEEK_SET, SHARED_FIRST , SHARED_SIZE, 0}; */
    int v_fd  = -1;
    int v_ret = -1;

    memset( &v_pending, 0, sizeof( v_pending));
    v_pending.l_type    = F_WRLCK;
    v_pending.l_whence  = SEEK_SET;
    v_pending.l_start   = PENDING_BYTE;
    v_pending.l_len     = 1;
    v_pending.l_pid     = 0;

    memset( &v_reserved, 0, sizeof( v_reserved));
    v_reserved.l_type   = F_WRLCK;
    v_reserved.l_whence = SEEK_SET;
    v_reserved.l_start  = RESERVED_BYTE;
    v_reserved.l_len    = 1;
    v_reserved.l_pid    = 0;

    memset( &v_shared, 0, sizeof( v_shared));
    v_shared.l_type     = F_WRLCK;
    v_shared.l_whence   = SEEK_SET;
    v_shared.l_start    = SHARED_FIRST;
    v_shared.l_len      = SHARED_SIZE;
    v_shared.l_pid      = 0;

    /* open the file readonly */
    v_fd = open(v_file, O_RDONLY);

    /* read the file header */
    read(v_fd, v_buff, 100);

    /* interpret header bytes 18/19 (file-format write/read version) */
    switch (v_buff[18]) {
    case  0: v_mode = (v_buff[18] == v_buff[19]) ? 0 : 3; break; /* empty */
    case  1: v_mode = (v_buff[18] == v_buff[19]) ? 1 : 3; break; /* journal_mode=delete */
    case  2: v_mode = (v_buff[18] == v_buff[19]) ? 2 : 3; break; /* journal_mode=wal */
    default: v_mode = 3; break; /* invalid */
    };

    /* check for a PENDING lock */
    fcntl(v_fd, F_GETLK, &v_pending);

    /* check for a RESERVED lock */
    fcntl(v_fd, F_GETLK, &v_reserved);

    /* check for a SHARED/EXCLUSIVE lock */
    fcntl(v_fd, F_GETLK, &v_shared);

    /* print in ascending restrictivity */
    if (v_pending.l_type == F_RDLCK)
        printf("%s File:%s, Process %d PENDING (SHARED)\n", g_mode[v_mode], v_file, (v_ret = v_pending.l_pid));

    if (v_shared.l_type == F_RDLCK)
        printf("%s File:%s, Process %d SHARED\n", g_mode[v_mode], v_file, (v_ret = v_shared.l_pid));

    switch (v_reserved.l_type) {
    case F_WRLCK:
    case F_RDLCK:
        printf("%s File:%s, Process %d RESERVED\n", g_mode[v_mode], v_file, (v_ret = v_reserved.l_pid));
        break;
    default: break;
    }

    if (v_pending.l_type == F_WRLCK)
        printf("%s File:%s, Process %d PENDING (EXCLUSIVE)\n", g_mode[v_mode], v_file, (v_ret = v_pending.l_pid));

    if (v_shared.l_type == F_WRLCK)
        printf("%s File:%s, Process %d EXCLUSIVE\n", g_mode[v_mode], v_file, (v_ret = v_shared.l_pid));

    if (v_ret == -1)
        printf("%s File:%s, \n", g_mode[v_mode], v_file);

    return 0;
}


-Ursprüngliche Nachricht-
Von: Roman Fleysher [mailto:roman.fleys...@einstein.yu.edu]
Gesendet: Mittwoch, 14. Jänner 2015 18:54
An: General Discussion of SQLite Database
Betreff: Re: [sqlite] database is locked

SQLite shell version 3.7.2
on Linux 2.6.18
NTFS




From: sqlite-users-boun...@sqlite.org 
[sqlite-users-boun...@sqlite.org] on behalf of Richard Hipp [d...@sqlite.org]
Sent: Wednesday, January 14, 2015 12:50 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] database is locked

On 1/14/15, Roman Fleysher 
> wrote:
> Dear SQLiters,
>
> There has been a lot of discussion, I remember, on this subject by others.
> Please forgive me for asking this for a millionth time.
>
> I somehow got my database in a locked state. I updated a table
> yesterday and I am rather sure that no one on our multi-user system is 
> updating it now.
> The time stamp on the file is from yesterday, showing correct time. I
> need to update a table (I use shell) and I get "database is locked".
> Is there a way to figure out what is happening? Clear the lock?

What operating system and filesystem are you using?  And are you specifying an 
alternative VFS for SQLite or using the default?

>
> 

Re: [sqlite] Client/Server Best Practices

2015-01-16 Thread Richard Hipp
On 1/16/15, James K. Lowden  wrote:
> On Sat, 10 Jan 2015 00:58:25 -0700
> "Keith Medcalf"  wrote:
>
>> > there's no way to hook two SELECTs together to make them see one
>> >database state. That's what JOIN is for.  :-)
>>
>> Yes, it is a part of the SQL Standard isolation levels in excess of
>> the default default of READ COMMITTED.  Different vendors call the
>> isolation levels by different names -- MS SQL Server calls them Read
>> Uncommitted, Read Committed, Repeatable Read, Snapshot, and
>> Serializable; DB/2 calls them No Commit (similar to autocommit),
>> Uncommitted Read, Cursor Stability, Read Stability, Repeatable Read.
>> DB/2 Repeatable Read = MS Serializable (which is the SQL Standard
>> Serializable isolation level).  Cursor Stability = Read Committed.
>
> Keith, I just want to thank you for relieving me of my
> misunderstanding.  I think some years ago I must have decided that for
> my purposes Repeatable Read & friends don't exist because of their
> effect on performance and concurrency.  I've always thought the
> standard was paradoxical in the sense that the more you need isolation
> (because of concurrent updates), the less useful it is, to the extent
> that it restricts the system's ability to support concurrent updates.
>
> To bring it back to SQLite, we have two logging modes and two
> isolation levels.  IIUC, given two connections it can be boiled down
> thusly:
>
> mode     transaction   isolation
> Journal  implicit      Read Committed
> WAL      implicit      Read Committed
> journal  explicit      Repeatable Read
> WAL      explicit      Repeatable Read

I think the isolation in SQLite is always Serializable.

Except if you have two connections in the same process in shared cache
mode and they set "PRAGMA read_uncommitted=ON" then the isolation is
Read Uncommitted.  But otherwise, isolation in SQLite is always
Serializable.


>
> with the difference being concurrency:
>
> * in journal mode a writer is blocked by a reader, whereas
> * in WAL mode a writer can commit while a reader continues to work,
> in isolation, unaffected.
>
> --jkl
>
>


-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Huge WAL log

2015-01-16 Thread Keith Medcalf

Some data using python+apsw for 10,000,000 records:

So, embedding the values in the SQL (i.e., as text statements) and preparing a 
"new" statement each time is the least efficient.
Multiple values embedded in the SQL are more efficient, up to about 50-100 
values per statement (50% faster).
Preparing a single statement and using the same statement to insert with the 
data being bound, one row at a time, is even more efficient.
The last case (list of prepared bindings) is not significantly different from 
the "one at a time" bound insert.

No significant difference for WAL journalling.

Generated 10000000 random values in 50.910858 seconds
Inserted 10000000 records individually parsed in 154.30888 seconds
Inserted 10000000 in groups of 2 in 158.33245 seconds
Inserted 10000000 in groups of 5 in 104.66141 seconds
Inserted 10000000 in groups of 10 in 86.752935 seconds
Inserted 10000000 in groups of 25 in 76.398294 seconds
Inserted 10000000 in groups of 50 in 73.953962 seconds
Inserted 10000000 in groups of 100 in 72.717076 seconds
Inserted 10000000 in groups of 500 in 76.751999855 seconds
Inserted 10000000 in groups of 1000 in 77.309153 seconds
Inserted 10000000 in groups of 5000 in 76.059428 seconds
Inserted 10000000 in groups of 10000 in 74.407924 seconds
Inserted 10000000 records via binding in 59.377935 seconds
Inserted 10000000 records via single binding list in 53.5610001087 seconds

Generated by:

import random
import time
import apsw

recs = 10000000
db = apsw.Connection('test.db')
cr = db.cursor()

chars = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
data = set()

start = time.time()
while len(data) < recs:
    item = ''
    for j in xrange(8):
        item += chars[int(random.random() * 52)]
    data.add(item)
print 'Generated', recs, 'random values in', time.time() - start, 'seconds'

data = list(data)
mdata = [(x,) for x in data]

cr.execute('drop table if exists test; create table test ( x text unique ); vacuum;')

start = time.time()
cr.execute('begin')
for i in xrange(recs):
    cr.execute("insert into test values ('%s')" % (data[i],))
cr.execute('commit')
print 'Inserted', recs, 'records individually parsed in', time.time() - start, 'seconds'

for steps in [2, 5, 10, 25, 50, 100, 500, 1000, 5000, 10000]:
    cr.execute('drop table if exists test; create table test ( x text unique ); vacuum;')
    start = time.time()
    cr.execute('begin')
    for i in xrange(0, recs, steps):
        sql = "insert into test values ('" + "'),('".join(data[i:i+steps]) + "')"
        cr.execute(sql)
    cr.execute('commit')
    print 'Inserted', recs, 'in groups of', steps, 'in', time.time() - start, 'seconds'

cr.execute('drop table if exists test; create table test ( x text unique ); vacuum;')

start = time.time()
cr.execute('begin')
for i in xrange(recs):
    cr.execute("insert into test values (?)", (data[i],))
cr.execute('commit')
print 'Inserted', recs, 'records via binding in', time.time() - start, 'seconds'

cr.execute('drop table if exists test; create table test ( x text unique ); vacuum;')

start = time.time()
cr.execute('begin')
cr.executemany("insert into test values (?)", mdata)
cr.execute('commit')
print 'Inserted', recs, 'records via single binding list in', time.time() - start, 'seconds'


---
Theory is when you know everything but nothing works.  Practice is when 
everything works but no one knows why.  Sometimes theory and practice are 
combined:  nothing works and no one knows why.

>-Original Message-
>From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
>boun...@sqlite.org] On Behalf Of Jan Slodicka
>Sent: Friday, 16 January, 2015 10:04
>To: sqlite-users@sqlite.org
>Subject: Re: [sqlite] Huge WAL log
>
>Simon Slavin-3 wrote
>>> Thanks to your post I discovered multiple-row inserts so that I now
>>> understand what you asked.
>>
>> Just a note that multiple-row inserts were added to SQLite relatively
>> recently (2012-03-20 (3.7.11)) and, because SQLite does only
>> database-level locking, its overhead for INSERTs is far less than that of
>> SQL Server.  It might be faster to use them but I would not expect it to
>> be much faster than many inserts as part of one transaction.
>
>I made a fast, perhaps oversimplified, TestA:
>
>Create an empty database. (file based)
>CREATE TABLE Test (city NVARCHAR(120) NULL COLLATE NOCASE)
>BEGIN
>Then I inserted N records using commands such as
>INSERT INTO Test VALUES('_random_'). (_random was a random 8 character
>long
>string.)
>COMMIT
>Measure WAL size
>Close the DB
>Measure DB size
>
>TestB was performed with the same data except the records were grouped
>INSERT INTO Test VALUES('_random_'), VALUES('_random1')...
>
>I tested different groupings (1,2,5,10) and different N values (10 -
>250).
>
>Results:
>- The more records are grouped, the faster.
>- Grouping of 10 records was more than 2x faster than no grouping at all.
>- WAL size did not depend on the 

Re: [sqlite] sqlite3 tool bug

2015-01-16 Thread Simon Slavin

On 16 Jan 2015, at 6:38pm, Dave Dyer  wrote:

> I have a class of database for which using sqlite3 to create
> a copy via the "pipe" method fails.   Using an explicit intermediate
> file seems to work ok.

Which version of Windows are you using ?  You can type 'ver' at the prompt.

Can you reduce your database down to just a few lines which cause the problem, 
then post those  lines here ?

Simon.


Re: [sqlite] sqlite3 tool bug

2015-01-16 Thread Zsbán Ambrus
On 1/16/15, Dave Dyer  wrote:
> I have a class of database for which using sqlite3 to create
> a copy via the "pipe" method fails.   Using an explicit intermediate
> file seems to work ok.
>
> I can supply a sample database to anyone interested in investigating.

Rather than the full database, can you show us the full schema of this
database, including triggers?

Does this database have any custom collations (mentioned after the
COLLATE keyword in the schema), or custom SQL functions mentioned in
the schema (such as in DEFAULT or CHECK constraint clauses)?


Re: [sqlite] System.Data.Sqlite, not using parameterized query, how to encode special characters

2015-01-16 Thread Simon Slavin

On 16 Jan 2015, at 7:06pm, Andy (KU7T)  wrote:

> If I just build the sql string myself, how can I solve encoding
> issues as mentioned above?

There's only one character which should cause a problem with embedded strings: 
the apostrophe.  And you escape this by doubling it.  Illustration:

INSERT INTO myTable VALUES ('I can't stand this.')

will fail because the string ends at the 't' so the command can't be parsed.

INSERT INTO myTable VALUES ('I can''t stand this.')

will succeed, inserting a string containing, among other things, a single 
apostrophe.

So write yourself a little routine which doubles apostrophes and you should be 
fine.
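Such a routine is essentially a one-liner; here is a sketch in Python (the function name is my own):

```python
def sql_quote(text):
    """Return text as a single-quoted SQL string literal, doubling apostrophes."""
    return "'" + text.replace("'", "''") + "'"

literal = sql_quote("I can't stand this.")  # "'I can''t stand this.'"
```

Note this handles only the apostrophe, per the explanation above; binding parameters remains the safer general-purpose route when it is available.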

Simon.


Re: [sqlite] sqlite3 tool bug

2015-01-16 Thread Richard Hipp
On 1/16/15, Dave Dyer  wrote:
>
> I have a class of database for which using sqlite3 to create
> a copy via the "pipe" method fails.   Using an explicit intermediate
> file seems to work ok.
>
> I can supply a sample database to anyone interested in investigating.

Is the database small enough to sent to me via email?

>
> --
>
> F:\2013 YearTech\Yearbook Tools\Resource>sqlite3 -version
> 3.7.3
>
> F:\2013 YearTech\Yearbook Tools\Resource>sqlite3 po.sqlite .dump | sqlite3
> po2.sqlite
> Error: incomplete SQL: INSERT INTO "imageblob" VALUES(1,'G:\share\e ...
>
>


-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Client/Server Best Practices

2015-01-16 Thread James K. Lowden
On Sat, 10 Jan 2015 00:58:25 -0700
"Keith Medcalf"  wrote:

> > there's no way to hook two SELECTs together to make them see one
> >database state. That's what JOIN is for.  :-)
> 
> Yes, it is a part of the SQL Standard isolation levels in excess of
> the default default of READ COMMITTED.  Different vendors call the
> isolation levels by different names -- MS SQL Server calls them Read
> Uncommitted, Read Committed, Repeatable Read, Snapshot, and
> Serializable; DB/2 calls them No Commit (similar to autocommit),
> Uncommitted Read, Cursor Stability, Read Stability, Repeatable Read.
> DB/2 Repeatable Read = MS Serializable (which is the SQL Standard
> Serializable isolation level).  Cursor Stability = Read Committed.

Keith, I just want to thank you for relieving me of my
misunderstanding.  I think some years ago I must have decided that for
my purposes Repeatable Read & friends don't exist because of their
effect on performance and concurrency.  I've always thought the
standard was paradoxical in the sense that the more you need isolation
(because of concurrent updates), the less useful it is, to the extent
that it restricts the system's ability to support concurrent updates.

To bring it back to SQLite, we have two logging modes and two
isolation levels.  IIUC, given two connections it can be boiled down
thusly:

mode     transaction   isolation
Journal  implicit      Read Committed
WAL      implicit      Read Committed
journal  explicit      Repeatable Read
WAL      explicit      Repeatable Read

with the difference being concurrency: 

* in journal mode a writer is blocked by a reader, whereas 
* in WAL mode a writer can commit while a reader continues to work,
in isolation, unaffected.  

--jkl



[sqlite] System.Data.Sqlite, not using parameterized query, how to encode special characters

2015-01-16 Thread Andy (KU7T)
Hi,

 

Some of my single row inserts use parameterized queries today to make
encoding easier. For example having embedded single quotes in strings etc.  

 

Now, I would like to take advantage of the multi row inserts as described in
http://stackoverflow.com/questions/16055566/insert-multiple-rows-in-sqlite.
There seems to be a limit on the number of SqliteParameters that I can pass
in, so it seems parameterized queries are not the way to go with multi-row
inserts. If I just build the sql string myself, how can I solve encoding
issues as mentioned above?

 

Thanks

Andy

 



Re: [sqlite] Huge WAL log

2015-01-16 Thread Dan Kennedy

On 01/17/2015 12:04 AM, Jan Slodicka wrote:

Simon Slavin-3 wrote

Thanks to your post I discovered multiple-row inserts so that I now
understand what you asked.

Just a note that multiple-row inserts were added to SQLite relatively
recently (2012-03-20 (3.7.11)) and, because SQLite does only
database-level locking, its overhead for INSERTs is far less than that of
SQL Server.  It might be faster to use them but I would not expect it to
be much faster than many inserts as part of one transaction.

I made a fast, perhaps oversimplified, TestA:

Create an empty database. (file based)
CREATE TABLE Test (city NVARCHAR(120) NULL COLLATE NOCASE)
BEGIN
Then I inserted N records using commands such as
INSERT INTO Test VALUES('_random_'). (_random was a random 8 character long
string.)
COMMIT
Measure WAL size
Close the DB
Measure DB size

TestB was performed with the same data except the records were grouped
INSERT INTO Test VALUES('_random_'), VALUES('_random1')...

I tested different groupings (1,2,5,10) and different N values (10 -
250).

Results:
- The more records are grouped, the faster.
- Grouping of 10 records was more than 2x faster than no grouping at all.
- WAL size did not depend on the grouping used and was just slightly larger
than the DB size.

Then I modified the test by adding an index on the single column. I run 2
sets of tests - one where the index was created before first insert and the
one with the index created after all inserts finished.

Results:
- Active index: WAL size ~ DB size
- Inactive index: WAL size ~ 50% of the DB size
- Tests with an active index were slower by 15-20%

Conclusion:
Switching off the indexing during a bulk insert brings minor advantages.
Multi-row inserts may bring larger advantages.
The reason of the "huge WAL problem" remains unclear.



Is it correct that you have a single transaction inserting lots of data 
into a table with multiple indexes on it? Something like 1GB?


When an SQL write transaction is performed, SQLite begins by modifying 
database pages within its internal page-cache. If the transaction 
modifies only a few pages, all dirty pages stay in the cache until the 
user executes "COMMIT", at which point they are appended to the *-wal 
file. However, the page-cache is of limited size (by default 2000 
pages), and once it is completely full of dirty pages SQLite begins 
appending them to the *-wal file mid-transaction in order to free up 
space. At this point each time SQLite needs to modify a page that is not 
already in the cache it must select a dirty page to write out to the 
*-wal file so as to free up space to load the new page into the cache 
where it can be modified. This means that a transaction with a large 
working set may append more than one copy of a single page to the *-wal 
file. Maybe many, many copies.


SQLite indexes are b-trees. Each b-tree node is stored on a database 
page. So if you're inserting keys in random order into a large index 
(one too large to fit entirely within the page-cache), then virtually 
all inserts result in an existing dirty page being flushed from the 
cache and appended to the *-wal file.


To avoid this it seems like there are two options - make the page-cache 
large enough to hold all the indexes or insert keys into the indexes in 
sorted order. Inserting keys in sorted order avoids the problem because 
all writes go to the right-most leaf node of the index b-tree, which 
will almost always be present in the page-cache.


To change the size of the page-cache, see the "PRAGMA cache_size" and 
"PRAGMA page_size" commands.


One way to get keys to be inserted in order is to create all indexes 
after populating the table. SQLite sorts the data before creating the 
index b-tree in this case.


The other is to create a temp (or non temp) table with *exactly the 
same* columns and indexes as the table to be populated and insert the 
new rows into it. Then running:


  INSERT INTO target_tbl SELECT * FROM temp_tbl;

In this case, SQLite detects the similar schemas and copies keys in 
sorted order from the indexes on "temp_tbl" to the corresponding index 
on "target_tbl".
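The staging-table pattern can be sketched with Python's stdlib sqlite3 module; whether SQLite's sorted-key transfer actually kicks in depends on the schemas matching exactly, as described above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE target_tbl (x TEXT UNIQUE);    -- table to populate
    CREATE TEMP TABLE staging (x TEXT UNIQUE);  -- identical columns and indexes
""")
# Insert the new rows, in arbitrary order, into the staging table first.
con.executemany("INSERT INTO staging VALUES (?)", [("b",), ("c",), ("a",)])
# Then copy across in a single statement.
con.execute("INSERT INTO target_tbl SELECT * FROM staging")
rows = [r[0] for r in con.execute("SELECT x FROM target_tbl ORDER BY x")]
```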


Dan.









Note:
The tests were programmed in c# code that called native SQLite.dll and were
run on a W7 desktop. The results may not apply to other environments.





--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Huge-WAL-log-tp79991p80070.html
Sent from the SQLite mailing list archive at Nabble.com.




Re: [sqlite] LIKE vs GLOB vs MATCH vs REGEXP

2015-01-16 Thread Scott Robison
On Jan 16, 2015 8:05 AM, "Simon Slavin"  wrote:
>
>
> On 16 Jan 2015, at 12:23pm, Jay Kreibich  wrote:
>
> > They can all be (re)defined, some just happen to have default functions:
> >
> > https://www.sqlite.org/lang_expr.html#like
>
> Might be worth noting here that there can be a danger in replacing the
definitions of default functions.

Excellent points. As I said earlier, I am not planning to change anything,
just was curious based on my flawed inference from the core functions page.
I am playing with some C++ code to help provide some compile time checking
of queries that isn't possible with plain C string literal based queries,
and was curious about the functions vs the operators.

SDR


Re: [sqlite] sqlite3 tool bug

2015-01-16 Thread Dave Dyer

I have a class of database for which using sqlite3 to create
a copy via the "pipe" method fails.   Using an explicit intermediate
file seems to work ok.

I can supply a sample database to anyone interested in investigating.

--

F:\2013 YearTech\Yearbook Tools\Resource>sqlite3 -version
3.7.3

F:\2013 YearTech\Yearbook Tools\Resource>sqlite3 po.sqlite .dump | sqlite3 
po2.sqlite
Error: incomplete SQL: INSERT INTO "imageblob" VALUES(1,'G:\share\e ...



Re: [sqlite] Shell tool improvement request

2015-01-16 Thread Nelson, Erik - 2
RSmith wrote on Friday, January 16, 2015 1:08 PM
> 
> On 2015/01/16 18:33, Simon Slavin wrote:
> >
> > Error: mytextfile.txt line 13588392: expected 2 columns of data but
> > found 1
> >
> > Naturally I spent some time looking near the end of the file to
> > figure out what was wrong, whereas the actual error was in line 2; the
> > message should have read something like "still inside quote when file ends".
> 
> Firstly - thanks, this made me chuckle.  Secondly - I think the parser
> does not realize there is an error until it is at the end of the file
> nor counts any imports as successful or indeed imported at all, since
> that first line will very much not be imported after the transaction
> was rolled back (I hope), so to report the successful imports number is
> probably not feasible. Stating that the error was specifically due to
> this or that might also be a stretch if one examines the number of
> possible things that can go wrong.
> 
> I do however believe it would be easy to note the line at which the
> import failed as being the line where parsing started for the current
> record (i.e line 2 in Simon's case), but then it may well be the error
> actually occurs on line 13588392 (such as an invalid UNICODE character),
> reporting line 2 in this case will be diabolical.
> 
> Last I would offer an idea to simply specifically check for "Unclosed
> Quote" error. I mean I can't imagine another import like the one above
> existing in real life, but certainly an actual unclosed quote might be
> a common mistake in some new export system (which maybe one is
> designing for oneself or as an application extension) or such.
> 

Why not just report both where it started and where it was discovered?

Erik

--
This message, and any attachments, is for the intended recipient(s) only, may 
contain information that is privileged, confidential and/or proprietary and 
subject to important terms and conditions available at 
http://www.bankofamerica.com/emaildisclaimer.   If you are not the intended 
recipient, please delete this message.


Re: [sqlite] Database corrupted 28 billion bytes

2015-01-16 Thread Richard Hipp
On 1/16/15, MayW  wrote:
> sqlite3_analyzer.exe ended with 16 INSERT statements showing in my Windows 7
> CMD console screen:
>

The interesting information was in the part that scrolled off the top
of your console.  I suggest you rerun the command, directing output
into a file:

 sqlite3_analyzer bigdb.db3 >analysis.txt
 start analysis.txt


-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Database corrupted 28 billion bytes

2015-01-16 Thread Richard Hipp
On 1/16/15, MayW  wrote:
> Found table that was huge, it was named MyTable.
> It was created with:
> Create Table MyTable(comment);
>
> select max(rowid),* from MyTable;
> 80002 "This","is"," 4"
>
> Drop table Mytable;
> Vacuum.
> It went down to a little over 3,000,000 bytes.
>
> I'm going to include a filesize routine within my programs to
> test the .DB3 to see if it grows by some great amount in a day.
>
> What caused it to become huge, maybe a run-away test program?
>
> I made a backup so will run sqlite3_analyzer.exe, again.
> Not sure what I'll be looking for.
>

What sqlite3_analyzer would have told you is what you have already
figured out - that MyTable was taking up most of the space.

If you run sqlite3_analyzer on the new 3MB database, it will show you
how much space is taken up by each of the remaining tables and
indexes.  That information might be interesting to you.

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Shell tool improvement request

2015-01-16 Thread RSmith


On 2015/01/16 18:33, Simon Slavin wrote:

(following description simplified)

I have a text file I wanted to .import into a table.  The text file has two 
columns separated by a tab: a word and a number.  It starts off like this:

!   32874624
"  239874242
#   98235252
$   438743824
%   324872489
&   39854724
a   23498725
i   1298371287
as  23194874
at  3598725

and continues for another 13588392 (sic.) rows.

I use the SQLite shell tool.  I create an appropriate table in my database, set 
.separator to "\t" and use .import.

The .import failed because the second line of the file contains a quote 
character.  I'm okay with the fact that it fails: it's documented, or nearly 
documented.

However, the failure message is this:

Error: mytextfile.txt line 13588392: expected 2 columns of data but found 1

Naturally I spent some time looking near the end of the file to figure out what was wrong, 
whereas the actual error was in line 2; the message should have read something like "still 
inside quote when file ends".


Firstly - thanks, this made me chuckle.  Secondly - I think the parser does not realize there is an error until it is at the end of 
the file nor counts any imports as successful or indeed imported at all, since that first line will very much not be imported after 
the transaction was rolled back (I hope), so to report the successful imports number is probably not feasible. Stating that the 
error was specifically due to this or that might also be a stretch if one examines the number of possible things that can go wrong.


I do however believe it would be easy to note the line at which the import failed as being the line where parsing started for the 
current record (i.e line 2 in Simon's case), but then it may well be the error actually occurs on line 13588392 (such as an invalid 
UNICODE character), reporting line 2 in this case will be diabolical.


Last I would offer an idea to simply specifically check for "Unclosed Quote" error. I mean I can't imagine another import like the 
one above existing in real life, but certainly an actual unclosed quote might be a common mistake in some new export system (which 
maybe one is designing for oneself or as an application extension) or such.
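A pre-scan for an unclosed quote along those lines could look like this (a simplified sketch: it toggles on every quote character, so an escaped "" pair counts as open-then-close):

```python
def find_unclosed_quote(lines):
    """Return the 1-based line number where a double quote opens and is never
    closed by the end of the input, or None if the quotes balance."""
    in_quote = False
    opened_at = None
    for lineno, line in enumerate(lines, 1):
        for ch in line:
            if ch == '"':
                in_quote = not in_quote
                opened_at = lineno if in_quote else None
    return opened_at if in_quote else None

# The lone quote on line 2 of Simon's file never closes.
sample = ['!\t32874624', '"\t239874242', '#\t98235252']
bad_line = find_unclosed_quote(sample)  # 2
```

Such a check reports the line where the offending quote was opened, rather than the line where the parser finally gave up.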





Re: [sqlite] Database corrupted 28 billion bytes

2015-01-16 Thread MayW
sqlite3_analyzer.exe ended with 16 INSERT statements showing in my Windows 7 
CMD console screen: 
They look similar to this one:
INSERT INTO space_used 
VALUES('Deductions','Deductions',0,0,0,0,0,0,16,0,0,1024);

I remember helping someone try to get a UNION statement correct a few months 
ago.
These were his statements, except I don't remember anything about 
INSERT INTO space_used?










-
May
--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Database-corrupted-28-billion-bytes-tp80048p80067.html
Sent from the SQLite mailing list archive at Nabble.com.


Re: [sqlite] Database corrupted 28 billion bytes

2015-01-16 Thread MayW
The last 2 statements were:
insert into space_used 
values(‘MyTable,”MyTable,0,728251738,704277138,588,0,0,26,285986,23974601,0,34710955,683321040,0,12,24842841088);
COMMIT;



-
May
--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Database-corrupted-28-billion-bytes-tp80048p80068.html
Sent from the SQLite mailing list archive at Nabble.com.


Re: [sqlite] Database corrupted 28 billion bytes

2015-01-16 Thread MayW
>What is the output from the following: 
> PRAGMA page_size; 
> PRAGMA journal_mode; 
> PRAGMA freelist_count; 
> PRAGMA page_count; 

1024
delete  
0   
27606264
 
The sqlite3_analyzer.exe is running.
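Those two pragma values already account for the reported file size: an SQLite database file is page_size × page_count bytes (not counting any journal or WAL file). A quick arithmetic cross-check of the figures above:

```python
# Cross-check: database file size = page_size * page_count.
page_size = 1024        # PRAGMA page_size
page_count = 27606264   # PRAGMA page_count
freelist_count = 0      # PRAGMA freelist_count

db_bytes = page_size * page_count
print(db_bytes)  # 28268814336 -- exactly the reported 28,268,814,336 bytes

# No pages on the freelist, so there is nothing for VACUUM
# to reclaim directly; the pages are genuinely in use.
print(freelist_count * page_size)  # 0 bytes on the freelist
```

So the file is not bloated with free pages: something really did write ~27.6 million pages of data.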



-
May
--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Database-corrupted-28-billion-bytes-tp80048p80062.html
Sent from the SQLite mailing list archive at Nabble.com.


Re: [sqlite] Database corrupted 28 billion bytes

2015-01-16 Thread MayW
Found table that was huge, it was named MyTable.
It was created with:
Create Table MyTable(comment);

select max(rowid),* from MyTable;
80002   "This","is"," 4"

Drop table Mytable;
Vacuum.
It went down to a little over 3,000,000 bytes.

I'm going to include a filesize routine within my programs to 
test the .DB3 to see if it grows by some great amount in a day.
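A minimal sketch of such a size check (the function name, path handling, and growth threshold here are hypothetical choices, not from the post):

```python
import os

def db_grew_too_much(path, last_size, max_growth=50_000_000):
    """Compare the database file's current size against a previously
    recorded size; return (current_size, alarm)."""
    size = os.path.getsize(path)
    return size, (size - last_size) > max_growth
```

Typical use would be to persist `size` once a day and raise an alarm on abnormal growth, which would have caught a runaway test program long before 28 GB.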

What caused it to become huge, maybe a run-away test program?

I made a backup so will run sqlite3_analyzer.exe, again.
Not sure what I'll be looking for.

Thank you!




-
May
--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Database-corrupted-28-billion-bytes-tp80048p80061.html
Sent from the SQLite mailing list archive at Nabble.com.


Re: [sqlite] Huge WAL log

2015-01-16 Thread Jan Slodicka
Simon Slavin-3 wrote
>> Thanks to your post I discovered multiple-row inserts so that I now
>> understand what you asked.
> 
> Just a note that multiple-row inserts were added to SQLite relatively
> recently (2012-03-20 (3.7.11)) and, because SQLite does only
> database-level locking, its overhead for INSERTs is far less than that of
> SQL Server.  It might be faster to use them but I would not expect it to be
> so much faster than many inserts as part of one transaction.

I made a fast, perhaps oversimplified, TestA:

Create an empty database. (file based)
CREATE TABLE Test (city NVARCHAR(120) NULL COLLATE NOCASE)
BEGIN
Then I inserted N records using commands such as
INSERT INTO Test VALUES('_random_'). (_random_ was a random 8 character long
string.)
COMMIT
Measure WAL size
Close the DB
Measure DB size

TestB was performed with the same data except that the records were grouped:
INSERT INTO Test VALUES('_random_'), ('_random1'), ...

I tested different groupings (1,2,5,10) and different N values (10 -
250).

Results:
- The more records are grouped, the faster.
- Grouping of 10 records was more than 2x faster than no grouping at all.
- WAL size did not depend on the grouping used and was just slightly larger
than the DB size.
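The grouping effect is easy to reproduce from any binding. A sketch using Python's built-in sqlite3 module (the original tests were in C#, so absolute timings will differ; the structure is the same):

```python
import sqlite3
import time

def insert_grouped(db, rows, group):
    """Insert all rows in one transaction, `group` rows per INSERT statement."""
    cur = db.cursor()
    cur.execute("BEGIN")
    for i in range(0, len(rows), group):
        chunk = rows[i:i + group]
        # Multi-row VALUES needs SQLite >= 3.7.11, as noted in the thread.
        cur.execute("INSERT INTO Test VALUES " + ",".join("(?)" for _ in chunk),
                    chunk)
    cur.execute("COMMIT")

db = sqlite3.connect(":memory:", isolation_level=None)  # explicit transactions
db.execute("CREATE TABLE Test (city TEXT COLLATE NOCASE)")
rows = ["c%07d" % i for i in range(20000)]

for group in (1, 10):
    db.execute("DELETE FROM Test")
    t0 = time.perf_counter()
    insert_grouped(db, rows, group)
    print("group=%2d: %.3fs" % (group, time.perf_counter() - t0))
```

The speedup comes mostly from fewer statement round-trips, not from SQLite doing less page work, which is consistent with the WAL size being unaffected by grouping.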

Then I modified the test by adding an index on the single column. I ran 2
sets of tests - one where the index was created before the first insert and
one with the index created after all inserts finished.

Results:
- Active index: WAL size ~ DB size
- Inactive index: WAL size ~ 50% of the DB size
- Tests with an active index were slower by 15-20%

Conclusion:
Switching off the indexing during a bulk insert brings minor advantages.
Multi-row inserts may bring larger advantages.
The reason for the "huge WAL problem" remains unclear.

Note:
The tests were programmed in C# code that called the native SQLite.dll and were
run on a Windows 7 desktop. The results may not apply to other environments.





--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Huge-WAL-log-tp79991p80070.html
Sent from the SQLite mailing list archive at Nabble.com.


[sqlite] Shell tool improvement request

2015-01-16 Thread Simon Slavin
(following description simplified)

I have a text file I wanted to .import into a table.  The text file has two 
columns separated by a tab: a word and a number.  It starts off like this:

!   32874624
"   239874242
#   98235252
$   438743824
%   324872489
&   39854724
a   23498725
i   1298371287
as  23194874
at  3598725

and continues for another 13588392 (sic.) rows.

I use the SQLite shell tool.  I create an appropriate table in my database, set 
.separator to "\t" and use .import.

The .import failed because the second line of the file contains a quote 
character.  I'm okay with the fact that it fails: it's documented, or nearly 
documented.

However, the failure message is this:

Error: mytextfile.txt line 13588392: expected 2 columns of data but found 1

Naturally I spent some time looking near the end of the file to figure out what 
was wrong, when the actual error was in line 2 and should have read something 
like "still inside quote when file ends".

It would be useful if this was explicitly checked for when .import reached the 
end of the file.  An alternative would be for the .import command to include 
text telling you how many records were successfully understood before the error 
occurred.  For my error this would have generated a message like the following:

Error: mytextfile.txt line 13588392: after successfully reading 1 record, 
expected 2 columns of data but found 1

This would also have told me where to look for the problem in the file.
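Until the shell gains such a message, the condition is cheap to pre-check before running .import. A simplified sketch (it toggles on every quote character rather than tracking full field context, which is enough to match the failure mode described above):

```python
def unterminated_quote_line(path, quote='"'):
    """Return the 1-based line number where a quoted field is opened but
    never closed by end of file, or None if quoting is balanced."""
    in_quote = False
    opened_at = None
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, 1):
            for ch in line:
                if ch == quote:
                    if not in_quote:
                        in_quote, opened_at = True, lineno
                    else:
                        in_quote = False
    return opened_at if in_quote else None
```

Run against a file like the one described, this reports line 2, i.e. "still inside quote when file ends".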

Simon.


Re: [sqlite] LIKE vs GLOB vs MATCH vs REGEXP

2015-01-16 Thread Simon Slavin

On 16 Jan 2015, at 12:23pm, Jay Kreibich  wrote:

> They can all be (re)defined, some just happen to have default functions:
> 
> https://www.sqlite.org/lang_expr.html#like

Might be worth noting here that there can be a danger in replacing the 
definitions of default functions.  These functions might be used in your schema 
(e.g. for CHECK constraints) and may be overridden in your program before it 
handles data.  Someone could then open the same database in another program 
(e.g. the Shell Tool) which instead had the default functions, and use that to 
add data.

If instead you define custom functions and use those in your schema, then if 
someone opens your database in another program it will have no definition for 
the function, and will produce an error.
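A sketch of this using Python's sqlite3 bindings (the function name `is_valid_code` is illustrative): the custom function referenced by the schema must be registered on every connection, and a connection that lacks it fails loudly instead of silently applying different rules:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

db = sqlite3.connect(path)
db.create_function("is_valid_code", 1, lambda s: int(str(s).isupper()))
db.execute("CREATE TABLE t (code TEXT CHECK (is_valid_code(code)))")
db.execute("INSERT INTO t VALUES ('ABC')")   # CHECK passes
db.commit()
db.close()

# A second connection (think: the shell tool) that does not register
# the function cannot insert -- it errors instead of bypassing the rule.
other = sqlite3.connect(path)
try:
    other.execute("INSERT INTO t VALUES ('xyz')")
except sqlite3.OperationalError as e:
    print("refused:", e)   # no such function: is_valid_code
```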

I'm not saying that overriding functions is always bad, just reminding people 
of the danger.

Simon.


Re: [sqlite] Huge WAL log

2015-01-16 Thread Jan Slodicka
RSmith wrote
>>
>> The code schema is as follows:
>>
>> foreach table
>> {
>>  BEGIN
>>  INSERT INTO table VALUES()
>>  INSERT INTO table VALUES()
>>  ...
>>  COMMIT
>> }
>>
>> Large column values are supplied as parameters, the rest (vast majority)
>> is
>> passed through SQL command.
> 
> When you say "passed through SQL command", you mean using
> sqlite3_execute()?
> 
> If so, why not bind them all? That will be significantly faster...

Not so in the C# wrapper, where the overhead of binding is considerable.




--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Huge-WAL-log-tp79991p80065.html
Sent from the SQLite mailing list archive at Nabble.com.


Re: [sqlite] Database corrupted 28 billion bytes

2015-01-16 Thread Simon Slavin

On 16 Jan 2015, at 11:18am, MikeD  wrote:

> Just terminated sqlite3_analyzer.

You can let it run.  Overnight if need be.  Its CPU usage will never increase 
much past what you've already seen and memory usage shouldn't be excessive.

Simon.


Re: [sqlite] Huge WAL log

2015-01-16 Thread Simon Slavin

On 16 Jan 2015, at 12:39pm, Jan Slodicka  wrote:

> Thanks to your post I discovered multiple-row inserts so that I now
> understand what you asked.

Just a note that multiple-row inserts were added to SQLite relatively recently 
(2012-03-20 (3.7.11)) and, because SQLite does only database-level locking, its 
overhead for INSERTs is far less than that of SQL Server.  It might be faster 
to use them but I would not expect it to be so much faster than many inserts as 
part of one transaction.

Simon.


Re: [sqlite] LIKE vs GLOB vs MATCH vs REGEXP

2015-01-16 Thread Jay Kreibich

On Jan 16, 2015, at 6:56 AM, Richard Hipp  wrote:

> On 1/16/15, Scott Robison  wrote:
>> LIKE & GLOB can be overridden with user defined functions. According to
>> https://www.sqlite.org/lang_corefunc.html LIKE can be a 2 or 3 argument
>> function, GLOB can be a 2 argument function, and neither MATCH nor REGEXP
>> can be redefined.
> 
> Where did you read that MATCH and REGEXP could not be redefined?  That
> is a bug in the documentation that needs to be fixed.  They are
> undefined by default and are given meaning by redefining.  They would
> be useless if they were not redefinable.
> 


I think the OP interpreted the lack of a match() or regexp() function on the 
lang_corefunc.html page as meaning they could not be defined/redefined.

 -j


--  
Jay A. Kreibich < J A Y @ K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it, but showing 
it to the wrong people has the tendency to make them feel uncomfortable." -- 
Angela Johnson







[sqlite] stderr inconsistently flushed across platforms

2015-01-16 Thread Andrea Giammarchi
Coming from a long conversation with tests in place too: it looks like the
SQLite CLI flushes errors right away on Linux, but buffers them for an
undefined amount on OS X or Windows; on the latter nothing is shown until
`.quit` is called [1].

The only way to get consistent behavior seems to be the `-bail` option,
but then if the CLI is driven it needs to be re-spawned on each error,
creating quite possibly huge overhead for no concrete reason/gain.

Is it in the interest of the SQLite developers to make the CLI error
buffer/output flush behavior consistent across platforms and act as promptly
as on Linux?

Working without even knowing errors have been thrown is not really ideal.

Thanks for any sort of answer/outcome.

Best Regards


[1] https://github.com/WebReflection/dblite/issues/22#issuecomment-30319260


Re: [sqlite] LIKE vs GLOB vs MATCH vs REGEXP

2015-01-16 Thread Scott Robison
On Fri, Jan 16, 2015 at 5:56 AM, Richard Hipp  wrote:

> On 1/16/15, Scott Robison  wrote:
> > LIKE & GLOB can be overridden with user defined functions. According to
> > https://www.sqlite.org/lang_corefunc.html LIKE can be a 2 or 3 argument
> > function, GLOB can be a 2 argument function, and neither MATCH nor REGEXP
> > can be redefined.
>
> Where did you read that MATCH and REGEXP could not be redefined?  That
> is a bug in the documentation that needs to be fixed.  They are
> undefined by default and are given meaning by redefining.  They would
> be useless if they were not redefinable.
>

I was browsing the list of functions at
https://www.sqlite.org/lang_corefunc.html and inferred from the lack of
REGEXP or MATCH functions (since they are effectively undefined) that their
presence would not impact the REGEXP or MATCH operators.

In like fashion, the GLOB function only documents a two argument form, not
a three argument form.

-- 
Scott Robison


Re: [sqlite] LIKE vs GLOB vs MATCH vs REGEXP

2015-01-16 Thread Richard Hipp
On 1/16/15, Scott Robison  wrote:
> So the only remaining question is whether there is any functional
> difference between the LIKE & GLOB SQL functions and the same named
> operators (other than argument order)? Is there a reason to prefer one or
> the other in SQL syntax?

They are aliases for one another.  Syntactic sugar.  You can see this
by using EXPLAIN:

   .explain
   EXPLAIN SELECT * FROM sqlite_master WHERE name LIKE 'abc%';
   EXPLAIN SELECT * FROM sqlite_master WHERE like('abc%',name);

Both generate identical bytecode.
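The equivalence is also easy to confirm from any binding; a small sketch (note the function form takes the pattern first, as documented):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (name TEXT)")
db.executemany("INSERT INTO t VALUES (?)", [("abcdef",), ("xyz",), ("abc",)])

# Operator form and function form return identical results.
op   = db.execute("SELECT name FROM t WHERE name LIKE 'abc%'").fetchall()
func = db.execute("SELECT name FROM t WHERE like('abc%', name)").fetchall()
assert op == func == [("abcdef",), ("abc",)]
```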

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] LIKE vs GLOB vs MATCH vs REGEXP

2015-01-16 Thread Richard Hipp
On 1/16/15, Scott Robison  wrote:
> LIKE & GLOB can be overridden with user defined functions. According to
> https://www.sqlite.org/lang_corefunc.html LIKE can be a 2 or 3 argument
> function, GLOB can be a 2 argument function, and neither MATCH nor REGEXP
> can be redefined.

Where did you read that MATCH and REGEXP could not be redefined?  That
is a bug in the documentation that needs to be fixed.  They are
undefined by default and are given meaning by redefining.  They would
be useless if they were not redefinable.

>
> MATCH is only used in FTS queries if my understanding is correct, so I can
> appreciate why MATCH can't be redefined given the close integration between
> the keyword and a specific virtual table module.
>
> That leaves LIKE & GLOB & REGEXP. Why the differences in redefinition
> capabilities? Why can't an ESCAPE clause form of GLOB be redefined? Why
> can't any form of REGEXP be redefined? Or are some or all of these
> "exceptions" possible but the documentation doesn't cover them?
>
> I have no desire to redefine anything myself, this is primarily an exercise
> in understanding why the design choices were made, and I am wondering if
> there is ever a reason in SQL code to choose the function form of the LIKE
> (or other) operator over the operator itself. Is it intended that the
> operator form of the expression will always behave exactly like the
> function form of the expression (with the appropriate reordering of
> arguments)?
>
> --
> Scott Robison
>


-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Database corrupted 28 billion bytes

2015-01-16 Thread Richard Hipp
On 1/16/15, MikeD  wrote:
> I have a database that has become 28,268,814,336 bytes so
> downloaded the sqlite3_analyzer and it has been running for over
> 15-minutes.
>
> Task manager shows sqlite3_analyzer.exe using 13% and the memory stays
> steady at 23,768K.
> 19 handles, 1 thread(s).
>
> The database was a test database that has been used for some time.
> I can't say when the file became huge.  Vacuum was occasionally run.
>
> Just terminated sqlite3_analyzer.

That's too bad, because if you had let sqlite3_analyzer run to
completion, it would have output some very useful information about
the state of the database file.


> Vacuum memory usage is between 11 and 13 percent using 24,564K.
> I'll let vacuum continue to run.  It has been running 15-minutes.
>
> The database is still working.  What should I be doing?
>
>
>
>
>
> --
> View this message in context:
> http://sqlite.1065341.n5.nabble.com/Database-corrupted-28-billion-bytes-tp80048.html
> Sent from the SQLite mailing list archive at Nabble.com.
>


-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Database corrupted 28 billion bytes

2015-01-16 Thread Richard Hipp
On 1/16/15, MikeD  wrote:
> I have a database that has become 28,268,814,336 bytes so
> downloaded the sqlite3_analyzer and it has been running for over
> 15-minutes.
>
> Task manager shows sqlite3_analyzer.exe using 13% and the memory stays
> steady at 23,768K.
> 19 handles, 1 thread(s).
>
> The database was a test database that has been used for some time.
> I can't say when the file became huge.  Vacuum was occasionally run.
>
> Just terminated sqlite3_analyzer.
> Vacuum memory usage is between 11 and 13 percent using 24,564K.
> I'll let vacuum continue to run.  It has been running 15-minutes.
>
> The database is still working.  What should I be doing?

What is the output from the following:

 PRAGMA page_size;
 PRAGMA journal_mode;
 PRAGMA freelist_count;
 PRAGMA page_count;

>
>
>
>
>
> --
> View this message in context:
> http://sqlite.1065341.n5.nabble.com/Database-corrupted-28-billion-bytes-tp80048.html
> Sent from the SQLite mailing list archive at Nabble.com.
>


-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Database corrupted 28 billion bytes

2015-01-16 Thread Stephan Beal
On Fri, Jan 16, 2015 at 12:18 PM, MikeD  wrote:

> I have a database that has become 28,268,814,336 bytes so
> downloaded the sqlite3_analyzer and it has been running for over
> 15-minutes.
> ...
> The database is still working.  What should I be doing?
>

How about continue to use it? Why do you feel that something is broken just
because it's big?


-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal
"Freedom is sloppy. But since tyranny's the only guaranteed byproduct of
those who insist on a perfect world, freedom will have to do." -- Bigby Wolf


Re: [sqlite] Huge WAL log

2015-01-16 Thread Jan Slodicka
Paul Sanderson wrote
> Unlike a rollback journal a WAL file can have multiple copies of the same
> page.
> 
> So from yor main loop, expanding the following code may help us
> understand.
> 
> "insert all downloaded rows"
> 
> If your inserted records is 5million separate insertions then each
> insertion could result in a table leaf page being written to the WAL
> file. Each insertion could potentially also result in one (or more)
> table interior pages being written to the WAL. Subsequent insertions
> could result in multiple copies of the same page.

Maybe you got me onto the right track...

I'll start with the record size estimation:

CREATE TABLE [discounttype]
(
[createdon] DATETIME NULL,
[description] NTEXT NULL,
[discounttypeid] UNIQUEIDENTIFIER NOT NULL
    CONSTRAINT PK_discounttype PRIMARY KEY ROWGUIDCOL DEFAULT (newid()),
[isamounttype] BIT NOT NULL DEFAULT (0),
[modifiedon] DATETIME NULL,
[name] NVARCHAR(200) NULL COLLATE NOCASE
);

The record will be rather small. It consists of:
2 x DATETIME - usually 38 bytes
UNIQUEIDENTIFIER - 32 bytes
+ 2 rather short strings

Altogether I wouldn't be surprised if the page size (1024) is about 7x
larger than the average record length. This (together with your explanation)
would explain the extreme WAL size.
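Paul's point, that every touched page is appended to the WAL per transaction, so many small commits of small records inflate the log, can be reproduced in miniature (sizes will vary with platform, page size, and SQLite version; this is a sketch, not the original workload):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "waldemo.db")
db = sqlite3.connect(path, isolation_level=None)
db.execute("PRAGMA journal_mode=WAL")
db.execute("PRAGMA wal_autocheckpoint=0")   # let the WAL grow, as in one big batch
db.execute("CREATE TABLE t (v TEXT)")

for i in range(1000):                       # 1000 tiny rows, one commit each
    db.execute("BEGIN")
    db.execute("INSERT INTO t VALUES (?)", ("x%05d" % i,))
    db.execute("COMMIT")

# Each commit appends at least one full page frame to the WAL, so the log
# ends up vastly larger than the few KB of actual row data.
print("WAL bytes:", os.path.getsize(path + "-wal"))
```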

---

Thanks to your post I discovered multiple-row inserts so that I now
understand what you asked.

I found  this nice article

  
that (although for SQL Server) proves that multi-row inserts can greatly speed
up bulk inserts. I am quite confident that most of the arguments brought up by
that article will apply also to our case (SQLite used from C#).

Will have to test it thoroughly and (if successful) modify our application
code.





--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Huge-WAL-log-tp79991p80052.html
Sent from the SQLite mailing list archive at Nabble.com.


Re: [sqlite] LIKE vs GLOB vs MATCH vs REGEXP

2015-01-16 Thread Scott Robison
On Fri, Jan 16, 2015 at 5:23 AM, Jay Kreibich  wrote:

>
> On Jan 16, 2015, at 5:06 AM, Scott Robison 
> wrote:
>
> > LIKE & GLOB can be overridden with user defined functions. According to
> > https://www.sqlite.org/lang_corefunc.html LIKE can be a 2 or 3 argument
> > function, GLOB can be a 2 argument function, and neither MATCH nor REGEXP
> > can be redefined.
>
> They can all be (re)defined, some just happen to have default functions:
>
> https://www.sqlite.org/lang_expr.html#like


Thanks for the link. I was so focused on the syntax at the top of the page
and the function lists that I never scrolled down far enough, obviously.

So the only remaining question is whether there is any functional
difference between the LIKE & GLOB SQL functions and the same named
operators (other than argument order)? Is there a reason to prefer one or
the other in SQL syntax?

-- 
Scott Robison


Re: [sqlite] LIKE vs GLOB vs MATCH vs REGEXP

2015-01-16 Thread Jay Kreibich

On Jan 16, 2015, at 5:06 AM, Scott Robison  wrote:

> LIKE & GLOB can be overridden with user defined functions. According to
> https://www.sqlite.org/lang_corefunc.html LIKE can be a 2 or 3 argument
> function, GLOB can be a 2 argument function, and neither MATCH nor REGEXP
> can be redefined.

They can all be (re)defined, some just happen to have default functions:

https://www.sqlite.org/lang_expr.html#like


  -j

--  
Jay A. Kreibich < J A Y @ K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it, but showing 
it to the wrong people has the tendency to make them feel uncomfortable." -- 
Angela Johnson







Re: [sqlite] Huge WAL log

2015-01-16 Thread RSmith


On 2015/01/16 11:33, Jan Slodicka wrote:


The code schema is as follows:

foreach table
{
 BEGIN
 INSERT INTO table VALUES()
 INSERT INTO table VALUES()
 ...
 COMMIT
}

Large column values are supplied as parameters, the rest (vast majority) is
passed through SQL command.


When you say "passed through SQL command", you mean using sqlite3_execute()?

If so, why not bind them all? That will be significantly faster...



[sqlite] Database corrupted 28 billion bytes

2015-01-16 Thread MikeD
I have a database that has become 28,268,814,336 bytes so 
downloaded the sqlite3_analyzer and it has been running for over 15-minutes.

Task manager shows sqlite3_analyzer.exe using 13% and the memory stays
steady at 23,768K.
19 handles, 1 thread(s).

The database was a test database that has been used for some time.
I can't say when the file became huge.  Vacuum was occasionally run.

Just terminated sqlite3_analyzer.
Vacuum memory usage is between 11 and 13 percent using 24,564K.
I'll let vacuum continue to run.  It has been running 15-minutes.

The database is still working.  What should I be doing?





--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Database-corrupted-28-billion-bytes-tp80048.html
Sent from the SQLite mailing list archive at Nabble.com.


Re: [sqlite] Inserting/removing large number of rows with index

2015-01-16 Thread Scott Robison
On Fri, Jan 16, 2015 at 3:47 AM, Hick Gunter  wrote:

> It depends on how you define "update the index".
>
> If you mean "write to disk" then this happens "once, at the end of the
> transaction" (the exact process differs depending on the journal mode).
>
> If you mean "change the index structure in memory" then (as already
> noted) the changes will happen for each record touched.
>
> For large batch updates, it may be faster to drop the index before
> performing the changes and recreate it afterwards.
>

Unless you change "a lot" of pages, in which case SQLite will start
spilling pages from memory to disk before the end of the transaction, then
write the final form of everything at the end of the transaction.


[sqlite] LIKE vs GLOB vs MATCH vs REGEXP

2015-01-16 Thread Scott Robison
LIKE & GLOB can be overridden with user defined functions. According to
https://www.sqlite.org/lang_corefunc.html LIKE can be a 2 or 3 argument
function, GLOB can be a 2 argument function, and neither MATCH nor REGEXP
can be redefined.

MATCH is only used in FTS queries if my understanding is correct, so I can
appreciate why MATCH can't be redefined given the close integration between
the keyword and a specific virtual table module.

That leaves LIKE & GLOB & REGEXP. Why the differences in redefinition
capabilities? Why can't an ESCAPE clause form of GLOB be redefined? Why
can't any form of REGEXP be redefined? Or are some or all of these
"exceptions" possible but the documentation doesn't cover them?

I have no desire to redefine anything myself, this is primarily an exercise
in understanding why the design choices were made, and I am wondering if
there is ever a reason in SQL code to choose the function form of the LIKE
(or other) operator over the operator itself. Is it intended that the
operator form of the expression will always behave exactly like the
function form of the expression (with the appropriate reordering of
arguments)?

-- 
Scott Robison


Re: [sqlite] Inserting/removing large number of rows with index

2015-01-16 Thread Hick Gunter
It depends on how you define "update the index".

If you mean "write to disk" then this happens "once, at the end of the 
transaction" (the exact process differs depending on the journal mode).

If you mean "change the index structure in memory" then (as already noted) the 
changes will happen for each record touched.

For large batch updates, it may be faster to drop the index before performing 
the changes and recreate it afterwards.
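The drop-and-recreate pattern, sketched in Python's sqlite3 module (table and index names are placeholders):

```python
import sqlite3

def bulk_insert_without_index(db, rows):
    """Drop the index, bulk-insert in one transaction, rebuild the index once."""
    cur = db.cursor()
    cur.execute("DROP INDEX IF EXISTS idx_t_v")
    cur.execute("BEGIN")
    cur.executemany("INSERT INTO t VALUES (?)", rows)
    cur.execute("COMMIT")
    cur.execute("CREATE INDEX idx_t_v ON t(v)")   # one sorted bulk build

db = sqlite3.connect(":memory:", isolation_level=None)
db.execute("CREATE TABLE t (v TEXT)")
db.execute("CREATE INDEX idx_t_v ON t(v)")
bulk_insert_without_index(db, [("r%06d" % i,) for i in range(5000)])
```

Whether this wins depends on table size and how random the inserted key order is; for already-sorted keys the per-row maintenance is cheap and dropping the index may not pay off.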

-----Original Message-----
From: Baruch Burstein [mailto:bmburst...@gmail.com]
Sent: Thursday, 15 January 2015 22:19
To: General Discussion of SQLite Database
Subject: [sqlite] Inserting/removing large number of rows with index

Hi,

If I have a table with an index, and INSERT or DELETE a large number of rows in 
one statement, does sqlite stop to update the index for each record, or is it 
smart enough to update the index just once for all the changed records?

--
˙uʍop-ǝpısdn sı ɹoʇıuoɯ ɹnoʎ 'sıɥʇ pɐǝɹ uɐɔ noʎ ɟı 


___
 Gunter Hick
Software Engineer
Scientific Games International GmbH
FN 157284 a, HG Wien
Klitschgasse 2-4, A-1130 Vienna, Austria
Tel: +43 1 80100 0
E-Mail: h...@scigames.at

This communication (including any attachments) is intended for the use of the 
intended recipient(s) only and may contain information that is confidential, 
privileged or legally protected. Any unauthorized use or dissemination of this 
communication is strictly prohibited. If you have received this communication 
in error, please immediately notify the sender by return e-mail message and 
delete all copies of the original communication. Thank you for your cooperation.




Re: [sqlite] Huge WAL log

2015-01-16 Thread Jan Slodicka
Paul Sanderson wrote
> So from yor main loop, expanding the following code may help us
> understand.
> 
> "insert all downloaded rows"


The code schema is as follows:

foreach table
{
BEGIN
INSERT INTO table VALUES()
INSERT INTO table VALUES()
...
COMMIT
}

Large column values are supplied as parameters, the rest (vast majority) is
passed through SQL command.


Paul Sanderson wrote
> If your inserted records is 5million separate insertions

What alternative would you suggest?


Paul Sanderson wrote
> What is the schema for this table and what indexes are on it?

This is the first table where I noticed the problem:

CREATE TABLE [discounttype]
(
[createdon] DATETIME NULL,
[description] NTEXT NULL,
[discounttypeid] UNIQUEIDENTIFIER NOT NULL
    CONSTRAINT PK_discounttype PRIMARY KEY ROWGUIDCOL DEFAULT (newid()),
[isamounttype] BIT NOT NULL DEFAULT (0),
[modifiedon] DATETIME NULL,
[name] NVARCHAR(200) NULL COLLATE NOCASE
);
CREATE INDEX [FK_discounttype_name] ON [discounttype](name COLLATE NOCASE);

We use a custom NOCASE collation.
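For reference, a custom collation like that has to be registered on every connection before the schema that names it can be used. A sketch in Python's bindings (the Unicode-aware folding here is an illustrative choice, not necessarily what the original application does):

```python
import sqlite3

def nocase_unicode(a, b):
    # Unicode-aware folding; SQLite's built-in NOCASE only folds ASCII A-Z.
    a, b = a.casefold(), b.casefold()
    return (a > b) - (a < b)

db = sqlite3.connect(":memory:")
db.create_collation("NOCASE_U", nocase_unicode)
db.execute("CREATE TABLE discounttype (name TEXT COLLATE NOCASE_U)")
db.execute("CREATE INDEX FK_discounttype_name "
           "ON discounttype(name COLLATE NOCASE_U)")
db.executemany("INSERT INTO discounttype VALUES (?)",
               [("Straße",), ("STRASSE",)])
n = db.execute("SELECT COUNT(*) FROM discounttype "
               "WHERE name = 'strasse'").fetchone()[0]
print(n)  # 2 -- both rows match under the custom collation
```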

Also, we use an encrypted database. This slows down normal DB operations by a
few percent.




--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Huge-WAL-log-tp79991p80044.html
Sent from the SQLite mailing list archive at Nabble.com.


Re: [sqlite] Huge WAL log

2015-01-16 Thread Jan Slodicka
Simon Slavin-3 wrote
> However, other information in your message suggests that you have a
> resource leak of some type somewhere.  Especially, it should not take 12
> minutes to insert 3.5M rows into a simple table with an index or two
> unless really long strings or blobs are involved.
> 
> Unfortunately, I'm only really familiar with the C and PHP interfaces to
> SQLite.  But in both of those you can check the result code of each API
> call to make sure it is SQLITE_OK.  Are you able to do this with whatever
> interface you're using ?

We use a C# API originally inspired by the system.data.sqlite library. Every
single call is checked.
Some of the numbers I reported were obtained with a customized version of the
sqlite3 shell.

These 12 min (730 secs) refer to the total time of 35 individual commits. It
is the result of processing 7 GB of data. On the other hand, the processor
load was very low all the time, indicating that the disk might be a bit slow.
(Although I did not observe any other slowdown...)


Simon Slavin-3 wrote
>> DB size increased by roughly 17-18K after each commit. This suggests that
>> WAL needs 10x more memory than the DB itself.
> 
> Very variable.  Depends on whether the changes in one transaction change
> many different pages or change fewer different pages multiple times.  At
> least, I think so.

Sure, it is variable. But my goal when I opened this discussion was to
discuss the worst possible case. That is what generates the user complaints;
normal cases where everything runs smoothly are of no interest.

I consider this a very important piece of information that should be present
in the official WAL documentation.





--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Huge-WAL-log-tp79991p80043.html
Sent from the SQLite mailing list archive at Nabble.com.