Re: [sqlite] comma-separated string data

2014-04-07 Thread mm.w
Hello,

Thus, good, incident closed; we've seen worse. I guess the misunderstanding
was triggered by not following up well on the lipstick remark 8-p

Best.


On Mon, Apr 7, 2014 at 4:27 PM, RSmith  wrote:

> On 2014/04/08 01:02, David Simmons wrote:
>
>> Why are these people allowed to use this discussion board?
>>
>> Using SQLite in a critical corporate application, I find that, just as the
>> material provided says, it handles terabyte databases with remarkable
>> performance.  SQLite does not carry the cost associated with a product like
>> Oracle, and it does not require a full-time DBA to keep things running at
>> mediocre speeds like Oracle or MS SQL Server.  Grow up, read the material
>> supplied, and IMPROVE or GAIN programming skills before becoming a critic.
>>
>>
>
> Let me be the first to apologise for whatever has offended you. Sometimes
> like-minded individuals in groups might share a common view and sometimes a
> jibe or two arise from it. There was no bad intent, but probably this is
> not the place for silly remarks and some restraint might go a long way, etc.
>
> I am however rather perplexed by the rest of your note; maybe you are
> confused, did not follow the discussion, or misunderstood the content?
> SQLite was never under fire, quite the contrary: an actual problem was
> solved right in this thread via the virtues of SQLite. There were some
> remarks about the illogical paradigm some developers favour of painting
> over sad DB designs rather than fixing them, which was highlighted with a
> silly analogy or two, but in no way meant to offend anyone or in any way
> pertaining to the utility of SQLite itself. Nobody was a critic with
> regards to SQLite.
>
> If I have misunderstood you, feel free to correct me please, but most of
> all, please have a lovely day.
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Is there a way to load a blob from the shell?

2014-04-07 Thread Petite Abeille

On Apr 8, 2014, at 1:46 AM, Andreas Kupries  wrote:

> Most generally, a website to show off any kind of contribution to
> sqlite, be it custom function, virtual table, virtual filesystem,
> schemata, other extensions, … ?

A bit obsolete, but:

http://www.sqlite.org/contrib

Perhaps github could be of interest as well:

https://github.com/search?q=sqlite=cmdform

For example:

https://github.com/sqlcipher/sqlcipher

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] comma-separated string data

2014-04-07 Thread Petite Abeille

On Apr 8, 2014, at 1:02 AM, David Simmons  wrote:

> Why are these people allowed to use this discussion board?  

Hmmm? What we've got here is failure to communicate perhaps.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Is there a way to load a blob from the shell?

2014-04-07 Thread Richard Hipp
On Mon, Apr 7, 2014 at 6:56 PM, Keith Christian
wrote:

>
> However, on production *nix machines, the path to the SQLite 'sar'
> will probably have to be absolute, or else the native 'sar' (System
> Activity Reporter) will run instead.
>

Huh.  Never heard of it.  It is not installed on my Ubuntu desktop.

Realize that my whole purpose in writing "sar" was to demonstrate that
SQLite could serve at least as well as an application's "save-file format"
as does a ZIP archive.  Note that ZIP is used as the file format for ePUB
and ODT.

Additional information:  http://www.sqlite.org/appfileformat.html
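
For anyone trying the sar-based route described later in this thread, here is a
minimal sketch of the transfer step, assuming the archive was written to
archive.db and that the sar utility stores each member's name and content in
columns called name and data of a table called sar (those names are assumptions,
so check the actual schema the utility creates first):

sqlite> ATTACH 'archive.db' AS ar;
sqlite> INSERT INTO sometable(x)
   ...>   SELECT data FROM ar.sar WHERE name = 'data/myblob.gif';
sqlite> DETACH ar;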


-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] comma-separated string data

2014-04-07 Thread David Simmons
Why are these people allowed to use this discussion board?  

Using SQLite in a critical corporate application, I find that, just as the
material provided says, it handles terabyte databases with remarkable
performance.  SQLite does not carry the cost associated with a product like
Oracle, and it does not require a full-time DBA to keep things running at
mediocre speeds like Oracle or MS SQL Server.  Grow up, read the material
supplied, and IMPROVE or GAIN programming skills before becoming a critic.


On Apr 7, 2014, at 3:39 PM, mm.w <0xcafef...@gmail.com> wrote:

> "But if the Customer can't tell the difference, does that make you a good
> pimp?"
> 
> Hello,
> 
> you just don't get it then you don't get it, that's it.
> 
> Best Regards
> 
> 
> 
> On Mon, Apr 7, 2014 at 12:09 PM, RSmith  wrote:
> 
>> 
>> On 2014/04/07 20:57, Petite Abeille wrote:
>> 
>>> Lipstick Driven Design: “You can put lipstick on a pig, but it’s still a
>>> pig.”
>>> 
>> 
>> But if the Customer can't tell the difference, does that make you a good
>> pimp?
>> 
>> 
>> 
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Is there a way to load a blob from the shell?

2014-04-07 Thread Keith Christian
Hi Richard,

'SAR' looks like a great utility!

However, on production *nix machines, the path to the SQLite 'sar'
will probably have to be absolute, or else the native 'sar' (System
Activity Reporter) will run instead.  Perhaps add a caveat to the docs
for SQLite's 'sar'?


Keith

On Sun, Apr 6, 2014 at 7:43 PM, Richard Hipp  wrote:
> On Sun, Apr 6, 2014 at 4:01 PM,  wrote:
>
>> I haven't figured out how to load a blob (e.g., image) from the shell.  I
>> would think there should be something like this but can't find anything:
>>
>> insert into table values(file('path_to_binary_file'));
>>
>> Are blobs only loadable by using SQLite from C?
>>
>> Any ideas?
>>
>
> At http://www.sqlite.org/sar there is a utility program that generates an
> "SQLite Archive", similar to a ZIP archive but using SQLite as the file
> format instead of the ZIP format.  You can use that utility (with the
> just-added "-n" option to disable compression) to load one or more images
> into a database.  Then in the shell, transfer those images out of the "SAR"
> table where the "sar" utility puts them and into the field and table of
> your choice.  Use the reverse procedures to extract the BLOBs.  This is
> more work (more commands) but has the advantage of being able to load many
> thousands of BLOBs all at once, instead of one at a time.  The "sar"
> utility works on unix.  I have made no effort to make it work on Windows,
> but I will accept patches if that is important to you.
>
> Larry Brasfield's extension to shell.c to support ".blobextract" and
> ".blobreplace" commands apparently uses the incremental BLOB I/O interface
> to avoid the need to load entire images into memory.  ("sar" does not do
> this, btw.  It loads each image into memory.)  That is nice, but on a
> modern workstation with many GB of RAM, is it really necessary?  Maybe in
> some obscure cases.  But in the common case of a smaller BLOB (a few
> megabytes) I think custom functions would work better:
>
> INSERT INTO sometable(x) VALUES(fromfile('data/myblob.gif'));
> UPDATE sometable SET x=fromfile('data/myblob.gif') WHERE rowid=123;
> SELECT tofile('data/blob-out.gif', x) FROM sometable WHERE rowid=123;
>
> -- Extract all blobs:
> SELECT tofile('data/blob-out-'||rowid||'.gif', x) FROM sometable;
>
> I'm open to the possibility of adding fromfile() and tofile() as extension
> functions in shell.c.  Maybe tomorrow sometime.
>
> Another idea is to create a virtual table that wraps the filesystem:
>
> CREATE VIRTUAL TABLE temp.fs USING fs;
> INSERT INTO sometable(x) SELECT content FROM fs WHERE
> name='data/myblob.gif';
> UPDATE sometable SET x=(SELECT content FROM fs WHERE
> name='data/myblob.gif')
>  WHERE rowid=123;
> REPLACE INTO fs(name, content) SELECT 'data/blob-out.gif', x FROM
> sometable
>  WHERE rowid=123;
>
> REPLACE INTO fs(name, content) SELECT 'data/blob-out-'||rowid||'.gif', x
> FROM sometable;
>
> The virtual table is potentially a much more powerful abstraction, but as
> you can see from the examples above, it requires a little more work to
> actually use.
>
> --
> D. Richard Hipp
> d...@sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] comma-separated string data

2014-04-07 Thread mm.w
"But if the Customer can't tell the difference, does that make you a good
pimp?"

Hello,

you just don't get it then you don't get it, that's it.

Best Regards



On Mon, Apr 7, 2014 at 12:09 PM, RSmith  wrote:

>
> On 2014/04/07 20:57, Petite Abeille wrote:
>
>> Lipstick Driven Design: “You can put lipstick on a pig, but it’s still a
>> pig.”
>>
>
> But if the Customer can't tell the difference, does that make you a good
> pimp?
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up a bulk import

2014-04-07 Thread Drago, William @ MWG - NARDAEAST
I do 12.3 million inserts with a single commit in approx 1 minute using 
prepared statements and parameters in accordance with the "Getting the best 
performance out of SQLite" section of the SQLite.NET.chm help file, which is 
available here:

http://system.data.sqlite.org/index.html/doc/trunk/Doc/SQLite.NET.chm?mimetype=application/x-chm

Below is my actual code. It's C#, but you should have no trouble doing the 
equivalent in VB.NET.

Good Luck,
-Bill

---START---
using System;
using System.IO;
using System.Data.SQLite;
using System.Diagnostics;

namespace Spurs
{
class Program
{
static void Main(string[] args)
{
/*
 * Spur database has the following structure:
 *
 * CREATE TABLE [spurs] (
 *[FundamentalFreq] FLOAT,
 *[SpurFreq] FLOAT,
 *[SpurdBm] FLOAT);
 *
 *
 * .CSV file is 12.3 million entries like this:
 * 1075.10,2150.449990,-49.192
 * 1075.10,2150.449990,-49.278
 * 1886.90,1887.197613,-54.701
 *
 * */

// Timer stuff
Stopwatch stopWatch = new Stopwatch();
TimeSpan insertTime, commitTime;
const string timePattern = "{0:00}:{1:00}:{2:00}.{3:00}";
string elapsedTime;

// File names
string fileName = "raw_data.csv";   // Input file name.
string dbFileName = "raw_data.db";  // Database file (output file) 
name.

// Input file read-loop variables
StreamReader myReader; // A reader to read the file.
int lineCount = 0; // Total number of lines in 
input file.
int reportInterval = 10;   // Update progress every 
reportInterval lines.
int intervalCount = 0; // Number of lines since last 
update report.
string line = "";  // Holds 1 line from file.
string[] lineCSV = new string[3];  // Holds the values from each 
line.

// Database variables
string connectionString;   // Database connection string.
string commandString;  // Database SQL command string.
connectionString = String.Format("Data Source={0}", dbFileName);
commandString = "INSERT INTO spurs VALUES (?, ?, ?)";

SQLiteConnection connection;   // Active database connection.
SQLiteTransaction transaction; // Active database transaction.
SQLiteCommand SQLCommand;  // Active database command.

SQLiteParameter @FundamentalFreq;  // UUT Test frequency (MHz).
SQLiteParameter @SpurFreq;         // Frequency of largest spur (MHz).
SQLiteParameter @SpurAmp;          // Amplitude of largest spur (dBm).
@FundamentalFreq = new SQLiteParameter();
@SpurFreq = new SQLiteParameter();
@SpurAmp = new SQLiteParameter();

// Process .csv file.
System.Console.WriteLine("Reading file: " + fileName);
using (connection = new SQLiteConnection(connectionString))
{
using (SQLCommand = new SQLiteCommand(connection))
{
SQLCommand.CommandText = commandString;
SQLCommand.Parameters.Add(@FundamentalFreq);
SQLCommand.Parameters.Add(@SpurFreq);
SQLCommand.Parameters.Add(@SpurAmp);
connection.Open();
using (transaction = connection.BeginTransaction())
{
using (myReader = new StreamReader(fileName))
{
stopWatch.Start();
while ((line = myReader.ReadLine()) != null)
{
// Get values from one line in the .csv file.
lineCSV = line.Split(new char[] { ',' });
@FundamentalFreq.Value = double.Parse(lineCSV[0]);
@SpurFreq.Value = double.Parse(lineCSV[1]);
@SpurAmp.Value = double.Parse(lineCSV[2]);

// Insert them into the database.
SQLCommand.ExecuteNonQuery();

// Print progress every reportInterval lines.
lineCount++;
intervalCount++;
if (intervalCount == reportInterval)
{
System.Console.Write("Processing line " + 
lineCount + '\r');
intervalCount = 0;
}
}// End while.
   

Re: [sqlite] Is there a way to load a blob from the shell?

2014-04-07 Thread Larry Brasfield
About my shell extension implementing ".blobextract" and
".blobreplace" commands, Richard Hipp writes:
 "apparently uses the incremental BLOB I/O interface to avoid the need
to load entire images into memory." and "That is nice, but on a modern
workstation with many GB of RAM, is it really necessary?  Maybe in
some obscure cases.  But in the common case of a smaller BLOB (a few
megabytes) I think custom functions would work better:"

Yes, that is much better for BLOBs where memory does not become a
limiting factor.  It has a nice property I think of as
"composability".

I wrote the extension when I was using a cute little TCL virtual
filesystem using SQLite for storage and BLOBs for file content.  My
thinking was that I did not wish to impose an artificial size limit in
the shell, which I consider a useful tool for all kinds of things,
including what I may dream up later.  I dislike having seemingly
reliable, simple tools suddenly pop up as failure points, blocking my
workflow with their limitations.

As I told Simon, it would be nice to combine the composability of the
extension functions with the memory gentleness of SQLite's incremental
BLOB I/O.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] comma-separated string data

2014-04-07 Thread RSmith


On 2014/04/07 20:57, Petite Abeille wrote:

Lipstick Driven Design: “You can put lipstick on a pig, but it’s still a pig.”


But if the Customer can't tell the difference, does that make you a good pimp?


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] active statement count with sqlite3_interrupt

2014-04-07 Thread E. Timothy Uy
In this case I have a text box where the user types in search terms. I
want to cancel all pending searches the moment they start typing something
else.


On Mon, Apr 7, 2014 at 12:04 PM, E. Timothy Uy  wrote:

> Isn't the interrupt in play until all active statements have been stopped?
> Including statements that are added after the interrupt is called? I just
> want to make sure it is safe to call the next statement.
>
> " Any new SQL statements that are started after the sqlite3_interrupt()
> call and before the running statements reaches zero are interrupted as if
> they had been running prior to the sqlite3_interrupt() call."
>
>
> On Mon, Apr 7, 2014 at 11:52 AM, Richard Hipp  wrote:
>
>> On Mon, Apr 7, 2014 at 2:47 PM, E. Timothy Uy  wrote:
>>
>> > Hi, I am considering using sqlite3_interrupt to interrupt a long query -
>> > but the documentation states that the order will stand until the
>> active
>> > statement count is zero. How do I know that the active statement
>> count is
>> > zero?
>> >
>>
>> You could use http://www.sqlite.org/c3ref/stmt_busy.html
>>
>> Why do you think you need to know that?  What does your application do
>> differently if the active statement count is zero versus if it is not?
>>
>>
>>
>>
>> --
>> D. Richard Hipp
>> d...@sqlite.org
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] active statement count with sqlite3_interrupt

2014-04-07 Thread E. Timothy Uy
Isn't the interrupt in play until all active statements have been stopped?
Including statements that are added after the interrupt is called? I just
want to make sure it is safe to call the next statement.

" Any new SQL statements that are started after the sqlite3_interrupt()
call and before the running statements reaches zero are interrupted as if
they had been running prior to the sqlite3_interrupt() call."


On Mon, Apr 7, 2014 at 11:52 AM, Richard Hipp  wrote:

> On Mon, Apr 7, 2014 at 2:47 PM, E. Timothy Uy  wrote:
>
> > Hi, I am considering using sqlite3_interrupt to interrupt a long query -
> > but the documentation states that the order will stand until the active
> > statement count is zero. How do I know that the active statement count
> is
> > zero?
> >
>
> You could use http://www.sqlite.org/c3ref/stmt_busy.html
>
> Why do you think you need to know that?  What does your application do
> differently if the active statement count is zero versus if it is not?
>
>
>
>
> --
> D. Richard Hipp
> d...@sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] active statement count with sqlite3_interrupt

2014-04-07 Thread E. Timothy Uy
Hi, I am considering using sqlite3_interrupt to interrupt a long query -
but the documentation states that the order will stand until the active
statement count is zero. How do I know that the active statement count is
zero?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] comma-separated string data

2014-04-07 Thread mm.w
Hello,

Dear Petite Abeille, you may repeat it 1 times; they don't listen. They
prefer adding to the previous mistake instead of fixing the origin (hiding
behind false constraints, like "it is the way it is"...) until it falls
apart under unsolvable issues and developer-made bugs; surely that's the way
to kill a product, even the best-selling one.

Best.



On Mon, Apr 7, 2014 at 11:24 AM, Petite Abeille wrote:

>
> On Apr 7, 2014, at 3:28 PM, Dominique Devienne 
> wrote:
>
> > For those interested, here's an article along the same lines that
> > better demonstrate what I mean by the above:
> >
> >
> http://technology.amis.nl/2013/06/26/oracle-database-12c-joining-and-outer-joining-with-collections/
>
> Aha! That’s what that mysterious CROSS/OUTER APPLY is all about. Thanks
> for the link :)
>
> > The new Oracle 12c join syntax is basically just syntax sugar hiding
> > the TABLE operator and its implicit COLUMN_VALUE column.
>
> Well, table( … ) can apply to records (e.g. pipelined function) with fully
> named attributes.
>
> So, really, we are saying this is rather high cholesterol for
> outer/full/cross join table( pipeline( parameter, ... ) )? Is it really
> worthwhile a full blown new keyword/concept? Doubtful.
>
> Anyway… back to SQLite :)
>
> As James K. Lowden kindly, and repetitively, pointed out:
>
> http://www.schemamania.org/sql/#lists
>
> Perhaps worthwhile quoting a few words:
>
> "Questions are frequently asked about table designs that are hopelessly
> wrong. The solution to the question is not to write the query, but to
> re-write the table, after which the query will practically write itself.
>
> Perhaps the most egregious example is a column whose value is a list or,
> in SQL terms, a repeating group. The elements in the list are perhaps
> comma-separated, and some poor schlep has the task of selecting or joining
> on the nth element in the list.”
>
> Don’t be *that* schlep.
>
> N.B. There is no glory in beautifully solving a hopelessly wrong problem.
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up a bulk import

2014-04-07 Thread RSmith


On 2014/04/07 20:33, J Trahair wrote:

Hi

I am using SQLite running under VB.net in an import routine for a series of csv files, but it seems to be slow, and I don't mean 
microseconds, I mean minutes.


A typical INSERT might be:

INSERT INTO AllSales (Source, MachineName, Location, UserPIN, TariffName, CustomerID, DateOfSale, TimeOfSale, TicketGroup, Action, 
HasBeenVerified, EPOSRecNo, CrossingName, QuantitySold) VALUES ('Terminal', 'HH098016', 'Test Company', '6992', 'Test Company', 
'20140326-135946', '2014-03-26', '13:59:46', 'Test Company', 'Logout: 

Re: [sqlite] How to speed up a bulk import

2014-04-07 Thread Simon Slavin

On 7 Apr 2014, at 7:33pm, J Trahair  wrote:

> Would having a field index or key help?

Adding more indexes and keys makes searches faster, at the expense of making the 
original INSERT slower.

As a test, instead of executing the INSERT commands, write the commands
themselves to a text file.  Then add a BEGIN at the beginning and an END at
the end.  Then use the shell tool to .read that file.  Is it faster or slower
than your VB code?
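
A minimal sketch of such a test file (the file name is hypothetical; AllSales
and the column names are taken from the original post, the values shown are
made up, and it assumes the remaining columns accept NULL):

-- inserts.sql
BEGIN;
INSERT INTO AllSales (Source, MachineName, QuantitySold)
  VALUES ('Terminal', 'HH098016', 1);
INSERT INTO AllSales (Source, MachineName, QuantitySold)
  VALUES ('Terminal', 'HH098016', 2);
-- ... roughly one INSERT per CSV line ...
END;

Then, in the shell:

sqlite> .read inserts.sql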

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to speed up a bulk import

2014-04-07 Thread Petite Abeille

On Apr 7, 2014, at 8:33 PM, J Trahair  wrote:

> Any suggestions welcome. Thank you.

One word: transaction.



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] How to speed up a bulk import

2014-04-07 Thread J Trahair

Hi

I am using SQLite running under VB.net in an import routine for a series 
of csv files, but it seems to be slow, and I don't mean microseconds, I 
mean minutes.


A typical INSERT might be:

INSERT INTO AllSales (Source, MachineName, Location, UserPIN, 
TariffName, CustomerID, DateOfSale, TimeOfSale, TicketGroup, Action, 
HasBeenVerified, EPOSRecNo, CrossingName, QuantitySold) VALUES 
('Terminal', 'HH098016', 'Test Company', '6992', 'Test Company', 
'20140326-135946', '2014-03-26', '13:59:46', 'Test Company', 'Logout: 

Re: [sqlite] comma-separated string data

2014-04-07 Thread Petite Abeille

On Apr 7, 2014, at 3:28 PM, Dominique Devienne  wrote:

> For those interested, here's an article along the same lines that
> better demonstrate what I mean by the above:
> 
> http://technology.amis.nl/2013/06/26/oracle-database-12c-joining-and-outer-joining-with-collections/

Aha! That’s what that mysterious CROSS/OUTER APPLY is all about. Thanks for the 
link :)

> The new Oracle 12c join syntax is basically just syntax sugar hiding
> the TABLE operator and its implicit COLUMN_VALUE column.

Well, table( … ) can apply to records (e.g. a pipelined function) with fully
named attributes.

So, really, we are saying this is rather high cholesterol for outer/full/cross
join table( pipeline( parameter, ... ) )? Is a full-blown new keyword/concept
really worthwhile? Doubtful.

Anyway… back to SQLite :)

As James K. Lowden kindly, and repetitively, pointed out:

http://www.schemamania.org/sql/#lists

Perhaps it is worth quoting a few words:

"Questions are frequently asked about table designs that are hopelessly wrong. 
The solution to the question is not to write the query, but to re-write the 
table, after which the query will practically write itself.

Perhaps the most egregious example is a column whose value is a list or, in SQL 
terms, a repeating group. The elements in the list are perhaps comma-separated, 
and some poor schlep has the task of selecting or joining on the nth 
element in the list.”

Don’t be *that* schlep.

N.B. There is no glory in beautifully solving a hopelessly wrong problem.
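
To make the quoted advice concrete, a minimal sketch of the normalized shape it
argues for (the table and column names are made up for illustration):

-- instead of orders(id INTEGER PRIMARY KEY, item_csv TEXT) holding '12,7,33':
CREATE TABLE orders(id INTEGER PRIMARY KEY);
CREATE TABLE order_items(
  order_id INTEGER NOT NULL REFERENCES orders(id),
  item_id  INTEGER NOT NULL,
  PRIMARY KEY(order_id, item_id)
);

-- "the nth element" stops being a parsing problem and becomes a plain join:
SELECT o.id, i.item_id
  FROM orders o JOIN order_items i ON i.order_id = o.id;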


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite VFS for Chan FatFS

2014-04-07 Thread Andrew Beal
Richard,

I appreciate the reference. I am trying to replace a method based on the 
implementation you suggested, since we discovered that the SQLite DB doesn't 
take up even a full 5 MB on our system, so we are trying to reclaim the extra 
space on the flash chip for other features. 

Andrew Beal

Email: ab...@whoi.edu

-Original Message-
From: Richard Hipp [mailto:d...@sqlite.org] 
Sent: Monday, April 07, 2014 10:06 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] SQLite VFS for Chan FatFS

On Mon, Apr 7, 2014 at 9:52 AM, Andrew Beal  wrote:

> Hey All,
>
> Does anyone have an example of a VFS for use with Chan FatFS or Petit 
> FatFS?
>
> Ref: http://elm-chan.org/fsw/ff/00index_e.html
>
>
Just glancing at the interface spec, it appears that Petit FatFS only allows a 
single file to be open at once. That isn't sufficient for SQLite (unless you 
specify PRAGMA journal_mode=OFF or journal_mode=MEMORY) so I don't think that 
will work for you.  But an interface to Chan FatFS seems easily doable.

But have you considered using the "test_onefile.c" VFS (
http://www.sqlite.org/src/artifact/0396f220561f3b4e) that writes directly to 
persistent media, without any intervening filesystem?  Would that VFS 
accomplish what you want?

--
D. Richard Hipp
d...@sqlite.org

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] What's the purpose of the "automatic index on" warning message?

2014-04-07 Thread Richard Hipp
On Mon, Apr 7, 2014 at 11:51 AM, Jens Miltner  wrote:

> We get an sqlite3_log() message with errorCode 284 and message "automatic
> index on ...".
> I assume this is some performance penalty warning, but I have no idea what
> to make of it:
>
> We do have an explicit index on the table and column mentioned in the
> warning message, so I don't know what to do to avoid this warning and
> potentially improve the query performance.
>

The warning is to let you know that SQLite could not find a way to use your
index and so it had to make its own index, which might result in a query
that is slower than you were counting on.
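
To illustrate (the schema here is made up, and the exact EXPLAIN QUERY PLAN
wording varies between SQLite versions): joining on a column of the inner table
that has no usable index makes SQLite build a transient index and log the
message; creating the index yourself avoids it.

sqlite> CREATE TABLE a(id INTEGER PRIMARY KEY, x TEXT);
sqlite> CREATE TABLE b(id INTEGER PRIMARY KEY, a_x TEXT);
sqlite> EXPLAIN QUERY PLAN SELECT * FROM a JOIN b ON b.a_x = a.x;
...SEARCH TABLE b USING AUTOMATIC COVERING INDEX (a_x=?)...
sqlite> CREATE INDEX b_a_x ON b(a_x);  -- explicit index; no automatic one needed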


>
> Can anybody shed light on this warning message (it's issued by the code in
> sqlite3.c, line 11008)?
>
>
> Thanks,
> -jens
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>


-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] What's the purpose of the "automatic index on" warning message?

2014-04-07 Thread Jens Miltner
We get an sqlite3_log() message with errorCode 284 and message "automatic index 
on ...".
I assume this is some performance penalty warning, but I have no idea what to 
make of it:

We do have an explicit index on the table and column mentioned in the warning 
message, so I don't know what to do to avoid this warning and potentially 
improve the query performance.

Can anybody shed light on this warning message (it's issued by the code in 
sqlite3.c, line 11008)?


Thanks,
-jens


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Backwards compatibility from 3.8.4.3 to 3.6.20

2014-04-07 Thread Richard Hipp
On Mon, Apr 7, 2014 at 10:48 AM, RSmith  wrote:

> On 2014/04/07 16:44, Richard Hipp wrote:
>
>> On Mon, Apr 7, 2014 at 10:41 AM, Alejandro Santos wrote:
>>
>>  On Mon, Apr 7, 2014 at 3:32 PM, Alejandro Santos 
>>> wrote:
>>>
 Changing the journal_mode does not solve my issue :(

  A self contained test case:
>>>
>>> $ LD_LIBRARY_PATH= ./sqlite3 /tmp/broken2.sqlite
>>> SQLite version 3.8.4.3 2014-04-03 16:53:12
>>> Enter ".help" for usage hints.
>>> sqlite> PRAGMA page_size=65536;
>>>
>>>  Support for 64K page sizes was added in 3.7.1.  Reduce the page size to
>> 32K
>> and you should be fine.
>>
>
> Also, shouldn't the OP VACUUM or something after those pragmas for the
> changes to take effect considering the file is pre-existing?
>

Yes, indeed.  I understood the above to be the process to create a new
database that demonstrates the problem.  If you want to convert an existing
database to 32K pages:

 PRAGMA page_size=32768; VACUUM;

The conversion does not actually take place until the VACUUM command is run.
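
A quick transcript of that conversion on an existing database, sketched from
the steps above (the reported sizes are illustrative):

sqlite> PRAGMA page_size;
65536
sqlite> PRAGMA page_size=32768;
sqlite> VACUUM;
sqlite> PRAGMA page_size;
32768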

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Backwards compatibility from 3.8.4.3 to 3.6.20

2014-04-07 Thread RSmith

On 2014/04/07 16:44, Richard Hipp wrote:

On Mon, Apr 7, 2014 at 10:41 AM, Alejandro Santos wrote:


On Mon, Apr 7, 2014 at 3:32 PM, Alejandro Santos 
wrote:

Changing the journal_mode does not solve my issue :(


A self contained test case:

$ LD_LIBRARY_PATH= ./sqlite3 /tmp/broken2.sqlite
SQLite version 3.8.4.3 2014-04-03 16:53:12
Enter ".help" for usage hints.
sqlite> PRAGMA page_size=65536;


Support for 64K page sizes was added in 3.7.1.  Reduce the page size to 32K
and you should be fine.


Also, shouldn't the OP VACUUM or something after those pragmas for the changes 
to take effect considering the file is pre-existing?


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Backwards compatibility from 3.8.4.3 to 3.6.20

2014-04-07 Thread Richard Hipp
On Mon, Apr 7, 2014 at 10:41 AM, Alejandro Santos wrote:

> On Mon, Apr 7, 2014 at 3:32 PM, Alejandro Santos 
> wrote:
> >
> > Changing the journal_mode does not solve my issue :(
> >
>
> A self contained test case:
>
> $ LD_LIBRARY_PATH= ./sqlite3 /tmp/broken2.sqlite
> SQLite version 3.8.4.3 2014-04-03 16:53:12
> Enter ".help" for usage hints.
> sqlite> PRAGMA page_size=65536;
>

Support for 64K page sizes was added in 3.7.1.  Reduce the page size to 32K
and you should be fine.



> sqlite> PRAGMA legacy_file_format=ON;
> sqlite> PRAGMA synchronous = OFF;
> sqlite> PRAGMA journal_mode=DELETE;
> delete
> sqlite> CREATE TABLE IF NOT EXISTS mytable (
>...> my_id INTEGER PRIMARY KEY,
>...> my_name TEXT,
>...> CONSTRAINT part_name_idx UNIQUE (my_name));
> sqlite>
>
> $ LD_LIBRARY_PATH= sqlite3 /tmp/broken2.sqlite
> SQLite version 3.6.20
> Enter ".help" for instructions
> Enter SQL statements terminated with a ";"
> sqlite> .schema
> Error: file is encrypted or is not a database
>
> And here it is the gziped+base64ed broken2.sqlite file:
>
> http://pastebin.com/Tk6Nvxv7
>
> Thank you,
>
> --
> Alejandro Santos
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Backwards compatibility from 3.8.4.3 to 3.6.20

2014-04-07 Thread Alejandro Santos
On Mon, Apr 7, 2014 at 3:32 PM, Alejandro Santos  wrote:
>
> Changing the journal_mode does not solve my issue :(
>

A self contained test case:

$ LD_LIBRARY_PATH= ./sqlite3 /tmp/broken2.sqlite
SQLite version 3.8.4.3 2014-04-03 16:53:12
Enter ".help" for usage hints.
sqlite> PRAGMA page_size=65536;
sqlite> PRAGMA legacy_file_format=ON;
sqlite> PRAGMA synchronous = OFF;
sqlite> PRAGMA journal_mode=DELETE;
delete
sqlite> CREATE TABLE IF NOT EXISTS mytable (
   ...> my_id INTEGER PRIMARY KEY,
   ...> my_name TEXT,
   ...> CONSTRAINT part_name_idx UNIQUE (my_name));
sqlite>

$ LD_LIBRARY_PATH= sqlite3 /tmp/broken2.sqlite
SQLite version 3.6.20
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .schema
Error: file is encrypted or is not a database

And here it is the gziped+base64ed broken2.sqlite file:

http://pastebin.com/Tk6Nvxv7

Thank you,

-- 
Alejandro Santos
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite VFS for Chan FatFS

2014-04-07 Thread Richard Hipp
On Mon, Apr 7, 2014 at 9:52 AM, Andrew Beal  wrote:

> Hey All,
>
> Does anyone have an example of a VFS for use with Chan FatFS or Petit
> FatFS?
>
> Ref: http://elm-chan.org/fsw/ff/00index_e.html
>
>
Just glancing at the interface spec, it appears that Petit FatFS only
allows a single file to be open at once. That isn't sufficient for SQLite
(unless you specify PRAGMA journal_mode=OFF or journal_mode=MEMORY) so I
don't think that will work for you.  But an interface to Chan FatFS seems
easily doable.
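
For reference, a minimal sketch of the settings mentioned above that keep
SQLite down to a single open file, at the cost of rollback protection if the
application dies mid-transaction:

sqlite> PRAGMA journal_mode=MEMORY;  -- rollback journal kept in RAM
memory
sqlite> PRAGMA journal_mode=OFF;     -- or no journal at all
off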

But have you considered using the "test_onefile.c" VFS (
http://www.sqlite.org/src/artifact/0396f220561f3b4e) that writes directly
to persistent media, without any intervening filesystem?  Would that VFS
accomplish what you want?

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Backwards compatibility from 3.8.4.3 to 3.6.20

2014-04-07 Thread Alejandro Santos
On Mon, Apr 7, 2014 at 11:47 AM, Alejandro Santos  wrote:
> Hi,
>
> I've been using latest v3.8.4.3 to write a database file, but when I
> try to open the file with my current distro's sqlite shell I get this
> error:
>

For some reason I can see Richard Hipp's answer on the Web but not in my
Gmail account.

Changing the journal_mode does not solve my issue :(

$ ./sqlite3 -header ~/tmp/file.sqlite
SQLite version 3.8.4.3 2014-04-03 16:53:12
Enter ".help" for usage hints.
sqlite> PRAGMA journal_mode=DELETE;
journal_mode
delete
sqlite>

$ ./sqlite3 -header ~/tmp/file.sqlite
SQLite version 3.8.4.3 2014-04-03 16:53:12
Enter ".help" for usage hints.
sqlite> PRAGMA journal_mode;
journal_mode
delete
sqlite>

$ sqlite3 -header ~/tmp/file.sqlite
SQLite version 3.6.20
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> PRAGMA journal_mode;
journal_mode
delete
sqlite> .schema
Error: file is encrypted or is not a database

Thank you,

-- 
Alejandro Santos
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] comma-separated string data

2014-04-07 Thread Dominique Devienne
On Sun, Apr 6, 2014 at 8:15 PM, Dominique Devienne  wrote:
> [...]. I'd much prefer a cleaner Oracle-like TABLE()
> operator transforming the result array of a table-function operating
> on correlated values from a join as an intermediate result-set, i.e.
>
> select t.key, csv.COLUMN_VALUE from table t TABLE(scvsplit(t.csvfield)) csv

For those interested, here's an article along the same lines that
better demonstrates what I mean by the above:

http://technology.amis.nl/2013/06/26/oracle-database-12c-joining-and-outer-joining-with-collections/

The new Oracle 12c join syntax is basically just syntax sugar hiding
the TABLE operator and its implicit COLUMN_VALUE column.

Obviously SQLite does not have collection types, nor a TABLE operator,
but it does have virtual tables, which are very similar to
collection-returning functions in a way, and already "abused" by Max
(I say that in a nice way) to the same effect. So the only piece
that's missing is an official way to use vtables "on the fly", and
pass to its xFilter method the left-correlated value for each joined left
row, so the resulting cursor can "iterate" the right values. FWIW. --DD
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Is there a way to load a blob from the shell?

2014-04-07 Thread Kees Nuyt
On Sun, 6 Apr 2014 21:43:27 -0400, Richard Hipp  wrote:

> At http://www.sqlite.org/sar there is a utility program that generates an
> "SQLite Archive", similar to a ZIP archive but using SQLite as the file
> format instead of the ZIP format. 

Wonderful, thanks!


[Open]Solaris users may want to add:

CC += -D_XOPEN_SOURCE=500 -O2

to the Makefile.

For completeness, I also added target:

clean:
	-rm sar sqlite3.o


-- 
Groet, Cordialement, Pozdrawiam, Regards,

Kees Nuyt

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Fwd: Re: comma-separated string data

2014-04-07 Thread R. Smith

This message went to the wrong address, apologies, herewith the repost:

On 2014/04/06 20:23, Dominique Devienne wrote:

On Sat, Apr 5, 2014 at 11:46 AM, RSmith  wrote:

   WITH csvrec(i,l,c,r) AS (
   SELECT tmpcsv.ID, 1, colCSV||',', '' FROM tmpcsv
 UNION ALL
   SELECT i,
  instr(c,',') AS vLen,
  substr(c,instr(c,',')+1) AS vRem,
  substr(c,1,instr(c,',')-1) AS vCSV
   FROM csvrec
   WHERE vLen>0
 )
   SELECT t.ID, t.colA, rt.r FROM tmpcsv AS t, csvrec AS rt
   WHERE t.ID=rt.i AND rt.r<>''
   ORDER BY t.ID
   LIMIT 100

Very interesting. Thanks for sharing that. But can this CTE be turned
into a view? Or does one need to retype the whole "algorithm" every
time one needs "join" on the "virtual" unrolled CSV field table? And
assuming such a "CTE view" can de defined, what if one selects from
the "CTE view" with a WHERE clause, to get only the CSV fields of a
single row of scvrec, would that prevent the whole "tmpcsv" result-set
for every row of csvrec to be generated?


Hi Dominique, there is no restriction I know of on the view (result set) of any
SQL query; it's much like a table and should honour more or less any query
requested of it, but please know I have not tested this with the specific CTE.
As to the second part of the question, the query in the case I posted produces
the entire result set, which is caused by the line loading the first record set
into the recursive table, namely:
SELECT tmpcsv.ID, 1, colCSV||',', '' FROM tmpcsv
There is however nothing stopping you from adding a WHERE clause behind that to
simply pick out single records or specific sets to work with, etc.
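
To make that concrete, a minimal, untested sketch of wrapping the earlier CTE
in a view, assuming the same tmpcsv(ID, colA, colCSV) table used in this thread:

CREATE VIEW csvsplit AS
WITH csvrec(i, l, c, r) AS (
    SELECT ID, 1, colCSV || ',', '' FROM tmpcsv
  UNION ALL
    SELECT i,
           instr(c, ','),
           substr(c, instr(c, ',') + 1),
           substr(c, 1, instr(c, ',') - 1)
      FROM csvrec
     WHERE instr(c, ',') > 0
)
SELECT i AS ID, r AS value FROM csvrec WHERE r <> '';

-- a plain WHERE clause then picks out one row's fields:
SELECT value FROM csvsplit WHERE ID = 42;

As noted above, the view still expands the whole CTE before the outer WHERE
filters it; to restrict the work up front, the WHERE has to go into the anchor
SELECT inside the CTE itself.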


If the answer to either question above is true, then a specialized
vtable would be both more convenient and faster, no?

Your CTE has the great benefit to work out of the box though, unlike a
vtable, so it's a great example nonetheless. Thanks again for that.


I can't agree more. Please know I simply made up this bit of CTE SQL because
the question was asked and the responses ranged from very difficult to
impossible in SQL (oh yes, and it was fun to do), but I would never really
dream of actually implementing this in anything real, for both performance and
scalability reasons. However, the OP had a specific problem: they were faced
with a DB whose specific tables and layout, for legacy reasons, couldn't be
adjusted to 1NF, and they did not wish to go the virtual-table route but just
needed that one instance of getting that specific data de-CSV'd, which the CTE
will do very well out of the box, nothing else needed. Problem solved and an
easy win, but I wouldn't actually develop a new DB with it (obviously).

(FWIW - It's pretty fast, only marginally slower than one of the CSV add-ons I 
use, which is really a feather in the cap of the CTE
implementation in SQLite)





___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Backwards compatibility from 3.8.4.3 to 3.6.20

2014-04-07 Thread Richard Hipp
On Mon, Apr 7, 2014 at 5:47 AM, Alejandro Santos  wrote:

> Hi,
>
> I've been using latest v3.8.4.3 to write a database file, but when I
> try to open the file with my current distro's sqlite shell I get this
> error:
>
> $ sqlite3 file.sqlite
> SQLite version 3.6.20
> Enter ".help" for instructions
> Enter SQL statements terminated with a ";"
> sqlite> .schema
> Error: file is encrypted or is not a database
>
> This works fine with the original version used to write the file:
>
> $ ./sqlite3 ~/tmp/pbeastidx.sqlite
> SQLite version 3.8.4.3 2014-04-03 16:53:12
> Enter ".help" for usage hints.
> sqlite> .schema
> CREATE TABLE ...
>
> I see no backwards compatibility issues documented on sqlite webpage.
> Is the doc behind some changes?
>

Version 3.6.20 does not understand "PRAGMA journal_mode=WAL".  You need to
run "PRAGMA journal_mode=DELETE" from a 3.7.0 or later shell.  After that
you should be able to read and write the database using version 3.6.20.




-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Backwards compatibility from 3.8.4.3 to 3.6.20

2014-04-07 Thread Alejandro Santos
Hi,

I've been using latest v3.8.4.3 to write a database file, but when I
try to open the file with my current distro's sqlite shell I get this
error:

$ sqlite3 file.sqlite
SQLite version 3.6.20
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .schema
Error: file is encrypted or is not a database

This works fine with the original version used to write the file:

$ ./sqlite3 ~/tmp/pbeastidx.sqlite
SQLite version 3.8.4.3 2014-04-03 16:53:12
Enter ".help" for usage hints.
sqlite> .schema
CREATE TABLE ...

I see no backwards-compatibility issues documented on the SQLite web page.
Is the documentation lagging behind some changes?

Thank you,

-- 
Alejandro Santos
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] R*Tree and foreign key constraints

2014-04-07 Thread Clemens Ladisch
Bernd wrote:
> I found nothing on the SQLite website stating that R*Tree tables don't
> support foreign key constraints but apparently they don't. Is this
> a limitation of virtual tables in general or is there something I
> missed?

When you execute "CREATE VIRTUAL TABLE t USING xxx(...)", any text
written between the parentheses is passed to the virtual table module,
which can do anything it wants with it.

Typically, what the virtual table module actually does with this text
is either
- passing it unchanged to SQLite to create an actual table, and layering
  its own features on top; or
- parsing it, and implementing its own features.

The R-tree virtual table module uses the second option, but it looks
at the provided column names only to determine the names and number of
columns in the R-tree.  Therefore, foreign keys are not available in
R-trees.
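
A hedged workaround sketch, not from the original thread: since the constraint
text is ignored and triggers cannot be created on virtual tables, referential
cleanup can be approximated with an ordinary trigger on the referenced table
(the schema below is made up):

CREATE VIRTUAL TABLE boxes USING rtree(id, minX, maxX, minY, maxY);
CREATE TABLE features(id INTEGER PRIMARY KEY, name TEXT);

-- emulate "ON DELETE CASCADE" from features to the r-tree by hand:
CREATE TRIGGER features_ad AFTER DELETE ON features
BEGIN
  DELETE FROM boxes WHERE id = OLD.id;
END;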


Regards,
Clemens
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] comma-separated string data

2014-04-07 Thread Max Vlasov
On Sun, Apr 6, 2014 at 10:23 PM, Dominique Devienne  wrote:

>
> If the answer to either question above is true, then a specialized
> vtable would be both more convenient and faster, no?
>

Hmm... If the logical peculiarity of the vtable approach (where-constrained
queries might be larger than a full-scan one) is acceptable to sqlite
(mentioned in my other post), then the WHERE expression might serve as
parameters, so a hybrid might be possible (also inspired by the recent
discussion of creating user functions on the fly). For example, a virtual
table that accepts a Select statement might look like

CREATE VIRTUAL TABLE vcommalist USING QueryVirtualizer('WITH RECURSIVE
 :commalist  ')

And the actual query using it might look like

SELECT * FROM vcommalist WHERE commalist='1,2,3,4,5'

This one serves more like a shortcut, but a broader version is probably
possible in which the parameter to the virtual table is a print-formatted
string, so one can dynamically customize things that general parameters
can't, e.g., table names, output column names, etc.

Multiple parameters would be great, but with the current state of things
the implementation should still use some kind of workaround to ensure
correct results, so it should always return a huge estimatedCost in
xBestIndex if the constraints that arrive don't contain at least one
required parameter (the WHERE clause lacks one) and a low one if all
parameters are provided. I think that sqlite might as well interpret an
estimatedCost equal to -1 as a ban on using this index.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users