[sqlite] Is msvcrt coupling now unavoidable (3.8.7)?

2014-10-22 Thread Max Vlasov
Hi,

my static linking with Delphi for version 3.8.7 now complains about
_beginthreadex/_endthreadex.
A quick search shows that everywhere there's a recommendation to use these
functions instead of the CreateThread API if one plans to use the Visual C
run-time (msvcrt).

All my previous linking with sqlite was based on hand-made equivalents for
every statically linked external (non-dll) function, so my programs never
used msvcrt. Now it seems I either have no choice but to include msvcrt
linking, or I have to make my own "de-wrapper": a function that accepts the
_beginthreadex signature and calls CreateThread. Or could sqlite introduce
a define that keeps the VC run-time coupled by default but, when it is
disabled, uses the CreateThread API?
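
Roughly, such a "de-wrapper" is just two forwarding functions. This is an
untested sketch of my own (not code from sqlite or from this thread); it
assumes the usual Win32 headers and that the thread-routine cast is safe on
Win32, where both function types use the stdcall convention:

  #include <windows.h>
  #include <stdint.h>

  /* Forward _beginthreadex/_endthreadex to the plain Win32 API so that a
  ** statically linked sqlite3 does not pull in msvcrt. */
  uintptr_t __cdecl _beginthreadex(void *security, unsigned stack_size,
                                   unsigned (__stdcall *start)(void *),
                                   void *arg, unsigned flags, unsigned *thread_id)
  {
    return (uintptr_t)CreateThread((LPSECURITY_ATTRIBUTES)security, stack_size,
                                   (LPTHREAD_START_ROUTINE)start, arg, flags,
                                   (LPDWORD)thread_id);
  }

  void __cdecl _endthreadex(unsigned retval)
  {
    ExitThread(retval);
  }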

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Is msvcrt coupling now unavoidable (3.8.7)?

2014-10-22 Thread Max Vlasov
On Wed, Oct 22, 2014 at 4:50 PM, dave <d...@ziggurat29.com> wrote:

>
> > -Original Message-
> > From: sqlite-users-boun...@sqlite.org
> > [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Max Vlasov
> > Sent: Wednesday, October 22, 2014 5:25 AM
> > To: General Discussion of SQLite Database
> > Subject: [sqlite] Is msvcrt coupling now unavoidable (3.8.7)?
> ...
> > my static linking with Delphi for 3.7.8 version now complains about
> > _beginthreadex_/_endthreadex_.
> ...
> ...
>
> It's a good question; and it doesn't look like there is a great way of
> turning off that feature other than SQLITE_THREADSAFE=0, which you might
> not
> want for other reasons.
>
>
Thanks, dave, Clemens,

I also did some researching/fixing.

The sources have some comments that contradict the current state; I'm not
sure whether I should mention them here. Overall, if one doesn't want
msvcrt coupling, then SQLITE_MAX_WORKER_THREADS=0 should be among the
compile options. In that case no _beginthreadex/_endthreadex
linking will be required.

The wrapping calls mentioned by Clemens Ladisch work. I wrote them
independently, but they look the same char by char :). When I did PRAGMA
threads=4 and set breakpoints for a "SELECT ... ORDER BY" query, they were
hit and no visible errors appeared.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Search for text in all tables

2014-12-05 Thread Max Vlasov
On Thu, Dec 4, 2014 at 10:44 AM, Baruch Burstein  wrote:
>
> Is it possible to somehow search for/replace a string in all columns of all
> tables?
>

Not exactly the answer to your question, but rather a method you
or others might use.
I once implemented a virtual table "allvalues" that outputs all
database values with the (hopefully self-explanatory) fields

  TableName, TableRowId, FieldName, Value

which also allows exploring unknown complex databases. Take for example
places.sqlite of Mozilla Firefox. If you want to explore where it
saves your visited sites, you can use the query

  SELECT Distinct TableName, FieldName FROM allvalues where Value like '%http://%'

Sqlite's virtual tables are a great tool since with a little effort on
the developer side the newly created entity starts working as a first
class citizen of the sqlite engine. Compare this to a quick hack that
outputs all raw data from a specific database to a specific media.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Search for text in all tables

2014-12-05 Thread Max Vlasov
>> I once implemented a virtual table "allvalues" that outputs all
>> database values with (hope self-explaining) fields
>>
>> TableName, TableRowId, FieldName, Value
>
> Could you expand on how you coped with the underlying database
> changing, and how you mapped virtual table rowids to the actual
> database records?
>

This particular implementation was intended to be used as a
Select-only wrapper, so it just iterates through every table listed in
sqlite_master and every row of each table. I didn't support update and
insert. The rowid of this virtual table is a compound bit mask starting
with table bits (enough to fit the number of tables in this db), then
field bits (for the maximum possible number of fields in a table), and
the rest is the rowid of the particular table. So in theory this method
won't work for tables containing large 64-bit ids, where there are not
enough bits left for the table number and the field number.
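
To illustrate the idea (my own sketch with assumed bit widths, not the
actual implementation), packing and unpacking such a compound rowid could
look like this:

  #include <stdint.h>

  /* Assumed layout for illustration: 8 table bits, 8 field bits,
  ** 48 bits left for the underlying table's rowid. */
  #define TABLE_BITS 8
  #define FIELD_BITS 8

  static int64_t pack_rowid(int table_no, int field_no, int64_t row_id){
    uint64_t v = ((uint64_t)table_no << (64 - TABLE_BITS))
               | ((uint64_t)field_no << (64 - TABLE_BITS - FIELD_BITS))
               | (uint64_t)row_id;        /* must fit in the low 48 bits */
    return (int64_t)v;
  }

  static void unpack_rowid(int64_t rowid, int *table_no, int *field_no,
                           int64_t *row_id){
    uint64_t v = (uint64_t)rowid;
    *table_no = (int)(v >> (64 - TABLE_BITS));
    *field_no = (int)((v >> (64 - TABLE_BITS - FIELD_BITS)) & ((1u << FIELD_BITS) - 1));
    *row_id   = (int64_t)(v & ((UINT64_C(1) << (64 - TABLE_BITS - FIELD_BITS)) - 1));
  }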

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Artificially slow VACUUM by injecting a sleep() somewhere?

2014-12-08 Thread Max Vlasov
On Mon, Dec 8, 2014 at 5:56 PM, Roger Binns  wrote:
> On 12/07/2014 04:43 PM, David Barrett wrote:
>> so I'm curious if you can think of a way using the API (or any
>> other way) to essentially "nice" the process by inserting a short
>> "sleep" into whatever loop runs inside the VACUUM command.
>
> Using OS provided functionality will be the most reliable.  Other than
> that, a simple way is to sleep in the progress callback, although that
> will make I/O lumpy.

I wonder whether I/O "sleeping" is possible in the first place. Correct
me, but what we usually call "sleeping" is about the CPU, which already
sleeps during most I/O operations while waiting for rotating media to
finish its slow tasks. As a consequence, the more fragmented the data on
disk, the less relative CPU time will be spent trying to read and
write data. In this case the strategy might be to measure the CPU
consumption for consecutive blocks of data, and if it's 100% (or another
heuristically adequate value), never sleep (this probably
means either cached data or non-fragmented data on a fast disk). But
when the CPU consumption drops significantly (much time spent waiting
for I/O), sleeping might indeed be needed.
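
For completeness, the basic "sleep in the progress callback" throttle
mentioned above reduces to something like this minimal sketch of mine
(without the CPU-vs-I/O heuristic I describe; the interval and pause are
arbitrary):

  #include <sqlite3.h>
  #include <unistd.h>   /* usleep(); use Sleep() from <windows.h> on Windows */

  static int throttle_cb(void *unused){
    (void)unused;
    usleep(2000);       /* pause ~2 ms every N opcodes; tune to taste */
    return 0;           /* non-zero would abort the running statement */
  }

  /* after sqlite3_open():
  **   sqlite3_progress_handler(db, 10000, throttle_cb, 0);
  **   sqlite3_exec(db, "VACUUM", 0, 0, 0);
  */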

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Sqlite as a platform performance comparison tool

2014-01-06 Thread Max Vlasov
Hi,

A thought came to mind: compare two computers of different platforms (i.e.
i386 vs ARM) using a uniform approach. We take two binaries of the same
sqlite version, compiled with the best C compilers for each platform,
and compare the time spent on identical operations using memory-based
databases (to exclude I/O from the measurement).

So we could start with some comparatively large table loaded into a
:memory: database, and the following operations would do
joins/inserts etc., probably as a single, semicolon-delimited set of
queries. The sets are identical for both platforms and are probably
best handled with a single sqlite3_exec.

Is such a comparison correct? Or maybe even for memory-based operation
there is something that makes this comparison invalid?
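
The skeleton of such a measurement could be as simple as the sketch below
(my own illustration; the SQL batch is only a placeholder workload):

  #include <sqlite3.h>
  #include <stdio.h>
  #include <time.h>

  int main(void){
    sqlite3 *db;
    clock_t t0, t1;
    const char *batch =                      /* placeholder statements */
      "CREATE TABLE t(a,b);"
      "WITH RECURSIVE c(x) AS (VALUES(1) UNION ALL SELECT x+1 FROM c WHERE x<100000) "
      "INSERT INTO t SELECT x, x*x FROM c;"
      "SELECT count(*), sum(a*b) FROM t;";
    if( sqlite3_open(":memory:", &db) != SQLITE_OK ) return 1;
    t0 = clock();
    sqlite3_exec(db, batch, 0, 0, 0);
    t1 = clock();
    printf("%.3f sec\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    sqlite3_close(db);
    return 0;
  }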

Thanks,

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] latest sqlite 3 with Delphi 5 professional

2014-01-24 Thread Max Vlasov
On Thu, Jan 23, 2014 at 1:33 AM, dean gwilliam
 wrote:
> I'm just wondering what my options are here?
> Any advice much appreciated.
> ___

My two cents...

Historically I took the Aducom TDataSet-compatible classes
(http://www.aducom.com/cms/page.php?2 , author: Albert Drent) and have
used them ever since, but along the road I made many changes, so I'm not
sure I can recommend the current (latest) version. They had a static
variant implemented with msvcrt linked (maybe it has been removed by now);
to remove the dependency yourself you have to implement the following
functions:

  _malloc, _realloc, _free, _memset, _strncmp, _memmove, _memcpy, _strlen,
  _qsort, _memcmp, _localtime

Some of them may be just calls to the existing Delphi RTL; for others a
little work is needed. This allowed me, for example, to monitor the number
of memory requests different queries make.

BCC 5.5 (freely downloadable) compiles any version of sqlite3 to
object files linkable to Delphi 5 and later. The only drawback I
noticed is that for memory-intensive operations (memory databases) the
performance is twice as bad compared to the dll on the site
(probably VC-compiled), but for databases on disk the difference is
small since the I/O overhead compensates for it.

I don't know about DISQLite3, but one of the main performance issues
when using sqlite is that the BDE was made for virtual access to data with
a moving cursor, which is not possible with sqlite. You have a query and
you can only move forward. So the easiest approach is to load all
data, but imagine this for a very large table: activating a TDataSet in
this case may take a very long time (and space). One approach is
to change it to load all rowids of the table and request record data
on the fly based on this array.

So, if you want the best performance and don't need the Borland database
components and controls, then your best bet is to use the sqlite API or
simple object wrappers around it. Otherwise, be aware that the "impedance
mismatch" between sqlite and the BDE may cost you performance penalties
depending on the library you use.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] latest sqlite 3 with Delphi 5 professional

2014-01-24 Thread Max Vlasov
On Fri, Jan 24, 2014 at 9:16 PM, Ralf Junker <ralfjun...@gmx.de> wrote:
> On 24.01.2014 10:06, Max Vlasov wrote:
>
>> BCC 5.5 (freely downloadable) compiles any version of sqlite3 to
>> object files linkable to Delphi 5 and later, the only drawback I
>>
>> Don't know about DISQLite3 , but one of the main performance issues
>
>
> DISQLite3 does _not_ show the performance issues you describe for your BCB
> 5.5 compiled object files. Quite the opposite: DISQLite3 outperformed
> sqlite3.dll whenever I tested.

Don't take it personally :) I was just talking about C-originated code
that we have to compile into the OMF library file format, since
borland/codegear/embarcadero never supported the COFF format (CMIIW). So
BCC is our only choice for static linking (Intel compilers probably still
support OMF since Intel introduced it, but I did not try).

And when I talked about the x2 difference, it was about a pure memory db
having a thousand rows and queries that make a cross join, taking
about 6-20 seconds in total depending on the query. So no I/O involved,
purely CPU-intensive operations inside sqlite. To my own surprise, a dll
compiled with BCC 5.5 with the -O2 option (maximum optimization as I
recall) made it two times slower than the VC dll (from the sqlite.org
site) compiled from the same version. So this is a synthetic test,
not pretending to be general.

As for DISQLite3, I see from your site that it is a great library
with support for many Delphi versions and many db features. I looked
at the source; as far as I can see, the dataset is unidirectional and
processes the query on request. I'm sure there are no performance
penalties here. Good job.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Once again about random values appearance

2014-02-13 Thread Max Vlasov
Hi,

this was probably discussed and modified before, but I still cannot understand
some peculiarities with random column values.

The table
  Create table [TestTable] ([id] integer primary key)

populated with 100 default values (thanks to CTE now made with a single
query):

with recursive
  autoinc(id) as
 (values(1) UNION ALL Select id+1 from autoinc where id < 100)
insert into TestTable select id from autoinc


So the following query

select id, (select id from TestTable where id = abs(random() % 100)) as
rndid from TestTable where id=rndid

returns a single row (as expected), but this row contains different values
for id and rndid. I suppose this is because the rndid to be shown is
calculated once, but the one used for comparison is calculated again and
again. Is this the correct behavior?

I also checked several dll versions and saw that this change appeared
somewhere between 3.6.13 and 3.6.16: .13 showed identical values, while
.16 showed different ones.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Once again about random values appearance

2014-02-13 Thread Max Vlasov
On Thu, Feb 13, 2014 at 4:26 PM, Richard Hipp <d...@sqlite.org> wrote:

> On Thu, Feb 13, 2014 at 4:45 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>
> > Hi,
> >
> > probably was discussed and modified before, but I still can not
> understand
> > some peculiarities with random column values.
> >
>
> It is undefined behavior, subject to change depending the specific version
> of SQLite, compile-time options, optimization settings, and the whim of the
> query planner.
>
>

Thanks, this makes sense.

Given that there are time-related functions, user functions with undefined
values, etc., may I generalize this into "an expression is guaranteed to
have the same value in any context only if it always returns the same
output for the same input"?

Interestingly, the problem probably arises because the symbols in the
result columns are sometimes easy to interpret mentally as "states" to be
used further, while they behave more like macros to be substituted.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Once again about random values appearance

2014-02-13 Thread Max Vlasov
On Fri, Feb 14, 2014 at 6:35 AM, James K. Lowden
wrote:

> > > select id, (select id from TestTable where id = abs(random() % 100))
> > > as rndid from TestTable where id=rndid
>
> On Thu, 13 Feb 2014 07:26:55 -0500
> Richard Hipp  wrote:
>
> > It is undefined behavior, subject to change depending the specific
> > version of SQLite, compile-time options, optimization settings, and
> > the whim of the query planner.
>
> It should be defined.
>
> In the above query, random() should be evaluated once.  In the SQL
> model the user provides inputs once, and the system evaluates them
> once.
>
>
Once for the query or once for the row?
It is evaluated once for the query. To check, just remove the where clause:

  select id, (select id from TestTable where id = abs(random() % 100)) as
rndid from TestTable

and we get plenty of rows with the same rndid in the right column.

I almost understood the logic behind this, but I just found a query that
surprised me once again. When I wrote it, I wanted to show that to compare
engines we have to remove the alias usage from WHERE, because most engines
forbid using an alias there. But the following one, which should probably
be compatible with other RDBMSes, still shows different column values:

select id, rndid from
(
select id, (select id from TestTable where id = abs(random() % 100)) as
rndid from TestTable
) where id=rndid

But I supposed the rndid in the outer query would have a known (fixed)
value and that the values matched by the where clause would be shown as
identical.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Suggestion to add "locate" as a broader version of "instr"

2014-02-14 Thread Max Vlasov
Hi,

Some time ago, when there was no "instr" function, I looked at the MySQL
help pages and implemented a user function "locate" that allows searching
starting from a particular position in the string. In its two-parameter
form it was identical to "instr", only with the order of parameters
reversed. As I see, the latest sqlite has only "instr".

It's not a big deal, but I noticed that "locate" with three parameters
becomes convenient for recursive CTE queries since it allows searching
sequentially through the string. For example, a little bulky in the end,
but I managed to write a "comma-list to dataset" query with it.

I suppose implementing "locate" and making "instr" a call to "locate"
would cost the developers probably no more than a hundred bytes in the
final binary.
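
For reference, a user-function version of such a three-argument locate can
be sketched as below (my own illustration following the MySQL convention
of 1-based positions, not code from the sqlite tree):

  #include <sqlite3.h>
  #include <string.h>

  /* locate(needle, haystack[, start]) -> 1-based position, or 0 if not found */
  static void locate_func(sqlite3_context *ctx, int argc, sqlite3_value **argv){
    const char *needle   = (const char*)sqlite3_value_text(argv[0]);
    const char *haystack = (const char*)sqlite3_value_text(argv[1]);
    int start = (argc > 2) ? sqlite3_value_int(argv[2]) : 1;
    const char *hit;
    if( needle == 0 || haystack == 0 ){ sqlite3_result_null(ctx); return; }
    if( start < 1 ) start = 1;
    if( (size_t)(start - 1) > strlen(haystack) ){ sqlite3_result_int(ctx, 0); return; }
    hit = strstr(haystack + start - 1, needle);
    sqlite3_result_int(ctx, hit ? (int)(hit - haystack) + 1 : 0);
  }

  /* register both arities:
  **   sqlite3_create_function(db, "locate", 2, SQLITE_UTF8, 0, locate_func, 0, 0);
  **   sqlite3_create_function(db, "locate", 3, SQLITE_UTF8, 0, locate_func, 0, 0);
  */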

Thanks

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Once again about random values appearance

2014-02-16 Thread Max Vlasov
On Mon, Feb 17, 2014 at 1:22 AM, James K. Lowden
<jklow...@schemamania.org>wrote:

> On Fri, 14 Feb 2014 08:32:02 +0400
> Max Vlasov <max.vla...@gmail.com> wrote:
>
> > From: Max Vlasov <max.vla...@gmail.com>
> > To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
> > Reply-To: General Discussion of SQLite Database
> > <sqlite-users@sqlite.org> Date: Fri, 14 Feb 2014 08:32:02 +0400
> > Subject: Re: [sqlite] Once again about random values appearance
> >
> > On Fri, Feb 14, 2014 at 6:35 AM, James K. Lowden
> > <jklow...@schemamania.org>wrote:
> >
> > > > > select id, (select id from TestTable where id = abs(random() %
> > > > > 100)) as rndid from TestTable where id=rndid
> > >
> > > On Thu, 13 Feb 2014 07:26:55 -0500
> > > Richard Hipp <d...@sqlite.org> wrote:
> > >
> > > > It is undefined behavior, subject to change depending the specific
> > > > version of SQLite, compile-time options, optimization settings,
> > > > and the whim of the query planner.
> > >
> > > It should be defined.
> > >
> > > In the above query, random() should be evaluated once.  In the SQL
> > > model the user provides inputs once, and the system evaluates them
> > > once.
> > >
> > >
> > Once for the query or once for the row?
>
> Once for the query.
>
> As a user you have no control how the system evaluates your query.
> The evaluation may change over time with different implementations, but
> the semantics of the query do not.
>
> Not long ago on this list we discussed
>
> SELECT *, datetime('now') from T;
>
> and the behavior was that the datetime function was called per-row,
> resulting in different times on different rows.  It was changed, the
> rationale IIRC to be compatible with the SQL standard and other DBMSs.
>


Ok, I hope I found the topic; the title was
  "racing with date('now') (was: Select with dates)"
and one of the links to the archive:
  https://www.mail-archive.com/sqlite-users@sqlite.org/msg79456.html

CMIIW, but as I see it, the final modification was commented on by Richard:

 > As a compromise, the current SQLite trunk causes 'now' to be exactly the
 > same for all date and time functions within a single sqlite3_step() call.

But this is just for 'now' and the date-related functions. I wanted to be
sure, so I created a user function NanoSec() that returns nanoseconds as
calculated with QueryPerformanceCounter and QueryPerformanceFrequency on
Windows and clock_gettime(CLOCK_REALTIME, ...) on Linux. It seems it's not
always real nanoseconds, but the value changes frequently enough to be
different for close VDBE instructions of the sqlite engine.
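
The POSIX flavor of such a function is roughly the following sketch (my own
illustration, not the exact code used for these tests):

  #include <sqlite3.h>
  #include <time.h>

  static void nanosec_func(sqlite3_context *ctx, int argc, sqlite3_value **argv){
    struct timespec ts;
    (void)argc; (void)argv;
    clock_gettime(CLOCK_REALTIME, &ts);
    sqlite3_result_int64(ctx, (sqlite3_int64)ts.tv_sec * 1000000000 + ts.tv_nsec);
  }

  /* sqlite3_create_function(db, "nanosec", 0, SQLITE_UTF8, 0, nanosec_func, 0, 0); */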

So

  Select nanosec() - nanosec() from ...

returns non-zero values most of the time, so there's no guarantee that
user functions or any other functions will be called once per step.

My original issue was commented on by Richard, and there's no other
argument I can post, because the ability to use an alias in the WHERE
clause is probably an sqlite-only feature, so you just can't reference any
standard about it.

But... your first reply brought up another problem: the case where the
outer query uses an alias from the inner query, and here sqlite can be
compared with other engines. So the nanosec example, modified:

 Select v-v from
 (
   Select nanosec() as v from TestTable
 )

...shows non-zero values for the current (3.8.3) and for an older (3.6.10)
version. And here it would be interesting to know whether any standard has
something to say about the value of v in the outer query.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Once again about random values appearance

2014-02-17 Thread Max Vlasov
On Mon, Feb 17, 2014 at 2:27 PM, Simon Slavin <slav...@bigfraud.org> wrote:

>
> On 17 Feb 2014, at 7:59am, Max Vlasov <max.vla...@gmail.com> wrote:
>
> > So the nanosec example modified
> >
> > Select v-v from
> > (
> >   Select nanosec() as v from TestTable
> > )
> >
> > ...shows non-zero values for the current (3.8.3) and for older (3.6.10)
> > version.
>
> Erm ... that worries me a little.
>
> I don't know how SQLite handles sub-selects internally.  But the
> conventional way of doing them is a JOIN to a VIEW.  Which means that v-v
> should always equal zero.
>


Explain shed a little light:
...
  4  Function   0  NanoSec(0)
  5  Function   0  NanoSec(0)
  6  Subtract   2
  7  ResultRow  1
  8  Next       4
...
So there is no intermediate storage, probably for performance reasons. The
listing also looks very self-explanatory, while a possible stateful variant
would add more complexity to the VDBE code.


>
> By the way, when figuring out how to optimize this still it's worth noting
> that a parameterless function is rare in SQL.  It's rarely worth optimizing
> unnamed expressions because you rarely get the same unnamed expression
> (including parameters) used twice.
>
>
Simon, I see your point; next time my test function will depend on at
least a dozen very important parameters and will have a very, very, very
long name :)

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Once again about random values appearance

2014-02-17 Thread Max Vlasov
On Mon, Feb 17, 2014 at 7:00 PM, RSmith <rsm...@rsweb.co.za> wrote:
>
>
> On 2014/02/17 09:59, Max Vlasov wrote:
>>
>> .
>> So
>>
>>Select nanosec() - nanosec() from ...
>>
>> returns non-zero values for most of the times, so there's no guarantee the
>> user functions or any other functions will be called once for the step.//... 
>> etc.
>
>
> Did you mark your nanosec function as SQLITE_DETERMINISTIC 
> <http://www.sqlite.org/c3ref/c_deterministic.html>?
> http://www.sqlite.org/c3ref/create_function.html
>
> Which, if not, it can and will very much return non-zero values.
>

Very interesting option; it did affect the results, they're now zero
for both cases. I also see this is quite a young option, listed as a new
feature of 3.8.3:

"Added SQLITE_DETERMINISTIC as an optional bit in the 4th argument to
the sqlite3_create_function() and related interfaces, providing
applications with the ability to create new functions that can be
factored out of inner loops when they have constant arguments"
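
In terms of the nanosec sketch earlier in this thread, the only change is
OR-ing the flag into the encoding argument (this assumes a 3.8.3+ sqlite3.h
that defines SQLITE_DETERMINISTIC):

  sqlite3_create_function(db, "nanosec", 0, SQLITE_UTF8 | SQLITE_DETERMINISTIC,
                          0, nanosec_func, 0, 0);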

So the query with this option

  Select nanosec() as v from TestTable where v<>v

always returns an empty dataset.

But it seems this option still leaves something to explain, since

  Select nanosec() - nanosec() from TestTable

always returns zeros, while

  Select nanosec(), nanosec() from TestTable

returns different values for the two fields.

Either nanosec() - nanosec() is much faster than the granularity of the
performance counter on average Windows hardware, or the two cases are
handled differently for some reason.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Latest Sqlite grammar as machine understandable file

2014-02-21 Thread Max Vlasov
Hi,

Is there a machine-readable (BNF or other) grammar as equivalent to
the current syntax diagrams?
  http://www.sqlite.org/syntaxdiagrams.html

The only somewhat similar one I found is
  http://www.sqlite.org/docsrc/artifact/873cf35adf14cf34
  ( mentioned as art/syntax/all-bnf.html )

but it's pretty outdated (no CTE)

Thanks

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Latest Sqlite grammar as machine understandable file

2014-02-21 Thread Max Vlasov
On Fri, Feb 21, 2014 at 4:47 PM, Richard Hipp <d...@sqlite.org> wrote:
> On Fri, Feb 21, 2014 at 7:29 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>
>> Is there a machine-readable (BNF or other) grammar as equivalent to
>
> Not that I am aware of.
>

I just noticed the file ( bubble-generator-data.tcl )
  www.sqlite.org/docsrc/doc/tip/art/syntax/bubble-generator-data.tcl?mimetype=text/plain
Is it created by a human or by a machine? It looks like a good
candidate, but it might lack some essential information for parsing. If
it was generated by a machine, then the source of that generation might
be the next good candidate :)

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Latest Sqlite grammar as machine understandable file

2014-02-21 Thread Max Vlasov
On Fri, Feb 21, 2014 at 5:24 PM, Richard Hipp <d...@sqlite.org> wrote:
> On Fri, Feb 21, 2014 at 7:29 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>
>>
>> The only one a little similar I found is
>>   http://www.sqlite.org/docsrc/artifact/873cf35adf14cf34
>>   ( mentioned as art/syntax/all-bnf.html )
>>
>
> An updated version of all-bnf.html has now been checked in at
> http://www.sqlite.org/docsrc/doc/trunk/art/syntax/all-bnf.html
>

Thanks a lot, that human-machine activity was much faster than my
writing of the reply post :)

My thoughts about parsing were about using it to recognize certain states
of user queries in order to suggest additional tasks. In a sense any query
is a small database containing, for example:
- the list of datasets used (there might be a quick hint nearby),
- a possible master-detail relationship (enabling a menu item for
showing two connected list views instead of the joined table).

I already tried to detect some states without parsing, but obviously
not much is possible without a full AST at hand.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Question about how sqlite recovers after a power loss

2014-02-26 Thread Max Vlasov
On Wed, Feb 26, 2014 at 12:33 PM, Fabrice Triboix
 wrote:
>
> And even then, that would not explain why the journal file lingers after 
> re-opening the database.
>

I remember asking a similar question. As far as I remember, the main
logical implication is that the presence of a journal file is not a mark
that forces the database into some action. It's rather a supplement to the
error state of the database. So if the database is in an error state and
there's no journal file, that's bad. But the reverse is ok and can appear
in some scenarios.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Virtual Table "Functions"

2014-03-01 Thread Max Vlasov
On Fri, Feb 28, 2014 at 10:14 PM, Dominique Devienne
 wrote:
> Can someone tell me how the statement below works?
> 
>
> Thanks for any help on this. This is really puzzling to me. --DD


Very puzzling for me too.
For any statement like this

  select * from blablabla(123)

sqlite (3.8.3.1) first reports a syntax error...

   near "(": syntax error

... before complaining about anything else.

I'd also be glad to see a shortcut where a virtual table is created just
for a select statement and dies automatically, but I suspect it is not
supported and that they were using their own modified version of sqlite.
I hope the OP also reads this post and will have an answer for us; that
would probably also help with providing further information regarding his
issues.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Virtual table API performance

2014-03-01 Thread Max Vlasov
Hi,
thanks for explaining your syntax in another post. Now about virtual
tables, if you don't mind.

On Fri, Feb 28, 2014 at 8:24 PM, Eleytherios Stamatogiannakis
 wrote:
>
> If we load into SQLite, 
>
> create table newtable as select * from READCOMPRESSEDFILE('ctable.rc');
>
> it takes: 55 sec
>
>
> If we create an external program 
>
> it takes: 19 sec (~3x faster than using the virtual table API)
>
>

Looking at your numbers, as a user (and fan :) of virtual tables I
decided to do some tests.

I have a virtual table "allvalues"; it was designed to enumerate all
table values into one single virtual table, so effectively it is a
long list of

  TableName, TableRowId, FieldName, Value

so you get the idea. As an example of what it can do, you can open
places.sqlite of the Mozilla browser and do

  Select * from AllValues where Value Like "%sqlite.org%"

and see actual results without even knowing how they planned their schema.

Internally this virtual table simply uses general selects for all
other tables found in sqlite_master. This is a good (but probably not
the best) test for measuring virtual table performance, because

  SELECT * FROM AllValues

is equivalent to reading all conventional tables of this database.
Besides:
- the tool I use has a tweaker implemented with a VFS that allows
measuring the speed and other characteristics of the query while it is
running;
- I have an option that forces resetting the Windows cache for the
database file when it is reopened. With it we exclude the Windows
cache from consideration, so pure I/O reading is used. Btw, when you do
your comparison, it's very important to reset the system cache before
every measurement that involves I/O.


So I took a comparatively large (500 MB) database consisting of
several small tables and one big table (Posts) and compared two queries.

(Query1)

  Select sum(length(Body) + length(Title)) from Posts

This one effectively reads the table data and uses
- length() to force sqlite to read texts that don't fit into a single db page,
- sum() to exclude accumulating results on my side from the comparison, so
we have a single-row, single-column result from work done completely
by sqlite.

(Query2)

  Select Sum(Length(Value)) from AllValues

This one does basically the same but through the sqlite virtual table
API. It also touches the other tables, but since they're small, we can
forget about that.

Query1 (General):
  Read: 540MB,
  Time: 24.2 sec,
  CPU Time: 6 Sec (25%)
  Speed: 22.31 MB/Sec

Query2 (Virtual):
  Read: 540MB,
  Time: 27.3 Sec,
  CPU Time: 13 sec (51%)
  Speed: 20 MB/Sec

In my particular test the noticeable difference is in the extra CPU
spent with the virtual table. I assume this can be related to my own
implementation of this virtual table, since I have to retrieve and
store values temporarily somewhere and talk to sqlite. But this may also
shed light on your performance drop: if your virtual implementation
spends much time processing a value, you may end up with a big drop.

You may say this test is not fair because it does not involve
creating a table from the values of a virtual table. Unfortunately I
can't create a good enough test comparing the Posts and AllValues tables
as sources, because the destination geometries of the tables are different
(Posts has more columns and fewer rows, AllValues fewer columns and more
rows). The closest approximation was possible when I created an
intermediate physical table containing the results from AllValues and
compared table creation from this table and from the virtual table. The
virtual one took longer, but the values, 56 seconds vs 43 seconds, are not
different enough to conclude anything.

I'm not sure my tests defend sqlite virtual tables sufficiently, but
currently I don't have evidence of significant inefficiency either.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Virtual table API performance

2014-03-02 Thread Max Vlasov
On Sun, Mar 2, 2014 at 5:21 PM, Elefterios Stamatogiannakis
 wrote:
>
> Our main test case is TPCH, a standard DB benchmark. The "lineitem" table of
> TPCH contains 16 columns, which for 10M rows would require 160M xColumn
> callbacks, to pass it through the virtual table API. These callbacks are
> very expensive, especially when at the other end sits a VM (CPython or PyPy)
> handling them.
>

Ok, without claiming that the performance improvement is impossible, I
will explain why I'm a little sceptical about it.

For every bulk insert we have a theoretical maximum we'd all be glad to
see sqlite perform at: the speed of simple file copying. Sqlite can't be
faster than that, but being on par is a good goal. This is not possible
when an insert also means modification of other parts of the file, for
example when there's an index involved, but let's forget about that.
In the end, when new data is added, sqlite has to write a number of
database pages, and the cost of this part is entirely in the hands of the
media (drive) and the OS (driver). But for every database page write
there's also a price to pay in CPU for the many actions sqlite has to
perform before the value provided by the developer is translated into
what actually appears on disk.

The illustration of the CPU price is the following example:

  CREATE TABLE t(Value)

On my ssd drive, multiple inserts (thousands) of the form

  insert into t (Value) values ('123456689...')  -- a string of many symbols, e.g. 1024

perform at a speed of about 30 MB/sec, but the query

  insert into t (Value) values (10)  -- a small integer value

manages only about 3 MB/sec.

Both show almost full CPU load. Why such a difference? Because with the
latter query the system could do more than 30 MB of writes in 1 second,
but it has to wait for sqlite, which spends 10 seconds in preparations.
The former is better because the CPU cost of passing a large text value to
sqlite is comparatively low compared to the time spent in I/O writing it
to disk.
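
For context, such a measurement typically boils down to a loop like this
sketch (my own illustration, not the actual test code; it reuses the table
t from the example above):

  #include <sqlite3.h>

  /* Insert the same value n times through one prepared statement,
  ** inside a single transaction. */
  static void bulk_insert(sqlite3 *db, const char *value, int n){
    sqlite3_stmt *st;
    int i;
    sqlite3_exec(db, "BEGIN", 0, 0, 0);
    sqlite3_prepare_v2(db, "INSERT INTO t(Value) VALUES(?)", -1, &st, 0);
    for(i = 0; i < n; i++){
      sqlite3_bind_text(st, 1, value, -1, SQLITE_STATIC);
      sqlite3_step(st);
      sqlite3_reset(st);
    }
    sqlite3_finalize(st);
    sqlite3_exec(db, "COMMIT", 0, 0, 0);
  }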

So the CPU price isn't avoidable, and notice that in this example it is
not the virtual table API, it is the bind API. I suppose the CPU price we
pay in the virtual table API is on par with the average price paid in
sqlite as a whole. This means that if I transform the above queries into
inserts from virtual tables, the final speed difference will be similar.
And this also means that in your comparison tests (where you get the x3
difference), the CPU price sqlite pays inside the bind API and in its code
wrapping the xColumn call is probably similar. The rest is the share your
code pays.

Well, I know that there are differences in CPU architectures, and probably
there are platforms where compiled code for the bind API and the virtual
table API behaves a little differently, making the costs more different.
But imagine the hard task of fine tuning and refactoring just to get a
noticeable difference on a particular platform.


Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] RPAD/LPAD

2014-03-07 Thread Max Vlasov
On Fri, Mar 7, 2014 at 12:49 PM, Dominique Devienne  wrote:
> I think what SQLite lacks is a syntax to define custom function like
> it does for virtual tables. Something like:
>
> create function rpad(x, y) using scripty_module as "return 
> PRINTF('%-*s',y,x)";
>

Nice suggestion. This probably falls into the case where a small new part
is needed on the sqlite side and the large DSL-specific features depend on
the side of the registered module. But you still need some binary module
for your functions to work.

How about a "macro expression substitution" feature? Something that
could accept any expression compatible with sqlite as the function
body, to be simply translated into the corresponding VDBE instructions.

For example,

Create function complexcase(x,y) as Case when x < 10 then y else y + 10 end

Select complexcase(myField1, myField*10) from t;

On the sqlite side, the engine can only syntax-check the create
function body, but referencing it in an actual query would require
actual substitution, with the corresponding context-sensitive errors.

This will definitely be more costly in kB on the sqlite side than the
module approach, but it would create a more universal solution.

Just quick thoughts; probably there are plenty of obstacles here :)

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] RPAD/LPAD

2014-03-07 Thread Max Vlasov
On Fri, Mar 7, 2014 at 6:39 PM, Clemens Ladisch <clem...@ladisch.de> wrote:
> Max Vlasov wrote:
>>
>> Nice suggestion. This probably falls into case when a small new part
>> needed on sqlite side
>
> Actually, no change to SQLite itself would be needed.  It's possible
> to create an extension that provides a function that allows to register
> another function that executes a custom SQL expression:
>
>  SELECT register_simple_function('rpad', 2, 'SELECT printf(''%-*s'', ?, ?)');
>
>

Brilliant :) This is a solution that doesn't require adding anything to
sqlite. I'd only prefer not using full-featured SQL queries, but
rather an expression that is automatically wrapped in a SELECT without
FROM; otherwise they could create extra dimensions (columns, rows) when
our function strictly expects one result. And also support for numbered
parameters. So my example would be implemented as

SELECT register_function('complexcase', 2, 'Case when ?1 < 10 then ?2
else ?2 + 10 end');

It's interesting that such an approach makes this super-function
pragma-like. It changes some internal state (reveals a new function)
and also optionally returns some result, for example 1 for success.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] RPAD/LPAD

2014-03-07 Thread Max Vlasov
On Sat, Mar 8, 2014 at 2:16 AM, Clemens Ladisch  wrote:
> Eduardo Morras wrote:
>> So, if a webapp that uses SQLite doesn't check it's input, functions that 
>> renames SQLite internals can be injected
>>
>> SELECT register_simple_function('MAX', 1, 'DROP TABLE ?');
>
> Such a statement would not return a single column, so it wouldn't
> actually get executed.
>
> But it might be possible to execute something like "PRAGMA evil = on",
> so this function probably should be secured like load_extension().
>

Absolute evil ) I had already thought that introducing such a function
violates the common-sense assumption that the Select API is side-effect
free (in the context of database changes), since "register" already
slightly violates this by messing with the namespace context. Allowing
non-Select queries might pose a damage risk because it would complain only
after the damage is done (no SQLITE_ROW result for an Update or Insert
query, but the database has already changed by that moment). That's also
why I still think that constraining it to the Select API, assuming an
expression and automatically wrapping it in a Select, is a must.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] RPAD/LPAD

2014-03-07 Thread Max Vlasov
On Fri, Mar 7, 2014 at 11:51 PM, Dominique Devienne  wrote:
>
> basically register_function('rpad', 'x', 'y', 'printf(''%-*s'', y,
> x)') would register a 2-arg function (register_function's argc-2)
> named $argv[0], which executes the following statement
>
> with args($argv[1],  $argv[2], ... $argv[argc-2]) as (VALUES(?, ?))
> select $argv[argc-1) from args;
>

Dominique, your variant is even better than using numbered parameters.
If you use the named ones supported by sqlite (:VVV), then sqlite will do
the job of the textual replacement with the bind API itself
( 'printf(''%-*s'', :y, :x)' ).

The small problem in this case is that there are two ways (times) to
check whether a named parameter exists in the expression: either when
the registered function is executed (reporting an error about missing
parameters only then), or by making a test call with some test parameters
at the time of registration.

The latter reveals errors earlier, but it has no knowledge about the
expression, so it might fire strange run-time errors (division by zero
etc.). If we omit execution and leave only the binding, we might miss the
case when not all binding variables mentioned in the expression are
actually provided (example: ... 'somefunc', 'x', 'y', ':x + :y + :z').

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] RPAD/LPAD

2014-03-08 Thread Max Vlasov
On Sat, Mar 8, 2014 at 10:52 AM, Max Vlasov <max.vla...@gmail.com> wrote:
> On Fri, Mar 7, 2014 at 11:51 PM, Dominique Devienne <ddevie...@gmail.com> 
> wrote:
>>
>> basically register_function('rpad', 'x', 'y', 'printf(''%-*s'', y,
>> x)') would register a 2-arg function (register_function's argc-2)
>> named $argv[0], which executes the following statement
>>
>> with args($argv[1],  $argv[2], ... $argv[argc-2]) as (VALUES(?, ?))
>> select $argv[argc-1) from args;
>>
>
> Dominique, your variant is even better than using numbered parameters.
> If you use named ones supported by sqlite (:VVV) then sqlite will do
> the job of the textual replacement with bind api itself  (
> 'printf(''%-*s'', :y,> :x)'  )
>
> The small problem in this case is that there are two ways (times) to
> check whether named parameter exists in the expression.

No longer a problem here :) sqlite3_bind_parameter_name is available at
the time of registration, so the prepared statement knows everything
about the number and the names of the parameters for full checking.

I have a working prototype; some things are left to do, but this
confirms that sqlite is content with the contexts and everything.
Examples:

SELECT RegisterExpressionFunction('myfunc', '45');
Select Myfunc()
45

SELECT RegisterExpressionFunction('mysum', 'x', ':x + :x');
select mysum(45)
90

SELECT RegisterExpressionFunction('mysumalt', '', '', '?1 + ?2');
select mysumalt(45, 67)
112

SELECT RegisterExpressionFunction('strconcat', 's1', 's2', ':s1||:s2');
Select strconcat('foo ', 'bar')
foo bar
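
Not my actual prototype, but the core of such a RegisterExpressionFunction
can be sketched roughly as follows: prepare "SELECT <expression>" once,
then install a function whose callback binds its arguments and returns the
first column. A real version would also validate parameter names via
sqlite3_bind_parameter_name and free the statement through a destructor
with sqlite3_create_function_v2; this sketch skips both:

  #include <sqlite3.h>

  /* Callback shared by every registered expression function; the prepared
  ** "SELECT <expr>" statement is stored as user data. */
  static void expr_func(sqlite3_context *ctx, int argc, sqlite3_value **argv){
    sqlite3_stmt *st = (sqlite3_stmt*)sqlite3_user_data(ctx);
    int i;
    for(i = 0; i < argc; i++) sqlite3_bind_value(st, i + 1, argv[i]);
    if( sqlite3_step(st) == SQLITE_ROW )
      sqlite3_result_value(ctx, sqlite3_column_value(st, 0));
    else
      sqlite3_result_null(ctx);
    sqlite3_reset(st);
    sqlite3_clear_bindings(st);
  }

  /* register_expr(name, nargs, expr) -- itself registered as a 3-arg function */
  static void register_expr(sqlite3_context *ctx, int argc, sqlite3_value **argv){
    sqlite3 *db = sqlite3_context_db_handle(ctx);
    const char *name = (const char*)sqlite3_value_text(argv[0]);
    int nargs = sqlite3_value_int(argv[1]);
    char *sql = sqlite3_mprintf("SELECT %s", (const char*)sqlite3_value_text(argv[2]));
    sqlite3_stmt *st = 0;
    (void)argc;
    if( sql && sqlite3_prepare_v2(db, sql, -1, &st, 0) == SQLITE_OK
     && sqlite3_create_function(db, name, nargs, SQLITE_UTF8, st,
                                expr_func, 0, 0) == SQLITE_OK ){
      sqlite3_result_int(ctx, 1);     /* the prepared statement is never finalized here */
    }else{
      sqlite3_finalize(st);
      sqlite3_result_error(ctx, "could not register expression function", -1);
    }
    sqlite3_free(sql);
  }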


Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Suggestion to add "locate" as a broader version of "instr"

2014-03-08 Thread Max Vlasov
On Sat, Mar 8, 2014 at 2:24 AM, big stone  wrote:
> Ooups !
>
> Thanks to the awesome  posts about "RPAD/LPAD", I understood that I could
> already create a "sqrt()" function for SQLite3  in interpreted python.
>


Yes, that discussion was inspiring :)

Looking at your task, I also played with a CTE version of sqrt.
Based on the "guessing" approach from one of the answers at
  
http://stackoverflow.com/questions/3581528/how-is-the-square-root-function-implemented
the following query finally worked.

/*
  :value=12345
*/
with recursive
  sqrt(depth, val, guess) as
   (
  select 1, :value, Cast(:value as Float)/2
  UNION ALL
  select depth + 1, val as newval, ((guess + val/guess)/2) as
newguess from sqrt where abs(newguess - guess) > 1e-308 and depth <
100
   )
select guess from sqrt order by depth desc limit 1

but I could not overcome some peculiarities of floating-point numbers, so
depth < 100 here is for cases when the comparison fails to stop.
Also, for CTE queries in general, I wonder whether there is a
faster way to get the last row of the query (in natural execution
order), so that the order by depth can be replaced by something else. I
suspect the ordering here triggers temporary storage.

I tested this function as an "expression function" implemented based on
that thread, and the average speed is about 4000 sqrt
operations per second on a mobile Intel i3. Not so fast, but if one
desperately needs it, it would be ok.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Observations about CTE optimization

2014-03-10 Thread Max Vlasov
Many CTE queries are just mini-algorithms with iteration where only the
last row is required. I wondered whether it's easy to do this
without the "order by ..." in the outer query (also mentioned in my reply
about the CTE sqrt). There is a solution, but the good news is that one
probably rarely needs such an optimization because of the effectiveness
of sqlite's in-memory B-tree sort. Anyway, the results are below.

A synthetic test of a query iterating up to 1,000,000:

with recursive
  testval(depth, val) as
   (
  select 1, 0
  UNION ALL
  select depth + 1, val + 1 from testval where depth <= 100
   )
select val from testval order by depth desc limit 1

Time:
  4 sec. 20 ms
Explain query plan
"SCAN TABLE testval"
"COMPOUND SUBQUERIES 0 AND 0 (UNION ALL)"
"SCAN SUBQUERY 1"
"USE TEMP B-TREE FOR ORDER BY"


We add an endmark and the where condition moves to the outer select, so we
get rid of "order by" completely:

with recursive
  testval(endmark, depth, val) as
   (
  select 0, 1, 0
  UNION ALL
  select (case when depth < 100 then 0 else 1 end), depth + 1,
val + 1 from testval where endmark=0
   )
select val from testval where endmark=1

Time:
  2 sec 900 ms.
Explain query plan:
"SCAN TABLE testval"
"COMPOUND SUBQUERIES 0 AND 0 (UNION ALL)"
"SCAN SUBQUERY 1"

For the modified CTE sqrt from the other post, the difference for
100,000 queries was 27 seconds vs 21 seconds. Not that much, but
sometimes it might make sense to do such an optimization.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] which of these is faster?

2014-03-14 Thread Max Vlasov
On Thu, Mar 13, 2014 at 11:06 PM, Richard Hipp  wrote:

>
> Once you do that, you'll see that the opcode sequence is only slightly
> different between the two.  They should both run at about the same speed.
> I doubt you'll be able to measure the difference.
>
>

Actually, a comparatively long (10,000,000 elements) CTE for random
integer generation shows a difference of 20 vs 38 seconds. I suppose pure
min() uses a linear search while the "order by" one uses a temporary
b-tree (explain query also hints at this). Unless, of course, sqlite has
some detection of the "order by ... limit 1" pattern that redirects it to
a linear search.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] which of these is faster?

2014-03-14 Thread Max Vlasov
On Fri, Mar 14, 2014 at 4:51 PM, Richard Hipp  wrote:

>>
> In the original problem, there was already an index on the term for which
> the min() was requested.
>.
> Whit your CTE-generated random integers, there is not an index on the
> values.  So "SELECT min(x) FROM..." does a linear search and "SELECT x FROM
> ... ORDER BY x LIMIT 1" does a sort.
>

I see, my fault; I didn't notice the db was a concrete one.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Using a customized collate function to mimic integer array type.

2014-03-21 Thread Max Vlasov
On Fri, Mar 21, 2014 at 8:06 PM, Ben Peng  wrote:
>
> I guess I will have to take the longer route, namely define a customized
> comparison function and translate user input internally.
>

There's also a virtual table method, probably not so easy to wrap your
head around, but it allows using comma-separated (or other delimiter)
lists stored in fields even in joins. See my reply about it
at https://www.mail-archive.com/sqlite-users@sqlite.org/msg63453.html.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Goodram C50 SSD slow in SQlite benchmark

2014-03-23 Thread Max Vlasov
On Mon, Mar 24, 2014 at 4:49 AM, piotr maliński  wrote:
> I know it's bad. I'm trying to determine the cause of the difference, and
> if it's a "feature" of that SSD or a bug of some sort.

There was a very intensive discussion of a post labeled
"UPDATE/INSERTing 1-2k rows slower than expected". You can read it at
https://www.mail-archive.com/sqlite-users%40sqlite.org/msg58872.html .
There were also various tests I made during that discussion. As far
as I remember, the general observation was that it's the hardware that
usually says "ok, I did this guaranteed-to-be-on-disk operation
you've asked for" while actually caching it somewhere inside. And
probably the many USB controllers from the bunch of manufacturers are
to blame. SATA controllers on motherboards are usually less
diversified, so they're more likely to be accurate.

Also, there's a setting in Windows for hard drives, "enable write
caching". If you find a similar setting in xubuntu, then enabling it
would probably make your SATA connection on par with your USB
connection. But it's just a hypothesis; it's harder to make useful
tests with a SATA connection due to its physical and interface
limitations.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] comma-separated string data

2014-04-05 Thread Max Vlasov
On Fri, Apr 4, 2014 at 10:20 PM, peter korinis  wrote:
> A data column in a link table contains comma-separated string data, where
>
> How do you 'parse' a table entry like: "4,66,51,3009,2,678, ." to extract
> these values and use them in an SQL statement, perhaps a WHERE id='66'?
>
>

In similar cases I use my virtual table explained here:
http://www.mail-archive.com/sqlite-users@sqlite.org/msg63453.html
Actually the table works more like a "function", so only one instance is
required in the db to apply this trick.

Finally the db contains

CREATE VIRTUAL TABLE cmlist Using vtcommalist

and the query

SELECT value FROM cmlist WHERE commalist='45,56,78,125'

returns 4 rows

"45"
"56"
"78"
"125"

And this was a blank db with only this virtual table; the query
doesn't reference anything outside, yet it still generates table data.
So the data for the virtual table is actually supplied at the time of
the query itself, from the WHERE clause.

This trick successfully works with joins and everything else.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Difference in virtual tables logic triggered between version 3.7.15.2 to 3.8.0

2014-04-05 Thread Max Vlasov
On Sat, Apr 5, 2014 at 11:48 PM, Max Vlasov <max.vla...@gmail.com> wrote:
>
> This works for an old version of sqlite (3.6.10), but today Dominique
> Devienne mentioned some doubt about this approach and I decided to
> test it with some data with a recent version of sqlite.  With 3.8.4.3
> the same join query produced two rows and Nulls for CommaList and
> Value
>


Actually I found a way for it to work as assumed. My xBestIndex call
now makes estimatedCost huge for the non-constrained case and
small for the constrained one. Although it works, I'd be glad to know
whether there is a way to return "impossible" instead of "expensive".

I think I see why the query worked differently in different
sqlite versions. This trick by its nature makes the constrained data
(when a WHERE is supplied) a superset of the non-constrained one
(it returns no data if there's no WHERE), and this definitely violates the
logic sqlite relies upon. So in a (normal) world where a
where-constrained result is always smaller than a full-scan result,
sqlite is free to choose a full scan instead of a filtered scan for a
virtual table (which it probably does by default after 3.8.0 for this join
query if the provided estimatedCosts are equal).
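
The xBestIndex part of that workaround reduces to something like the
following sketch (my own illustration, assuming the commalist column is
column 0 of the virtual table):

  static int vtcommalistBestIndex(sqlite3_vtab *tab, sqlite3_index_info *info){
    int i;
    (void)tab;
    info->estimatedCost = 1e10;               /* "expensive": unconstrained scan */
    for(i = 0; i < info->nConstraint; i++){
      if( info->aConstraint[i].usable
       && info->aConstraint[i].iColumn == 0    /* the commalist column */
       && info->aConstraint[i].op == SQLITE_INDEX_CONSTRAINT_EQ ){
        info->aConstraintUsage[i].argvIndex = 1;  /* pass the value to xFilter */
        info->aConstraintUsage[i].omit = 1;
        info->idxNum = 1;
        info->estimatedCost = 1.0;                /* "cheap": constrained scan */
        break;
      }
    }
    return SQLITE_OK;
  }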

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] comma-separated string data

2014-04-07 Thread Max Vlasov
On Sun, Apr 6, 2014 at 10:23 PM, Dominique Devienne  wrote:

>
> If the answer to either question above is true, then a specialized
> vtable would be both more convenient and faster, no?
>

Hmm... If the logical peculiarity of the vtable approach (where
constrained queries might be larger than the full-scan one) is
acceptable to sqlite (mentioned in my other post), then the WHERE
expression might serve as parameters, so a hybrid might be
possible (also inspired by the recent discussion of creating user
functions on the fly). For example, a virtual table that accepts a
Select statement might look like

CREATE VIRTUAL TABLE vcommalist USING QueryVirtualizer('WITH RECURSIVE
 :commalist  ')

And the actual query using it might look like

SELECT * FROM vcommalist WHERE commalist='1,2,3,4,5'

This one serves more as a shortcut, but a broader version is probably
possible where the parameter to the virtual table is a printf-formatted
string, so one can dynamically customize things that general parameters
can't, i.e. table names, output column names etc.

Multiple parameters would be great, but with the current state of things
the implementation should still use some kind of workaround to ensure
correct results, i.e. always return a huge estimatedCost in
xBestIndex if the constraint set that arrives doesn't contain at least one
required parameter (the WHERE clause lacks one) and a low one if all
parameters are provided. I think sqlite might as well interpret an
estimatedCost equal to -1 as a ban on using this index.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] about the apparently arriving soon "threads"

2014-04-08 Thread Max Vlasov
On Tue, Apr 8, 2014 at 11:00 PM, big stone  wrote:
> Hi,
>
> I did experiment splitting my workload in 4 threads on my cpu i3-350m to
> see what are the scaling possibilities.
>
> Timing :
> 1 cpu = 28 seconds
> 2 cpu = 16 seconds
> 3 cpu = 15 seconds
> 4 cpu = 14 seconds
>

If the info at
http://ark.intel.com/products/43529/Intel-Core-i3-350M-Processor-3M-Cache-2_26-GHz
is right, you have 2 cores, each having 2 threads. They're logically
"cores", but physically not so. My tests with various multi-threading
benchmarks, including a parallel quicksort, showed that a similar mobile
i3 processor rarely benefits beyond 2 threads; the cache coherence penalty
is probably the cause. A desktop Intel Core i5-2310, for example, is a
different beast (4 cores/4 threads): 3 threads were almost always x3
faster, 4 threads added only a little more.

It all still depends on the application. I had stopped believing a
2-threaded Atom would show x2 in any of the tests I made, until on one
graphical test it finally did. But still, if the number of threads is
bigger than the number of cores, then it's probably the legacy of the
HyperThreading hardware Intel started multi-threading with.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Why sqlite has no sqlite_bind_auto or similary named function

2014-04-17 Thread Max Vlasov
Hi,

The problem was with my program that automatically converts XML data
into an sqlite table. It looks for an attribute and appends a column
if it does not exist, stating no particular type. All values were
appended with sqlite3_bind_text. Everything was fine, but an index
created after this on a ParentId field, which in real life was integer
or null, actually turned out text-based. Sure enough, typeof() of all the
inserted fields appeared to be text.

On the other side, for a typeless table

  CREATE TABLE t (value)

the queries

  insert into t (value) values (123)
  insert into t (value) values ('text')
  insert into t (value) values (34.45)

finally make

  select typeof (value) from t

return

  integer
  text
  real

So it seems that general queries allow automatic affinity selection while
the bind API has no corresponding function. I know I can analyze the
incoming data myself, but since general queries probably use a similar
function, making some kind of sqlite_bind_auto should be no big deal. But
probably this decision was deliberate.
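
To illustrate what I mean, a hypothetical helper (not a real SQLite API;
the detection here is deliberately simplistic and my own) could look like:

  #include <sqlite3.h>
  #include <stdlib.h>

  /* Look at the text and pick an integer, real or text binding. */
  static int bind_auto(sqlite3_stmt *st, int idx, const char *txt){
    char *end = 0;
    long long iv;
    double rv;
    if( txt == 0 ) return sqlite3_bind_null(st, idx);
    iv = strtoll(txt, &end, 10);
    if( *txt != 0 && *end == 0 ) return sqlite3_bind_int64(st, idx, iv);
    rv = strtod(txt, &end);
    if( *txt != 0 && *end == 0 ) return sqlite3_bind_double(st, idx, rv);
    return sqlite3_bind_text(st, idx, txt, -1, SQLITE_TRANSIENT);
  }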

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Why sqlite has no sqlite_bind_auto or similary named function

2014-04-18 Thread Max Vlasov
On Fri, Apr 18, 2014 at 8:08 PM, Andy Goth <andrew.m.g...@gmail.com> wrote:
> On 4/18/2014 12:29 AM, Max Vlasov wrote:
>>
>> So it seems like if general queries allow affinity automatical
>> selection while bind api does not have the corresponent function. I
>> know that I can analize incoming data myself, but since general
>> queries use a similar function probably, making some kind of
>> sqlite_bind_auto should be no big deal. But probably this decision was
>> deliberate.
>
>
> This was done for C compatibility.  A C++ wrapper for SQLite can have an
> sqlite_bind_auto() function which is overloaded for a variety of types,
> but it would in reality be multiple functions that have the same name
> but otherwise different type signatures.

I meant something different: a function accepting a text value with
sqlite doing the affinity conversion according to how the value "looks".
Actually, it seems that all the problems I faced were related to the
column being typeless in the declaration. For integer-declared fields
sqlite correctly converts values to integer affinity even with the
sqlite3_bind_text function. So it makes no sense to force another kind
of detection if the type is declared. But if I stay with typeless
fields, I will definitely have problems with indexes on them,
because it seems the query planner relies on declared types and
doesn't care how many rows have a particular affinity. So it looks like
I should pre-scan some rows for type detection if I want an automatic
conversion without knowing the type info in advance.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] CTE in views for older versions

2014-04-20 Thread Max Vlasov
Hi,
noticed that an attempt to open a database containing a view

CREATE VIEW ... WITH RECURSIVE ...

with older (non-CTE) versions of sqlite failed. The call to
sqlite3_open_v2 was ok, but any access, for example to PRAGMA encoding,
led to a "malformed database schema" error. Although it came as no big
surprise, one might expect delayed handling behavior as with virtual
tables, so the incompatibility is not hit until the view is referenced
in a query.

So probably either
  https://www.sqlite.org/lang_createview.html
or
  https://sqlite.org/lang_with.html

might contain a short sentence about this incompatibility.

Thanks,

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] WITHOUT ROWID option

2014-05-09 Thread Max Vlasov
On Wed, May 7, 2014 at 6:31 PM, Richard Hipp  wrote:
> On Wed, May 7, 2014 at 9:00 AM, Marco Bambini  wrote:
>
>> What is the best way to know if a table has been created with the WITHOUT
>> ROWID option?
>>
>
>
> (1) You could send "SELECT rowid FROM table" 
>
> (2) Run both "PRAGMA index_list(table)" 
>
>

Is there a way for a virtual table implementation to report that it
has no rowid support before the first xRowId call takes place?

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Latest Sqlite grammar as machine understandable file

2014-05-10 Thread Max Vlasov
On Fri, Feb 21, 2014 at 5:24 PM, Richard Hipp <d...@sqlite.org> wrote:
> On Fri, Feb 21, 2014 at 7:29 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>
>> Is there a machine-readable (BNF or other) grammar as equivalent to
>> the current syntax diagrams?
>
> An updated version of all-bnf.html has now been checked in at
> http://www.sqlite.org/docsrc/doc/trunk/art/syntax/all-bnf.html
>

Thanks again for the updated grammar. I don't know whether it's
somehow used for production logic (probably it is just kept manually
in sync with it), but I wanted to mention a couple of things I noticed.

In the rule
  join-clause ::= table-or-subquery [ join-operator table-or-subquery join-constraint ]
the construct in [] probably should be made repeatable with *. At
least without this addition it prevented parsing of multiple joins.

There are no rules described for initial-select and recursive-select,
so I guessed them to be synonyms for select-stmt.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] New DLLs and sources. Was: SQLite version 3.8.5 beta

2014-06-03 Thread Max Vlasov
On Tue, May 27, 2014 at 10:49 PM, Richard Hipp  wrote:
> On Tue, May 27, 2014 at 10:50 AM, Richard Hipp  wrote:
>>
>
> This time I build the 32-bit DLL using mingw instead of MSVC.  (MSVC was
> still used for the 64-bit DLL.)  So perhaps it will work correctly on
> WinXP.  Please let me know one way or the other.  Thanks.
>

I don't know what the difference is between the "stock" 3.8.4.3 from
the site and the newly compiled one, but synthetic tests with memory
databases (no I/O involved) show that the new one is about 20% faster
(Intel Core i5-2310). If they're both MinGW builds then probably some
switch differences between them (if there are any) give such a
difference.

Here is the schema and the timings. The first one is for
sqlite-dll-win32-x86-3080403.zip, the second one is for
sqlite-dll-win32-x86-201406021126.zip

The table in memory
  CREATE TABLE [TestTable] ([ID] Integer primary key, [IntValue]
INTEGER, [FloatValue] FLOAT, [StrValue] TEXT)

was populated with 1000 pseudo-random rows using the same seed for
every test. The following queries all use a cross join and differ in
the types of data involved (int, float, string).


Select Count(*) FROM (SELECT * FROM TestTable T1 Join TestTable T2 )
62 msec
47 msec

Select Count(*) FROM (SELECT *,t1.FloatValue / t2.FloatValue as
divvalue FROM TestTable T1 Join TestTable T2 where abs(divvalue -
round(divvalue)) > 0.499)
453 msec
359 msec

Select Count(*) FROM (SELECT * FROM TestTable T1 Join TestTable T2
where (T1.IntValue * T2.IntValue) % 1789 = 56)
203 msec
187 msec

Select Count(*) FROM (SELECT * FROM TestTable T1 Join TestTable T2
where substr(t1.strvalue, 1 + t1.intvalue % 20, 1 + t2.intvalue % 20)
= t2.strvalue)
499 msec
405 msec

Select Count(*) FROM (SELECT *,t1.FloatValue / t2.FloatValue as
divvalue FROM TestTable T1 Join TestTable T2 where abs(divvalue -
round(divvalue)) > 0.499 or (T1.IntValue * T2.IntValue) % 1789 = 56 or
substr(t1.strvalue, 1 + t1.intvalue % 20, 1 + t2.intvalue % 20) =
t2.strvalue)
1124 msec
952 msec


Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Attaching vfs

2011-03-06 Thread Max Vlasov
On Mon, Mar 7, 2011 at 2:56 AM, Roger Binns <rog...@rogerbinns.com> wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 03/06/2011 03:04 PM, Max Vlasov wrote:
> > I don't think I need a solution that complex.
>
> You need to be very careful with your terminology :-)
>
> As for getting other data into your database, just do a .dump of a
> different
> SQLite database and run it.
>
> If you are looking for something to exercise ...
>
>
Roger, probably dumping is my current way to go. Also thanks for the
test links.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Getting random data from grouped data in one table

2011-03-09 Thread Max Vlasov
On Wed, Mar 9, 2011 at 11:53 AM, eric wong <ewl...@gmail.com> wrote:

> But now, I must get the data from certain PATIENT_GROUP.
>
> What's your best approach to solve this? The objective is the fastest
> possible query.
>
> Thanks.
>
>

There's the "order by random()" approach mentioned by Richard; there's
also another one that is self-explanatory from this query

SELECT * FROM Table LIMIT 1 OFFSET abs(random() % (SELECT Count(*) FROM
Table))

in your case this should lead to something like

SELECT patient_id FROM TB_Patient
WHERE patient_group=$group
LIMIT 1
OFFSET abs(random() % (SELECT Count(*) FROM TB_Patient WHERE
patient_group=$group))

Both approaches have to scan the table, so they are slow for large
tables by design, but I found the latter to be slightly faster; I'm
not sure whether this is always the case.

Max Vlasov
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Best approach for xBestIndex/xFilter effectiveness

2011-03-22 Thread Max Vlasov
Hi,

recently I finally started experimenting with virtual tables and there's
at least one thing I cannot understand.

As I see it, xBestIndex/xFilter were designed to allow fast searching
when the implementation is able to do it. But the SQL language also
allows very exotic queries; some of them may be recognized by the
implementation, some not. When a query is not recognized, one can just
do a full scan and rely on sqlite's double-checking. But there are also
cases where recognition does not even look possible. For example

SELECT * FROM vtest where id > random()

in this case xBestIndex just assumes the expression is some constant, so
the implementer can't detect the probably unresolvable query and thinks
it can search quickly (binary search, for example). The call to xFilter
just passes the first random value and sqlite will never call it again
for the same enumeration. So xFilter treats this as the constant used in
the query and jumps to the first matching row, never planning to jump
back. But this is actually misleading, since in the real world sqlite
calls random() for every row, and the bypassed rows are actually
important and can evaluate to true. I mentioned random(), but there may
be other cases, for example when other fields are part of the
expressions.

So, the main question: is it possible to detect simple expressions that
can be correctly resolved by quick searching? I know that I can always
rely on sqlite's double-checking and always do a full scan, but for
large datasets one should at least think about some optimization.

Thanks,

Max Vlasov
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Create table if not exists and insert seed data

2011-03-22 Thread Max Vlasov
On Tue, Mar 22, 2011 at 1:04 AM, Erich93063 <erich93...@gmail.com> wrote:

> I am trying to create a SQLite database if it doesn't exist, which I
> know I can use 'CREATE TABLE IF NOT EXISTS", but more importantly, I
> need to initially populate the database with seed data if it doesn't
> exist. If I use CREATE TABLE IF NOT EXISTS, it will obviously create
> the table if it doesn't exist, but if I follow that up with insert
> statements, those would ALWAYS get ran. I only want to enter the seed
> data if the database does not exist. ???
>
>

Maybe something like

CREATE TABLE IF NOT EXISTS [newTable] AS SELECT * FROM DataToPopulate

DataToPopulate can be a table from the db or a temporary table created,
for example, when the program starts.

Max Vlasov
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Best approach for xBestIndex/xFilter effectiveness

2011-03-22 Thread Max Vlasov
On Tue, Mar 22, 2011 at 2:25 PM, Dan Kennedy <danielk1...@gmail.com> wrote:

> On 03/22/2011 04:26 PM, Max Vlasov wrote:
> > Hi,
> >
> > recently I finally started experimenting with virtual tables and there's
> at
> > least one thing I can not understand.
> >
> > As I see xBestIndex/xFilter were developed to allow fast searching if the
> > implementation is able to do this. But there's also sql language that
> allows
> > very exotic queries. Some of them may be recognized by the
> implementation,
> > some not. If the former, one just can rely on sqlite double checking and
> > just do full scan. But there are also cases when it looks like
> recognition
> > is not possible. For example
> >
> > SELECT * FROM vtest where id>  random()
> >
> > in this case xBestIndex just assumes some constant as the expression, so
> the
> > one who implements just can't detect probably unresolved query and thinks
> > that it can search quickly (binary search, for example). The call to
> xFilter
> > just passes first random value and sqlite will never call it again for
> the
> > same enumeration. So xFilter thinks this is the constant value used in
> the
> > query and jumps to the first correct row row never planning to jump back.
> > But this is actually a misleading action since in real world sqlite calls
> > random on every row and the rows bypassed are actually important and can
> be
> > evaluated to true. I mentioned random(), but there may be other cases,
> for
> > example when other fields are part of expressions.
>
> SQLite assumes that the result of each expression in the WHERE
> clause depends only on its inputs. If the input arguments are
> the same, the output should be do. Since random() has no inputs,
> SQLite figures that it must always return the same value.
>
> You can see a similar effect with:
>
>   CREATE TABLE t1(a PRIMARY KEY, b);
>   SELECT * FROM t1 WHERE a > random();  -- random() evaluated once.
>   SELECT * FROM t1 WHERE +a > random(); -- random() evaluated many times
>


Dan, thanks, I double-checked your information and (ironically) I see that
the problem is with "the double check" :)

As I see now, sqlite does a great job that probably won't require any
additional steps for the problem I posted. If the expression is not
"simple" in the terms I used, sqlite just won't supply any constraint to
xBestIndex, automatically forcing a full scan. But when the double-check
is on, sqlite seems to actually evaluate random() for every result row,
and this can give an incorrect result.
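
For context, the conservative shape of xBestIndex I ended up thinking
about looks roughly like this (my own sketch, not code from any existing
module): claim only a plain equality constraint and leave omit at 0, so
sqlite still verifies every row itself.

#include <sqlite3.h>

/* Sketch only: accept at most one usable equality constraint on column 0
   and let sqlite double-check everything else (omit stays 0). */
static int vtestBestIndex(sqlite3_vtab *pVTab, sqlite3_index_info *pInfo)
{
  int i;
  (void)pVTab;
  pInfo->idxNum = 0;
  pInfo->estimatedCost = 1000000.0;             /* full scan by default */
  for(i=0; i<pInfo->nConstraint; i++){
    const struct sqlite3_index_constraint *p = &pInfo->aConstraint[i];
    if( p->usable && p->iColumn==0 && p->op==SQLITE_INDEX_CONSTRAINT_EQ ){
      pInfo->aConstraintUsage[i].argvIndex = 1; /* pass the value to xFilter */
      pInfo->aConstraintUsage[i].omit = 0;      /* sqlite still re-checks it */
      pInfo->idxNum = 1;
      pInfo->estimatedCost = 10.0;              /* pretend it's a fast lookup */
      break;
    }
  }
  return SQLITE_OK;
}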

Although I cannot confirm the assumption with numbers, I also checked
this hypothesis with another "dynamic" expression using milliseconds

SELECT * FROM vtest WHERE (id = cast(strftime('%f','now')*1000 as integer))

and for a comparatively large dataset the value passed to xFilter is
always different from the one returned if I just use a full scan with
double-checking (for example 15719 vs 18984).

So it seems the virtual table double-checker always evaluates the
expression for every row. One can live with that by disabling
double-checking or by not using such dynamic expressions at all. I'm not
sure whether such a minor thing should be fixed in the core.

Thanks

Max Vlasov.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Extremely large values error

2011-03-29 Thread Max Vlasov
Hi,

I sometimes use repeated digits as test data, for example 123456789
repeated multiple times, and recently some of my complex queries where I
occasionally chose the wrong field gave 'unknown error' in the middle of
the process. Tracking it down led to the sqlite3AtoF function, which
appears to raise a floating-point overflow exception (at least on
Windows) for some extremely large numbers.
The test showing the effect (tested with v3.7.5)

CREATE TABLE [Temp] ([Value] TEXT)
INSERT INTO Temp (Value) VALUES ('1234567890...') //Totally 310 digits
SELECT Cast(Value As Float) FROM Temp

the result of this query in my case is 'unknown error' and no data is shown.

Can it be considered a bug? The worst part of this error is that one
needs a very deep investigation, especially for long queries, to
understand what is going on.

Thanks,

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Extremely large values error

2011-03-29 Thread Max Vlasov
On Tue, Mar 29, 2011 at 3:56 PM, Richard Hipp <d...@sqlite.org> wrote:

> On Tue, Mar 29, 2011 at 4:48 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>
> > Hi,
> >
> > I sometimes use repeated digits as test data, for example 123456789
> > repeated
> > multiply times and recently some of my complex queries where I
> occasionally
> > wrongly chose a field gave 'unknown error' in the middle of the process.
> > Tracking it led to sqlite3AtoF function that appears to be raising
> Floating
> > overflow exception (at least on windows) for some extremely large numbers
> >
> > The test showing the effect (tested with v3.7.5)
> >
> > CREATE TABLE [Temp] ([Value] TEXT)
> > INSERT INTO Temp (Value) VALUES ('1234567890...') //Totally 310 digits
> > SELECT Cast(Value As Float) FROM Temp
> >
> > the result of this query in my case is 'unknown error' and no data shown.
> >
>
> I cannot reproduce this problem.  My script is this:
>
> CREATE TABLE t1(v TEXT);
> INSERT INTO t1
>
> VALUES('1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890');
> SELECT length(v) FROM t1;
> SELECT CAST(v AS float) FROM t1;
>
> I ran this on Linux, Mac, and Windows and in all cases I got as output:
>
>
> 310
> Inf
>
> Which is pretty much what you would expect.  No floating-point exceptions
> or
> other anomalies.
>
>
Thanks, this forced me to search more on the topic. I use Delphi and it
appears that all Borland compilers have their own floating-point
exception handling chain, and it is on by default, so if any division by
zero or overflow happens inside sqlite3_step, Delphi will handle it with
its own exception handling. More on this is here:
http://www.eztwain.com/borland-issue.htm

Probably I will need to suppress this chain manually or get ready to
catch the exception myself. But I suppose sqlite might not be ready for
the latter and probably won't free some resources. Hmm, or even worse,
it might bypass some important calls for query finalization. I should
think about what to do with this.


Thanks,

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Extremely large values error

2011-03-30 Thread Max Vlasov
On Tue, Mar 29, 2011 at 8:17 PM, Ralf Junker <ralfjun...@gmx.de> wrote:

> On 29.03.2011 15:17, Max Vlasov wrote:
>
> > Thanks, this forced me to search more on the topic. I use Delphi and it
> > appears that all Borland compilers has their own floating-point exception
> > handling chain and it is on by default so if any division by zero or
> > overflow appears inside sqlite3_step Delphi will handle this with its own
> > exception handling.
>
> When running SQLite from Delphi, you must disable FPU exceptions. Below
> is a simple console project which uses the DISQLite3 Delphi port of
> SQLite to demonstrate how this works.
>
> Ralf
>
> ---
>
>   Set8087CW($133F);
>
>

Ralf, thanks, it worked. For now I wrapped my calls to sqlite3_step in
change-and-restore logic for the control word, and I now see Inf in my
results.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Dynamically loaded sqlite (linux)

2011-06-07 Thread Max Vlasov
Hi,

I'm trying to use sqlite with linux (Ubuntu, Pascal, Lazarus). I'm still not
very familiar with linux development so I might miss something essential.

Two scenarios work ok
- statically linked latest version compiled (3.7.6.3), no options or defines
changed
- Dynamically loaded (dlopen) sqlite used from the installed package
libsqlite3 (libsqlite3.so)


But when I try to use a shared library compiled from the same sources
and load it with dlopen, some errors appear.
- If I just compile without mapping sqlite3_initialize to the init proc,
the first call to sqlite3GlobalConfig.m.xRoundup(n) in mallocWithAlarm
gives a SIGSEGV error.
- If I map sqlite3_initialize to init, the same happens with the first
call to sqlite3_mutex_enter.

The library is build with

gcc -g -c -fPIC sqlite3.c
gcc -shared -Wl,-init=sqlite3_initialize -o customsqlite.so sqlite3.o

What am I missing here?

Thanks,

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Dynamically loaded sqlite (linux)

2011-06-09 Thread Max Vlasov
On Tue, Jun 7, 2011 at 9:22 PM, Martin Gadbois <mgadb...@gmail.com> wrote:

> On Tue, Jun 7, 2011 at 12:52 PM, Jay A. Kreibich <j...@kreibi.ch> wrote:
>
> > On Tue, Jun 07, 2011 at 07:47:25PM +0400, Max Vlasov scratched on the
> wall:
> > > Hi,
> > >
> > > I'm trying to use sqlite with linux (Ubuntu, Pascal, Lazarus). I'm
> still
> > not
> > > very familiar with linux development so I might miss something
> essential.
> > >
> > > Two scenarios work ok
> > > - statically linked latest version compiled (3.7.6.3), no options or
> > defines
> > > changed
> > > - Dynamically loaded (dlopen) sqlite used from the installed package
> > > libsqlite3 (libsqlite3.so)
> >
> >   That's not how dynamic libraries work (not normally, anyways).
> >  Generally you simply tell the compiler/linker to link in the library
> >  at build time, and allow the usage of dynamic libs.  The dynamic
> >  link is then done on application start-up by the OS.  In Windows
> >  terms, it is like using an .DLL by linking in the associated .lib
> >  file.  Moving from a static library to a dynamic library requires no
> >  code changes.
> >
>
>
> There is a way to do a _good_ shared library. I suggest reading the
> excellent paper: http://www.akkadia.org/drepper/dsohowto.pdf
>
> As for the OP question, do
> gcc -shared -Wl,-init=sqlite3_initialize -o libsqlite.so sqlite3.o
>
> and then link your application with
> gcc -L. -lsqlite -o test test.c
>
> This assumes that libsqlite.so is in your current path: . (thus the -L.)
>
>

Martin and Jay, thanks for the info.

The hint helped me to track down the problem.

It appears that the problem was that I forgot to remove the dynamic
linking of the libsqlite3.so library while having my own dynamic loading
of another sqlite binary. Since the exceptions happened when sqlite
accessed its global config (sqlite3GlobalConfig), I suppose the two were
magically sharing the global data or something like that.

I know that I should avoid such things, but just curious: is it something
inside sqlite that makes one dynamically linked and another dynamically
loaded library share global data, and can this be avoided? I thought that
two libraries having different filenames and sonames were virtually
different for the system, but it looks like they're not.

Thanks

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Dynamically loaded sqlite (linux)

2011-06-09 Thread Max Vlasov
On Thu, Jun 9, 2011 at 6:21 PM, Pavel Ivanov  wrote:

>
>
> So if you ever want to use dlopen() you should be really really
> careful to avoid loading the same library several times (even if the
> same library have different file names).
>
>
Pavel, thanks for the hint and the information,

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Dynamically loaded sqlite (linux)

2011-06-10 Thread Max Vlasov
On Thu, Jun 9, 2011 at 6:21 PM, Pavel Ivanov  wrote:

> > I know that I should avoid such things, but just curious, is it something
> > inside sqlite that probably makes one dynanmically linked and other
> > dynamically loaded library share global data and can this be avoidable?
>
> No, it's Linux linker who works like that. If you have several
> libraries loaded into the process with the same exported symbols then
> everybody using those symbols will be linked against ones located in
> the library loaded first. Any symbols located in libraries loaded
> after that won't be used. This is the default behavior of Linux linker
> (it can be changed but AFAIK it's changed very rarely) and this
> feature can produce sometimes very nasty results. E.g. if you try to
> load the same library located in different directories then
> initialization code will be executed in both of them, but they both
> will be executed against the same global/static variables. And it
> could lead to problems during initialization and definitely will lead
> to problems during finalization (like double frees, segmentation
> faults etc.).
>
>

I happened to find that the RTLD_DEEPBIND flag for dlopen helps. I
suppose it is a relatively recent answer to the problems Pavel described.

A quote from http://linux.die.net/man/3/dlopen

> Place the lookup scope of the symbols in this library ahead of the global
> scope. This means that a self-contained library will use its own symbols in
> preference to global symbols with the same name contained in libraries that
> have already been loaded. This flag is not specified in POSIX.1-2001.
>

In my case it helped: two instances (one dynamically linked, the other
dynamically loaded) started to act as independent entities without
faults. But what about the last comment (about POSIX.1-2001)? Does it
mean some distros might not support it?
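
Just to show what I mean (the library path here is made up), the loading
side in my test looks roughly like this:

#define _GNU_SOURCE            /* RTLD_DEEPBIND is a GNU extension */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
  /* hypothetical path to the privately built copy of sqlite */
  void *h = dlopen("./customsqlite.so", RTLD_NOW | RTLD_DEEPBIND);
  if( h==0 ){
    fprintf(stderr, "dlopen failed: %s\n", dlerror());
    return 1;
  }
  const char *(*version)(void) =
      (const char *(*)(void))dlsym(h, "sqlite3_libversion");
  if( version ) printf("loaded sqlite %s\n", version());
  dlclose(h);
  return 0;                    /* build with: gcc test.c -ldl */
}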

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Query with UNION on large table

2011-06-23 Thread Max Vlasov
On Wed, Jun 22, 2011 at 9:25 PM, Rense Corten <rcor...@gmail.com> wrote:

>
> CREATE TABLE table2 AS SELECT n1,n2 FROM (SELECT n1, n2 FROM table1
> UNION SELECT n2 AS n1, n1 AS n2 FROM table1) WHERE(n1<n2) GROUP BY
> n1,n2;
>
> This has the desired result on a small example, but when I try this on
> my actual table which has about 800 million rows, the query never
> seems to complete. It has been running for a couple of days now, and
> it doesn't seem sqlite is still doing anything (cpu usage dropped to
> almost zero)...
>

Rense, looks like an interesting task.

What are the ranges of n1 and n2? We can take sqlite's logic into
account, but regardless of it, if the range is small compared to the row
count, the result set will be small, so it's always possible to do a full
scan while maintaining the result as a small table (or memory array). I
suppose sqlite sometimes goes this way after some guessing. But if the
range is large (so a large result set is also expected), there's no way
other than prior sorting of both sub-tables (or probably creating two
indexes). Looking at how your query executed against my test data and at
the I/O read and write statistics, sqlite probably does create these
temporary tables.

I noticed that fewer reads/writes occur with the following variant

SELECT * FROM (SELECT n1, n2 FROM table1 Where n1 < n2)
UNION
SELECT * FROM (SELECT n2, n1 FROM table1 Where n2 < n1)

since this one decreases the sizes of the tables that should be ordered.

Max Vlasov
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Query with UNION on large table

2011-06-23 Thread Max Vlasov
On Thu, Jun 23, 2011 at 10:20 PM, Rense Corten  wrote:

> Wow, I'm almost embarrassed by how helpful you are, gentlemen. Thanks a
> lot.
>
> As to RAM: I'm trying this on two different machines, one with 12 Gb
> and one with 32 Gb RAM. I won't be able to get more in the near
> future. Something that might be relevant is that the case of the 32Gb
> machine, the database is on an AFS.
>
>
Hmm, Jan suggested dedicating the memory to the cache, but if you can use
a 64-bit sqlite shell, and given that you have 800 million (or fewer)
rows, maybe it's reasonable to switch completely to in-memory temporary
storage (PRAGMA temp_store=2;)? In the best scenario no disk will be
involved until sqlite merges the two b-trees for your "create table as".

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] The last records of a query

2011-06-25 Thread Max Vlasov
On Sat, Jun 25, 2011 at 11:26 AM, Cecil Westerhof wrote:

> With LIMIT you can get the first N records of a SELECT. Is it also possible
> to get the last N records?
>
>
Use ORDER BY ... DESC. Sure, if you have a large dataset, it'd better be
indexed.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Substring question

2011-06-25 Thread Max Vlasov
On Sat, Jun 25, 2011 at 9:53 AM, Pete  wrote:

> I need to select a substring of a column that starts 1 character after a
> colon in the column and continues to the end of the column.  For example,
> if
> the column contained "abc:xyz" I want the select statement to return only
> "xyz" (the characters after the colon).  The substr function requires
> specific character positions - is there a way to do this?
>
>
There's no such function in sqlite, so you currently have two options:

- if your strings have a limited length you can create a statement with a
case expression
case when substr(..., 2, 1)=':' then ... when substr(..., 3, 1)=':' ... end
you can even write a supplemental program where you enter the range for
the length and the position of the colon and it gives you the case
statement;

- write your own (user) function. If you haven't written one before,
believe me, it's very easy. If you go this way I recommend keeping
compatibility at least with MySQL in this case (so either "position" or
"locate" with the corresponding parameters). This will give you a
theoretical possibility to export your statements in the future without a
change. I did this, for example, when I implemented find_in_set.
Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] randomness issues on windows

2011-06-27 Thread Max Vlasov
On Sat, Jun 25, 2011 at 11:20 PM, bob  wrote:
> i'm getting an issue relating to the use of the random() function in a
> SQL query.
>
> i run 2 PHP scripts very quickly (less than 1 second apart).
>
> they use an SQL query like    SELECT * FROM table ORDER BY random() LIMIT 5;
>
> because the 2 scripts run so close together (in time)  i get the same
> values as a result.


Since sqlite initializes the seed only once, you probably mean the case
when your two queries are run on separately opened and closed
connections, right?

I tried to simulate this with a test table containing numbers from 1
to 5 and the query "Select value from testtable order by random()".
Sqlite gave different results most of the time. This was on a windows
machine where the time between connections was more than 1 ms and also
on a machine where the time was < 1 ms. My tests could be wrong if
sqlite initializes the seed once per library instance, not per
connection. I tried to check this by loading the sqlite dll for every
connection, but unfortunately I could not find a windows machine that
can do all these tasks in less than 1 ms. But for tests > 1 ms the
numbers were still different.

So you probably have to look in a different layer.
Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT query first run is VERY slow

2011-07-22 Thread Max Vlasov
On Fri, Jul 22, 2011 at 2:38 PM, Григорий Григоренко  wrote:
> I have a log's database. Schema :
>
> Query:
>
> SELECT 1 as today, id as rowid, kind,who,msg,computer,process,created_at,id 
> FROM log WHERE id > 4070636 AND id <= 9223372036854775807 AND kind = 'info' 
> AND computer='KRAFTWAY';
>...
>
>
> Reboot system. Run query. Elapsed: ~100 SECONDS. Re-run query. Now it 
> executes in less than a second. Re-run query. The same: less than a second. 
> Tried it several times.
>
>
> RUNNING THIS QUERY JUST AFTER REBOOT TAKES UNPREDICTABLE LARGE TIME, FROM 1 
> TO 2 MINUTES.
>
>

Do you have any antivirus software installed? It may scan the file on
first use.

To be sure sqlite doesn't read too much, I suggest opening Task Manager,
choosing the I/O Read Bytes column to be shown, and running the query.
This can also give a hint about the software that is possibly reading
the file all this time.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT query first run is VERY slow

2011-07-22 Thread Max Vlasov
On Fri, Jul 22, 2011 at 3:28 PM, Simon Slavin  wrote:
>
> Does Windows XP have some sort of task display where you can see what task is 
> hogging most of the CPU or disk access ?
>

The mentioned Task Manager (Ctrl-Alt-Del -> Task Manager) reports both
CPU and I/O Read Bytes (if one selects the column for the latter) for
every process so if Grigory shares this info with us, this will be
very useful.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT query first run is VERY slow

2011-07-22 Thread Max Vlasov
On Fri, Jul 22, 2011 at 3:36 PM, Григорий Григоренко  wrote:
> Database extension is ".dblite"
>
> I'm using Process Explorer 
> (http://technet.microsoft.com/en-us/sysinternals/bb896653) to monitor 
> processes cpu and i/o usage.
>
> During these long running queries I am not using any other program. I've 
> terminated any unused service  (MS SQL, for example). System idle CPU is at 
> 98-99% during all the time.
>
> I've just tried another system (of my colleague) that has Windows 7. Same 
> result: first run took 40 seconds, second: 1 second.
>


What about Read Bytes? How much did sqlite.exe actually read during the run?

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT query first run is VERY slow

2011-07-22 Thread Max Vlasov
On Fri, Jul 22, 2011 at 3:50 PM, Григорий Григоренко  wrote:
> Okay, I rebooted and tested again.
>
> First run took 76 seconds. Read bytes: ~ 307 Mb, Write bytes: ~ 66 Kb.
>
> Re-running: 1 second, Read bytes: ~ 307 Mb, Write bytes ~ 66 Kb.
>
>

Grigory, you posted to me directly (without the list e-mail), so I'm
reposting this reply to the list.

Ok, I downloaded the db and the query actually takes a long time to
execute. But this can be explained. If you change your first "id >" to
zero, the query starts to return actual results, so there are many
records with kind='info' and computer='KRAFTWAY' in your database. So
sqlite really does use the index for locating your records. But the
problem is also that your db is very fragmented, so when reading the
index sqlite actually reads different parts of the file, and that makes
the windows cache system mad. I think the vacuum command should help.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT query first run is VERY slow

2011-07-22 Thread Max Vlasov
On Fri, Jul 22, 2011 at 5:25 PM, Григорий Григоренко  wrote:
> Database is "insert-only". There wasn't any deletes or updates, will VACUUM 
> actually help in this case? I though it was about unused space?
>

There's also internal fragmentation coming from the nature of indexes.
If your inserted data has to be placed in the middle of the index and
this allocates a new page at the end of the file, then the logical
order of the pages used for this index will be non-sequential in terms
of hard disk access.

>
> And after finding this position inside index SQLITE clearly sees that 
> previous and next record does not match query, so query should return nothing.
>
> This cannot take long. It's like scanning B-TREE to a predefined position and 
> then reading just 2 records near.
>
> And that's all. Why spending 2 minutes?


Your query contained an id range that doesn't exist in the db, so maybe
there is room for an optimization improvement here. That's probably why
Richard asked you to send the analyzed data. But I suppose you're not
going to supply non-existing id ranges :) and if we change the id range
to a valid one, the data will contain many records and sqlite has to
read an index that is internally fragmented for the reasons explained
above.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT query first run is VERY slow

2011-07-25 Thread Max Vlasov
On Mon, Jul 25, 2011 at 5:45 PM, Григорий Григоренко  wrote:
>
> 1) SQLITE has to read about _half of index_ before it can use it (and 
> understand there are no records matching query).
>
> If cache is enough to hold 50 Mb then on subsuquent queries sqlite process is 
> not reading at all.
>

Please, post your query. To understand whether sqlite reads too much
or not it's better to know what exactly you want to select.


> 2) SQLITE is reading abnormally slowly during this first-time running query  
> (waiting for something a lot?).
>

Is this with the recreated index or still the one that was created
during the lifetime of your program?

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT query first run is VERY slow

2011-07-26 Thread Max Vlasov
On Tue, Jul 26, 2011 at 2:18 PM, Simon Slavin  wrote:

>
> On 26 Jul 2011, at 11:05am, Григорий Григоренко wrote:
>
> > This leads us to conclusion: index in SQLITE database if scattered and
> cannot be "jumped directly" to N-th element. SQLITE has to read it somehow
> consecutively.
> >
> > And so SQLITE has to read half of index (!) to find matching index
> record.
>
> I don't think it's SQLite itself that's reading half the index.  I think
> it's some part of your operating system that's trying to cache all of your
> database file as SQLite reads lots of different parts of it spread about
> randomly.  Unfortunately I don't see how an OS can reasonably do that since
> it will produce the slow speeds you're complaining about.
>
>
Simon,

actually I narrowed down the problem (cmiiw).

The simple table

CREATE TABLE [abctable] ([id] INTEGER PRIMARY KEY AUTOINCREMENT, a, b, c)

populate it with many records (100,000) like
INSERT INTO abctable (a, b, c) VALUES (10, 20, 30)

Good variant:
CREATE INDEX idxabc ON abctable (a, b, c)
SELECT * FROM abctable WHERE a=10 and b=20 and c > 100
Sqlite reads little data (3 KB actually for 100,000 records) to show the
empty result.

Another variant:
CREATE INDEX idxabid ON abctable (a, b, id)
SELECT * FROM abctable WHERE a=10 and b=20 and id > 100

Sqlite reads a lot (1.7 MB).

Checked with 3.7.6.2.
I suppose the latter is due to some special meaning of id/rowid, but I
think the second case should also work with fewer reads.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT query first run is VERY slow

2011-07-26 Thread Max Vlasov
2011/7/26 Black, Michael (IS) 

>
> Turns out the if you include the primary key in an index it doesn't use the
> triple index but uses the primary key instead.  And analyze doesn't change
> it.
>
> This is with version 3.7.5
>
>
Not sure about the primary index, because with my data and 3.7.6.2 it
still uses the index, but visits every entry.

the fast one:
WHERE a=10 and b=20 and c> 100
explain query plan > SEARCH TABLE abctable USING COVERING INDEX idxabc (a=?
AND b=? AND c>?) (~2 rows)

the "slow" one
WHERE a=10 and b=20 and id>100
explain query plan > SEARCH TABLE abctable USING COVERING INDEX idxabid (a=?
AND b=?) (~3 rows)

I assume the optimizer could use the following "plan"
... COVERING INDEX idxabid (a=? AND b=? AND rowid>?)
but it didn't.
Maybe for a reason...

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT query first run is VERY slow

2011-07-26 Thread Max Vlasov
On Tue, Jul 26, 2011 at 8:50 PM, Григорий Григоренко wrote:

>
> EXPLAIN QUERY PLAN SELECT * FROM foo WHERE bar = 'three' and id > 290
> LIMIT 10;
>
> SELECT * FROM foo WHERE bar = 'three' and id > 290 LIMIT 10;
>
>
> Run query. Now using idx2 index SQLITE reads only about 20 Kbytes!
>
>
>
Grigory, it seems you just added a field and copied the rowid column into
it, so your fast query is equivalent to queries on general fields, which
work as expected (see my post about the a,b,c fields). An id that is
mapped to rowid still gets special treatment and is not used effectively;
this still has to be explained. But if your requirements allow you to add
an extra field, I think you can use it, and just support the
autoincrement feature some other way.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT query first run is VERY slow

2011-07-28 Thread Max Vlasov
On Thu, Jul 28, 2011 at 11:41 AM, Григорий Григоренко wrote:

> So, should we file this as a defect or bug somehow?
>
>
As I understand it, currently issues are acknowledged and fixed during
the e-mail conversation on this list (read
http://www.sqlite.org/src/wiki?name=Bug+Reports ). So I'd rather ask
Richard or Dan to write their resolution here: either the behavior is by
design, or the issue is acknowledged and (maybe) will be fixed.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Exclusive locking mode not working as expected?

2011-07-29 Thread Max Vlasov
On Fri, Jul 29, 2011 at 10:49 AM, Jonathan Little wrote:

>
> This behavior seems undesirable to me -- we've got users of our application
> copying the database file using Windows Explorer while it's being written
> to, and ending up with inconsistent/corrupt databases. Am I missing
> something here or if I want to prevent this, is my only option to modify our
> build of SQLite to open the file for exclusive access?
>
>
When you say "inconsistent/corrupt databases" are you talking about the same
users trying to use this copied bases later or about the base uses while
this copy operation was in process and being corrupt after this?

If the former, as I see from winOpen (windows case), the sharing set as
FILE_SHARE_READ | FILE_SHARE_WRITE so there's no denying on the os level
during access to this file and the only option here is to implement your own
virtual file system for your files. But just curious, do you really want to
give your users an "official" way to manipulate sqlite files manually? I
know that many  advanced users know what sqlite really is and try to do some
back-up by hands, but they also should know that such knowledge voids
"warranty" in a sense

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Will SQLite supports UnQL?

2011-07-30 Thread Max Vlasov
On Sat, Jul 30, 2011 at 9:45 PM, Alexey Pechnikov wrote:

> Very interesting annonce:
> http://www.couchbase.com/press-releases/unql-query-language
>
>
Thanks for the info, very interesting indeed.

After some reading at http://www.unqlspec.org I'd rather think Richard
will introduce something new (UnQLite?), so CouchDB will be the server
solution and this one the local one. This is because all the things
native to the relational database world make no sense here. For example,
where you would create a separate table in sqlite to attach multiple
properties to a row, here these properties will be part of your document
in JSON format, and I suppose from the syntax that queries would look
something like this:

SELECT address.street from clientscollection where person.name='smith'

But these are my speculations. Probably Richard will share some thoughts.
But honestly, I'm already very impressed :)

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Feature request: extend the IN operator

2011-08-07 Thread Max Vlasov
On Sun, Aug 7, 2011 at 3:17 PM, Alexey Pechnikov wrote:

> Is it possible to support construction like to
>
> where x IN "1 2 3"
> ...
>
> Does somebody interesting in this functionality too?
>
>

I needed one once; I just implemented MySQL's find_in_set as a user
function. There it is comma-delimited, but that's not so important; I'm
sure you can change it to any other symbol in your data.
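
In case it helps, here is a rough sketch of that function in C (my own
simplified take, with minimal parameter checking), registered through
sqlite3_create_function:

#include <sqlite3.h>
#include <string.h>

/* find_in_set(needle, 'a,b,c') -> 1-based position of needle, or 0. */
static void find_in_set_func(sqlite3_context *ctx, int argc, sqlite3_value **argv)
{
  const char *needle = (const char*)sqlite3_value_text(argv[0]);
  const char *set    = (const char*)sqlite3_value_text(argv[1]);
  int pos = 1;
  (void)argc;
  if( needle==0 || set==0 ){ sqlite3_result_null(ctx); return; }
  while( *set ){
    const char *comma = strchr(set, ',');
    size_t len = comma ? (size_t)(comma - set) : strlen(set);
    if( len==strlen(needle) && strncmp(set, needle, len)==0 ){
      sqlite3_result_int(ctx, pos);
      return;
    }
    if( comma==0 ) break;
    set = comma + 1;
    pos++;
  }
  sqlite3_result_int(ctx, 0);
}

/* registration, once per connection:
   sqlite3_create_function(db, "find_in_set", 2, SQLITE_UTF8, 0,
                           find_in_set_func, 0, 0);                  */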

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Split Function for SQLite?

2011-08-24 Thread Max Vlasov
On Tue, Aug 23, 2011 at 7:10 AM, Gregory Moore  wrote:
> I need to split up a list of items in a single row so they each have
> their own row.
>

You can read about my query partly solving this task with a trick here:
http://www.mail-archive.com/sqlite-users@sqlite.org/msg55935.html

It requires a user function (GetItemFromSet) and the results are limited
in a sense. You will get the idea from the following query (the version
modified by Jim Morris)

SELECT Trim(GetItemFromSet(Value, '23, 14, 1, 7, 9')) Item FROM (
SELECT B1.B + B2.B + B3.B + B4.B AS Value FROM
(SELECT 0 AS B UNION SELECT 1 AS B) AS B1,
(SELECT 0 AS B UNION SELECT 2 AS B) AS B2,
(SELECT 0 AS B UNION SELECT 4 AS B) AS B3,
(SELECT 0 AS B UNION SELECT 8 AS B) AS B4
) WHERE NOT (Item Is Null)

Also, theoretically it is possible to use virtual tables for this: your
virtual table implementation accepts the list in some way ('23, 14, 1,
7, 9') and returns the items as a table when queried.
Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Split Function for SQLite?

2011-08-25 Thread Max Vlasov
On Thu, Aug 25, 2011 at 9:34 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>
> Also theoretically it is possible to use virtual tables for this. So
> when your virtual query implementation accepts list in some way ('23,
> 14, 1, 7, 9') and returns the table when querying
>

I tried to implement something like this and it seems to work (it's very
easy for anyone familiar with virtual tables). The table itself doesn't
need any init data (and doesn't even contain any); it absorbs the data
from the query itself.

So the db/program that wants to use it creates it with something like
  CREATE VIRTUAL TABLE cmlist Using vtcommalist
The table format is
  CREATE TABLE [xxx] ([CommaList] TEXT, [Value] TEXT)
and an example query is
  SELECT value FROM cmlist WHERE commalist='45,56,78,125'

So we finally get
"45"
"56"
"78"
"125"

Internally my xFilter just stores the commalist from the query and
uses it for First/Next
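
To give an idea of the cursor side only (a sketch of my implementation
with the xCreate/xConnect/xBestIndex/xOpen boilerplate omitted; it
assumes xBestIndex passes the commalist equality value to xFilter):

#include <sqlite3.h>
#include <string.h>

typedef struct cmlist_cursor {
  sqlite3_vtab_cursor base;   /* required base class */
  char *zList;                /* copy of the commalist from the query */
  const char *zCur;           /* start of the current item, 0 at EOF */
  int nCur;                   /* length of the current item */
  sqlite3_int64 iRow;         /* for xRowid (omitted here) */
} cmlist_cursor;

static int cmlistFilter(sqlite3_vtab_cursor *cur, int idxNum, const char *idxStr,
                        int argc, sqlite3_value **argv)
{
  cmlist_cursor *c = (cmlist_cursor*)cur;
  (void)idxNum; (void)idxStr;
  sqlite3_free(c->zList);
  c->zList = argc>0 ? sqlite3_mprintf("%s", sqlite3_value_text(argv[0])) : 0;
  c->zCur = c->zList;
  c->iRow = 1;
  if( c->zCur ){
    const char *p = strchr(c->zCur, ',');
    c->nCur = p ? (int)(p - c->zCur) : (int)strlen(c->zCur);
  }
  return SQLITE_OK;
}

static int cmlistNext(sqlite3_vtab_cursor *cur)
{
  cmlist_cursor *c = (cmlist_cursor*)cur;
  c->zCur += c->nCur;
  if( *c->zCur==',' ) c->zCur++; else c->zCur = 0;  /* end of the list */
  if( c->zCur ){
    const char *p = strchr(c->zCur, ',');
    c->nCur = p ? (int)(p - c->zCur) : (int)strlen(c->zCur);
    c->iRow++;
  }
  return SQLITE_OK;
}

static int cmlistEof(sqlite3_vtab_cursor *cur)
{
  return ((cmlist_cursor*)cur)->zCur==0;
}

static int cmlistColumn(sqlite3_vtab_cursor *cur, sqlite3_context *ctx, int i)
{
  cmlist_cursor *c = (cmlist_cursor*)cur;
  if( i==0 ) sqlite3_result_text(ctx, c->zList, -1, SQLITE_TRANSIENT);     /* CommaList */
  else       sqlite3_result_text(ctx, c->zCur, c->nCur, SQLITE_TRANSIENT); /* Value */
  return SQLITE_OK;
}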

The only problem with this approach is that rowid makes sense only for
queries like the ones mentioned, so
  SELECT rowid FROM cmlist
can't return valid results.

I wonder whether that can be a problem for complex queries when sqlite
decides itself what to query and maybe relies on the correctness of
rowid.

Max Vlasov
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] splitting a line to produce several lines?

2011-09-12 Thread Max Vlasov
On Mon, Sep 12, 2011 at 12:28 PM, Jean-Denis MUYS  wrote:
>
> My question is: is there a pure SQL way to split the instances string, and 
> generate the instance lines from the instance list string? contrived example:
>

This kind of question appears from time to time (including in my brain :)
and there are tricky solutions, for example using specially implemented
virtual tables (read my recent post here:
http://www.mail-archive.com/sqlite-users@sqlite.org/msg63453.html)

Thanks to your example I added delimiter support to the virtual table
initialization and tested the concept on real data; it seems to work.
The query in this case looks like this

SELECT TestTable.Id, TestTable.Name, cmlist.value FROM TestTable LEFT
JOIN cmlist On Instances=commalist

For those who are familiar with sqlite virtual table implementation this
concept takes probably an hour to implement. If you're on windows and
ready to test your conversion with my tool, consider writing to me
directly.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Unexplained minor regression (bug) 3.7.8 up

2011-12-06 Thread Max Vlasov
Hi,
Noticed a strange regression after 3.7.7 (in 3.7.8 and inherited in 3.7.9)

I have a Russian morphology database and different queries working with
it. I narrowed the problem down to the following case and populated it
with a couple of English words (just so it makes sense).

The following database

CREATE TABLE [Beginnings] ([Id] INTEGER PRIMARY KEY AUTOINCREMENT,[Title]
TEXT, [EndingId] INTEGER);
CREATE TABLE [Endings] (Id INT,Title TEXT,EndingId INT);
INSERT INTO Beginnings (Id, Title, EndingId) VALUES (1, 'FACTOR', 18);
INSERT INTO Beginnings (Id, Title, EndingId) VALUES (2, 'SWIMM', 18);
INSERT INTO Endings (Id, Title, EndingId) VALUES (1, 'ING', 18);

There's a query that searches for the primary form of a combined list of
words (here FACTORING and SWIMMING):

SELECT
  SrcWord, Beginnings.Title
FROM
  (SELECT 'FACTORING' AS SrcWord UNION
   SELECT 'SWIMMING' AS SrcWord )
LEFT JOIN
  Beginnings
WHERE
  Beginnings.Id=
   (SELECT BeginningId FROM
   (SELECT
 SrcWord, B.Id as BeginningId, B.Title || E.Title As Connected
   FROM
 Beginnings B
   LEFT JOIN
 Endings E
   ON
 B.EndingId=E.EndingId
   WHERE
 Connected=SrcWord
   LIMIT
 1))

Sqlite versions up to 3.7.7 (inclusive) return results (2 rows); 3.7.8
and above show an empty result set.

Thanks,

Max Vlasov
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Unexplained minor regression (bug) 3.7.8 up

2011-12-06 Thread Max Vlasov
On Tue, Dec 6, 2011 at 9:49 PM, Dan Kennedy <danielk1...@gmail.com> wrote:

> On 12/06/2011 03:28 PM, Max Vlasov wrote:
>
>> Hi,
>> Noticed a strange regression after 3.7.7 (in 3.7.8 and inherited in 3.7.9)
>>
>>
> There is a candidate fix for this in fossil now.
>
>
Dan, thanks.
I checked the latest trunk against my original database and query;
everything is fine.

By the way, I could not find the exact steps for getting a trunk
amalgamation. Is it:
- get
http://www.sqlite.org/src/tarball/sqlite-latest-trunk.tar.gz?uuid=trunk
- extract on a unix-compatible machine
- ./configure
- make sqlite3.c

For me it worked, but maybe there are other things to mention.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Changes to VFS starting 3.8.3

2016-03-10 Thread Max Vlasov
Hi,

I have a compatibility problem with my vfs implementation of memory
databases. I once implemented it successfully, probably with a 3.6.x
version. Today I tried to create a new database using the same code with
the latest (3.11.1) version (the procedure is used when no prior db data
exists, i.e. an absolutely new file is being worked with) and it failed
on the first write-related query, something like "create table if not
exists". After that I decided to try different binary versions of sqlite
and narrowed it down to the difference between 3.8.2 and 3.8.3. So, with
3.8.2 everything is ok: the db created has a correct structure (checked
on the memory block saved as a file) and no error is produced during
creation. With 3.8.3 the first attempt to perform the same query on a
new file produces "Sql logic error or missing database".

As I see from the version history page, no specific vfs-related changes
were reported for 3.8.3; the only major change that could affect
structure and vfs is the initial common table expressions
implementation.

What changes might have triggered this?


Thanks,

Max


[sqlite] Changes to VFS starting 3.8.3

2016-03-10 Thread Max Vlasov
Thanks,

I suspect there's indeed some special behavior that isn't obvious at the
moment. I'll try to gather some additional information if possible, or
pin down this specific behavior.





On Thu, Mar 10, 2016 at 5:26 PM, Richard Hipp  wrote:

> On 3/10/16, Max Vlasov  wrote:
> > I have a compatibility problem with my vfs implementation of memory
> > databases.
> >
> > As I see from the version history page that no specific vfs-related
> changes
> > for 3.8.3 were reported, only a major change that can affect structure
> and
> > vfs is initial common table expressions implementation.
> >
> > What are the changes that might trigger the change?
>
> Dunno.  The changes to the unix VFS were minimal
> (
> https://www.sqlite.org/src/fdiff?v1=f076587029285554=f3ed0e406cbf9c82=1
> )
> and likewise the windows VFS
> (
> https://www.sqlite.org/src/fdiff?v1=4323dd0bac4f7a70=1b21af72c5fa6f9e=1
> ).
> Perhaps your in-memory VFS was relying on some unspecified behavior
> that changed?  Without access to your source code, it is difficult to
> say.
>
> --
> D. Richard Hipp
> drh at sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


[sqlite] Changes to VFS starting 3.8.3

2016-03-11 Thread Max Vlasov
On Thu, Mar 10, 2016 at 5:26 PM, Richard Hipp  wrote:

> Perhaps your in-memory VFS was relying on some unspecified behavior
> that changed? 


Some tests finally led me to the source of my problems. When I
implemented the handlers of the vfs interface earlier, I made
xFileControl return SQLITE_ERROR instead of SQLITE_NOTFOUND. For
whatever reason, sqlite versions below 3.8.3 let me get away with this
in some usage contexts. Today when I tested, I saw that querying PRAGMA
encoding in 3.7.11 also gave an error even before any write-related
queries. So in my initial case probably some minor logic change in 3.8.3
led to a critical xFileControl request, while 3.8.2 was ok with the
wrong implementation.
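
For anyone hitting the same thing, the corrected handler is essentially
this (a simplified sketch of my VFS; the function name is mine):

/* Unknown opcodes must report SQLITE_NOTFOUND, not SQLITE_ERROR,
   so sqlite can fall back gracefully. */
static int memFileControl(sqlite3_file *pFile, int op, void *pArg)
{
  (void)pFile;
  (void)pArg;
  switch( op ){
    /* opcodes actually handled by this VFS would go here */
    default:
      return SQLITE_NOTFOUND;   /* previously this returned SQLITE_ERROR */
  }
}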


Re: [sqlite] What languages can include SQLite statically?

2010-05-24 Thread Max Vlasov
Gilles,

For Delphi I successfully used files from http://www.aducom.com to
statically link sqlite files compiled with bcc (the Borland command-line
C compiler, freely available now) into Delphi. Also, the components from
aducom.com will allow you to use all the power of Delphi database
components with sqlite without the need to ship any additional dlls.

But some adjustments are needed. If you use the C run-time from the
msvcrt dll (the forum at aducom.com shows how to do this), no other
actions are needed. But if you want to be free of the msvcrt dependency,
you can use the C run-time object files also available on aducom.com;
however, the version I used had a serious bug. Albert (from aducom.com)
promised to fix it on the site, so you can probably use it by now.

Max
maxerist.net


On Fri, May 21, 2010 at 1:31 PM, Gilles Ganault wrote:

> Hello
>
> My C skills are very basic. I was wondering: After compiling SQLite
> into an .OBJ or .LIB file, what languages can be used to include this
> output into a main program, so we end up with a single EXE.
>
> I assume we have the choice of:
> - C
> - C++
> - Delphi (?)
> - Other?
>
> Thank you.
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Update detail record based on master properties

2010-05-26 Thread Max Vlasov
We all know UPDATE is sometimes limited, so in order to do some complex
conditional updating we can only rely on the complexity of the WHERE
clause.

I would like to update my detail table based on master properties (i.e.
set some value only if the corresponding master record fits some
conditions). In the case of only one property the statement can look
like this

UPDATE detail_table SET somefield=somevalue
  WHERE
  (SELECT masterfieldtocheck FROM master_table WHERE master_table.id
=detail_table.masterid)=okvalue

Is there a way to alias the master table when I need to check several
fields of it? Or is the only choice to write

  WHERE
  ((SELECT masterfieldtocheck FROM master_table WHERE
master_table.id=detail_table.masterid)=okvalue)
AND
  ((SELECT masterfield2tocheck FROM master_table WHERE master_table.id
=detail_table.masterid)=okvalue2)

If I still need to repeat a similar select statement in the second
section, is the sqlite optimizer smart enough to recognize that both
parts query the same record and not perform this operation several
times?

Thanks

Max,
maxerist.net
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Update detail record based on master properties

2010-05-26 Thread Max Vlasov
> where exists (
>  select 1 from master_table where
>master_table.id=detail_table.masterid and
>masterfieldtocheck = okvalue and
>masterfield2tocheck = okvalue2);
>
> -- or
>
> where detail_table.masterid in (
>  select id from master_table where
>masterfieldtocheck = okvalue and
>masterfield2tocheck = okvalue2);
>
> --
> Igor Tandetnik
>
>
Thanks, Igor, it works. I see that SQL itself is smart enough )
Both variants you suggested are similar in speed, but the latter looks
more self-explanatory to me.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Does sqlite always allow creating own field named 'rowid'?

2010-05-26 Thread Max Vlasov
While implementing a table intended for arbitrary table storage, I
automatically named some field rowid, not realizing at the moment that
it would have a name conflict with sqlite's internal rowid name. Later I
discovered this and (not a big deal) recreated the table with a
different field name. But just wondering, was allowing the creation of
such a field intentional? I supposed such a field would be a complete
phantom, since most sql queries will interpret rowid internally and
won't allow access to this user field at all.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Does sqlite always allow creating own field named 'rowid'?

2010-05-26 Thread Max Vlasov
On Wed, May 26, 2010 at 6:19 PM, Pavel Ivanov  wrote:

> > But just
> > wondering, was allowing to create such field intentional? As I suppose
> such
> > field is complete phantom since most sql queries will interpret rowid
> > internally and won't allow access this user field at all.
>
> Situation is completely opposite. When you declare your field named
> "rowid" the real rowid becomes phantom and all queries will select
> your field instead. But real rowid will be still accessible via other
> names like "_rowid_" or "oid" (is there a way to access real rowid
> when all three built-in names are declared by user in the table, I
> wonder?). And this behavior is documented (see
> http://www.sqlite.org/lang_createtable.html) and thus I suppose
> intended and introduced long ago.
>
>
>
Ok, it makes sense. My only complaint is that although rowid is an
SQLite-specific word, it has become more of a "reserved" word. One example
(maybe too specific): one works with tables in some SQLite admin tool, and a
table imported from another database engine contains its own rowid field. As
everyone knows, every SQLite table has a rowid regardless of the developer's
intention, so I assume I can query SELECT rowid ... against almost every
existing SQLite table in the world. But in this hypothetical case the results
of such a query can be puzzling; for example, if the imported data doesn't
declare a uniqueness constraint on its own rowid field, we could see
repeating values or even non-numeric values. I think that using _rowid_ might
be an additional guarantee against such problems, but I suppose many
developers still prefer rowid.
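
A minimal sketch of the shadowing behavior described above (the table and
values are made up for illustration):

CREATE TABLE imported(rowid TEXT, data TEXT);
INSERT INTO imported VALUES ('abc', 'one');
INSERT INTO imported VALUES ('abc', 'two');
SELECT rowid FROM imported;    -- returns 'abc', 'abc' (the user column)
SELECT _rowid_ FROM imported;  -- returns 1, 2 (the real internal rowid)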

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Aggregate and join query very slow

2010-05-27 Thread Max Vlasov
On Thu, May 27, 2010 at 3:07 PM, Michael Ash  wrote:

> ...These are large tables (52,355 records in facility and 4,085,137 in
> release_cl).
>
> ...
> sqlite> explain query plan
>   ...> SELECT name,score
>   ...> FROM facility f, (SELECT facilitynumber,SUM(score_rev) AS score
>   ...> FROM release_cl
>   ...> WHERE media<3
>   ...> AND year=2006
>   ...> GROUP BY facilitynumber) r
>


Michael, from what I see, if your release_cl table is not properly indexed
for this aggregation (the media and year fields), the query will lead to a
full table scan (so all the data of your 4M records is read). If it is
properly indexed and the number of records returned by this SELECT is large,
consider adding the non-indexed fields to this (or a brand-new) index, since
otherwise SQLite quickly finds records with the index but still has to look
up the main data tree to retrieve the other fields.
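
A sketch of what such a covering index might look like for the query quoted
above (the column names are taken from the quoted query; this is an
illustration, not something tested against your actual schema):

CREATE INDEX idx_release_cl_cover
  ON release_cl(year, media, facilitynumber, score_rev);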

Max,
maxerist.net
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite turns 10 years old

2010-05-29 Thread Max Vlasov
Mr. D. Richard Hipp,

The 10-year anniversary is great news, and it is a real pleasure to see how
your and your supporters' skills have turned this library into such a
powerful tool. Sometimes it makes me sad that I can't explain to a
non-developer what is so great about SQLite and what makes it appear in so
many software packages. If I had the power, I'd be glad to count how many
SQLite-related reads and writes are taking place across the whole Earth. I'm
sure it's a big number :)

Thank you for sqlite!

Max Vlasov

On Sat, May 29, 2010 at 5:57 PM, D. Richard Hipp <d...@hwaci.com> wrote:

>
> Thanks, everybody, for helping to make SQLite the most widely deployed
> SQL database engine in the world.  And Happy 10th Birthday to SQLite!
>
> D. Richard Hipp
> d...@sqlite.org
>
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Returning empty result set

2010-06-01 Thread Max Vlasov
Hi,

I tried to figure out the simplest query that returns an empty result set
without referring to any existing table.

The query
SELECT 1 WHERE 1=2

worked, but it looks a little strange :) and, besides, it doesn't work in
MySQL. Will it keep working in future versions of SQLite, or would I be
better off using a query that works in both worlds:

SELECT * FROM (SELECT 1) AS TBL WHERE 1=2

?

Thanks

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Is the absence of msvcrt.dll a known issue with SQLite Windows 2000

2010-06-06 Thread Max Vlasov
On Sat, Jun 5, 2010 at 11:01 AM, Frank Church  wrote:

>
> On checking the sqlite3.dll docs it states the only additional requirement
> for it is msvcrt.dll.
>

Frank, as far as I know it's just a variation of sqlite3.dll that links the
MS C run-time dynamically rather than statically. You can compile sqlite3.dll
without the msvcrt dependency with any other compiler, for example the freely
available Borland command-line compiler; I just checked a DLL made with it,
and it has only kernel32.dll and user32.dll functions in its import section.
Also, the sqlite3 source has no mention of msvcrt* anywhere (if it had, that
could be an indication of a dynamic dependency).

Max,
maxerist.net
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] database corruption problem

2010-06-08 Thread Max Vlasov
On Mon, Jun 7, 2010 at 7:31 PM, Daniel Stutzbach <
dan...@stutzbachenterprises.com> wrote:

> I'm the author of a small Windows application that embeds SQLite.  I'm
> currently using version 3.5.9 with PRAGMA locking_mode = EXCLUSIVE
> and PRAGMA journal_mode = TRUNCATE.  I just received a crash report from a
> user with the dreaded error: "database disk image is malformed".
>
> My application is multi-threaded, although only the main thread calls
> SQLite
> functions (the other threads do non-DB stuff).
>
>
You didn't mention your language/development tools; that would also help.

I also suggest implementing artificial tests that constantly emulate user
actions in your program. That once helped me find the cause of the same
corruption error. It could be a dev-only menu item that activates a loop
reading and writing randomly until specifically interrupted. While the
program reads and writes, check PRAGMA integrity_check; from time to time and
stop if the result shows corruption.
If you can't reproduce the problem on your machine, you can send a specially
prepared version to the user and ask him to perform similar actions.
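
For completeness, the periodic check I run in such a loop is just the pragma
below; a healthy database returns a single row containing 'ok', and
quick_check is a lighter variant that skips some of the slower verifications:

PRAGMA integrity_check;
PRAGMA quick_check;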

Max Vlasov,
maxerist.net
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Why "select count(distinct f1, f2) from tbl_test" can not work?

2010-06-26 Thread Max Vlasov
On Sat, Jun 26, 2010 at 7:27 AM, zhangzhenggui wrote:

> tbl_test maybe like this:
> create table tbl_test(f1, f2, f3);
>
> Now, I want to get the num of records which (f1, f2) are distinct.
>
> I try "select count(distinct f1, f2) from tbl_test", but error occur: SQL
> error: wrong number of arguments to function count()
>
>
> although, I can do like this, but I think it not very good :
> select count(1) from (select distinct f1, f2 from tbl_test);
>
>

If you know the type and the maximum range (in the integer case) you can use
a trick like this:

select count(distinct ((Value1 << 32) | Value2)) FROM TestTable

but I did a quick test on a table consisting of thousands of integers, and
the speed is similar to your other query, the one you called "not very good".
I think both need some temporary storage, so whichever you like aesthetically
is the best choice :)
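
Applied to the table from your original message, the same trick would read as
below; note it is only safe when f1 and f2 are non-negative integers that fit
into 32 bits (just a sketch, not benchmarked against your data):

SELECT count(DISTINCT (f1 << 32) | f2) FROM tbl_test;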

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] WAL questions

2010-07-07 Thread Max Vlasov
Alexey,

I read that sentence, but it didn't help,
so I suppose there's a bug in the PRAGMA journal_mode logic.
Steps to reproduce:

1. Create an empty database with some table.
Look at offsets 18 and 19: they are both 1, which is fine; the database is
compatible with the full range of sqlite3 versions.

2. Do PRAGMA journal_mode=WAL; -- it returns wal, so WAL is set.
Look at offsets 18 and 19: they are both 2, so the database is no longer
compatible with older versions. Checking... yes, they say "encrypted" or
something similar.
Query PRAGMA journal_mode; alone (just to double-check) -- it still says
"wal", ok.

3. Close the database. Offsets 18 and 19 are still 2, so the WAL setting is
kept between sessions.

4. Open the db again; offsets 18 and 19 are still 2.
Query PRAGMA journal_mode; -- it says "delete", but it definitely should
return "wal".

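For reference, the statements involved, condensed (the table is arbitrary):

CREATE TABLE t(id INTEGER PRIMARY KEY);
PRAGMA journal_mode=WAL;   -- returns wal; header bytes 18/19 become 2
PRAGMA journal_mode;       -- still reports wal in this session
-- close the connection and reopen the same file
PRAGMA journal_mode;       -- reports delete, although bytes 18/19 are still 2
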
Max


On Thu, Jul 8, 2010 at 12:09 AM, Alexey Pechnikov wrote:

> See http://sqlite.org/draft/wal.html :
>
> "An SQLite database _connection_
> defaults
> to journal_mode=DELETE. To convert to WAL mode, use the following
> pragma: PRAGMA journal_mode=WAL;"
>
> --
> Best regards, Alexey Pechnikov.
> http://pechnikov.tel/
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to determine when to VACUUM?

2010-07-08 Thread Max Vlasov
> >> (I guess it well might not on an SSD disk, but on a conventional
> >> rotational disk, pager could read several pages ahead with one seek -
> >> but does it?)
> >
> >  No, the pager does not.  Among other things, my feeling is that the
> >  locality of pages is not very strong, unless the database was just
> >  VACUUMed.
>
> Actually the SSD possibility makes it worse, not better.
>

Simon, you gave an interesting explanation, but does this rule work in
general? I mean, there are many models, many sizes and so on. For example,
the SanDisk SSD used in my Asus T91MT claims to have an internal write cache,
so its controller can have its own logic working independently of the
installed software. Also, having several chips writing at the same time might
conflict with the OS's own caching mechanism. Besides, I'm not sure the
caching in any OS is smart enough to take this into account. For example (I'm
not sure Windows is the best case), given the fact that XP didn't have proper
partition alignment for SSDs and it took some time for enthusiasts to let MS
know about this, I think the cache system in Windows is still not smart
enough even for much simpler SSD-related tasks :)

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] WAL questions

2010-07-08 Thread Max Vlasov
> You want "PRAGMA main.journal_mode"
>
> A "PRAGMA journal_mode;" (without the "main.") shows you the default
> journal
> mode used by newly created databases, which is always "DELETE" unless you
> have changed it with a prior "PRAGMA journal_mode=MODE" command.
>
> Though, I will admit, this is confusing, and I was thinking last night
> about
> ways we could possibly change it
>

Thanks for pointing that out. I have two observations:
- According to the docs this has been present for ages, so shame on me for
not knowing it in the first place. So probably it should stay as it is now.

- But this is the first time this important pragma really affects the SQLite
file format and, more importantly, not only for the current session. You
probably had reasons for implementing WAL the way it is now, but imagine:
once changed, this setting never reminds the developer of itself. All of his
current code base works once it starts using this version of SQLite, but
consequently a small (or maybe large) part of his databases becomes
WAL-enabled (2 in the file format bytes). That may happen because of his own
WAL-on without a WAL-off, or a WAL-on followed by an unexpected program
interruption. Everything is OK until these databases have to be distributed
or used in other environments that probably run sqlite3 versions lower than
3.7, where they stop working, complaining about an unknown file format. I
think it's a potential source of a new wave of mass bug/missing-feature
reports, or simply confusion.
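
For reference, as far as I can tell the way back to the legacy-compatible
format is just switching the journal mode back, which resets header bytes
18/19 to 1 so that older versions can open the file again:

PRAGMA journal_mode=DELETE;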

I may be exaggerating; I suggest other participants of the list share their
thoughts.

Thanks,

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] WAL questions

2010-07-08 Thread Max Vlasov
Second Proposed Change:

>
> Remove the WAL mode from "PRAGMA journal_mode".  The journal_mode pragma
> only specifies the various rollback journal modes.  Enable the WAL using a
> separate pragma such as  "PRAGMA wal=ON; PRAGMA wal=OFF;"
>
> Thoughts?  Comments?  Other suggestions?
>
>
Maybe it's not the right time, but could it be implemented as a separate
pragma, as in your second proposed change, but with more options such as OFF,
PERSISTENT, TEMPORAL? Still thinking about the fact that WAL, being a new
software feature, actually becomes a file format change, the latter option
could solve this (if it's technically possible and not hard to add, of
course). The logic could be like this: if WAL=TEMPORAL, the first
reader/writer that accesses the database changes bytes 18 and 19 (=2),
allowing it and the following readers/writers to work in WAL mode, and the
last reader/writer that closes the database reverts them (=1) so the file
format stays the same. Sure, there are cases where the format would still end
up changed (for example, an unexpected program end), but the next successful
open/close would fix this.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to determine when to VACUUM?

2010-07-08 Thread Max Vlasov
>>

> >
> > Simon, you gave an interesting explanation, but does this rule work in
> > general? I mean there are many models, many sizes and so on.
>
> Don't know.  You could test it.  Write a program that creates a file half
> the size of the drive, then writes to random parts of it timing each
> command.  If all the write commands take about the same amount of time then
> it doesn't work the way I described.  I just repeated the description I had
> read of the way SSD drives work.
>

Thanks, that's what I thought :)
SSD is an interesting thing to research, especially from an SQLite
perspective. For reading, it's the access times that make SSDs the winner, so
reading a large SQLite database randomly can probably benefit from being on
an SSD, but I don't know of any real-world measurements. Do you know of any?
Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Suggestion for sqlite shell

2010-07-12 Thread Max Vlasov
Hi,

recently I implemented a feature for a GUI admin tool I use internally, but I
think it would also be useful for the sqlite shell. Sorry if it already
exists in some form; I could not find anything similar.

The idea is to use bind syntax together with CSV (or clipboard-compatible)
import. If one enters a query with question marks or the other symbols that
allow parameter numbering (but not naming, obviously) and also provides a CSV
file, the shell would iterate through each row of the file and execute the
query using that row as the parameter set. This would allow not only complex
inserts with concatenation and so on, but also mass updates or other
conditional operations. Errors are possible, such as a mismatch between the
expected and provided number of parameters, but this could be resolved either
by returning an error (% expected, but % found) or by ignoring extra fields
and padding missing ones. A hypothetical illustration follows below.
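
To make the idea concrete, a purely hypothetical session (the query, the
table and the CSV rows are all made up; no such shell feature exists today):

-- query entered in the shell, with positional parameters:
UPDATE products SET price = ?2 WHERE sku = ?1;
-- rows supplied from a CSV file, one execution of the statement per row:
--   A100,9.99
--   A101,12.50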

I don't know whether clipboard content fits SQLite ideologically, but in my
case it saved additional time by allowing imports, for example from Excel,
without the intermediate CSV file.

Thanks

Max Vlasov
www.maxerist.net
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Suggestion for sqlite shell

2010-07-12 Thread Max Vlasov
On Mon, Jul 12, 2010 at 5:00 PM, Alexey Pechnikov wrote:

> Maxim, please show example here and the link to your implementation.
>
>
Alexey,

to illustrate I prepared a screen shot

http://www.maxerist.net/downloads/temp/bind_iterate.png

Excel and the cells to be copied are at the top, the query is in the center,
the dialog asks for the user's confirmation, and the results are below. It's
just an example of complex data manipulation "on the fly" with this feature;
the code itself is very simple, as you may see from the input and output.
Given that the sqlite shell can already parse CSV content, I think this
feature would require no more than a couple of hundred lines of code.

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] WAL: no longer able to use read-only databases?

2010-07-13 Thread Max Vlasov
>
> > In testing the latest SQLite snapshot with WAL enabled, it seems that
> > there's no way to use a database in a read-only location.
> >
>
> Documentation on the developers' current view of this issue can be found
> here:
>
>
>
In my opinion it's OK: WAL already has special conditions under which it
operates, and the current documentation describes them all thoroughly. I only
wish the "advantages" section of the WAL page also grew a little, maybe
giving more details about the speed improvements and encouraging more
frequent use of this mode.

Thanks

Max
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

