Re: [sqlite] Long long int constants in sources

2019-12-25 Thread Max Vlasov
On Wed, 25 Dec 2019 at 01:24, Clemens Ladisch  wrote:

> Max Vlasov wrote:
> > bcc 5.5 compiler ... didn't like long long constants
> >such as -2251799813685248LL
>
> If you want to make this particular compiler happy, use
> -2251799813685248i64.
>

Thanks, this helped when I tested this scenario.

Interestingly, while searching and replacing I found a comment that exists in
version 3.6.10 (the earliest I have), and probably earlier, that already
addressed the problem of exposing 64-bit constants to compilers:

> /* Some compilers complain about constants of the form 0x7fffffffffffffff.
> ** Others complain about 0x7fffffffffffffffi64.  The following macro seems
> ** to provide the constant while making all compilers happy.
> */
> #   define MAX_ROWID  (i64)( (((u64)0x7fffffff)<<32) | (u64)0xffffffff )

So it's probably hard to keep making all compilers (old and young) happy
all the time :). Thanks to Richard and the team for at least trying to make
them as happy as possible.
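The macro's arithmetic is easy to sanity-check outside of C: composing the two 32-bit halves reproduces the largest signed 64-bit rowid. A quick illustrative check (not SQLite code):

```python
# Build the maximum rowid from two 32-bit halves, the same way the
# MAX_ROWID macro composes it, then compare with the direct constant.
max_rowid = (0x7fffffff << 32) | 0xffffffff
assert max_rowid == 2**63 - 1  # 9223372036854775807
```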

Max
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Long long int constants in sources

2019-12-24 Thread Max Vlasov
On Tue, 24 Dec 2019 at 23:58, Warren Young  wrote:

> On Dec 24, 2019, at 12:32 PM, Max Vlasov  wrote:
> >
> > I didn't mention it was for sqlite
> > static compiling with Delphi 32 bit, that supports (also quite old) OMF
> > static libraries format
>
> Would it be out of the question to compile SQLite to a DLL?  Then you can
> use any Windows C compiler.  You might even be able to start using the
> pre-built DLL shipped from sqlite.org.
>
>
Absolutely. I even had a DLL built alongside the OBJ binary and sometimes used
it in a program instead of the static one. I might as well completely migrate
to dynamic linking. I just wonder how long sqlite made do without requiring
64-bit integer arithmetic from the compiler/CPU, and mostly still does.


Re: [sqlite] Long long int constants in sources

2019-12-24 Thread Max Vlasov
On Tue, 24 Dec 2019 at 21:37, Warren Young  wrote:

> On Dec 24, 2019, at 3:19 AM, Max Vlasov  wrote:
> >
> > an "ancient" bcc 5.5 compiler
>
> Yes, [almost 20 years old][1] now.  Even in these times of slowing
> technology, that’s still a very long time in computing.
>
> ...
>
> If you need a free C compiler for Windows, you have many other options
> these days.  Philosophically, I suspect you’ll be happiest with MinGW, but
> there are [many alternatives][3].  (Some of those listed don’t run on
> Windows, but most do.)
>
>
>
Thanks for the thoughts and suggestions. I didn't mention it was for static
compiling of sqlite with 32-bit Delphi, which supports the (also quite old)
OMF static library format; most existing compilers support either COFF or
ELF, if I recall correctly. I'm in the process of managing the task with an
upgraded Embarcadero command-line compiler (clang-based). It will probably
solve my problem.


[sqlite] Long long int constants in sources

2019-12-24 Thread Max Vlasov
Hi,

an "ancient" bcc 5.5 compiler compiled sqlite to win32 perfectly well up to
3.28.00, but starting with 3.29.00 it appears it doesn't like long long
constants such as -2251799813685248LL, mentioned at least in
sqlite3RealSameAsInt and sqlite3VdbeExec. The compiler reports unrelated
error messages, like [ ) expected in function ] or [ If statement missing ) ],
but in all those cases the lines contained long long constants.

Is this something that appeared unintentionally and might be fixed, or are
long long int constants first-class citizens now?

Thanks,

Max


Re: [sqlite] [EXTERNAL] "Optional" incoming parameters for virtual tables

2019-04-18 Thread Max Vlasov
On Thu, 18 Apr 2019 at 09:48, Hick Gunter  wrote:

> So the QP is attempting to determine which cost product is less:
>
> My guess is it will probably choose the undesired, invalid plan. A linear
> cost penalty for desired but optional inputs is probably not going to cut
> it. In this case, the returned cost needs to be greater than 4 (more if
> there are more records in the IN set). I feel there needs to be a
> multiplicative penalty of at least 10.
>
>
>
As I mentioned in other post, the direct test with my current
implementation and IN operator gave a good result, but I understand this
can't be a rule. I see your points about cost adjustments so will give them
a try.

I also decided to look at the sources. estimatedCost is mentioned as usable
input only once: it goes to the rRun field, translated via a
sqlite3LogEstFromDouble call. The LogEst value is nicely explained in the
sources (below is a compacted version of the comment):

Estimated quantities used for query planning are stored as 16-bit
> logarithms.  For quantity X, the value stored is 10*log2(X).  This gives a
> possible range of values of approximately 1.0e986 to 1e-986.  But the
> allowed values are "grainy".  Not every value is representable.  For
> example, quantities 16 and 17 are both represented by a LogEst of 40.
> However, since LogEst quantities are supposed to be estimates, not exact
> values, this imprecision is not a problem.
>  "LogEst" is short for "Logarithmic Estimate".
>  Examples:
>       1 -> 0        20 -> 43        10000 -> 132
>       2 -> 10       25 -> 46        25000 -> 146
>       3 -> 16      100 -> 66      1000000 -> 199
>       4 -> 20     1000 -> 99      1048576 -> 200
>      10 -> 33     1024 -> 100  4294967296 -> 320
>  The LogEst can be negative to indicate fractional values.
>  Examples:
>     0.5 -> -10    0.1 -> -33    0.0625 -> -40
>
>

It seems that, as a general rule, it's not good to use close estimatedCost
values, since they will sometimes yield the same LogEst values.
The rRun evaluations in the sources are too complex to analyze.
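The integer case of that mapping can be reproduced with a small function modeled on sqlite3LogEst() in the sources (a sketch of the integer path only; sqlite3LogEstFromDouble takes a different route for doubles):

```python
def log_est(x):
    # Model of sqlite3LogEst() from the sources: approximate
    # 10*log2(x) as a small integer via a 3-bit mantissa lookup table.
    a = [0, 2, 3, 5, 6, 7, 8, 9]  # 10*log2(1 + i/8), rounded
    y = 40
    if x < 8:
        if x < 2:
            return 0
        while x < 8:
            y -= 10
            x <<= 1
    else:
        while x > 255:
            y += 40
            x >>= 4
        while x > 15:
            y += 10
            x >>= 1
    return a[x & 7] + y - 10

# "Grainy": 16 and 17 collapse to the same LogEst of 40.
assert log_est(16) == log_est(17) == 40
```

This reproduces the example table from the comment, e.g. 10 -> 33, 100 -> 66, 4294967296 -> 320, and shows directly why two close estimatedCost values can end up indistinguishable.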

Max


Re: [sqlite] [EXTERNAL] "Optional" incoming parameters for virtual tables

2019-04-17 Thread Max Vlasov
On Wed, 17 Apr 2019 at 15:08, Hick Gunter  wrote:

>
> SELECT contents from textfiles where search_path = 'mypath' and
> name_pattern IN ('*.txt','*.csv');
>
>
It's interesting. I implemented the mask and decided to give such a query a
try (with the same cost adjustment I explained in the first post). With the
newest versions, and even the older 3.15.1, it works OK with two cursors one
after another (two xOpen calls), providing constraints with the two masks.
Version 3.8.3 visits the constraints with optional parameters, but chooses a
different route without the file mask in the constraint (so the table outputs
with a default mask). 3.6.10 doesn't visit the constraints with optional
parameters at all and also returns results with the default mask. This looks
similar to what you mentioned in your first reply.

The usage of IN allows many expressive applications of such queries, but I
should probably keep in mind that sometimes it is safer to pack the data into
a single entity, so the file mask may accept something like a comma-separated
(or semicolon-separated) list: "*.txt;*.csv"
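On the implementation side, a semicolon-separated mask parameter is simple to handle. A minimal sketch (the function name and the choice of fnmatch-style patterns are illustrative assumptions, not from the actual virtual table):

```python
from fnmatch import fnmatch

def match_any(filename, mask_list):
    # True if filename matches any pattern in a "*.txt;*.csv"-style list.
    return any(fnmatch(filename, m.strip())
               for m in mask_list.split(";") if m.strip())

assert match_any("notes.txt", "*.txt;*.csv")
assert not match_any("image.png", "*.txt;*.csv")
```

This keeps the whole mask set in one constraint value, sidestepping the IN-operator planning issues described above.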

Max


Re: [sqlite] [EXTERNAL] "Optional" incoming parameters for virtual tables

2019-04-17 Thread Max Vlasov
On Wed, 17 Apr 2019 at 15:08, Hick Gunter  wrote:

> ...
>
> SELECT contents from textfiles( 'mypath', NULL, 0);
> SELECT contents from textfiles where search_path = 'mypath' and
> is_recursive = 1;
> SELECT contents from textfiles where search_path = 'mypath' and
> name_pattern IN ('*.txt','*.csv');
>
> With xBestIndex returning costs 1, 2, and 2/3 (with IN and without IN)
> respectively.
>
> And you want to know how to make SQLite always call xFilter with
> ('mypath','*.txt') and ('mypath','*.csv') instead of just once with only
> ('mypath') and attempting to retrieve the undefined/empty name_pattern. Is
> this correct?
>
>
>
Yes, you're correct. The name_pattern is also a good illustrative extension
of the case. And indeed, the IN operator is really a tricky case.


Re: [sqlite] [EXTERNAL] "Optional" incoming parameters for virtual tables

2019-04-17 Thread Max Vlasov
On Wed, 17 Apr 2019 at 12:52, Hick Gunter  wrote:

> Your xBestIndex function should be returning a cost that is proportional
> to the "effort required to fulfill the query". My own VT implementations
> have been returning the total number of records for queries with no
> constraints and assuming a constant fan-out factor for each key field
> provided as a constraint (eg. 25 for a full table scan, 5 if only the first
> of two key fields is provided and 1 if both are provided).
>

I suspect you're talking more about tables that possess and output data
regardless of the "incoming" parameters involved. I understand that in this
case it is mostly about performance issues, where the worst-case scenario is
a time penalty but still valid results. I'm talking more about table-valued
functions where, for some of them, not providing the incoming parameters
means no valid data at all. One of my examples is a virtual table listing
text files from a folder. The folder is a required parameter, but there's
also a binary flag controlling whether the scan processes subfolders. If the
flag is provided, the implementation wants it to be used in the chosen
constraint. Otherwise the intention of the query will not be transferred,
and the scan will recurse into subfolders despite the fact that the flag
disabling it appeared in the query.


[sqlite] "Optional" incoming parameters for virtual tables

2019-04-17 Thread Max Vlasov
Hi,

I'm trying to implement "optional incoming" parameters for my virtual
tables. That is, some columns are required, but there are also some
fine-tuning columns that should be noticed when provided in the query, with
some default assumed when not. The system I tried seems to work, but I
would like an opinion on whether it is future-proof.
- When the required parameters are not provided, xBestIndex returns a large
cost for earlier sqlite versions, and additionally SQLITE_CONSTRAINT for
3.26.0 onward.
- When the required parameters are provided and all optional ones are
provided, the cost is 1.
- For every optional parameter not provided, the cost is increased by 1. So
if there are 3 optional parameters and none are provided, the cost is 4; if
only one is missing, the cost is 2.
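The scheme above reduces to a tiny cost function. A sketch of the described xBestIndex logic (the names and the 1e99 sentinel are illustrative, not SQLite API):

```python
VERY_EXPENSIVE = 1e99  # sentinel cost when required parameters are absent

def estimated_cost(required_given, optional_missing):
    # Cost scheme from the post: effectively reject plans that lack the
    # required parameters, and add 1 for every optional parameter that
    # the plan does not supply.
    if not required_given:
        return VERY_EXPENSIVE  # plus SQLITE_CONSTRAINT on 3.26.0 onward
    return 1.0 + optional_missing

assert estimated_cost(True, 0) == 1.0           # everything provided
assert estimated_cost(True, 3) == 4.0           # three optionals missing
assert estimated_cost(False, 0) == VERY_EXPENSIVE
```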

Should this always work as expected, or might sqlite not always choose the
lowest-cost index among ones with small values? I tested it with a couple of
vt implementations, but the number of columns in both the required and
optional pools is low, and I expect some trouble when the number goes up.

Thanks,

Max


Re: [sqlite] Malformed schema (orphan index) on particular db with the version 3.27.2

2019-03-16 Thread Max Vlasov
I noticed I replied directly to drh with the integrity_check answer, so I
repeat the information here.

PRAGMA integrity_check in 3.26.0 says:
 *** in database main ***
Page 3 is never used
{ skipped about 20 lines }
Page 268 is never used

Additional information:
- My opening logic includes querying PRAGMA encoding, so the error message
in question appears on that particular query and not when the db is opened.
- Today I made a full scan of a folder that might contain sqlite databases
(excluding obvious files with non-db extensions and checking the rest for
the starting sequence), and within hundreds of sqlite databases (some of
them, though, are identical backups), only this one and its derivatives
reveal the error when performing the PRAGMA on them.

So I suspect the file is a very rare example of a data inconsistency that is
incompatible with the recent version but works with older ones.

Max


On Fri, 15 Mar 2019 at 18:00, Richard Hipp  wrote:

> On 3/15/19, Max Vlasov  wrote:
> >
> > But 3.27.2 said
> >   malformed database schema ({sqlite_autoindex_mytablename}_1) - orphan
> > index.
>
> This error message arises from enhanced early detection of corrupt
> database files that was added to version 3.27.x.  Do you still have
> the original database, before you VACUUM-ed it?  If so, what does it
> say if you run "PRAGMA integrity_check" on that database.
>
> Can you share the database with me, through private email?  There is
> always the possibility that the enhanced early detection of corrupt
> databases is giving a false-positive.
>
> --
> D. Richard Hipp
> d...@sqlite.org
>


[sqlite] Malformed schema (orphan index) on particular db with the version 3.27.2

2019-03-15 Thread Max Vlasov
Hi,

I have a database that has been used in software since probably 2010. There
are many variants of this database on my hd, and everything was fine with it
from 3.6.10 through 3.26.00. I'm not sure about every version, but over the
version history at least several were used to open it.

But 3.27.2 said
  malformed database schema ({sqlite_autoindex_mytablename}_1) - orphan
index.

If I open the db in 3.26.0 and VACUUM it, then 3.27.2 is OK with it. If I
delete every row except the first one from the table in 3.26, version
3.27.2 still says it's bad.

I tried to google something about this, and some of the results on the web
contained "orphan index" with a ticket somewhere near January 2019. I'm not
sure this is related, but maybe some recent change might affect this.

If sqlite is now more strict about index errors for some reason, I should
probably reopen my existing databases to check. But if this is due to some
bug, fixing it would probably save much time for me :)

Thanks

Max


Re: [sqlite] Exception handling during create table from a virtual one

2018-10-26 Thread Max Vlasov
On Fri, Oct 26, 2018 at 5:50 PM Clemens Ladisch  wrote:

> > After the exception is thrown 
> >  If I continue the execution
>
> You must return from the callback function normally, or abort the process.
> Anything else will corrupt SQLite's internal state.
>
>
Thanks. It's probably better for me to wrap any sensitive callback handler
in an exception catcher that returns the SQLITE_ERROR code.
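Whatever the host language (the poster is on Delphi), the guard amounts to a wrapper that converts any escaping exception into an error result code. An illustrative sketch (the SQLITE_* values match sqlite3.h; the wrapper itself is hypothetical):

```python
SQLITE_OK, SQLITE_ERROR = 0, 1  # result codes as defined in sqlite3.h

def guarded(callback):
    # Wrap a callback so that no exception ever propagates back into
    # SQLite; report a plain SQLITE_ERROR instead.
    def wrapper(*args):
        try:
            return callback(*args)
        except Exception:
            return SQLITE_ERROR
    return wrapper

assert guarded(lambda: SQLITE_OK)() == SQLITE_OK
assert guarded(lambda: 1 // 0)() == SQLITE_ERROR  # exception swallowed
```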


[sqlite] Exception handling during create table from a virtual one

2018-10-26 Thread Max Vlasov
Hi,

I have a virtual table that raises an unhandled exception inside the xColumn
callback during a "create table ... as select * from {myvirtualtable}"
command. I already fixed the error inside my library to handle the situation
reasonably, but there's something I noticed in the non-handled exception
scenario.

After the exception is thrown 

 If I continue the execution, the following symptoms are shown:
- sqlite returns an "unknown" error for this query
- Select * from sqlite_master reveals an empty row.
- integrity_check:  *** in database main *** Page 2 is never used
- after I reopen this db, a Pragma encoding query (and probably other
queries) produces a "schema is malformed" error

.. If I halt the process when the debugger stops at the exception:
- The db is restored (rolled back) to a correct state (no new table created,
no integrity errors)

Is this expected? I mean, sqlite usually handles unexpected halts well, and
that is something desirable (computers might shut down abruptly, for
example), but an unexpected exception is a matter of code control flow that
no sane library can guarantee to handle well.

sqlite 3.25.2
If necessary I can provide small copies of the original db (1024 bytes) and
the malformed one (2048 bytes).

Thanks,

Max


Re: [sqlite] [EXTERNAL] Re: sqlite3_progress_handler(D,N,X,P)

2018-05-25 Thread Max Vlasov
On Fri, May 25, 2018 at 10:29 AM, x  wrote:

> You could have interrupt checks just before the return of a result row
> yourself by creating a Step fct that  called sqlite3_step and then checked
> for an interrupt. It’s when the sqlite3.c code’s being run interrupt is
> most needed. e.g. if a query has a large result set that needs sorting the
> first sqlite3_step could be a very lengthy operation.
>
> The only way I can see around it is to use “select sum(1) from Tbl” to get
> the count. That uses plenty of VMI’s but is slower.
>
>
I suppose Select count(*) is uninterruptible for some historical reason.
You can also use Select count(rowid) (if the table is not WITHOUT ROWID);
it's interruptible, I just checked, at least in v3.23.1. It probably takes
the same time as Select sum(1), but it's a little less cryptic to read.
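Interruptibility aside, the three forms agree on the result for an ordinary rowid table, since count(rowid) counts non-NULL rowids and every row has one. A quick check:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

# count(*), count(rowid) and sum(1) all see every row of a rowid table.
counts = con.execute(
    "SELECT count(*), count(rowid), sum(1) FROM t").fetchone()
assert counts == (1000, 1000, 1000)
```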


Re: [sqlite] Map a field type to a best suited sqlite3_result_* call

2018-05-05 Thread Max Vlasov
On Sat, May 5, 2018 at 1:19 PM, J Decker  wrote:

> https://www.sqlite.org/c3ref/column_blob.html
>
> The sqlite3_column_type() routine returns the datatype code
>


The column API is not suitable for me since it relies on the actual data
provided, but I'm the one who calls sqlite3_result_* to provide that data.


>
> PRAGMA table_info(table1);
>

The pragma just returns the type part as it was provided by the virtual
table module with the schema data. No conversion or mapping is made. But it
will help me extract the type part correctly if the provided list contains
comments, for example:
  [FieldA] /*This is fieldA */ TEXT, [FieldB] FLOAT


Something also came up as a solution while writing this post. The query
  Select typeof(cast('' as <type>))
doesn't require any table, so I can just replace <type> with a provided
column type and get the best affinity. But for any unrecognized type it will
return "integer", whereas I'd rather have "text".
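The probe can be wrapped into a helper that lets SQLite itself do the mapping (the helper name is illustrative; note the declared type is interpolated into the SQL, so it must be a plain type name):

```python
import sqlite3

con = sqlite3.connect(":memory:")

def affinity_probe(coltype):
    # Let SQLite map a declared column type to a storage class by
    # casting an empty string to that type.
    sql = "SELECT typeof(CAST('' AS %s))" % coltype
    return con.execute(sql).fetchone()[0]

assert affinity_probe("TEXT") == "text"
assert affinity_probe("FLOAT") == "real"
assert affinity_probe("WIDGET") == "integer"  # unrecognized -> NUMERIC
```

The last line shows the drawback from the post: an unrecognized type falls into NUMERIC affinity, so the empty-string cast yields "integer" rather than "text".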

Max


[sqlite] Map a field type to a best suited sqlite3_result_* call

2018-05-05 Thread Max Vlasov
Hi,

What is the best way to map a field type as it is provided in create table
statement to sqlite3_result_* function call?

More specifically, I have a virtual table implementation where raw table
data is provided together with the corresponding field definition part
([FieldA] TEXT, [FieldB] FLOAT). When asked by sqlite to fetch the column
data, I'd like to convert the text representation of a column to the best
"type" and make the best sqlite3_result_* call. So I probably need the same
machinery sqlite itself uses when it handles type affinity.

I found a function in the sources, sqlite3AffinityType, that probably
serves the goal, but it's private. Also, a suggestion to implement similar
functionality (mentioning sqlite3AffinityType) was made by Jay A. Kreibich
on the mailing list:

On Thu, Jul 8, 2010 at 12:23 PM, Jay A. Kreibich  wrote:
>> Other suggestions?
>
>  This has nothing to do with WAL, but it might be nice to expose
>  the logic that does SQL-type => SQLite-affinity mappings
>  (i.e. sqlite3AffinityType()):
>
>int sqlite3_get_affinity( const char *type );
>
>  Takes a string that contains an SQL type.  Returns one of:
>
>  SQLITE_AFF_TEXT, SQLITE_AFF_NONE, SQLITE_AFF_NUMERIC,
>  SQLITE_AFF_INTEGER, SQLITE_AFF_REAL.

But I suppose sqlite3_get_affinity or a similar function was never
implemented. I know I could probably mimic the sqlite machinery, knowing
it's not that complex, but it is always better to rely on the approved logic
of sqlite itself.
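For reference, the rules sqlite3AffinityType() applies are documented (the "Determination of Column Affinity" section of the datatype documentation), so mimicking them is indeed not complex. A sketch that follows the documented rules in their documented order:

```python
def column_affinity(decltype):
    # The documented column-affinity rules, applied in order:
    # INT -> INTEGER; CHAR/CLOB/TEXT -> TEXT; BLOB or empty -> BLOB;
    # REAL/FLOA/DOUB -> REAL; anything else -> NUMERIC.
    t = (decltype or "").upper()
    if "INT" in t:
        return "INTEGER"
    if "CHAR" in t or "CLOB" in t or "TEXT" in t:
        return "TEXT"
    if "BLOB" in t or t == "":
        return "BLOB"
    if "REAL" in t or "FLOA" in t or "DOUB" in t:
        return "REAL"
    return "NUMERIC"

assert column_affinity("VARCHAR(255)") == "TEXT"
assert column_affinity("BIGINT") == "INTEGER"
assert column_affinity("FLOAT") == "REAL"
assert column_affinity("DECIMAL(10,5)") == "NUMERIC"
assert column_affinity("") == "BLOB"
```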

Thanks,

Max


Re: [sqlite] SQLite equivalent of SQL Over(Partition)

2018-04-15 Thread Max Vlasov
Simon,
I think the GROUP BY approach might work correctly, but sometimes (as in the
OP's case) it would require a lot of rewriting (copy-paste). The key point
here is that a window function doesn't change the set; it only allows wider
access to other rows of the set at the current row's "time". So we just have
to move the group by into the column.

An example.

A good answer to a PARTITION BY question is at

https://stackoverflow.com/questions/2404565/sql-server-difference-between-partition-by-and-group-by#30907639
(I referenced the second most popular answer)

The table is
  CREATE TABLE [TableA] ([id] integer, [firstname] text, [lastname] text,
[Mark] integer)

and the PARTITION BY query from the answer

  select SUM(Mark) OVER (PARTITION BY id) AS marksum, firstname from TableA

can be rewritten in SQlite as

  select (select sum(mark) from TableA grpTableA where
grpTableA.id=TableA.id) as marksum, firstname from TableA

The problem with rewriting the OP's query is that we have to move the whole
"FROM TblContractTasks ..." part into the column with the group-by
sub-query. Not only does the query become less readable, we will probably
also pay some performance penalty unless Sqlite optimizes the column
sub-query. But at least it's formally possible.
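The rewrite is easy to verify on the simpler TableA example above (the sample rows are made up for the check):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TableA ([id] integer, [firstname] text, "
            "[lastname] text, [Mark] integer)")
con.executemany("INSERT INTO TableA VALUES (?,?,?,?)", [
    (1, "Ann", "Ash", 5),
    (1, "Ann", "Ash", 7),
    (2, "Bob", "Beck", 3),
])

# GROUP BY moved into a correlated column sub-query: every row sees the
# sum of Mark over all rows that share its id, as PARTITION BY would give.
rows = con.execute("""
    SELECT (SELECT sum(Mark) FROM TableA grpTableA
            WHERE grpTableA.id = TableA.id) AS marksum,
           firstname
    FROM TableA
""").fetchall()
assert sorted(rows) == [(3, "Bob"), (12, "Ann"), (12, "Ann")]
```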

Max


On Fri, Apr 6, 2018 at 8:20 PM, Simon Slavin  wrote:

> On 5 Apr 2018, at 11:41am, DThomas  wrote:
>
> > Select DISTINCT Sites.Customer, Sites.Digit,
> > Count(TblContractTasks.TaskNumber)
> > OVER (PARTITION BY Sites.Digit)  As TaskCount
> > FROM TblContractTasks INNER Join (Sites INNER Join TblContractDetails On
> > Sites.Digit = TblContractDetails.SiteDigit) On
> > TblContractTasks.ContractNumber = TblContractDetails.ContractNumber
> > WHERE (Sites.Consultant='xx' ) ORDER BY Sites.Digit
>
> Instead of PARTITION BY use GROUP BY .  See "with COUNT function" example
> here:
>
> 
>
> I think everything else used will continue to work.
>
> Simon.


Re: [sqlite] Access to sqlite3_api_routines outside of a loadable extension context

2018-04-06 Thread Max Vlasov
Oh, I see. I just looked at the examples; everything you described works
thanks to the magic of the C preprocessor. The problem in my case is that
I'm on Delphi. Plus, I already have something like indirect access to the
sqlite api functions, so I will probably make a converter during init
between sqlite3Apis and my structure.
Thanks

On Fri, Apr 6, 2018 at 12:52 AM, Keith Medcalf <kmedc...@dessus.com> wrote:

>
> You write the code as if it were a loadable extension.
>
> If you compile "inline" (ie, appended to the amalgamation) the headers
> will detect the SQLITE_CORE symbol being defined and generate direct calls
> rather than indirect calls.  There is really no need to change the code
> from the code you would use from a loadable extension, unless you want to
> "hide" the loadable extension init() symbol when compiled as part of the
> amalgamation (or inline using the SQLITE_CORE define) and use the
> EXTRA_INIT hook to do initialization.
>
> ---
> The fact that there's a Highway to Hell but only a Stairway to Heaven says
> a lot about anticipated traffic volume.
>
> >-Original Message-
> >From: sqlite-users [mailto:sqlite-users-
> >boun...@mailinglists.sqlite.org] On Behalf Of Max Vlasov
> >Sent: Thursday, 5 April, 2018 09:24
> >To: SQLite mailing list
> >Subject: [sqlite] Access to sqlite3_api_routines outside of a
> >loadable extension context
> >
> >Hi,
> >I'm considering creating a virtual table or user function that might
> >possible work either as a loadable extension or as a general,
> >statically
> >created one. In order to avoid repeating during developing, I thought
> >that
> >I might use sqlite3_api_routines structure as a universal access to
> >sqlite
> >code routines in both cases. But it seems it is impossible to access
> >the
> >structure outside of a loadable extension context. Or am I missing
> >something?
> >
> >Probably some direct access to sqlite3Apis variable is possible, but
> >I
> >suspect such practice isn't  recommended.
> >
> >Thanks
>
>
>


[sqlite] Access to sqlite3_api_routines outside of a loadable extension context

2018-04-05 Thread Max Vlasov
Hi,
I'm considering creating a virtual table or user function that might work
either as a loadable extension or as a general, statically created one. To
avoid repeating myself during development, I thought I might use the
sqlite3_api_routines structure as universal access to the sqlite code
routines in both cases. But it seems it is impossible to access the
structure outside of a loadable extension context. Or am I missing
something?

Some direct access to the sqlite3Apis variable is probably possible, but I
suspect such a practice isn't recommended.

Thanks


[sqlite] Unexpected optimization

2018-03-22 Thread Max Vlasov
Hi,

I noticed an unexpected optimization on the sqlite side.
Currently I cannot reproduce it with arbitrary test data (probably I will
eventually). Anyway, the logic behind it (pseudo-code query):

Select <columns>, (Select count(*) from LookUpTable where
LookUpTable.Value=TableValue) as StatCount from
(
  ... Select TableValue, ... left join  ... left join
  where <inner conditions>
)
   where StatCount = ..

The aggregate lookup (Select count(*)) is relatively expensive to perform
and involves a virtual table on my side (LookUpTable). So the goal of
<inner conditions> is also to narrow the input data for this lookup.
Most of the time (including my synthetic tests) the filter indeed works the
expected way (filtering with <inner conditions>, then performing the
aggregate only for the suitable rows), but for some queries with several
joins sqlite starts to perform the lookup before applying the conditions,
so my expensive calculation runs for all rows of the inner joined table and
the filtering happens afterwards. I double-checked this since the
LookUpTable is my virtual table, so I can set a breakpoint and inspect the
passed value. Ironically, when I remove the outer condition (where
StatCount ...) from the query in question, it starts to work much faster.

I suspect this might be related to how I respond to the constraint cost
requests from sqlite. For this virtual table the possible results are
either 1 or a very big value. I see that the value 1 is indeed visited for
this query, and sqlite probably assumes some absolute minimum cost for this
look-up. But when I change it to a bigger value (still lower than the "very
big value" also used), the query plan does not change.

Here is a summary of Explain Query Plan (Rea is my virtual table; there are
4 joins in this query; sqlite 3.21.0).

The Query with Outer condition "where StatCount  = .."
SCAN TABLE
SEARCH TABLE (2 times)
EXECUTE CORRELATED SCALAR SUBQUERY 1
SCAN TABLE Rea VIRTUAL TABLE INDEX 2:
SEARCH TABLE (2 times)
EXECUTE CORRELATED SCALAR SUBQUERY 2
SCAN TABLE Rea VIRTUAL TABLE INDEX 2:

The same query when I just removed the outer "where StatCount  = .."
SCAN TABLE...
SEARCH TABLE (4 times)
EXECUTE CORRELATED SCALAR SUBQUERY 1
SCAN TABLE Rea VIRTUAL TABLE INDEX 2:


Can I manually affect the plan for this query, perhaps by further tweaking
the virtual table costs?

Thanks


Re: [sqlite] finding the number of records until a value is different

2017-12-29 Thread Max Vlasov
I think it's possible with CTE.

Recently I wondered whether it would be possible to implement an operation
that might be called an "accumulated group by". It's when you enumerate the
rows and, based on the values of the previous row and the current row, you
assign some new "group" value that can be used in a following "group by"
query. My use case was a "words" table where every word has a length, and
the task is to "format" them as words are formatted when printing text on a
page or on screen: the new "group" here is a printed row, incremented when
the current printed line can no longer fit the following word.

I see your task is similar, even a little simpler, since we don't
accumulate here; we just compare previous and next values.

CTE worked for me, but the big problem is that there is much redundancy in
the text of the query since, contrary to general select queries, it's much
harder to do reusable aliasing in a CTE; you will see this in the final
query of this post.

The basic template for any filtering for an "accumulated group by":

given the table
  CREATE TABLE [testdata] ([id] integer primary key)

the following query outputs the table as it is, but the second select inside
now has the luxury of comparing previous and next values (ordered by the
primary key).

with recursive
  filter(curid) as
   (
  select (select min(id) from testdata)
  UNION ALL
  select (select id from testdata where id > curid order by id limit 1)
as nextid from filter where nextid not null
)
select * from filter

Back to your case

if the table is
  CREATE TABLE [testdata] ([id] integer primary key, [bc] integer, [temp]
integer)

the following gigantic query should output the counts for every consecutive
groups ("grp" here is a temporal column used exclusively for the final
group by I was talking about previously).

with recursive
  filter(curid, bc, temp, grp) as
   (
  select (select min(id) from testdata), (select bc from testdata where
id=(select min(id) from testdata)), (select temp from testdata where
id=(select min(id) from testdata)), 1
  UNION ALL
  select (select id from testdata where id > curid order by id limit 1)
as nextid, (select bc from testdata where id > curid order by id limit 1),
(select temp from testdata where id > curid order by id limit 1),
case when (select bc from testdata where id > curid order by id limit 1) =
bc then grp else grp + 1 end
from filter
  where nextid not null
)
select bc, count(*) from filter group by grp


I'd be glad to reduce the expressions here to some more readable
constructs, but I suspect it's impossible with the current SQL syntax.
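The same recursion can be condensed by joining the recursive table with testdata instead of repeating the sub-selects, which also makes it easy to test. With the sample data from the question, the last group of bc=35 indeed counts 5 rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE testdata (id INTEGER PRIMARY KEY, bc INTEGER)")
bcs = [35, 35, 35, 20, 12, 12, 16, 35, 35, 35, 35, 35]
con.executemany("INSERT INTO testdata(bc) VALUES (?)", [(b,) for b in bcs])

# Walk the rows in id order; bump grp whenever bc changes, then group by
# grp to count every run of consecutive equal bc values.
rows = con.execute("""
WITH RECURSIVE walk(curid, bc, grp) AS (
    SELECT id, bc, 1 FROM testdata
    WHERE id = (SELECT min(id) FROM testdata)
  UNION ALL
    SELECT t.id, t.bc,
           CASE WHEN t.bc = w.bc THEN w.grp ELSE w.grp + 1 END
    FROM walk w
    JOIN testdata t
      ON t.id = (SELECT min(id) FROM testdata WHERE id > w.curid)
)
SELECT bc, count(*) FROM walk GROUP BY grp ORDER BY grp
""").fetchall()
assert rows == [(35, 3), (20, 1), (12, 2), (16, 1), (35, 5)]
```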


Max


On Thu, Dec 7, 2017 at 9:46 PM,  wrote:

> Hi all,
>
> I have a DB i.e. like this:
>
> table values
> bc  temp
> 35  123
> 35  124
> 35  123
> 20  123
> 12  123
> 12  123
> 16  123
> 35  123
> 35  123
> 35  123
> 35  123
> 35  123
>
> The value in temp (or all the other columns) is not of interest.
>
> Now I a looking for the basic concept how to count the number of rows where
> bc=bc_of_last_row after the last different bc.
>
> SELECT COUNT(bc) FROM values WHERE bc=35;
>
> gives me the number of all rows with bc=35, so here 8.
> But I am looking for 5, the number of rows after the last change of bc.
>
> Regards Matth
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] All versions compatible "very big" estimatedCost (Virtual tables)

2017-04-04 Thread Max Vlasov
On Tue, Mar 28, 2017 at 11:26 AM, Max Vlasov <max.vla...@gmail.com> wrote:

>
>
> So, what is the maximum reasonable value of estimatedCost that will not
> turn sqlite into possible overflow errors while telling at the same time
> that I consider some variant very, very expensive? Or maybe changing cheap
> from 1 to 0 will do the trick?
>
>
Hi again, replying to myself since I noticed a more straightforward case
illustrating the estimatedCost peculiarities.

Sqlite 3.17.0

My comma-list virtual table implementation (vtcommalist) reported the table
structure as
  CREATE TABLE [xxx] ([CommaList] TEXT HIDDEN, [Value] TEXT)
Basically it is only functional when CommaList is provided, so that it can
produce Value based on the incoming CommaList. The column is HIDDEN to
allow the table-valued function syntax.

Database:
The virtual table
  create virtual table [cmlist] Using VtCommaList
a simple test table with lists data
  create table  [lists] ([list] TEXT)
populated with 1 row
  insert into lists (list) values ('1, 2, 3, 4')

The query in question:
  Select distinct trim(value) From lists, cmlist(list)

Sqlite asks for index evaluation twice:
1. Suggesting a constraint for the CommaList field only; my code reported
"cheap" (value 1)
2. Suggesting no constraint at all; my code reported "expensive" (value
10)

But Sqlite still went for a full scan in this case (the second choice), so I
had to report an error, leading to "SQL logic error".
Only when I increased "expensive" to 10*10 = 100 did Sqlite
accept my cheap/expensive ratio as worth considering, with the following
xFilter call carrying the CommaList field constraint.


Re: [sqlite] All versions compatible "very big" estimatedCost (Virtual tables)

2017-03-28 Thread Max Vlasov
On Tue, Mar 28, 2017 at 12:51 PM, Dominique Devienne 
wrote:

> On Tue, Mar 28, 2017 at 11:00 AM, Hick Gunter  wrote:
>
> >
> > The "estimated cost" is described as "how many disk IO operations are
> > expected". Version higher than 3.8.2 allow setting an "estimatedRows"
> > (default: 25) and versions above 3.9.0 allow setting a flag that
> indicates
> > that 0 or 1 rows will be returned (i.e. the constraints form a "unique
> > index").
> >
>
> Thanks for the RTFM. That helps. Looks like I have about 3 years worth of
> catch-up to do on sqlite3_index_info.
>
>

Thanks, Dominique, Hick

I looked at the additional fields of sqlite3_index_info, and they're
probably of no use to me; at least they don't provide a way to enforce my
particular index. As for my problem queries, they usually involve "order
by" or "group by"; that's when the planner (despite the obvious bias from
me toward the particular index) decides to provide only one of my two
required fields.

Also interesting: what I called the input/output approach, sqlite
implemented under the name table-valued functions (probably starting with 3.9.0) (
https://sqlite.org/vtab.html#tabfunc2). To make my tables compatible with
this syntax, I just had to append HIDDEN to the "input" fields of the table
definition. But it seems that changing to this definition doesn't affect the
planner.


[sqlite] All versions compatible "very big" estimatedCost (Virtual tables)

2017-03-28 Thread Max Vlasov
Hi,

I sometimes use virtual tables to implement some kind of one-to-many
output. One example mentioned previously was the comma-list virtual
table, where a field containing comma-separated values might be used to
output rows of values from that list. Another example: performing a
regexp-like query against a text and outputting the result columns. But
this involves assuming that some columns of the virtual table are required
"input" parameters while the others are "output". For example, for the
comma list, the required parameter is the list and the output is the
extracted values. The consequence of this trick is that the virtual table
is not a fully functional one: you can't select * from it; it's useful only
with a where or "join ... on" clause containing the required "input"
parameters.

I usually encouraged sqlite to provide all required input parameters by
reporting a "cheap" value of estimatedCost when I recognized my "input"
parameters in the xBestIndex call, and an "expensive" value for any
other case. This usually worked for simple and complex cases until
3.8.0.0. That version introduced the Next-Generation Query Planner, and
I noticed that in some complex cases my cheap/expensive recommendations
were ignored: even when I saw in the debugger that the cheap value was
provided for the right index and the expensive one for all other cases, the
following xFilter call did not provide all the required values, probably
deciding it knew better :). Before this I used a cheap value of 1 and an
expensive value of 1.
Testing (3.17.0) revealed that in some cases multiplying expensive by x100
helped, but other cases required increasing this value even more.

So, what is the maximum reasonable value of estimatedCost that will not
turn sqlite into possible overflow errors while telling at the same time
that I consider some variant very, very expensive? Or maybe changing cheap
from 1 to 0 will do the trick?

Thanks

Max


Re: [sqlite] trimming with tab and other special characters

2016-11-29 Thread Max Vlasov
On Tue, Nov 29, 2016 at 4:19 PM, Igor Tandetnik  wrote:

That's the exact opposite of your interpretation. For backslash escapes,
> you need to rely on "facilities of your programming language". If you
> cannot, and must use raw SQL queries, there are still ways to represent
> arbitrary characters, but backslash escapes is not one of them.
>

Yes, my fault.


Re: [sqlite] trimming with tab and other special characters

2016-11-29 Thread Max Vlasov
Reasonable enough,

I wonder why the OP and other authors of the discussion

https://www.mail-archive.com/sqlite-users@mailinglists.sqlite.org/msg49355.html
were so sure about backslash escaping support, even Igor Tandetnik :)
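The behavior is easy to demonstrate (a Python sketch with the stdlib sqlite3 module; char(9) is a real tab, while '\t' is just two ordinary characters to SQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# char(9) is a real tab character, so trim() strips it:
a = con.execute("SELECT trim(char(9) || 'hello' || char(9), char(9))").fetchone()[0]

# In SQL, '\t' is just the two characters backslash and 't', so trim()
# strips backslashes and letters 't' from the ends:
b = con.execute(r"SELECT trim('\t\thello\t\\', '\t')").fetchone()[0]

# ...and a real tab survives trim(..., '\t') untouched:
c = con.execute(r"SELECT trim(char(9) || 'hello', '\t')").fetchone()[0]

print([a, b, c])
```

Both a and b come out as "hello", while c keeps its leading tab.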

On Tue, Nov 29, 2016 at 11:39 AM, Clemens Ladisch <clem...@ladisch.de>
wrote:

> Max Vlasov wrote:
> > trim(col, char(9))
> >   works, while
> > trim(col,'\t')
> >   does not.
>
>  SELECT trim('ttthello\tt\\\', '\t');
>  hello
>
> Works as designed.
>
> SQL does not use backslash escaping.
> Use char(9) or an actual tab character ('   ').
>
>
> Regards,
> Clemens


[sqlite] trimming with tab and other special characters

2016-11-29 Thread Max Vlasov
Hi,

The search in the mailing list archives about the trim function suggests
possible escaping support for the second parameter of the function, but in
my case (sqlite 3.15.1)

trim(col, char(9))
  works, while
trim(col,'\t')
  does not.

Can someone clarify on that?

Thanks

Max


Re: [sqlite] import-prefixed symbols in sqlite3.obj file

2016-11-10 Thread Max Vlasov
On Thu, Nov 10, 2016 at 1:51 PM, Dominique Devienne <ddevie...@gmail.com> wrote:
> On Thu, Nov 10, 2016 at 11:38 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>
>> __imp_EnterCriticalSection
>
>
> Is this post [1] related to your issue?
> Maybe you're not using the 64-bit SDK too? --DD

Thanks for mentioning it. This topic, like many other discussions on the
web, is C++ linker related, so mostly it's about setting the correct
path/library information for a linker that knows what the different
prefixes mean. I suppose that in my case the Delphi linker understands only
non-prefixed symbol entries. If I had merely switched to the wrong
SDK, I'd get many additional unresolved external entries, but in
my case the other Windows symbols used by sqlite are correctly resolved
when linking.


[sqlite] import-prefixed symbols in sqlite3.obj file

2016-11-10 Thread Max Vlasov
Hi,

Trying to link the sqlite object file generated by the Visual C++
command-line compiler (cl.exe sqlite3.c /c) with a Delphi 64-bit
executable, I found that some Windows API functions used by sqlite carry
an "__imp_" prefix (7 in total). This happens because recent Delphi 64-bit
compilers accept COFF format files but probably lack some assumptions
about their semantics, so they don't automatically interpret such symbols
as imported function names.

The examples of imp-prefixed functions are
  __imp_EnterCriticalSection
  __imp_InitializeCriticalSection

All the other imported Windows API functions are linked without problems,
and they don't have a prefix in the symbol table of the obj file.

I tried to compare the first appearances of normal and imp-prefixed
functions in sqlite3.c. The normal ones are usually part of the
win_syscall structure, while the imp-prefixed ones appear randomly inside
win*** functions, for example EnterCriticalSection in winMutexEnter.

I will probably fix this by making special import entries containing
this prefix, but I just wonder why the Visual C++ compiler interprets some
Windows symbols as strict dll-imported entries and some as general
external symbols.

Thanks,

Max


Re: [sqlite] Best way to wipe out data of a closed database

2016-10-24 Thread Max Vlasov
On Mon, Oct 24, 2016 at 4:28 PM, Wade, William  wrote:
> It sounds like you've got a way forward on leaks via the malloc() system 
> within the process space.
>
> 1) The region of the C process stack that was reached by some deep call stack.
> 2) Processor registers.
> 3) Process pages that were copied to disk by the OS (this could be a problem 
> even if you otherwise have a good handle on clearing malloc() blocks).
>

Good points, thanks, especially (3), as the most likely scenario to
contain all kinds of data and lengths. In the case of predictable
memory requirements this one can probably be handled by providing a
zero-malloc implementation with a physically locked region of memory
(on Windows, the VirtualLock API).


Re: [sqlite] Best way to wipe out data of a closed database

2016-10-24 Thread Max Vlasov
On Mon, Oct 24, 2016 at 1:36 PM, Richard Hipp  wrote:
>
> Memsys5 is also faster than your global system memory allocator
> (before the extra overhead of zeroing, at least).  But on the other
> hand, you have to know the maximum amount of memory SQLite will want
> at the very beginning, and that memory will be used only by SQLite and
> not other parts of your application, so memory utilization is not as
> efficient.
>

Thanks, I understand the risks and benefits, so probably it will be
either zero-malloc allocator or my own allocator replacement.


Re: [sqlite] Best way to wipe out data of a closed database

2016-10-24 Thread Max Vlasov
Simon, thanks.
I'd never heard of secure_delete; interesting, but probably of no use in
the case of a VFS layer that leaves only encrypted data on disk.
As for the zero-malloc option, it looks promising.

On Mon, Oct 24, 2016 at 1:34 PM, Simon Slavin <slav...@bigfraud.org> wrote:
>
> On 24 Oct 2016, at 9:58am, Max Vlasov <max.vla...@gmail.com> wrote:
>
>> in an application that implements encryption/decryption with VFS, what
>> is the best way to ensure that the memory of the application doesn't
>> contain decrypted data after the database is closed.
>
> We can't answer about memory that your own application handles, of course.
>
> To ensure zeroing out of memory I suggest you use the zero-malloc option as 
> provided by SQLite's memory allocator.  For more details on them see sections 
> 3.1.4 and 3.1.5 of
>
> <https://www.sqlite.org/malloc.html>
>
> It's also worth noting here that SQLite has the following PRAGMA:
>
> PRAGMA schema.secure_delete = boolean
>
> which zeros space in files.  However I don't remember this working by zeroing 
> out the memory copy of the file then writing that block to disk.
>
> Simon.


[sqlite] Best way to wipe out data of a closed database

2016-10-24 Thread Max Vlasov
Hi,

in an application that implements encryption/decryption with a VFS, what
is the best way to ensure that the memory of the application doesn't
contain decrypted data after the database is closed? That way no other
application could retrieve sensitive information by reading this
process's memory. Not only the database as a whole, but also fragments of
database sectors anywhere in the process memory space.

One possible trick is to add additional zeroing-out to the
global free handler, but this can probably introduce performance
penalties.
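To illustrate the zeroing idea itself (this is not SQLite's allocator API; wipe is a hypothetical helper, sketched here with Python's ctypes; a real hookup would replace the allocator's free routine in C):

```python
import ctypes

def wipe(buf):
    """Zero a ctypes buffer in place: the same job a zeroing free
    handler would do for each heap block just before releasing it."""
    ctypes.memset(buf, 0, ctypes.sizeof(buf))

# Simulate a buffer that briefly held decrypted database content.
secret = ctypes.create_string_buffer(b"decrypted page data")
wipe(secret)  # after this, no trace of the plaintext remains in the buffer
print(all(byte == 0 for byte in secret.raw))
```

A C free handler would additionally need to know each block's size, e.g. by storing it in a header in front of the block.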

Is there any other way to do this?

Thanks,

Max


[sqlite] Changes to VFS starting 3.8.3

2016-03-11 Thread Max Vlasov
On Thu, Mar 10, 2016 at 5:26 PM, Richard Hipp  wrote:

> Perhaps your in-memory VFS was relying on some unspecified behavior
> that changed? 


Some tests finally led me to the source of my problems. When I originally
implemented the handlers of the VFS interface, I made xFileControl return
SQLITE_ERROR instead of SQLITE_NOTFOUND. For whatever reason, sqlite
versions below 3.8.3 let me get away with this in some usage contexts.
Today when I tested, I saw that querying PRAGMA encoding in 3.7.11 also
gave an error even before any write-related queries. So in my initial case,
probably some minor logic change in 3.8.3 led to a critical xFileControl
request, while 3.8.2 was ok with the wrong implementation.


[sqlite] Changes to VFS starting 3.8.3

2016-03-10 Thread Max Vlasov
Thanks,

I suspect there's indeed some special behavior that isn't obvious at the
moment. I'll try to gather some additional information if possible, or
pin down this specific behavior.





On Thu, Mar 10, 2016 at 5:26 PM, Richard Hipp  wrote:

> On 3/10/16, Max Vlasov  wrote:
> > I have a compatibility problem with my vfs implementation of memory
> > databases.
> >
> > As I see from the version history page that no specific vfs-related
> changes
> > for 3.8.3 were reported, only a major change that can affect structure
> and
> > vfs is initial common table expressions implementation.
> >
> > What are the changes that might trigger the change?
>
> Dunno.  The changes to the unix VFS were minimal
> (
> https://www.sqlite.org/src/fdiff?v1=f076587029285554=f3ed0e406cbf9c82=1
> )
> and likewise the windows VFS
> (
> https://www.sqlite.org/src/fdiff?v1=4323dd0bac4f7a70=1b21af72c5fa6f9e=1
> ).
> Perhaps your in-memory VFS was relying on some unspecified behavior
> that changed?  Without access to your source code, it is difficult to
> say.
>
> --
> D. Richard Hipp
> drh at sqlite.org


[sqlite] Changes to VFS starting 3.8.3

2016-03-10 Thread Max Vlasov
Hi,

I have a compatibility problem with my vfs implementation of memory
databases. I once implemented it successfully, probably with a version
3.6.something. Today I tried to create a new database using the same code
with the latest (3.11.1) version (the procedure applies when no prior db
data exists, i.e. an absolutely new file is being worked with) and it
failed on the first write-related query, something like "create table if
not exists". After that I decided to try different binary versions of
sqlite and narrowed it down to the difference between 3.8.2 and 3.8.3. So,
with 3.8.2 everything is ok: the database created has the correct structure
(checked on the memory block saved as a file) and no error is produced
during creation. With 3.8.3 the first attempt to perform the same query on
a new file produces "SQL logic error or missing database".

As far as I can see from the version history page, no specific vfs-related
changes for 3.8.3 were reported; the only major change that could affect
structure and vfs is the initial common table expressions implementation.

What are the changes that might trigger the change?


Thanks,

Max


Re: [sqlite] Artificially slow VACUUM by injecting a sleep() somewhere?

2014-12-08 Thread Max Vlasov
On Mon, Dec 8, 2014 at 5:56 PM, Roger Binns  wrote:
> On 12/07/2014 04:43 PM, David Barrett wrote:
>> so I'm curious if you can think of a way using the API (or any
>> other way) to essentially "nice" the process by inserting a short
>> "sleep" into whatever loop runs inside the VACUUM command.
>
> Using OS provided functionality will be the most reliable.  Other than
> that, a simple way is to sleep in the progress callback, although that
> will make I/O lumpy.

I wonder whether I/O "sleeping" is possible in the first place. Correct
me, but what we usually call "sleeping" concerns the CPU, which already
sleeps during most I/O operations, waiting for the rotating media to
finish its slow tasks. As a consequence, the more fragmented the data on
disk, the less relative cpu time will be spent reading and writing it. In
this case the strategy might be to measure cpu consumption for consecutive
blocks of data, and if it's 100% (or another heuristically adequate value)
then never sleep (this probably means either cached data or non-fragmented
data on a fast disk). But when the cpu consumption drops significantly
(much time spent waiting for I/O), the sleeping indeed might be needed.
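The heuristic described above could be sketched like this (Python; the function name and thresholds are made up for illustration):

```python
import time

def run_and_maybe_sleep(work, busy_threshold=0.9, pause=0.01):
    """Run one chunk of work; sleep afterwards only if it looked I/O-bound.

    If CPU time accounts for almost all of the wall time, the chunk was
    CPU- or cache-bound and sleeping gains nothing; if CPU time is a small
    fraction, the chunk waited on I/O and a pause yields the disk.
    """
    w0, c0 = time.perf_counter(), time.process_time()
    work()
    busy = (time.process_time() - c0) / max(time.perf_counter() - w0, 1e-9)
    if busy < busy_threshold:
        time.sleep(pause)  # looked I/O-bound: back off for a moment
    return busy

# A pure wait (standing in for a disk read) registers almost no CPU time:
ratio = run_and_maybe_sleep(lambda: time.sleep(0.05))
print(ratio)
```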

Max


Re: [sqlite] Search for text in all tables

2014-12-05 Thread Max Vlasov
>> I once implemented a virtual table "allvalues" that outputs all
>> database values with (hope self-explaining) fields
>>
>> TableName, TableRowId, FieldName, Value
>
> Could you expand on how you coped with the underlying database
> changing, and how you mapped virtual table rowids to the actual
> database records?
>

This particular implementation was intended to be used as a
select-only wrapper, so it just iterates through every table in
sqlite_master and every row of each table. I didn't support update and
insert. The rowid of this virtual table is a compound bit mask starting
with table bits (enough to fit the number of tables in this db), then
field bits (the maximum possible number of fields in a table); the rest is
the rowid of the particular table. So in theory this method won't work for
tables containing large 64-bit ids, where there are not enough bits left
for the table number and field number.
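The compound-rowid packing might look roughly like this (Python sketch; the bit widths are invented for illustration):

```python
TABLE_BITS, FIELD_BITS = 10, 10          # assumed budget: up to 1024 tables/fields
ROW_BITS = 63 - TABLE_BITS - FIELD_BITS  # what's left for the real rowid

def pack(table_no, field_no, rowid):
    # Fails exactly in the case described above: a real rowid too large
    # to fit once table/field bits are carved out of the 63-bit space.
    assert rowid < (1 << ROW_BITS), "real rowid too large for this scheme"
    return (table_no << (FIELD_BITS + ROW_BITS)) | (field_no << ROW_BITS) | rowid

def unpack(vrowid):
    return (vrowid >> (FIELD_BITS + ROW_BITS),
            (vrowid >> ROW_BITS) & ((1 << FIELD_BITS) - 1),
            vrowid & ((1 << ROW_BITS) - 1))

print(unpack(pack(3, 7, 123456)))
```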

Max


Re: [sqlite] Search for text in all tables

2014-12-05 Thread Max Vlasov
On Thu, Dec 4, 2014 at 10:44 AM, Baruch Burstein  wrote:
>
> Is it possible to somehow search for/replace a string in all columns of all
> tables?
>

Not particularly an answer to your question, but rather a method you
or others might use.
I once implemented a virtual table "allvalues" that outputs all
database values with the (hopefully self-explanatory) fields

  TableName, TableRowId, FieldName, Value

which also allows exploring unknown complex databases. Take for example
places.sqlite from Mozilla Firefox. If you want to find where it
saves your visited sites, you can use the query
  SELECT Distinct TableName, FieldName FROM allvalues where Value like
'%http://%'
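A plain-Python stand-in for such an "allvalues" table (a sketch using the stdlib sqlite3 module; a real implementation would be a virtual table in C):

```python
import sqlite3

def all_values(con):
    """Yield (TableName, TableRowId, FieldName, Value) for every value in
    every ordinary table, mimicking the 'allvalues' virtual table idea."""
    tables = [r[0] for r in con.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for t in tables:
        cols = [r[1] for r in con.execute(f"PRAGMA table_info({t})")]
        for row in con.execute(f"SELECT rowid, * FROM {t}"):
            for col, val in zip(cols, row[1:]):
                yield t, row[0], col, val

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE urls (u TEXT)")
con.execute("INSERT INTO urls VALUES ('http://example.com')")
hits = [(t, f) for (t, r, f, v) in all_values(con)
        if isinstance(v, str) and 'http://' in v]
print(hits)
```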

Sqlite's virtual tables are a great tool, since with a little effort on
the developer's side the newly created entity starts working as a
first-class citizen of the sqlite engine. Compare this to a quick hack
that dumps all raw data from a specific database to a specific medium.

Max


Re: [sqlite] Is msvcrt coupling now unavoidable (3.8.7)?

2014-10-22 Thread Max Vlasov
On Wed, Oct 22, 2014 at 4:50 PM, dave <d...@ziggurat29.com> wrote:

>
> > -Original Message-
> > From: sqlite-users-boun...@sqlite.org
> > [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Max Vlasov
> > Sent: Wednesday, October 22, 2014 5:25 AM
> > To: General Discussion of SQLite Database
> > Subject: [sqlite] Is msvcrt coupling now unavoidable (3.8.7)?
> ...
> > my static linking with Delphi for 3.7.8 version now complains about
> > _beginthreadex_/_endthreadex_.
> ...
> ...
>
> It's a good question; and it doesn't look like there is a great way of
> turning off that feature other than SQLITE_THREADSAFE=0, which you might
> not
> want for other reasons.
>
>
Thanks, dave, Clemens,

I also did some researching/fixing.

The sources have some contradictory information in the comments about the
current state; I'm not sure whether I should mention it here.
Overall, if one doesn't want msvcrt coupling, then
SQLITE_MAX_WORKER_THREADS=0 should be among the options. In this case no
_beginthreadex/_endthreadex
linking will be required.

Wrapping the calls as mentioned by Clemens Ladisch works. I wrote the
wrappers independently, but they look char-by-char the same :). When I did
PRAGMA threads=4 and set breakpoints for a "SELECT ... ORDER BY" query,
they were visited and no visible errors appeared.

Max


[sqlite] Is msvcrt coupling now unavoidable (3.8.7)?

2014-10-22 Thread Max Vlasov
Hi,

my static linking with Delphi for the 3.8.7 version now complains about
_beginthreadex/_endthreadex.
A quick search shows that everywhere there's a recommendation to use these
functions instead of the CreateThread API if one plans to use the Visual C
run-time (msvcrt).

All my previous linking with sqlite was based on a hand-made equivalent
function for every static link to an external (non-dll) function, so my
programs never used msvcrt. Now I probably either have no choice but to
include msvcrt linking, or I can make my own "de-wrapper": a function that
accepts the _beginthreadex arguments and calls CreateThread. Or could
sqlite introduce some define that keeps the VC run-time coupled by
default, but, when disabled, uses the CreateThread API?

Max


Re: [sqlite] New DLLs and sources. Was: SQLite version 3.8.5 beta

2014-06-03 Thread Max Vlasov
On Tue, May 27, 2014 at 10:49 PM, Richard Hipp  wrote:
> On Tue, May 27, 2014 at 10:50 AM, Richard Hipp  wrote:
>>
>
> This time I build the 32-bit DLL using mingw instead of MSVC.  (MSVC was
> still used for the 64-bit DLL.)  So perhaps it will work correctly on
> WinXP.  Please let me know one way or the other.  Thanks.
>

I don't know what the difference is between the "stock" 3.8.4.3 from the
site and the newly compiled one, but synthetic tests with memory databases
(no I/O involved) show that the new one is about 20% faster (Intel
Core i5-2310). If they're both MinGW-built then probably some switch
differences between these builds (if there are any) give such a
difference.

Here is the schema and the timings. The first one is for
sqlite-dll-win32-x86-3080403.zip, the second one is for
sqlite-dll-win32-x86-201406021126.zip

The table in memory
  CREATE TABLE [TestTable] ([ID] Integer primary key, [IntValue]
INTEGER, [FloatValue] FLOAT, [StrValue] TEXT)

was populated with 1000 pseudo-random rows using the same seed for
every test. The following queries all use a cross join and differ in the
types of data used (int, float, string)


Select Count(*) FROM (SELECT * FROM TestTable T1 Join TestTable T2 )
62 msec
47 msec

Select Count(*) FROM (SELECT *,t1.FloatValue / t2.FloatValue as
divvalue FROM TestTable T1 Join TestTable T2 where abs(divvalue -
round(divvalue)) > 0.499)
453 msec
359 msec

Select Count(*) FROM (SELECT * FROM TestTable T1 Join TestTable T2
where (T1.IntValue * T2.IntValue) % 1789 = 56)
203 msec
187 msec

Select Count(*) FROM (SELECT * FROM TestTable T1 Join TestTable T2
where substr(t1.strvalue, 1 + t1.intvalue % 20, 1 + t2.intvalue % 20)
= t2.strvalue)
499 msec
405 msec

Select Count(*) FROM (SELECT *,t1.FloatValue / t2.FloatValue as
divvalue FROM TestTable T1 Join TestTable T2 where abs(divvalue -
round(divvalue)) > 0.499 or (T1.IntValue * T2.IntValue) % 1789 = 56 or
substr(t1.strvalue, 1 + t1.intvalue % 20, 1 + t2.intvalue % 20) =
t2.strvalue)
1124 msec
952 msec
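The benchmark setup is easy to reproduce in miniature (a Python sketch using the stdlib sqlite3 module; the row count is reduced so it finishes quickly, so absolute timings won't match the ones above):

```python
import random
import sqlite3
import time

# One in-memory table, seeded pseudo-random rows, then a cross join counted.
random.seed(42)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TestTable (ID INTEGER PRIMARY KEY,"
            " IntValue INTEGER, FloatValue FLOAT, StrValue TEXT)")
n = 200
con.executemany(
    "INSERT INTO TestTable (IntValue, FloatValue, StrValue) VALUES (?, ?, ?)",
    [(random.randrange(10**6), random.random(),
      ''.join(random.choice('abcdef') for _ in range(20))) for _ in range(n)])

t0 = time.perf_counter()
count = con.execute("SELECT Count(*) FROM (SELECT * FROM TestTable T1"
                    " JOIN TestTable T2)").fetchone()[0]
print(count, round((time.perf_counter() - t0) * 1000), "msec")
```

The count is simply n squared, since the join has no constraint.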


Max


Re: [sqlite] Latest Sqlite grammar as machine understandable file

2014-05-10 Thread Max Vlasov
On Fri, Feb 21, 2014 at 5:24 PM, Richard Hipp <d...@sqlite.org> wrote:
> On Fri, Feb 21, 2014 at 7:29 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>
>> Is there a machine-readable (BNF or other) grammar as equivalent to
>> the current syntax diagrams?
>
> An updated version of all-bnf.html has now been checked in at
> http://www.sqlite.org/docsrc/doc/trunk/art/syntax/all-bnf.html
>

Thanks again for the updated grammar. I don't know whether it's somehow
used for production logic (probably it's just kept manually synced), but I
wanted to mention a couple of things I noticed.

in the rule
  join-clause::= [ 
  ]
the construct in [] should probably be made repeatable with *. At least
without this addition it prevented multiple joins from being parsed.

There are no rules described for initial-select or recursive-select, so
I guessed them to be synonyms for select-stmt.

Max


Re: [sqlite] WITHOUT ROWID option

2014-05-09 Thread Max Vlasov
On Wed, May 7, 2014 at 6:31 PM, Richard Hipp  wrote:
> On Wed, May 7, 2014 at 9:00 AM, Marco Bambini  wrote:
>
>> What is the best way to know if a table has been created with the WITHOUT
>> ROWID option?
>>
>
>
> (1) You could send "SELECT rowid FROM table" 
>
> (2) Run both "PRAGMA index_list(table)" 
>
>

Is there a way for a virtual table implementation to report that
there's no rowid support before the first xRowId call takes place?

Max


[sqlite] CTE in views for older versions

2014-04-20 Thread Max Vlasov
Hi,
noticed that an attempt to open a database containing a view

Create view ... with recursive ...

... with older (non-CTE) versions of sqlite failed. The call to
sqlite3_open_v2 was ok, but any access, for example to PRAGMA encoding,
led to a "malformed database schema" error. Although it came as no big
surprise, one might expect the delayed handling behavior used for
virtual tables, so the incompatibility isn't faced until the view is
referenced in a query.

So probably either
  https://www.sqlite.org/lang_createview.html
or
  https://sqlite.org/lang_with.html

might contain a little sentence about such incompatibility.

Thanks,

Max


Re: [sqlite] Why sqlite has no sqlite_bind_auto or similary named function

2014-04-18 Thread Max Vlasov
On Fri, Apr 18, 2014 at 8:08 PM, Andy Goth <andrew.m.g...@gmail.com> wrote:
> On 4/18/2014 12:29 AM, Max Vlasov wrote:
>>
>> So it seems that general queries allow automatic affinity
>> selection while the bind API does not have a corresponding function. I
>> know that I can analyze incoming data myself, but since general
>> queries probably use a similar function, making some kind of
>> sqlite_bind_auto should be no big deal. But probably this decision was
>> deliberate.
>
>
> This was done for C compatibility.  A C++ wrapper for SQLite can have an
> sqlite_bind_auto() function which is overloaded for a variety of types,
> but it would in reality be multiple functions that have the same name
> but otherwise different type signatures.

I meant something different: a function accepting a text value, with
sqlite doing the affinity conversion according to how the value "looks".
Actually it seems that all the problems I faced were related to the
type being typeless in the declaration. For integer-declared fields sqlite
correctly converts values to integer affinity even with the
sqlite3_bind_text function, so it makes no sense to force another kind
of detection if the type is declared. But if I stay with typeless
fields, I will definitely have problems with indexes on them,
because it seems the query planner relies on declared types and
doesn't care how many rows have a particular affinity. So it looks like
I should prescan some rows for type detection if I want automatic
conversion without knowing the type info in advance.

Max


[sqlite] Why sqlite has no sqlite_bind_auto or similary named function

2014-04-17 Thread Max Vlasov
Hi,

The problem was with my program that automatically converts XML data
into an sqlite table. It looks for an attribute and appends a column
if it does not exist, stating no particular type. All values were
appended with sqlite3_bind_text. Everything was fine, but an index
created after this on a ParentId field, which was in real life integer
or null, actually appeared text-based. Sure enough, typeof() of all
inserted fields appeared to be text.

On the other hand, for a typeless table
  CREATE TABLE t (value)

the queries
  insert into t (value) values (123)
  insert into t (value) values ('text')
  insert into t (value) values (34.45)

finally make
  select typeof(value) from t

return
  integer
  text
  real

So it seems that general queries allow automatic affinity
selection while the bind API does not have a corresponding function. I
know that I can analyze incoming data myself, but since general
queries probably use a similar function, making some kind of
sqlite_bind_auto should be no big deal. But probably this decision was
deliberate.
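The affinity behavior described above is easy to confirm (a Python sketch; the stdlib sqlite3 module binds a str the same way sqlite3_bind_text would):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (untyped, num INTEGER)")

# Binding the same text value into both columns: the typeless column keeps
# TEXT, while the INTEGER-declared column applies integer affinity.
con.execute("INSERT INTO t VALUES (?, ?)", ("123", "123"))
types = con.execute("SELECT typeof(untyped), typeof(num) FROM t").fetchone()
print(types)
```

This matches the observation in the thread: declared types rescue bound text, typeless columns don't.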

Max


Re: [sqlite] about the apparently arriving soon "threads"

2014-04-08 Thread Max Vlasov
On Tue, Apr 8, 2014 at 11:00 PM, big stone  wrote:
> Hi,
>
> I did experiment splitting my workload in 4 threads on my cpu i3-350m to
> see what are the scaling possibilities.
>
> Timing :
> 1 cpu = 28 seconds
> 2 cpu = 16 seconds
> 3 cpu = 15 seconds
> 4 cpu = 14 seconds
>

If the info at 
http://ark.intel.com/products/43529/Intel-Core-i3-350M-Processor-3M-Cache-2_26-GHz
is right, you have 2 cores, each having 2 threads. They're logically
"cores", but physically not so. My tests with various multi-threading
benchmarks, including parallel quicksort, showed that a similar i3
mobile processor rarely benefits beyond 2 threads; probably the cache
coherence penalty is the cause. The desktop Intel Core i5-2310, for
example, is a different beast (4 cores/4 threads): 3 threads were
almost always 3x faster, and 4 threads came with a little drop.

It all still depends on the application. I had stopped believing a
2-threaded Atom would show a 2x speedup in any of my tests, until one
graphical test finally made it. But still, if the number of threads is
bigger than the number of cores, then any further gain is probably a
legacy of the Hyper-Threading hardware Intel started its
multi-threading with.

Max


Re: [sqlite] comma-separated string data

2014-04-07 Thread Max Vlasov
On Sun, Apr 6, 2014 at 10:23 PM, Dominique Devienne  wrote:

>
> If the answer to either question above is true, then a specialized
> vtable would be both more convenient and faster, no?
>

Hmm... If the logical peculiarity of the vtable approach (where
where-constrained queries might return more than a full scan) is
acceptable to sqlite (mentioned in my other post), then the WHERE
expression might serve as parameters, so a hybrid might be
possible (also inspired by the recent discussion of creating user
functions on the fly). For example, a virtual table that accepts a
Select statement might look like

CREATE VIRTUAL TABLE vcommalist USING QueryVirtualizer('WITH RECURSIVE
 :commalist  ')

And the actual query using it might look like

SELECT * FROM vcommalist WHERE commalist='1,2,3,4,5'

This one serves more like a shortcut, but probably a broader version
is possible where the parameter to the virtual table is a printf-formatted
string, so one can dynamically customize things that general parameters
can't, e.g., table names, output column names etc.

Multiple parameters would be great, but with the current state of things
the implementation still needs some kind of workaround to ensure
correct results: it should always return a huge estimatedCost in
xBestIndex if the constraints that arrive don't contain at least one
required parameter (the WHERE clause lacks one), and a low one if all
parameters are provided. I think that sqlite might as well interpret
an estimatedCost equal to -1 as a ban on using this index.

Max


Re: [sqlite] Difference in virtual tables logic triggered between version 3.7.15.2 to 3.8.0

2014-04-05 Thread Max Vlasov
On Sat, Apr 5, 2014 at 11:48 PM, Max Vlasov <max.vla...@gmail.com> wrote:
>
> This works for an old version of sqlite (3.6.10), but today Dominique
> Devienne mentioned some doubt about this approach and I decided to
> test it with some data with a recent version of sqlite.  With 3.8.4.3
> the same join query produced two rows and Nulls for CommaList and
> Value
>


Actually I found a way for it to work as assumed. My xBestIndex call
now makes estimatedCost huge for the non-constrained guess and
small for the constrained one. Although it works, I'd be glad to know
whether there is a way to return "impossible" instead of "expensive".

I think I see why the query worked differently in different
sqlite versions. This trick by its nature interprets constrained data
(when WHERE is supplied) as a superset of non-constrained data (it
returns no data if there's no WHERE), and this definitely violates the
logic sqlite relies upon. So in a (normal) world where a
where-constrained result is always smaller than a full-scan result,
sqlite is free to choose a full scan instead of a filtered scan for a
virtual table (probably it does by default after 3.8.0 for this join
query if the provided estimatedCosts are equal).

Max


Re: [sqlite] comma-separated string data

2014-04-05 Thread Max Vlasov
On Fri, Apr 4, 2014 at 10:20 PM, peter korinis  wrote:
> A data column in a link table contains comma-separated string data, where
>
> How do you 'parse' a table entry like: "4,66,51,3009,2,678, ." to extract
> these values and use them in an SQL statement, perhaps a WHERE id='66'?
>
>

In similar cases I use my virtual table explained here:
http://www.mail-archive.com/sqlite-users@sqlite.org/msg63453.html
Actually the table works more like a "function", so only one instance is
required in the db to apply this trick.

Finally the db contains

CREATE VIRTUAL TABLE cmlist Using vtcommalist

and the query

SELECT value FROM cmlist WHERE commalist='45,56,78,125'

returns 4 rows

"45"
"56"
"78"
"125"

And this was a blank db with this virtual table only and the query
doesn't reference anything outside while still generating table data.
So the data for the virtual table is actually supplied at the time of
the query itself from WHERE clause.

This trick successfully works with joins and everything else.
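On SQLite builds with recursive CTE support (3.8.3+), the same splitting can be reproduced without a virtual table. A sketch via Python's stdlib sqlite3 — the query shape is illustrative, not the vtcommalist implementation:

```python
import sqlite3

# Split a comma list into rows with a recursive CTE -- the same effect
# the vtcommalist virtual table achieves through its WHERE clause.
SPLIT_SQL = """
WITH RECURSIVE split(value, rest) AS (
    SELECT NULL, :commalist || ','
    UNION ALL
    SELECT substr(rest, 1, instr(rest, ',') - 1),
           substr(rest, instr(rest, ',') + 1)
    FROM split WHERE rest <> ''
)
SELECT value FROM split WHERE value IS NOT NULL
"""

con = sqlite3.connect(":memory:")
rows = [r[0] for r in con.execute(SPLIT_SQL, {"commalist": "45,56,78,125"})]
print(rows)  # → ['45', '56', '78', '125']
```

Like the virtual table, the data is supplied at query time through a parameter rather than stored anywhere in the database.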

Max


Re: [sqlite] Goodram C50 SSD slow in SQlite benchmark

2014-03-23 Thread Max Vlasov
On Mon, Mar 24, 2014 at 4:49 AM, piotr maliński  wrote:
> I know it's bad. I'm trying to determine the cause of the difference, and
> if it's a "feature" of that SSD or a bug of some sort.

There was a very intensive discussion under a post labeled
"UPDATE/INSERTing 1-2k rows slower than expected". You can read it at
https://www.mail-archive.com/sqlite-users%40sqlite.org/msg58872.html .
There were also various tests I made during that discussion. As far
as I remember, the general observation was that the hardware
usually says "ok, I did this guaranteed-to-be-on-disk operation
you've asked for" while actually caching it somewhere inside. And
probably multiple USB controllers from a bunch of manufacturers are
to blame. SATA controllers on motherboards are usually less
diversified, so more likely to be accurate.

Also there's a setting in Windows for hard drives, "enable write
caching". If you find a similar setting in xubuntu, then probably
enabling it would make your SATA connection on par with your USB
connection. But it's just a hypothesis; it's harder to make useful
tests with a SATA connection due to physical limitations of
the interface.

Max


Re: [sqlite] Using a customized collate function to mimic integer array type.

2014-03-21 Thread Max Vlasov
On Fri, Mar 21, 2014 at 8:06 PM, Ben Peng  wrote:
>
> I guess I will have to take the longer route, namely define a customized
> comparison function and translate user input internally.
>

There's also a virtual table method, probably not so easy to wrap one's
head around, but it allows using comma- (or other-symbol-) separated
lists stored in fields, even in joins. See my reply about it
at https://www.mail-archive.com/sqlite-users@sqlite.org/msg63453.html.

Max


Re: [sqlite] which of these is faster?

2014-03-14 Thread Max Vlasov
On Fri, Mar 14, 2014 at 4:51 PM, Richard Hipp  wrote:

>>
> In the original problem, there was already an index on the term for which
> the min() was requested.
>.
> Whit your CTE-generated random integers, there is not an index on the
> values.  So "SELECT min(x) FROM..." does a linear search and "SELECT x FROM
> ... ORDER BY x LIMIT 1" does a sort.
>

I see, my fault, didn't notice the db was a concrete one


Re: [sqlite] which of these is faster?

2014-03-14 Thread Max Vlasov
On Thu, Mar 13, 2014 at 11:06 PM, Richard Hipp  wrote:

>
> Once you do that, you'll see that the opcode sequence is only slightly
> different between the two.  They should both run at about the same speed.
> I doubt you'll be able to measure the difference.
>
>

Actually a comparatively long (10,000,000 elements) CTE for random
integer generation shows a difference: 20 vs 38 seconds. I suppose the
pure min uses a linear search while the "order by" one uses a temporary
B-tree (EXPLAIN QUERY PLAN also hints at this) -- unless sqlite has some
detection of the "order by ... limit 1" pattern that redirects it to a
linear search.
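This is easy to check on any build with Python's stdlib sqlite3; the table and data below are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in (3, 1, 2)])

# Both forms must agree on the answer; the plans show how they get there.
min_result = con.execute("SELECT min(x) FROM t").fetchone()[0]
ord_result = con.execute("SELECT x FROM t ORDER BY x LIMIT 1").fetchone()[0]
for row in con.execute("EXPLAIN QUERY PLAN SELECT x FROM t ORDER BY x LIMIT 1"):
    print(row)  # typically reports a temp B-tree for the unindexed sort
print(min_result, ord_result)
```

Without an index on x, the two queries return the same value while taking different routes to it.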

Max


[sqlite] Observations about CTE optimization

2014-03-10 Thread Max Vlasov
Many CTE queries are just mini-algorithms with iteration where only the
last row is required. I wondered whether it's easy to get it
without the "order by ..." of the outer query (also mentioned in my reply
about the CTE sqrt). There's a solution, but the good news is that
one probably rarely needs such optimization, thanks to the effectiveness
of sqlite's in-memory B-tree sort. Anyway, the results are below.

A synthetic test of a query iterating until 1,000,000:

with recursive
  testval(depth, val) as
   (
  select 1, 0
  UNION ALL
  select depth + 1, val + 1 from testval where depth <= 1000000
   )
select val from testval order by depth desc limit 1

Time:
  4 sec. 20 ms
Explain query plan
"SCAN TABLE testval"
"COMPOUND SUBQUERIES 0 AND 0 (UNION ALL)"
"SCAN SUBQUERY 1"
"USE TEMP B-TREE FOR ORDER BY"


We add an endmark, and the where condition moves to the outer select, so
we get rid of "order by" completely:

with recursive
  testval(endmark, depth, val) as
   (
  select 0, 1, 0
  UNION ALL
  select (case when depth < 1000000 then 0 else 1 end), depth + 1,
val + 1 from testval where endmark=0
   )
select val from testval where endmark=1

Time:
  2 sec 900 ms.
Explain query plan:
"SCAN TABLE testval"
"COMPOUND SUBQUERIES 0 AND 0 (UNION ALL)"
"SCAN SUBQUERY 1"

For the modified CTE sqrt from the other post, the difference for
100,000 queries was 27 seconds vs 21 seconds. Not so much, but
sometimes it might make sense to do such optimization.
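A quick sanity check that the two formulations return the same value, runnable with Python's stdlib sqlite3 (a much smaller bound than the post's 1,000,000 is used so it finishes instantly):

```python
import sqlite3

N = 1000  # smaller than the post's 1,000,000, for a quick check

ORDER_BY_SQL = """
WITH RECURSIVE testval(depth, val) AS (
    SELECT 1, 0
    UNION ALL
    SELECT depth + 1, val + 1 FROM testval WHERE depth <= :n
)
SELECT val FROM testval ORDER BY depth DESC LIMIT 1
"""

ENDMARK_SQL = """
WITH RECURSIVE testval(endmark, depth, val) AS (
    SELECT 0, 1, 0
    UNION ALL
    SELECT (CASE WHEN depth < :n THEN 0 ELSE 1 END), depth + 1, val + 1
    FROM testval WHERE endmark = 0
)
SELECT val FROM testval WHERE endmark = 1
"""

con = sqlite3.connect(":memory:")
a = con.execute(ORDER_BY_SQL, {"n": N}).fetchone()[0]
b = con.execute(ENDMARK_SQL, {"n": N}).fetchone()[0]
print(a, b)  # → 1000 1000
```

The endmark variant stops recursing on the same step but lets the outer query pick the final row by a simple filter instead of a sort.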

Max


Re: [sqlite] Suggestion to add "locate" as a broader version of "instr"

2014-03-08 Thread Max Vlasov
On Sat, Mar 8, 2014 at 2:24 AM, big stone  wrote:
> Ooups !
>
> Thanks to the awesome  posts about "RPAD/LPAD", I understood that I could
> already create a "sqrt()" function for SQLite3  in interpreted python.
>


Yes, that discussion was inspiring :)

Looking at your task, I also played with a CTE version of sqrt.
Based on the "guessing" (Newton iteration) approach from one of the
answers at
  http://stackoverflow.com/questions/3581528/how-is-the-square-root-function-implemented
the following query finally worked.

/*
  :value=12345
*/
with recursive
  sqrt(depth, val, guess) as
   (
  select 1, :value, Cast(:value as Float)/2
  UNION ALL
  select depth + 1, val as newval, ((guess + val/guess)/2) as
newguess from sqrt where abs(newguess - guess) > 1e-308 and depth <
100
   )
select guess from sqrt order by depth desc limit 1

but I could not overcome some peculiarities of floating point, so depth <
100 here is for cases when the comparison fails to stop.
Also for CTE queries in general I wonder whether there is another,
faster way to get the last row of the query (in natural execution
order), so the order by depth can be replaced by something else. I
suspect the ordering here triggers temporary storage.

I tested this function as an "expression function" implemented based on
that thread, and the average speed is about 4000 sqrt
operations/second on a mobile Intel i3. Not so fast, but if one
desperately needs it, it would be ok.
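A runnable variant of the same Newton-iteration CTE via Python's stdlib sqlite3 — the stop tolerance is loosened to 1e-12, and the next-guess expression is repeated in the WHERE clause instead of referenced by alias, so it works on any build:

```python
import math
import sqlite3

SQRT_SQL = """
WITH RECURSIVE sqrt(depth, val, guess) AS (
    SELECT 1, :value, CAST(:value AS FLOAT) / 2
    UNION ALL
    SELECT depth + 1, val, (guess + val / guess) / 2
    FROM sqrt
    WHERE abs((guess + val / guess) / 2 - guess) > 1e-12 AND depth < 100
)
SELECT guess FROM sqrt ORDER BY depth DESC LIMIT 1
"""

con = sqlite3.connect(":memory:")
result = con.execute(SQRT_SQL, {"value": 12345}).fetchone()[0]
print(result, math.sqrt(12345))
```

Newton's method converges quadratically, so the depth < 100 cap is only a safeguard against floating-point stalls.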

Max


Re: [sqlite] RPAD/LPAD

2014-03-08 Thread Max Vlasov
On Sat, Mar 8, 2014 at 10:52 AM, Max Vlasov <max.vla...@gmail.com> wrote:
> On Fri, Mar 7, 2014 at 11:51 PM, Dominique Devienne <ddevie...@gmail.com> 
> wrote:
>>
>> basically register_function('rpad', 'x', 'y', 'printf(''%-*s'', y,
>> x)') would register a 2-arg function (register_function's argc-2)
>> named $argv[0], which executes the following statement
>>
>> with args($argv[1],  $argv[2], ... $argv[argc-2]) as (VALUES(?, ?))
>> select $argv[argc-1) from args;
>>
>
> Dominique, your variant is even better than using numbered parameters.
> If you use named ones supported by sqlite (:VVV) then sqlite will do
> the job of the textual replacement with the bind api itself
> ( 'printf(''%-*s'', :y, :x)' )
>
> The small problem in this case is that there are two ways (times) to
> check whether named parameter exists in the expression.

No longer a problem here :) sqlite3_bind_parameter_name is available at
the time of registration, so the prepared statement knows everything
about the number and the names of the parameters for full checking.

I have a working prototype; some things are left to do, but this
confirms that sqlite is fine with the contexts and everything.
Examples:

SELECT RegisterExpressionFunction('myfunc', '45');
Select Myfunc()
45

SELECT RegisterExpressionFunction('mysum', 'x', ':x + :x');
select mysum(45)
90

SELECT RegisterExpressionFunction('mysumalt', '', '', '?1 + ?2');
select mysumalt(45, 67)
112

SELECT RegisterExpressionFunction('strconcat', 's1', 's2', ':s1||:s2');
Select strconcat('foo ', 'bar')
foo bar
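The prototype above lives in a host application, but the idea can be sketched in a few lines of Python with the stdlib sqlite3 module — the helper name, argument handling, and error behavior are all illustrative, not the original code:

```python
import sqlite3

def register_expression_function(con, name, params, expr):
    # Wrap an SQL expression in "SELECT <expr>" and expose it as a
    # user-defined function whose arguments bind to the named parameters.
    sql = "SELECT " + expr
    def impl(*args):
        # Re-evaluating on the same connection is allowed; each call
        # creates its own cursor.
        return con.execute(sql, dict(zip(params, args))).fetchone()[0]
    con.create_function(name, len(params), impl)

con = sqlite3.connect(":memory:")
register_expression_function(con, "mysum", ["x"], ":x + :x")
register_expression_function(con, "strconcat", ["s1", "s2"], ":s1 || :s2")
doubled = con.execute("SELECT mysum(45)").fetchone()[0]
joined = con.execute("SELECT strconcat('foo ', 'bar')").fetchone()[0]
print(doubled, joined)  # → 90 foo bar
```

Since the expression is always wrapped in a bare SELECT without FROM, it can only yield one value, matching the "expression function" constraint discussed in this thread.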


Max


Re: [sqlite] RPAD/LPAD

2014-03-07 Thread Max Vlasov
On Fri, Mar 7, 2014 at 11:51 PM, Dominique Devienne  wrote:
>
> basically register_function('rpad', 'x', 'y', 'printf(''%-*s'', y,
> x)') would register a 2-arg function (register_function's argc-2)
> named $argv[0], which executes the following statement
>
> with args($argv[1],  $argv[2], ... $argv[argc-2]) as (VALUES(?, ?))
> select $argv[argc-1) from args;
>

Dominique, your variant is even better than using numbered parameters.
If you use the named ones supported by sqlite (:VVV), then sqlite will do
the job of the textual replacement with the bind API itself
( 'printf(''%-*s'', :y, :x)' )

The small problem in this case is that there are two times when one can
check whether a named parameter exists in the expression: either when
the registered function is executed (reporting an error about missing
parameters only then), or by making a test call with some test
parameters at the time of registration.

The latter reveals errors earlier, but it has no knowledge
about the expression, so it might fire strange run-time errors (division
by zero etc). If we omit execution and leave only binding, we might
miss the case when not all binding variables mentioned in the
expression are actually provided (example: ... 'somefunc', 'x', 'y', ':x +
:y + :z')

Max


Re: [sqlite] RPAD/LPAD

2014-03-07 Thread Max Vlasov
On Sat, Mar 8, 2014 at 2:16 AM, Clemens Ladisch  wrote:
> Eduardo Morras wrote:
>> So, if a webapp that uses SQLite doesn't check it's input, functions that 
>> renames SQLite internals can be injected
>>
>> SELECT register_simple_function('MAX', 1, 'DROP TABLE ?');
>
> Such a statement would not return a single column, so it wouldn't
> actually get executed.
>
> But it might be possible to execute something like "PRAGMA evil = on",
> so this function probably should be secured like load_extension().
>

Absolute evil :) I already thought that introducing such a function
violates a common-sense assumption that the SELECT API is side-effect
free (in the context of database changes), since "register" slightly
violates this by messing with the namespace context. Allowing non-SELECT
queries might pose a damage risk because it would complain only after the
damage is done (no SQLITE_ROW result for an UPDATE or INSERT query, but
the database has already changed by that moment). That's also why I still
think that constraining it to the SELECT API, assuming an expression and
automatically wrapping it in SELECT, is a must.

Max


Re: [sqlite] RPAD/LPAD

2014-03-07 Thread Max Vlasov
On Fri, Mar 7, 2014 at 6:39 PM, Clemens Ladisch <clem...@ladisch.de> wrote:
> Max Vlasov wrote:
>>
>> Nice suggestion. This probably falls into case when a small new part
>> needed on sqlite side
>
> Actually, no change to SQLite itself would be needed.  It's possible
> to create an extension that provides a function that allows to register
> another function that executes a custom SQL expression:
>
>  SELECT register_simple_function('rpad', 2, 'SELECT printf(''%-*s'', ?, ?)');
>
>

Brilliant :) This is a solution that doesn't require adding anything to
sqlite. I'd only prefer not using full-featured SQL queries, but
rather an expression that is automatically wrapped in SELECT without
FROM; otherwise they could create dimensions (columns, rows) where our
function strictly expects one result. And also support for numbered
parameters. So my example would be implemented as

SELECT register_function('complexcase', 2, 'Case when ?1 < 10 then ?2
else ?2 + 10 end');

It's interesting that such an approach makes this super-function
Pragma-like. It changes some internal state (reveals a new function)
and also optionally returns some result, for example 1 for success.

Max


Re: [sqlite] RPAD/LPAD

2014-03-07 Thread Max Vlasov
On Fri, Mar 7, 2014 at 12:49 PM, Dominique Devienne  wrote:
> I think what SQLite lacks is a syntax to define custom function like
> it does for virtual tables. Something like:
>
> create function rpad(x, y) using scripty_module as "return 
> PRINTF('%-*s',y,x)";
>

Nice suggestion. This probably falls into the case where a small new part
is needed on the sqlite side, with the large DSL-specific features on the
side of the registered module. But you still need some binary module
for your functions to work.

How about a "macro expression substitution" feature? Something that
could accept any expression compatible with sqlite as the function
body, to be just translated into the corresponding VDBE instructions.

For example,

Create function complexcase(x,y) as Case when x < 10 then y else y + 10 end

Select complexcase(myField1, myField*10) from t;

On the sqlite side, the engine can only syntax-check the create
function body, but referencing it in an actual query would require
actual substitution, with the corresponding context-sensitive errors.

This definitely will be more costly in kB on the sqlite side than the
module approach, but would create a more universal solution.

Just quick thoughts, probably there are plenty of obstacles here :)

Max


Re: [sqlite] Virtual table API performance

2014-03-02 Thread Max Vlasov
On Sun, Mar 2, 2014 at 5:21 PM, Elefterios Stamatogiannakis
 wrote:
>
> Our main test case is TPCH, a standard DB benchmark. The "lineitem" table of
> TPCH contains 16 columns, which for 10M rows would require 160M xColumn
> callbacks, to pass it through the virtual table API. These callbacks are
> very expensive, especially when at the other end sits a VM (CPython or PyPy)
> handling them.
>

Ok, not stating that the performance improvement is impossible, I will
explain why I'm a little sceptical about it.

For every bulk insert we have a theoretical maximum we'd all be glad to
see sqlite perform at - the speed of simple file copying.
Sqlite can't be faster than that, but to be on par is a good goal.
This is not possible when an insert also means modification of other
parts of the file, for example when there's an index involved. But
let's forget about that. Finally, when new data is added, sqlite should
write a number of database pages; the cost of this part is entirely
in the hands of the media (driver) and OS (driver). But for every
database page write there's also a price to pay in CPU units, for the
many actions sqlite does before an actual value is translated from what
the developer provided to what actually appears on disk.

The illustration of the CPU price is the following example
 CREATE TABLE t(Value)

on my ssd drive, multiple inserts (thousands) of
  insert into t (Value) values ('123456689...')  // a long string,
containing many symbols, for example 1024
performed with the speed
  30 MB/Sec

but the query
  insert into t (Value) values (10)  // a small integer value only
performed with the speed
  3 MB/Sec

Both show almost full CPU load. Why such a difference? Because with the
latter query the system could do more than 30 MB of writes in 1 second,
but it has to wait for sqlite, which spends 10 seconds in preparations.
The former is better because the CPU cost of passing a large text value
to sqlite is comparatively low compared to the time spent in I/O
writing it to disk.
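The experiment is easy to reproduce; here is a sketch with Python's stdlib sqlite3 against an in-memory database (absolute numbers will differ from the post's on-disk figures, but the per-row CPU cost capping small-value throughput shows up the same way):

```python
import sqlite3
import time

def insert_throughput(value, n=20000):
    # Bulk-insert n copies of `value` and return payload bytes per second.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t(Value)")
    start = time.perf_counter()
    con.executemany("INSERT INTO t (Value) VALUES (?)",
                    ((value,) for _ in range(n)))
    elapsed = time.perf_counter() - start
    # Approximate payload: string length, or 8 bytes for an integer.
    payload = n * (len(value) if isinstance(value, str) else 8)
    return payload / elapsed

text_rate = insert_throughput("x" * 1024)  # ~1 KB of text per row
int_rate = insert_throughput(10)           # small integer per row
print(text_rate > int_rate)  # → True: per-row overhead dominates small values
```

The per-row preparation cost is roughly constant, so the byte throughput of the large-text inserts far exceeds that of the small-integer ones.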

So the CPU price isn't avoidable, and notice that this example uses
not the virtual table API but the bind API. I suppose the price we
pay for CPU spent in the virtual table API is on par with the average
price paid in sqlite as a whole. This means that if I transform the above
queries into inserts from virtual tables, the final speed difference
will be similar. And this also means that for your comparison tests
(where you get a 3x difference), the CPU price sqlite pays inside the
bind API and in the code wrapping the xColumn call is probably similar.
The rest is the share your code pays.

Well, I know that there are differences in CPU architectures, and
probably there are platforms where compiled code for the bind API and
the virtual table API behaves a little differently, making the costs
more different. But imagine the hard task of fine tuning and refactoring
just to get a noticeable difference on a particular platform.


Max


Re: [sqlite] Virtual table API performance

2014-03-01 Thread Max Vlasov
Hi,
thanks for explaining your syntax in another post. Now about virtual
tables if you don't mind.

On Fri, Feb 28, 2014 at 8:24 PM, Eleytherios Stamatogiannakis
 wrote:
>
> If we load into SQLite, 
>
> create table newtable as select * from READCOMPRESSEDFILE('ctable.rc');
>
> it takes: 55 sec
>
>
> If we create an external program 
>
> it takes: 19 sec (~3x faster than using the virtual table API)
>
>

Looking at your numbers, as a user (and fan :) of virtual tables I
decided to do some tests.

I have a virtual table "AllValues"; it was designed to enumerate
all tables' values into one single virtual table, so finally it is
a long list of

  TableName, TableRowId, FieldName, Value

so you get the idea. As an example of what it may do, you may open
places.sqlite of mozilla browser and do

  Select * from AllValues where Value Like "%sqlite.org%"

and see actual results even not knowing how they planned their schema.

Internally this virtual table simply uses general selects for all
other tables met in sqlite_master. This is a good (but probably not
the best) test for measuring virtual tables performance, because

  SELECT * FROM AllValues

is equivalent to reading all conventional tables of this database.
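The same enumeration can be sketched without the virtual table machinery as a plain Python generator over sqlite_master (the original AllValues is a C virtual table; names and output shape here are illustrative):

```python
import sqlite3

def all_values(con):
    # Enumerate every ordinary table and yield
    # (TableName, TableRowId, FieldName, Value) tuples.
    tables = [r[0] for r in con.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        cols = [r[1] for r in con.execute(f'PRAGMA table_info("{table}")')]
        for row in con.execute(f'SELECT rowid, * FROM "{table}"'):
            for col, value in zip(cols, row[1:]):
                yield table, row[0], col, value

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(a, b)")
con.execute("INSERT INTO t VALUES ('hello', 42)")
rows = list(all_values(con))
print(rows)  # → [('t', 1, 'a', 'hello'), ('t', 1, 'b', 42)]
```

A filter over the yielded tuples then gives the schema-agnostic search shown above with `Select * from AllValues where Value Like "%sqlite.org%"`.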
Besides
- the tool I use has a tweaker implemented with VFS that allows
measuring speed and other characteristics of the query performed while
the query is in effect.
- I have an option that forces resetting windows cache for the
database file when it is reopened. So with it we exclude the windows
cache from consideration so pure I/O reading is used. Btw, when you do
your comparison, it's very important to reset system cache before
every measurement that involves I/O.


So I took a comparatively large (500 Mb) database consisting of
several small and one big table (Posts) and compared two queries.

(Query1)

  Select sum(length(Body) + length(Title)) from Posts

This one effectively reads the table data and uses
- length() to force sqlite to read texts that don't fit into a single db page
- sum() to exclude accumulating results on my side from the comparison, so
we have a single-row, single-column result from work completely
done by sqlite.

(Query2)

  Select Sum(Length(Value)) from AllValues

This one performs basically the same but using sqlite virtual tables
api. It also touches other tables, but since they're small, we can
forget about this.

Query1 (General):
  Read: 540MB,
  Time: 24.2 sec,
  CPU Time: 6 Sec (25%)
  Speed: 22.31 MB/Sec

Query2 (Virtual):
  Read: 540MB,
  Time: 27.3 Sec,
  CPU Time: 13 sec (51%)
  Speed: 20 MB/Sec

In my particular test the noticeable difference is in the amount of
CPU spent with the virtual table. I assume this can be related to
my own implementation of this virtual table, since I have to retrieve
and store values temporarily somewhere and talk to sqlite. But this also
may shed light on your performance drop: if your virtual implementation
spends much time processing a value, you may finally get a big drop.

You may say that this test is not fair because it does not involve
creating a table from the values of a virtual table. Unfortunately I
can't create a good enough test comparing the Posts and AllValues tables
as sources, because the geometry of the tables is different
(Posts has more columns and fewer rows, AllValues fewer columns and more
rows). The closest approximation was possible when I created an
intermediate physical table containing the results from AllValues and
compared table creation from this table and from the virtual table. The
virtual one took longer, but the values - 56 seconds vs 43 seconds - are
not different enough to conclude anything.

I'm not sure my tests defend sqlite virtual tables sufficiently, but
currently I don't have evidence of significant inefficiency either.

Max


Re: [sqlite] Virtual Table "Functions"

2014-03-01 Thread Max Vlasov
On Fri, Feb 28, 2014 at 10:14 PM, Dominique Devienne
 wrote:
> Can someone tell me how the statement below works?
> 
>
> Thanks for any help on this. This is really puzzling to me. --DD


Very puzzling for me too.
For any statement like this

  select * from blablabla(123)

sqlite (3.8.3.1) first reports a syntax error...

   near "(": syntax error

... before complaining about anything else.

I'd also be glad to see a shortcut where a virtual table is created just
for a select statement and dies automatically, but I suspect it is not
supported and that was their own modified version of sqlite. I hope the
OP reads this post too and will have an answer for us; that would
probably help in providing further information regarding his issues.

Max


Re: [sqlite] Question about how sqlite recovers after a power loss

2014-02-26 Thread Max Vlasov
On Wed, Feb 26, 2014 at 12:33 PM, Fabrice Triboix
 wrote:
>
> And even then, that would not explain why the journal file lingers after 
> re-opening the database.
>

I remember asking a similar question. As far as I remember, the main
logical implication is that journal file presence is not a mark that
forces the database into some actions. It's rather a supplement to the
error state of the database. So if the database is in an error state and
there's no journal file, that's bad. But the reverse is ok and might
appear in some scenarios.

Max


Re: [sqlite] Latest Sqlite grammar as machine understandable file

2014-02-21 Thread Max Vlasov
On Fri, Feb 21, 2014 at 5:24 PM, Richard Hipp <d...@sqlite.org> wrote:
> On Fri, Feb 21, 2014 at 7:29 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>
>>
>> The only one a little similar I found is
>>   http://www.sqlite.org/docsrc/artifact/873cf35adf14cf34
>>   ( mentioned as art/syntax/all-bnf.html )
>>
>
> An updated version of all-bnf.html has now been checked in at
> http://www.sqlite.org/docsrc/doc/trunk/art/syntax/all-bnf.html
>

Thanks a lot, that human-machine activity was much faster than my
writing of this reply post :)

My thoughts about parsing were about using it to recognize some states
of user queries in order to suggest additional tasks. In a sense any
query is a small database containing, for example:
- the list of datasets used (might be a quick hint nearby),
- a possible master-detail relationship (enabling a menu item for
showing two connected list views instead of the joined table)

I already tried to detect some states without parsing, but obviously
not so much is possible without a full AST at hand.
Max


Re: [sqlite] Latest Sqlite grammar as machine understandable file

2014-02-21 Thread Max Vlasov
On Fri, Feb 21, 2014 at 4:47 PM, Richard Hipp <d...@sqlite.org> wrote:
> On Fri, Feb 21, 2014 at 7:29 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>
>> Is there a machine-readable (BNF or other) grammar as equivalent to
>
> Not that I am aware of.
>

I just noticed the file ( bubble-generator-data.tcl )
  www.sqlite.org/docsrc/doc/tip/art/syntax/bubble-generator-data.tcl?mimetype=text/plain
Is it created by a human or by a machine? It looks like a good
candidate, but might lack some information essential for parsing. If
it was generated by a machine, then the source of that generation might
be the next good candidate :)

Max


[sqlite] Latest Sqlite grammar as machine understandable file

2014-02-21 Thread Max Vlasov
Hi,

Is there a machine-readable (BNF or other) grammar as equivalent to
the current syntax diagrams?
  http://www.sqlite.org/syntaxdiagrams.html

The only one a little similar I found is
  http://www.sqlite.org/docsrc/artifact/873cf35adf14cf34
  ( mentioned as art/syntax/all-bnf.html )

but it's pretty outdated (no CTE)

Thanks

Max


Re: [sqlite] Once again about random values appearance

2014-02-17 Thread Max Vlasov
On Mon, Feb 17, 2014 at 7:00 PM, RSmith <rsm...@rsweb.co.za> wrote:
>
>
> On 2014/02/17 09:59, Max Vlasov wrote:
>>
>> .
>> So
>>
>>Select nanosec() - nanosec() from ...
>>
>> returns non-zero values for most of the times, so there's no guarantee the
>> user functions or any other functions will be called once for the step.//... 
>> etc.
>
>
> Did you mark your nanosec function as SQLITE_DETERMINISTIC 
> <http://www.sqlite.org/c3ref/c_deterministic.html>?
> http://www.sqlite.org/c3ref/create_function.html
>
> Which, if not, it can and will very much return non-zero values.
>

Very interesting option; it did affect the results, they're now zero
for both cases. I also see this is quite a young option, listed as a new
feature of 3.8.3:

"Added SQLITE_DETERMINISTIC as an optional bit in the 4th argument to
the sqlite3_create_function() and related interfaces, providing
applications with the ability to create new functions that can be
factored out of inner loops when they have constant arguments"

So the query with this option

  Select nanosec() as v from TestTable where v<>v

always returns empty dataset.

But it seems this option still has something to explain since

  Select nanosec() - nanosec() from TestTable

returns always zeros while

  Select nanosec(), nanosec() from TestTable

returns different values for fields

Either nanosec() - nanosec() executes faster than the granularity of the
performance counter on average Windows hardware, or the two cases are
handled differently for some reason.
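For what it's worth, the factoring effect can be reproduced with a short sketch. This uses Python's sqlite3 module rather than the original Delphi/C setup, and a simple call counter stands in for the hypothetical NanoSec(); `deterministic=True` maps to the SQLITE_DETERMINISTIC flag (Python 3.8+, SQLite 3.8.3+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TestTable (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO TestTable (id) VALUES (?)",
                 [(i,) for i in range(1, 6)])

calls = {"plain": 0, "det": 0}

def make_counter(key):
    # Each call returns a fresh value, like the NanoSec() stand-in.
    def fn():
        calls[key] += 1
        return calls[key]
    return fn

conn.create_function("tick", 0, make_counter("plain"))   # ordinary function
conn.create_function("dtick", 0, make_counter("det"),
                     deterministic=True)                 # SQLITE_DETERMINISTIC

conn.execute("SELECT tick() FROM TestTable").fetchall()
conn.execute("SELECT dtick() FROM TestTable").fetchall()

# The ordinary function runs once per row; the "deterministic" one has
# constant (no) arguments, so it is factored out of the loop and runs once.
print(calls)
```

Note that declaring a function deterministic when it is not (as here) is exactly the kind of lie that makes `v <> v` queries behave surprisingly, so the flag should only be used for genuinely pure functions.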

Max


Re: [sqlite] Once again about random values appearance

2014-02-17 Thread Max Vlasov
On Mon, Feb 17, 2014 at 2:27 PM, Simon Slavin <slav...@bigfraud.org> wrote:

>
> On 17 Feb 2014, at 7:59am, Max Vlasov <max.vla...@gmail.com> wrote:
>
> > So the nanosec example modified
> >
> > Select v-v from
> > (
> >   Select nanosec() as v from TestTable
> > )
> >
> > ...shows non-zero values for the current (3.8.3) and for older (3.6.10)
> > version.
>
> Erm ... that worries me a little.
>
> I don't know how SQLite handles sub-selects internally.  But the
> conventional way of doing them is a JOIN to a VIEW.  Which means that v-v
> should always equal zero.
>


EXPLAIN shed a little light:
...
  4  Function   0  NanoSec(0)
  5  Function   0  NanoSec(0)
  6  Subtract   2
  7  ResultRow  1
  8  Next       4
...
So there is no intermediate storage, probably for performance reasons. The
listing also looks quite self-explanatory, while a stateful variant would add
more complexity to the VDBE code.


>
> By the way, when figuring out how to optimize this still it's worth noting
> that a parameterless function is rare in SQL.  It's rarely worth optimizing
> unnamed expressions because you rarely get the same unnamed expression
> (including parameters) used twice.
>
>
Simon, I see your point; next time my test function will depend on at least
a dozen very important parameters and will have a very, very, very long
name :)

Max


Re: [sqlite] Once again about random values appearance

2014-02-16 Thread Max Vlasov
On Mon, Feb 17, 2014 at 1:22 AM, James K. Lowden
<jklow...@schemamania.org>wrote:

> On Fri, 14 Feb 2014 08:32:02 +0400
> Max Vlasov <max.vla...@gmail.com> wrote:
>
> > From: Max Vlasov <max.vla...@gmail.com>
> > To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
> > Reply-To: General Discussion of SQLite Database
> > <sqlite-users@sqlite.org> Date: Fri, 14 Feb 2014 08:32:02 +0400
> > Subject: Re: [sqlite] Once again about random values appearance
> >
> > On Fri, Feb 14, 2014 at 6:35 AM, James K. Lowden
> > <jklow...@schemamania.org>wrote:
> >
> > > > > select id, (select id from TestTable where id = abs(random() %
> > > > > 100)) as rndid from TestTable where id=rndid
> > >
> > > On Thu, 13 Feb 2014 07:26:55 -0500
> > > Richard Hipp <d...@sqlite.org> wrote:
> > >
> > > > It is undefined behavior, subject to change depending the specific
> > > > version of SQLite, compile-time options, optimization settings,
> > > > and the whim of the query planner.
> > >
> > > It should be defined.
> > >
> > > In the above query, random() should be evaluated once.  In the SQL
> > > model the user provides inputs once, and the system evaluates them
> > > once.
> > >
> > >
> > Once for the query or once for the row?
>
> Once for the query.
>
> As a user you have no control how the system evaluates your query.
> The evaluation may change over time with different implementations, but
> the semantics of the query do not.
>
> Not long ago on this list we discussed
>
> SELECT *, datetime('now') from T;
>
> and the behavior was that the datetime function was called per-row,
> resulting in different times on different rows.  It was changed, the
> rationale IIRC to be compatible with the SQL standard and other DBMSs.
>


OK, I hope I found the topic; the title was
  "racing with date('now') (was: Select with dates)"
One of the links to the archive:
  https://www.mail-archive.com/sqlite-users@sqlite.org/msg79456.html

CMIIW, but as I see it, the final modification was commented by Richard

 > As a compromise, the current SQLite trunk causes 'now' to be exactly the
 > same for all date and time functions within a single sqlite3_step() call.

But this applies only to 'now' and the date-related functions. I wanted to be
sure, so I created a user function NanoSec() that returns nanoseconds,
calculated with QueryPerformanceCounter and QueryPerformanceFrequency on
Windows and clock_gettime(CLOCK_REALTIME, ...) on Linux. It does not always
return real nanoseconds, but the value changes frequently enough to differ
between adjacent VDBE instructions of the SQLite engine.

So

  Select nanosec() - nanosec() from ...

returns non-zero values most of the time, so there is no guarantee that user
functions, or any other functions, will be called once per step.

My original issue was commented on by Richard, and there is no other argument
I can post, because the ability to use an alias in the WHERE clause is
probably an SQLite-only feature, so you just can't reference any standard
about it.

But your first reply touched on another problem: the case where the outer
query uses an alias from the inner query, and here SQLite can be compared
with other engines. So the nanosec example, modified:

 Select v-v from
 (
   Select nanosec() as v from TestTable
 )

...shows non-zero values for both the current (3.8.3) and an older (3.6.10)
version. And here it would be interesting to know whether any standard has
something to say about the value of v in the outer query.

Max


[sqlite] Suggestion to add "locate" as a broader version of "instr"

2014-02-14 Thread Max Vlasov
Hi,

Some time ago, when there was no "instr" function, I looked at the MySQL help
pages and implemented a user function "locate" that allows searching from a
particular position in the string. In its two-parameter form it was identical
to "instr", only with the order of parameters reversed. As I see it, the
latest SQLite has only "instr".

It's not a big deal, but I noticed that "locate" with three parameters
becomes convenient for recursive CTE queries, since it allows searching
sequentially through a string. For example, a little bulky in the end, but I
managed to write a "comma-list to dataset" query.

I suppose implementing "locate" and making "instr" a call to "locate" would
cost the developers no more than a hundred bytes in the final binary.
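The sequential search can also be expressed with the built-in instr() by carrying the unscanned remainder of the string through the recursion. A sketch of one possible "comma-list to dataset" query (not the original one from the post; run through Python's sqlite3 for convenience, and needing SQLite 3.8.3+ for CTEs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Peel off the piece before the first comma on every recursive step;
# a trailing comma is appended so the last item needs no special case.
rows = conn.execute("""
    WITH RECURSIVE split(item, rest) AS (
        SELECT NULL, 'alpha,beta,gamma' || ','
        UNION ALL
        SELECT substr(rest, 1, instr(rest, ',') - 1),
               substr(rest, instr(rest, ',') + 1)
        FROM split
        WHERE rest <> ''
    )
    SELECT item FROM split WHERE item IS NOT NULL
""").fetchall()

print([r[0] for r in rows])  # ['alpha', 'beta', 'gamma']
```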

Thanks

Max


Re: [sqlite] Once again about random values appearance

2014-02-13 Thread Max Vlasov
On Fri, Feb 14, 2014 at 6:35 AM, James K. Lowden
wrote:

> > > select id, (select id from TestTable where id = abs(random() % 100))
> > > as rndid from TestTable where id=rndid
>
> On Thu, 13 Feb 2014 07:26:55 -0500
> Richard Hipp  wrote:
>
> > It is undefined behavior, subject to change depending the specific
> > version of SQLite, compile-time options, optimization settings, and
> > the whim of the query planner.
>
> It should be defined.
>
> In the above query, random() should be evaluated once.  In the SQL
> model the user provides inputs once, and the system evaluates them
> once.
>
>
Once for the query or once for the row?
It is evaluated once for the query. To check, just remove the WHERE clause:

  select id, (select id from TestTable where id = abs(random() % 100)) as
rndid from TestTable

and we get plenty of rows with the same rndid in the right column.

I almost understood the logic behind this, but I just found a query that
surprised me once again. When I wrote it, I wanted to show that to compare
engines we have to remove the alias usage from WHERE, because most engines
forbid using an alias there. But the following query, which should probably
be compatible with other RDBMSes, still shows different column values:

select id, rndid from
(
select id, (select id from TestTable where id = abs(random() % 100)) as
rndid from TestTable
) where id=rndid

But I supposed that rndid in the outer query should have a known (fixed)
value, and that the values matched by the WHERE clause should be shown as
identical.

Max


Re: [sqlite] Once again about random values appearance

2014-02-13 Thread Max Vlasov
On Thu, Feb 13, 2014 at 4:26 PM, Richard Hipp <d...@sqlite.org> wrote:

> On Thu, Feb 13, 2014 at 4:45 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>
> > Hi,
> >
> > probably was discussed and modified before, but I still can not
> understand
> > some peculiarities with random column values.
> >
>
> It is undefined behavior, subject to change depending the specific version
> of SQLite, compile-time options, optimization settings, and the whim of the
> query planner.
>
>

Thanks, this makes sense.

Given that there are time-related functions, user functions with undefined
values, etc., may I generalize this as: "an expression is guaranteed to have
the same value in any context only if it always returns the same output for
the same input"?

Interestingly, the problem probably arises because the symbols in the result
columns are easy to interpret mentally as "states" to be used further on,
while they actually behave more like macros to be substituted.

Max


[sqlite] Once again about random values appearance

2014-02-13 Thread Max Vlasov
Hi,

This was probably discussed and modified before, but I still cannot
understand some peculiarities with random column values.

The table
  Create table [TestTable] ([id] integer primary key)

populated with 100 default values (thanks to CTE now made with a single
query):

with recursive
  autoinc(id) as
 (values(1) UNION ALL Select id+1 from autoinc where id < 100)
insert into TestTable select id from autoinc
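The population step is easy to sanity-check (a sketch using Python's sqlite3; any SQLite 3.8.3+ shell would do the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TestTable (id INTEGER PRIMARY KEY)")

# Generate 1..100 with a recursive CTE and insert in a single statement.
conn.execute("""
    WITH RECURSIVE autoinc(id) AS (
        VALUES (1)
        UNION ALL
        SELECT id + 1 FROM autoinc WHERE id < 100
    )
    INSERT INTO TestTable SELECT id FROM autoinc
""")

count, total = conn.execute(
    "SELECT count(*), sum(id) FROM TestTable").fetchone()
print(count, total)  # 100 5050
```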


So the following query

select id, (select id from TestTable where id = abs(random() % 100)) as
rndid from TestTable where id=rndid

returns a single row (as expected), but this row contains different values
for id and rndid. I suppose this is because the rndid that is shown is
calculated once, while the one used for the comparison is calculated again
and again. Is this the correct behavior?

I also checked several DLL versions and saw that this change appeared
somewhere between 3.6.13 and 3.6.16: .13 showed identical values, while .16
showed different ones.

Max


Re: [sqlite] latest sqlite 3 with Delphi 5 professional

2014-01-24 Thread Max Vlasov
On Fri, Jan 24, 2014 at 9:16 PM, Ralf Junker <ralfjun...@gmx.de> wrote:
> On 24.01.2014 10:06, Max Vlasov wrote:
>
>> BCC 5.5 (freely downloadable) compiles any version of sqlite3 to
>> object files linkable to Delphi 5 and later, the only drawback I
>>
>> Don't know about DISQLite3 , but one of the main performance issues
>
>
> DISQLite3 does _not_ show the performance issues you describe for your BCB
> 5.5 compiled object files. Quite the opposite: DISQLite3 outperformed
> sqlite3.dll whenever I tested.

Don't take it personally :) I was just talking about C-originated code that
we have to compile into the OMF library file format, since
Borland/CodeGear/Embarcadero never supported the COFF format (CMIIW). So BCC
is our only choice for static linking (the Intel compilers probably still
support OMF, since Intel introduced it, but I did not try).

And when I talked about the 2x difference, it was about a pure memory db with
a thousand rows and queries making cross joins that took about 6-20 seconds
in total, depending on the query. So no I/O was involved, just CPU-intensive
operations inside SQLite. To my own surprise, a DLL compiled with BCC 5.5 and
the -O2 option (maximum optimization, as I recall) was two times slower than
the VC DLL (from the sqlite.org site) compiled from the same version. So this
is a synthetic test not pretending to be general.

As for DISQLite3, I see from your site that it is a great library supporting
many Delphi versions and many db features. I looked at the source; as I see
it, the dataset is unidirectional and processes the query on request. I'm
sure there are no performance penalties here. Good job.

Max


Re: [sqlite] latest sqlite 3 with Delphi 5 professional

2014-01-24 Thread Max Vlasov
On Thu, Jan 23, 2014 at 1:33 AM, dean gwilliam
 wrote:
> I'm just wondering what my options are here?
> Any advice much appreciated.

My two cents...

Historically I took the Aducom TDataSet-compatible classes
(http://www.aducom.com/cms/page.php?2 , author - Albert Drent) and have used
them ever since, but along the road I made many changes, so I'm not sure I
can recommend the current (latest) version. They had a static variant
implemented with msvcrt linked in (it may have been removed by now); to
remove the dependency yourself you have to implement the following functions:

_malloc,_realloc,_free,_memset,_strncmp,_memmove,_memcpy,_strlen,_qsort,_memcmp,_localtime

Some of them can be simple calls into the existing Delphi RTL; for the rest a
little work is needed. This allowed me, for example, to monitor the number of
memory requests different queries make.

BCC 5.5 (freely downloadable) compiles any version of sqlite3 to object
files linkable to Delphi 5 and later. The only drawback I noticed is that for
memory-intensive operations (memory databases) the performance is twice as
bad compared to the DLL on the site (probably VC-compiled), but for databases
on disk the difference is small, since the I/O overhead compensates for it.

I don't know about DISQLite3, but one of the main performance issues when
using SQLite is that the BDE was made for virtual access to data with a
moving cursor, which is not possible with SQLite. You have a query and you
can only move forward. So the easiest approach is to load all the data, but
imagine this for a very large table: activating the TDataSet in this case may
take a very long time (and space). One approach is to change it to load all
rowids of the table and request record data on the fly based on this array.

So, if you want the best performance and don't need the Borland database
components and controls, your best bet is to use the SQLite API or simple
object wrappers around it. Otherwise, be aware that the "impedance mismatch"
between SQLite and the BDE may cost you performance penalties, depending on
the library you use.

Max


[sqlite] Sqlite as a platform performance comparison tool

2014-01-06 Thread Max Vlasov
Hi,

A thought came to compare two computers of different platforms (i.e., i386
vs ARM) using a uniform approach: take two binaries of the same SQLite
version, compiled with the best C compilers for both platforms, and compare
the time spent on identical operations using memory-based databases (to
exclude I/O from the measurement).

So we could start with some comparatively large table loaded into a :memory:
database, and the following operations would perform joins/inserts etc.,
probably as a single, semicolon-delimited set of queries. The sets are
identical for both platforms and are probably best handled with a single
sqlite3_exec.

Is such a comparison correct? Or is there something that makes this
comparison invalid even for memory-based operation?
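A rough sketch of such a harness (Python's sqlite3 standing in for a native sqlite3_exec loop; the table size and the join below are invented for illustration):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v INTEGER)")
conn.executemany("INSERT INTO t (v) VALUES (?)",
                 [(i % 97,) for i in range(1000)])

# A CPU-bound self-join on an in-memory database: no I/O, so the elapsed
# time mostly reflects the engine and the compiler it was built with.
start = time.perf_counter()
(pairs,) = conn.execute(
    "SELECT count(*) FROM t AS a, t AS b WHERE a.v = b.v").fetchone()
elapsed = time.perf_counter() - start

print(pairs, elapsed)
```

The same SQL text, run through two native builds of the same SQLite version, would give the cross-platform comparison the post describes.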

Thanks,

Max


Re: [sqlite] Virtual Table: misuse error on sqlite3_declare_vtab()

2013-10-04 Thread Max Vlasov
Simon,

I don't know what exactly is wrong in your particular case, but I'd suggest
setting debugger breakpoints in all of your x-handlers and noting the moment
after which the calls cease (or you get an error code).

Max


On Fri, Oct 4, 2013 at 6:07 PM, Simon  wrote:

> Hi there,
>
> I'm currently building my own virtual table implementation. I've built a
> dummy vtable that returns the string "1" to all queries for all columns
> but I'm having trouble getting started (getting a 'misuse' error). Can
> someone help me get in the right direction?
>
>
> >
>
> 1) I first create an sqlite3* object in ":memory:". This one works fine.
>
> 2) Allocate and init my derived sqlite3_module* object.
> 2.1) Function pointers I use are Create,Destroy, Connect,Disconnect,
> Open,Close, BestIndex,Filter, Next,Eof, Column,Rowid.
> 2.2) NULL pointers for Update, Begin,Sync,Commit,Rollback,
> FindFunction,Rename, Savepoint,Release,RollbackTo.
> 2.3) *** ? *** Are there any other members not documented I'd need to init?
>
> 3) I call sqlite3_create_module() with module name "vtable1"
>
> 4) I call sqlite3_declare_vtab() with this statement: "CREATE TABLE foo ( x
> integer, y integer );"
>
> --> That one fails with ERROR 21 (lib misuse).
>
> <
>
>
> Did I forget a step? Do them in wrong order? Anything obvious seems wrong
> to you? Where should I go to get more details and fix the issue?
> My current implementation is rather complex and involves C++ and templates
> a lot. Making a "small example" would be highly time-consuming, so I'd
> prefer to avoid going that way if possible.
>
> Anyone can tell me where I'd find a simple functional vtable "hello
> world"? The vtable documentation is great but is missing examples, imo.
> Personally, I think the vtable documentation page could benefit from one
> example project developed throughout. Something like a basic CSV
> vtable would probably help the masses a lot! :)
>
> Thanks for your help and time,
> Simon


Re: [sqlite] Why latest places.sqlite from firefox can't be opened by some older versions of sqlite

2013-08-31 Thread Max Vlasov
On Sat, Aug 31, 2013 at 10:14 PM, Richard Hipp <d...@sqlite.org> wrote:
> On Sat, Aug 31, 2013 at 2:03 PM, Max Vlasov <max.vla...@gmail.com> wrote:
>
>> Hi,
>>
>> I noticed that opening places.sqlite of my installation of Firefox
>> can't be made for example with sqlite 3.6.10, it says that file either
>> encrypted or invalid
>
> Recent versions of Firefox use WAL mode (http://www.sqlite.org/wal.html)
> which is supported by SQLite version 3.7.0 and later.
>

Thanks, Richard, that's it. A small suggestion, if possible: the doc
mentioned by Slavin could contain an additional section after the main table
listing conditional format incompatibilities. WAL and the latest partial
index support could be mentioned there.
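Assuming WAL is the culprit, the incompatibility is visible right in the file header: bytes 18 and 19 (the file-format write/read version numbers) are 1 for rollback-journal databases and 2 for WAL, and libraries older than 3.7.0 reject anything above 1. A quick check, sketched in Python:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "wal_test.sqlite")

conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode=WAL")  # persistent: recorded in the header
conn.execute("CREATE TABLE t (x)")
conn.commit()
conn.close()

with open(path, "rb") as f:
    header = f.read(20)

# Offsets 18/19 hold the write/read format versions: 1 = legacy, 2 = WAL.
print(header[18], header[19])
```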

Max


[sqlite] Why latest places.sqlite from firefox can't be opened by some older versions of sqlite

2013-08-31 Thread Max Vlasov
Hi,

I noticed that places.sqlite from my installation of Firefox cannot be
opened, for example, with SQLite 3.6.10; it says the file is either encrypted
or invalid (everything is fine, for example, with 3.7.15.2 and of course with
Firefox itself, it works :)).

This might be a quirk of my particular installation, so maybe several years
of upgrading Firefox made my particular db semi-valid or something. It's no
big deal, I'm just curious; probably FF uses some features that make the
SQLite 3 format incompatible with the 3.6.* versions.
Max


Re: [sqlite] Group by in sqlite 3.8 works a little differently depending on the spaces at the end

2013-08-28 Thread Max Vlasov
On Wed, Aug 28, 2013 at 5:11 PM, Igor Tandetnik <i...@tandetnik.org> wrote:
> On 8/28/2013 8:57 AM, Max Vlasov wrote:
> See the recent discussion at
>
> http://comments.gmane.org/gmane.comp.db.sqlite.general/83005
>
> It's not about trailing spaces, but about whether Title in GROUP BY resolves
> to mean the table column or the alias.

Thanks, Igor, sorry, didn't notice the original discussion

Max


[sqlite] Group by in sqlite 3.8 works a little differently depending on the spaces at the end

2013-08-28 Thread Max Vlasov
Hi,

the following script (notice the space at the end of the third string)

Create table [TestTable] ([Title] TEXT);
INsert into TestTable (Title) VALUES ('simple text');
INsert into TestTable (Title) VALUES ('simple text');
INsert into TestTable (Title) VALUES ('simple text ');
select Trim(Title) as Title, Count(*) as Cnt FROM TestTable Group By Title;

produces two result rows

  "simple text" | 2
  "simple text" | 1

while all previous versions I tried produced a single one

  "simple text" | 3

Speaking about the correct way, it seems 3.8 is right (technically the
strings are different), but I just wonder how this apparently existed all
this time without being noticed. On the other side, MySQL of some old version
also showed a single result.

Max


Re: [sqlite] How to create connection life-time object?

2013-07-26 Thread Max Vlasov
On Fri, Jul 26, 2013 at 9:56 PM, Dušan Paulovič  wrote:

> Thanks for suggestion, but:
> 1.) one object is not linked to one connection
>

If you have your own memory management, it's not a problem, since the scheme
I described is basically just a storage of pointers. To free or not to free
(if the pointer points to a disposable entity) is your decision, at a time of
your choosing. But if you want automatic reference counting (so that when a
pointer is no longer referenced, it is automatically deallocated), I agree
the proposal is not good. Using your own global structure unrelated to SQLite
is probably less pain.


> 2.) object is not destroyed together with connection
>

The same, you're free to use the table just as pointers storage

Max


Re: [sqlite] How to create connection life-time object?

2013-07-26 Thread Max Vlasov
Hi, Dušan


On Thu, Jul 25, 2013 at 2:39 PM, Dušan Paulovič  wrote:

> Hello, is there a way to somehow set a connection life-time object?
> ...
>
>
> It would be fine to have something like:
> int sqlite3_set_lifetime_object(
>   sqlite3 *db,  /*db connection*/
>   const char *zObjectName,  /*utf8 name of object*/
>   void *pObject,/*if NULL, object is removed*/
>   void(*xDestroy)(void*)/*destructor*/
> );
>
> void * sqlite3_get_lifetime_object(
>   sqlite3 *db,  /*db connection*/
>   const char *zObjectName   /*utf8 name of object*/
> );
>



How about a temporary in-memory table just for the task of storing your
objects?

Your initialization code for a particular connection:
  Attach ':memory:' as MyObjectStorage
  Create table MyObjectStorage.[objects] (Name Text, Ptr Text)

Your code for inserting an object
  Insert into MyObjectStorage.[objects] (Name, Ptr) VALUES ('obj1',
'0x12345678')

This code will query the pointer
  select Ptr from MyObjectStorage.[objects] where Name='obj1'

The only convention here is the name of the attached db: no other database
(or instance of the same storage) should use this name.
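A minimal sketch of the scheme above (using Python's sqlite3 in place of the original code; the pointer is the same fake hex string used in the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Per-connection initialization: a private in-memory attachment that
# lives exactly as long as this connection does.
conn.execute("ATTACH ':memory:' AS MyObjectStorage")
conn.execute("CREATE TABLE MyObjectStorage.objects (Name TEXT, Ptr TEXT)")

# Store an "object" (really just a pointer rendered as text), then query it.
conn.execute("INSERT INTO MyObjectStorage.objects (Name, Ptr) "
             "VALUES ('obj1', '0x12345678')")
(ptr,) = conn.execute(
    "SELECT Ptr FROM MyObjectStorage.objects WHERE Name = 'obj1'").fetchone()

print(ptr)  # 0x12345678
```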

Max


Re: [sqlite] Is REAL the best universal data type?

2013-07-23 Thread Max Vlasov
On Tue, Jul 23, 2013 at 10:09 PM, Petite Abeille
<petite.abei...@gmail.com>wrote:

>
> On Jul 23, 2013, at 9:52 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>
> > Basically it's several tables implementing Object-Propery-Value metaphor
>
> Hurray! The Entity–attribute–value (EAV) anti-pattern!
>
>
Pattern, anti-pattern... I think the main principle should be "whatever
works" :)
In my experience, this started when I wanted to extract information from the
CIA Factbook data: 170 columns for more than 150 countries. Making it a
triple storage with the excellent SQLite support allowed very flexible data
and queries, for example showing the data about a single country as a long
table with two columns (property name / property value), not to mention other
comparison queries, vertical or horizontal depending on the goal. Queries
become bigger, I admit (usually an extra join is involved), but they are
still manageable.
Max


Re: [sqlite] Is REAL the best universal data type?

2013-07-23 Thread Max Vlasov
On Tue, Jul 23, 2013 at 1:38 PM, Dan Kennedy <danielk1...@gmail.com> wrote:

> On 07/23/2013 02:52 PM, Max Vlasov wrote:
>
>> So paradoxically, probably the best type for a universal field container
>> is REAL (or NUMERIC), since it will accept data of any type but has the
>> advantage of the best sorting when reals or integers are involved.
>>
>> Is this correct or I am missing something?
>>
>
> The only difference between "INTEGER" and "REAL" is that real
> values are converted to integers if this is possible without
> loss of data. In other respects they are the same. Both try
> to convert text values to numbers on insert.
>

So when building indexes it doesn't matter whether the column is REAL or
INTEGER; the comparison will include both the integer and fractional parts of
the values, if present, right?

Max


Re: [sqlite] Is REAL the best universal data type?

2013-07-23 Thread Max Vlasov
On Tue, Jul 23, 2013 at 1:32 PM, Clemens Ladisch  wrote:

>
>
> But REAL will sort the strings '1', '10', '2' wrong.
>

What do you mean by "wrong"?

The test

CREATE TABLE testtable (id integer primary key, value real);
insert into testtable (value) values ('1');
insert into testtable (value) values ('2');
insert into testtable (value) values ('10');
insert into testtable (value) values ('something');
CREATE INDEX [idx_Testable] ON [testtable] ([Value]);
SELECT * FROM testtable order by value;

will show

  1, 2, 10, something

which is what I wanted ("something" being the exception).

If I change the CREATE statement to

  CREATE TABLE testtable (id integer primary key, value);

then the order will be

  1, 10, 2, something

(the undesired result).
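Both orderings are easy to reproduce; the difference is SQLite's column affinity converting the numeric-looking strings on insert (sketched with Python's sqlite3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
values = ["1", "2", "10", "something"]

conn.execute("CREATE TABLE with_real (value REAL)")  # numeric affinity
conn.execute("CREATE TABLE no_type (value)")         # no affinity: stays text

for v in values:
    conn.execute("INSERT INTO with_real VALUES (?)", (v,))
    conn.execute("INSERT INTO no_type VALUES (?)", (v,))

# Numbers sort before text, so 'something' always lands last.
real_order = [r[0] for r in
              conn.execute("SELECT value FROM with_real ORDER BY value")]
text_order = [r[0] for r in
              conn.execute("SELECT value FROM no_type ORDER BY value")]

print(real_order)  # [1.0, 2.0, 10.0, 'something']
print(text_order)  # ['1', '10', '2', 'something']
```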

Max


[sqlite] Is REAL the best universal data type?

2013-07-23 Thread Max Vlasov
I've created a kind of triple-storage database with an SQLite db as the
container. Basically it's several tables implementing the
Object-Property-Value metaphor. There's only one field for the data, so,
thinking about generality, I assumed that the type of the data field should
be TEXT or nothing, since most other types (let's forget about blobs for a
moment) can be stored (and easily visualized) with such a field. But there
are also indexes involved, and here comes the problem. If I insert natural
numbers in some sub-sequence, I get them sorted non-naturally (1, 10, 2, 20).
But we know that SQLite can accept any data in any field, so I can change the
type to INTEGER and enjoy numeric order when numbers are added (1, 2, 10,
20). On the other side, when we look at real numbers, the problem would still
exist. So paradoxically, probably the best type for a universal field
container is REAL (or NUMERIC), since it will accept data of any type but has
the advantage of the best sorting when reals or integers are involved.

Is this correct, or am I missing something?

Thanks,

Max


Re: [sqlite] SQLite 3.7.17 preview - 2x faster?

2013-04-09 Thread Max Vlasov
Thanks, Richard, it worked. For the cases where I want the benefits of faster
file mapping I made some adjustments.

Btw, this was also for a small tool I use. It's similar to how defragmenters
show sectors while doing their job; in this case the sectors are pages of the
SQLite file. When a sector is read I draw a pixel, and this works as an
animation while a query is being executed. For my query that took 27 seconds
with 3.7.16.1 and 9 seconds with the 3.7.17 draft, the reading pattern is
exactly the same: virtually random, filling half of the area of a 32 MB file.
It looks like file caching is very conservative in guessing what might be
needed nearby in the future, while the file-mapping routines assume wider
ranges. Of course, after either query, if I don't reset the cache, a repeated
query is almost instant.
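On the application side, memory-mapped I/O in 3.7.17+ is controlled by PRAGMA mmap_size. A sketch (the 64 MiB figure is arbitrary; SQLite answers with the limit actually in effect, which may be 0 on builds with mmap disabled):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "mmap_test.sqlite")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x)")
conn.commit()

# Request up to 64 MiB of memory-mapped I/O for this connection;
# the pragma returns the value that is actually in effect.
(effective,) = conn.execute("PRAGMA mmap_size = 67108864").fetchone()
print(effective)
```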

Max



On Mon, Apr 8, 2013 at 4:31 PM, Richard Hipp <d...@sqlite.org> wrote:

> On Mon, Apr 8, 2013 at 8:20 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>
> > Richard,
> >
> > It makes sense, but I see here some possibility of future confusion.
> > Correct me if I'm wrong. Currently if I have a vfs that requires special
> > preparations (for example, decompression), I have two choices, either
> > provide V2 interface or emulate memory-mapping by allocating my own
> blocks
> > of memory in xFetch and deallocating in xUnfetch. If future V4 IO
> routines
> >
>
> You can implement xFetch to always return NULL:
>
> int xFetch(sqlite3_file *pNotUsed1, i64 notUsed2, int notUsed3, void
> **pp){
>   *pp = 0;
>   return SQLITE_OK;
> }
>
> Then SQLite will always fallback to doing plain old xRead and xWrite.
>
>
>
> > introduce something new, one will not have the first option. So anyone in
> > the future should be aware that there are two points where data can be
> > needed and since one expects filling previously allocated block and
> another
> > expects pointer to the data, the complexity of understanding will grow.
> Or
> > is there a simple way to disable xFetch/xUnfetch on the VFS level?
> >
> > Max
> >
> >
> >
> >
> >
> > On Mon, Apr 8, 2013 at 3:33 PM, Richard Hipp <d...@sqlite.org> wrote:
> >
> > > On Mon, Apr 8, 2013 at 6:12 AM, Max Vlasov <max.vla...@gmail.com>
> wrote:
> > >
> > > > But I also noticed that if I provide
> > > > version 2 of vfs, I won't get benefits of file-mapping
> > > >
> > >
> > > That's how we implement backwards compatibility to legacy VFSes.
> > >
> > > --
> > > D. Richard Hipp
> > > d...@sqlite.org
>
>
>
> --
> D. Richard Hipp
> d...@sqlite.org


Re: [sqlite] SQLite 3.7.17 preview - 2x faster?

2013-04-08 Thread Max Vlasov
Richard,

It makes sense, but I see some possibility of future confusion here. Correct
me if I'm wrong: currently, if I have a VFS that requires special
preparations (for example, decompression), I have two choices: either provide
the V2 interface, or emulate memory mapping by allocating my own blocks of
memory in xFetch and deallocating them in xUnfetch. If future V4 I/O routines
introduce something new, one will not have the first option. So anyone in the
future should be aware that there are two points where data can be requested,
and since one expects a previously allocated block to be filled and the other
expects a pointer to the data, the complexity of understanding will grow. Or
is there a simple way to disable xFetch/xUnfetch at the VFS level?

Max





On Mon, Apr 8, 2013 at 3:33 PM, Richard Hipp <d...@sqlite.org> wrote:

> On Mon, Apr 8, 2013 at 6:12 AM, Max Vlasov <max.vla...@gmail.com> wrote:
>
> > But I also noticed that if I provide
> > version 2 of vfs, I won't get benefits of file-mapping
> >
>
> That's how we implement backwards compatibility to legacy VFSes.
>
> --
> D. Richard Hipp
> d...@sqlite.org
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite 3.7.17 preview - 2x faster?

2013-04-08 Thread Max Vlasov
Dan, I see, this seems to work. But I also noticed that if I provide
version 2 of the vfs, I don't get the benefits of file mapping (I assume
so because my query that took 9 seconds with file mapping is back to 27
seconds). I'm not sure this is right: xRead is very abstract by nature, so
nobody ever expected a particular implementation from sqlite, and when file
mapping is on, sqlite could still use it internally even if the user
provided version 2 of the io routines.

In other words, if sqlite doesn't expect special knowledge of file mapping
from a vfs-free client, then it should not expect it from a client that
uses a vfs.

Max


On Mon, Apr 8, 2013 at 1:56 PM, Dan Kennedy <danielk1...@gmail.com> wrote:

> On 04/08/2013 04:40 PM, Max Vlasov wrote:
>
>> On Mon, Apr 8, 2013 at 1:23 PM, Dan Kennedy <danielk1...@gmail.com>
>> wrote:
>>
>>
>>> Right. But a VFS is not obliged to support the new xFetch() and
>>> xUnfetch() methods (used to read data from a memory mapped file).
>>> And if it doesn't, SQLite will use xRead() exclusively.
>>>
>>> It always uses xWrite() to write - whether mmap is enabled or not.
>>>
>>>
>>>  Great, what is the correct way of not providing fetch procedures? Maybe
>> I
>> did something wrong?
>> - If I call original xFetch/xUnfetch from mine - no xRead called,
>> - if I provide Nil as the function address, then I get Access violation
>> () so probably sqlite tries to call it anyway.
>> - If return SQLITE_Error from xFetch, xUnfetch, I get sqlite logic error.
>>
>
> Set the iVersion field of your sqlite3_io_methods struct to 2 (not 3).
>
> The idea is that if you have an existing VFS, it should keep working
> as is without any modifications. If you find this is not the case, it
> may be a bug.
>
> Dan.
>
>
>


Re: [sqlite] SQLite 3.7.17 preview - 2x faster?

2013-04-08 Thread Max Vlasov
On Mon, Apr 8, 2013 at 1:23 PM, Dan Kennedy  wrote:

>
> Right. But a VFS is not obliged to support the new xFetch() and
> xUnfetch() methods (used to read data from a memory mapped file).
> And if it doesn't, SQLite will use xRead() exclusively.
>
> It always uses xWrite() to write - whether mmap is enabled or not.
>
>
Great, what is the correct way of not providing the fetch procedures? Maybe
I did something wrong?
- If I call the original xFetch/xUnfetch from mine, no xRead is called.
- If I provide nil as the function address, I get an access violation
(), so sqlite probably tries to call it anyway.
- If I return SQLITE_ERROR from xFetch/xUnfetch, I get an sqlite logic error.

Max
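
Dan's suggestion (advertise iVersion 2 in sqlite3_io_methods so that SQLite
falls back to xRead) can be illustrated with a small mock of the dispatch
rule. This is a hedged sketch in Python, not the real SQLite C API: the
names IoMethods and read_page are invented stand-ins for sqlite3_io_methods
and the pager's read path.

```python
# Hedged mock of the dispatch rule (invented names, not the real C API):
# xFetch/xUnfetch are only consulted when the io methods advertise
# iVersion >= 3; with iVersion 2 the pager falls back to xRead.
class IoMethods:
    def __init__(self, i_version, x_read, x_fetch=None):
        self.i_version = i_version
        self.x_read = x_read
        self.x_fetch = x_fetch

def read_page(methods, offset, n):
    """Mimics the pager's choice between the mmap and plain-read paths."""
    if methods.i_version >= 3 and methods.x_fetch is not None:
        return methods.x_fetch(offset, n)   # mmap path
    return methods.x_read(offset, n)        # legacy path

DATA = b"hello sqlite page data"

legacy = IoMethods(2, x_read=lambda off, n: ("xRead", DATA[off:off + n]))
mmap_capable = IoMethods(
    3,
    x_read=lambda off, n: ("xRead", DATA[off:off + n]),
    x_fetch=lambda off, n: ("xFetch", DATA[off:off + n]),
)

print(read_page(legacy, 0, 5))        # → ('xRead', b'hello')
print(read_page(mmap_capable, 0, 5))  # → ('xFetch', b'hello')
```

This also matches the symptoms reported above: a nil xFetch pointer or an
error return would never be reached on the legacy path, only a downgraded
iVersion is.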


Re: [sqlite] SQLite 3.7.17 preview - 2x faster?

2013-04-08 Thread Max Vlasov
On Sun, Apr 7, 2013 at 2:12 PM, Max Vlasov <max.vla...@gmail.com> wrote:

>
>
>
> On Thu, Apr 4, 2013 at 4:02 PM, Richard Hipp <d...@sqlite.org> wrote:
>
>> We would like to encourage people to try out the new code and
>> report both success and failure.
>>
>
>
> Not particularly about this draft version, but about my experience with
> memory-mapped files on Windows, if you don't mind.
> ...
>
> I don't know whether such scenario is possible with sqlite.
>
>
Finally I did some tests and didn't notice anything like that when creating
tables, probably because memory mapping is not currently used for inserting
and updating, so the problem I described doesn't seem to apply.

As for general queries, I have mixed feelings. At least one of my queries
took 9 seconds on the 3.7.17 draft instead of 27 seconds with 3.7.16.1, so
the speed improvement can be very noticeable in some cases. But as far as I
can see, the VFS stopped working transparently in this case. Shouldn't
xRead (and probably xWrite) still be in the chain of calls, just doing a
memcpy from the file-mapping regions instead of calling the file routines?
Otherwise many existing vfs filtering solutions (encryption, compression)
will no longer work when memory mapping is on.

Max
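
The behaviour asked for above can be sketched as a hypothetical layered
setup: the base layer serves xRead by copying from the mapped region, and a
filtering layer (a toy XOR "cipher" standing in for real encryption) stays
in the chain. This illustrates the proposal only; it is not how SQLite
3.7.17 actually behaves.

```python
# Sketch of the proposal (an assumption, not actual SQLite behaviour):
# the base layer serves xRead with a memcpy from the mapped region, so a
# filtering VFS layered on top keeps working whether mmap is on or not.
KEY = 0x5A  # toy XOR "encryption" key standing in for a real cipher

# The on-disk image as the OS would map it: encrypted bytes.
mapped_region = bytes(b ^ KEY for b in b"plaintext page")

def base_xread(offset, n):
    # With mmap on, this could still be satisfied by copying from the
    # mapped region instead of issuing a file-system read.
    return mapped_region[offset:offset + n]

def filter_xread(offset, n):
    # The filtering VFS decrypts whatever the layer below returns; it
    # never needs to know whether mmap was involved.
    return bytes(b ^ KEY for b in base_xread(offset, n))

print(filter_xread(0, 9))  # → b'plaintext'
```

Bypassing xRead, as the draft does on the mmap path, is equivalent to
handing the encrypted mapped_region bytes straight to the pager.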


Re: [sqlite] SQLite 3.7.17 preview - 2x faster?

2013-04-07 Thread Max Vlasov
On Sun, Apr 7, 2013 at 3:55 PM, Chris Smith  wrote:

> Possibly related:
> http://en.wikipedia.org/wiki/Thrashing_(computer_science)
>
>
That's an interesting direction. Surprisingly, if one googles
... site:microsoft.com "Thrashing" "memory-mapped"...
the results include a forum topic "Memory mapped file performance" where
the first contributor is Igor Tandetnik, and we know that Igor has given
very valuable answers on this list as well, so he (having good knowledge of
both win32 and sqlite) could also give his expert opinion on this topic.

Max


Re: [sqlite] SQLite 3.7.17 preview - 2x faster?

2013-04-07 Thread Max Vlasov
I don't know what's wrong with the link; I'm clicking the one from the
gmail thread and it works. Another way is to google [Unresponsive system
under some file-mapping related conditions]; the first result is the thread
link.

Max


On Sun, Apr 7, 2013 at 2:25 PM, Jean-Christophe Deschamps
wrote:

> Hi Max,
>
> The link is dead. I'd like to know more about this so can you provide us
> with a live link?
>
> Thanks.
>
>
>


Re: [sqlite] SQLite 3.7.17 preview - 2x faster?

2013-04-07 Thread Max Vlasov
On Thu, Apr 4, 2013 at 4:02 PM, Richard Hipp  wrote:

> By making use of memory-mapped I/O, the current trunk of SQLite (which will
> eventually become version 3.7.17 after much more refinement and testing)
> can be as much as twice as fast, on some platforms and under some
> workloads.  We would like to encourage people to try out the new code and
> report both success and failure.
>


Not particularly about this draft version, but about my experience with
memory-mapped files on Windows, if you don't mind.

When I worked with memory-mapped files on Windows two years ago, I
implemented a library for accessing files of virtually unlimited size using
a sliding-view approach. There was an interesting effect on the system as a
whole: when I wrote sequentially, at some point the whole system became
unresponsive. This is the important point: not just the application writing
to the file, but the whole system, so no Alt-Tab, no blinking caret in
other applications, and sometimes not even mouse movement. I tried to
report it on the MS forums (
http://social.msdn.microsoft.com/Forums/en-US/windowsgeneraldevelopmentissues/thread/81dd029f-2f55-49f2-bd02-1a8ceb0373eb),
but it seems this wasn't noticed. At the forum topic I added a small
procedure in pascal (it's still there) that shows the effect and can easily
be ported to any other language that supports the windows api directly.

Just now I tried to reproduce this while writing this message. The machine
is 64-bit Windows with 4 GB of memory. I started the program, writing to
the file up to 10 GB. No surprise: at about 5-6 GB, notepad (another
application) stopped responding to my key presses, the caret stopped
blinking, and Alt-Tab and the taskbar didn't work for about a minute. So I
could not do anything (!) on my computer for a minute or so while another
application did something using an official, documented API.

I don't know whether such a scenario is possible with sqlite. Only that on
Windows, memory-mapped files are still implemented as a very special
entity, sometimes having more privileges than other entities regardless of
the permissions of the application using them. I should probably do some
sqlite-specific tests to find out whether this affects sqlite, but before
that I wanted to share this information.
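
The access pattern described above can be sketched portably. Python's mmap
module stands in here for the Win32 CreateFileMapping/MapViewOfFile calls
used in the original pascal procedure, and the sizes are scaled down to a
few hundred kilobytes (the original report wrote a 10 GB file on a 4 GB
machine to trigger the stall).

```python
import mmap
import os
import tempfile

VIEW_SIZE = 64 * 1024       # one mapped "view" (a multiple of the mapping
                            # granularity on both Windows and Linux)
TOTAL_SIZE = 8 * VIEW_SIZE  # total file size to produce

path = os.path.join(tempfile.mkdtemp(), "big.bin")
with open(path, "wb") as f:
    f.truncate(TOTAL_SIZE)  # pre-size the file so each window can be mapped

with open(path, "r+b") as f:
    for offset in range(0, TOTAL_SIZE, VIEW_SIZE):
        # Map only the current window, fill it, then unmap: the
        # sliding-view approach for files larger than the address space.
        view = mmap.mmap(f.fileno(), VIEW_SIZE, offset=offset)
        view[:] = bytes([offset // VIEW_SIZE]) * VIEW_SIZE
        view.flush()
        view.close()

print(os.path.getsize(path))  # → 524288
```

On Windows, replacing mmap.mmap with direct MapViewOfFile calls and scaling
the sizes back up would reproduce the pattern from the forum post.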


Re: [sqlite] integrity_check "out of memory"

2013-01-23 Thread Max Vlasov
On Tue, Jan 22, 2013 at 12:33 PM, Dominique Pellé <dominique.pe...@gmail.com> wrote:

> Max Vlasov wrote:
>
> > Hi,
> >
> > I found with a web search that this error (out of memory for PRAGMA
> > integrity_check) appeared in the wild and there were some fixes
> > related to it.
>
> It looks perhaps like the bug that was fixed in this checkin:
>
> http://www.sqlite.org/src/info/120c82d56e
>
> Can you try and confirm?
>
>
The trunk version works ok, no limits noticed.

Max


Re: [sqlite] integrity_check "out of memory"

2013-01-22 Thread Max Vlasov
Thanks, Dominique

I will try asap, but looking at the arithmetic:

25 (the length of "Page ? is never used")
*
800 (the maximum I could get)

= 20,000 (the limit mentioned in the ticket)

it looks like this is the winner :)

Max



> Hi Max
>
> It looks perhaps like the bug that was fixed in this checkin:
>
> http://www.sqlite.org/src/info/120c82d56e
>
> Can you try and confirm?
>
> -- Dominique
>

