Re: [sqlite] Table was deleted on macOS

2019-10-23 Thread Eduardo Morras
On Tue, 15 Oct 2019 21:11:05 +
"t...@qvgps.com"  wrote:

> Hi Group,
> 
> here is a very strange and rare fault which one of our users
> experienced on macOS .
> 
> He was working in our app for a while creating a lot of new data.
> So over several hours many INSERT INTO mytable 
> Without any errors.
> 
> Then he did a query SELECT COUNT(*) FROM mytable WHERE  (deleted=0 OR
> deleted IS NULL) AND IdCat=2 and an exception occurs:
> "no such table: mytable"
> 
> The client sent us the database and the table "mytable" was really
> gone. The database also seems to be fine, no errors.
> How can this happen?
> In our app is no DROP TABLE command, what else can cause a table to
> be deleted?
> 
> We have a few hundred copies of our app out, and this is the first
> time this error happens. Its on macOS and the sqlite-version is
> 3024000

Did he close the transaction? If he did

BEGIN
CREATE TABLE ...
INSERT ...
...
INSERT
...
SELECT COUNT(*)

then the table never existed outside that transaction, and on close the whole
transaction is rolled back. I say he, but it could also be the application
used to manage the db that forgot the COMMIT after the INSERTs.
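A quick sketch of that failure mode (Python's sqlite3 module with manual transaction control; the file name is made up for illustration):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Writer: everything happens inside one transaction that is never committed.
con = sqlite3.connect(path, isolation_level=None)  # manual transaction control
con.execute("BEGIN")
con.execute("CREATE TABLE mytable (id INTEGER, deleted INTEGER)")
con.execute("INSERT INTO mytable VALUES (1, 0)")
con.close()  # no COMMIT: the whole transaction, CREATE TABLE included, rolls back

# Reader: the table never made it into the database file.
con = sqlite3.connect(path)
try:
    con.execute("SELECT COUNT(*) FROM mytable")
    err = None
except sqlite3.OperationalError as e:
    err = str(e)
con.close()
print(err)  # no such table: mytable
```

The database file itself checks out fine afterwards, which matches what the client observed.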

> 
> 
> Thanks
> Tom
> 
> 
> 
> /
> **   Flemming Software Development CC
> **   Thomas Flemming
> **   PO Box 81244
> **   Windhoek, Namibia
> **   http://www.quovadis-gps.com
> **   mail  t...@qvgps.com<mailto:t...@qvgps.com>
> **   +264 (0)81 3329923Nam mobile
> **   +49  (0)175 7313081   D mobile
> **   +49  (0)6182 8492599  D office
> ***/
> 
> _______
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras 


Re: [sqlite] Row locking sqlite3

2019-03-27 Thread Eduardo Morras
On Fri, 22 Mar 2019 09:25:24 -0500
Peng Yu  wrote:

> Hi,
> 
> I see that sqlite3 still does not support row locking. This package
> tries to resolve this problem. But it does not have a standard build
> process for Linux.
> 
> https://github.com/sqlumdash/sqlumdash/
> 
> Are there other packages similar to sqlite3 but support row locking?

Row locking of how many rows?

In some projects I use a hidden column with a single letter that flags the row
state, and make queries filter on that flag. You can use a simple 'L' flag on
the rows you want locked and add a WHERE clause so writes don't touch them.

I use it to soft-delete rows; this helps fight internal fragmentation,
but it adds the cost of including the column in all indexes. At idle times,
delete all the flagged rows or recreate the table with only the live rows.

Of course, if you plan to lock almost all rows, reverse the logic. If you want 
to lock and unlock fast, use other method or PostgreSQL.

You can create a virtual table that adds this feature pretty easily.
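A minimal sketch of the flag idea (table and column names are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, val TEXT, flag TEXT)")
con.executemany("INSERT INTO items VALUES (?, ?, ?)",
                [(1, "a", None), (2, "b", "L"), (3, "c", None)])

# "Lock" row 3 by flagging it.
con.execute("UPDATE items SET flag = 'L' WHERE id = 3")

# Every write filters on the flag, so flagged ("locked") rows stay untouched.
con.execute("UPDATE items SET val = upper(val) WHERE flag IS NOT 'L'")

rows = con.execute("SELECT id, val FROM items ORDER BY id").fetchall()
print(rows)  # [(1, 'A'), (2, 'b'), (3, 'c')]
```

This is cooperative locking only: it works because every writer agrees to check the flag.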

> Thanks.
> 
> -- 
> Regards,
> Peng


---   ---
Eduardo Morras 


Re: [sqlite] Is there a way to select using cid?

2019-03-25 Thread Eduardo Morras
On Sat, 23 Mar 2019 12:57:55 +0100 (CET)
Eric  wrote:

> On Fri, 22 Mar 2019 16:34:20 -0500, Peng Yu 
> wrote:
> > There are cid's for each table. Is there a way to use "select" with
> > cid's instead of their names? Thanks.
> > 
> > $ sqlite3 dbfile <<EOF
> > create table test (id integer primary key, value text);
> > insert into test (id, value) values (1, 'abc');
> > .mode column
> > .headers on
> > select * from pragma_table_info('test');
> > EOF
> > cid         name        type        notnull     dflt_value  pk
> > ----------  ----------  ----------  ----------  ----------  ----------
> > 0           id          integer     0                       1
> > 1           value       text        0                       0
> 
> Why on earth would you want to do that?

To write obfuscated code, perhaps for a prize, or for trojan development, etc.

> Eric
> -- 
> ms fnd in a lbry


---   ---
Eduardo Morras 


Re: [sqlite] Performance of writing blobs

2018-06-12 Thread Eduardo Morras
On Tue, 12 Jun 2018 14:13:33 +0200
Dominique Devienne  wrote:

> 
> You're right of course. Thank you Clemens.
> 
> With synchronous = OFF, which suits my use-case here, the commit-time
> just vanishes,
> and even out-performs HDF5 now (see below). I might still prefer HDF5,
> mainly because
> the N-d typed arrays are self-describing and thus visible using
> Hdf5View for example, instead
> of completely opaque in SQLite, but now I know SQLite in non-durable
> mode is as fast or even
> faster for these smallish blob sizes than HDF5. I'll need to
> double-check for larger sizes
> (individual blobs get upward of 1GB in size), with my large 31GB, 200K
> blobs, 1.2M entities project.

I'm joining this thread late, but here are some comments; I use sqlite in
environments similar to yours:

- Don't use synchronous=OFF if you need more speed; use pragma
journal_mode=MEMORY or pragma journal_mode=OFF instead. They aren't any
safer, but they are faster,
- As you noted, don't preallocate a zero-blob; insert the blob directly,

- Don't delete/update blobs, instead mark the row as deprecated/invalid
and insert a new one,

- If you have blobs larger than 2GB you'll need to split them, sqlite
can't work with blobs larger than 2GB,

- Use the misc/unionvtab extension if you need a safe environment; it's a
bit tricky to use, but you get a RAID0/JBOD-style db (split across many
files) easily, and it's faster,

- Compile sqlite with SQLITE_DIRECT_OVERFLOW_READ; with this, your blob
reads don't pollute the sqlite cache,

- Compress the blobs with the misc/compress extension while writing; the
less you write, the faster you read. See the sqlar project (http://sqlite.org/sqlar).
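The mark-as-deprecated tip from the list above might look like this (schema and helper function are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE blobs (
    id INTEGER PRIMARY KEY,
    name TEXT,
    valid INTEGER DEFAULT 1,   -- 0 = deprecated; the row is kept, not updated
    data BLOB)""")

def put(name, data):
    # Instead of UPDATE/DELETE on the blob row, deprecate it and append.
    con.execute("UPDATE blobs SET valid = 0 WHERE name = ? AND valid = 1", (name,))
    con.execute("INSERT INTO blobs (name, data) VALUES (?, ?)", (name, data))

put("a", b"v1")
put("a", b"v2")  # an "update": the old row is merely marked invalid

live = con.execute(
    "SELECT data FROM blobs WHERE name = 'a' AND valid = 1").fetchone()[0]
total = con.execute("SELECT COUNT(*) FROM blobs").fetchone()[0]
print(live, total)  # b'v2' 2
```

Appending instead of rewriting avoids churning overflow pages; deprecated rows can be purged in a maintenance window.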

HTH

---   ---
Eduardo Morras 


[sqlite] Request to add define SQLITE_PREPARE_NONE

2018-06-02 Thread Eduardo Morras

Hello Dr. Hipp,

There is a #define SQLITE_PREPARE_PERSISTENT 0x01 line in the sqlite3 source
code for the sqlite3_prepare_v3() flags.

Could you add SQLITE_PREPARE_NONE meaning no flag set?

Thanks

---   ---
Eduardo Morras 


Re: [sqlite] About storage of large amounts of image data

2018-05-08 Thread Eduardo Morras
On Tue, 8 May 2018 14:08:05 +0100
Mike Clark <cyberherbal...@gmail.com> wrote:

>  Hi List!
> 
> I'm developing a project that deals with image files and am
> considering storing all the images in the SQLite database itself,
> rather than (or in addition to) the file system.  Since the
> prospective users will probably be dealing with hundreds of gigabytes
> in their use of the project, I am wondering if this is an effective
> or efficient use of SQLite -- or safe, because of the risk of data
> corruption.
> 
> I know the documentation says that SQLite can handle up to 140 TB (do
> we know of anyone who is doing this?), so hundreds of gigs is clearly
> doable.
> 
> Is it advisable, however?

Completely. I use a filesystem over sqlite on some projects with terabyte db
sizes and zero problems. One has a write-file-once, update-occasionally,
lots-of-reads (no deletes) workload. Read speed is faster than XFS and UFS2.

You need to tune sqlite for big databases and direct blob reads (check the
page Dr. Hipp linked for the compile options).

 
> ?(Sent this about 2 wks ago, but it never posted to the list and no
> moderator response either).?

---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] JDBC driver experience

2018-04-19 Thread Eduardo Morras
On Thu, 19 Apr 2018 09:37:20 -0700
"dmp" <da...@dandymadeproductions.com> wrote:

> Currently I trying to complete a plugin for Ajqvue
> that transfers data from others databases to SQLite.

Hi, I made an ODBC virtual table to achieve this. It's closed source, but
the difficulty of developing one from scratch is easy-to-medium.

Pass the connection string, user and password, and all queries to vtab
are redirected to the other dbms. I began with the csv file virtual
table and went from there.

Check SQL-MED standard and postgresql docs for syntax.


---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] How many AUTOINCREMENT tables are in your schema?

2018-03-22 Thread Eduardo Morras
On Fri, 16 Mar 2018 11:37:24 -0400
Richard Hipp <d...@sqlite.org> wrote:

> This is a survey, the results of which will help us to make SQLite
> faster.
> 
> How many tables in your schema(s) use AUTOINCREMENT?

Within all my projects 4 (very old projects)

I don't use it on current projects.

> I just need a single integer, the count of uses of the AUTOINCREMENT
> in your overall schema.  You might compute this using:
> 
>sqlite3 yourfile.db '.schema --indent' | grep -i autoincrement |
> wc -l
> 
> Private email to me is fine.  Thanks for participating in this survey!
> -- 
> D. Richard Hipp
> d...@sqlite.org


---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Speed issue of SELECT in my application

2018-01-18 Thread Eduardo Morras
On Wed, 17 Jan 2018 17:59:22 +
Simon Slavin <slav...@bigfraud.org> wrote:

> Folks.  Nick published a figure of 60ms for his search.  That's not
> unusually slow.  There was no request to shave every last millisecond
> off that figure.  There wasn't even a statement that it was too
> slow.  No need to scare the guy by mentioning twenty complications of
> SQLite which may be irrelevant.

... I read "Speed issue..." in the Subject, but you're right:
60ms is fast enough on a common deployment configuration.

> If Nick needs to save more time he'll need to post more details of
> what he's doing.
> 
> Simon.

P.S. I sent the mail from my current contractor mail, sorry for that.

---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Most efficient way to detect on-disk change

2017-11-07 Thread Eduardo Morras
On Tue, 07 Nov 2017 18:07:42 +
Wout Mertens <wout.mert...@gmail.com> wrote:

> I'm working with a db that's only written to in transations, and each
> transaction increases a db-global version counter.
> 
> This means that I can cache all reads, unless the version changed.
> 
> What would be the most efficient way to make sure I *never* serve
> stale data?
> 
> Right now everything's a single process, so it's really easy, just
> clear the cache on every write. However, I want to be prepared for
> the near future where I will have multiple processes using this db
> file.
> 
> I'm thinking that to detect writes, this might be a safe approach:
> 
> Before serving any cached read, check the timestamp on the wal file.
> If it changed, read the global version. If it changed, clear the
> cache. Otherwise, serve the cached read.
> 
> Is it safe to assume that all writes would mean change of the wal file
> timestamp?
> More importantly, is it faster to check the timestamp or would a
> prepared query for the version actually be faster (and safer)?
> 
> Also, I'm using WAL right now, but I wonder if that's really useful
> given the single-writer-at-a-time?

You can define triggers on insert, update and delete that fire a user-defined
function to warn your other threads or an external process (I use a similar
setup on an AS400/DB2).
Something like this:

CREATE TRIGGER tg_night_watcher_insert AFTER INSERT ON table_name_to_watch_up
 BEGIN
  SELECT your_night_watcher();
 END;
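A runnable sketch of the same idea, using Python's sqlite3 to register the user-defined function (all names are illustrative):

```python
import sqlite3

notified = []

con = sqlite3.connect(":memory:")
# The "night watcher": any callback that alerts other threads or processes.
con.create_function("night_watcher", 0, lambda: notified.append("insert seen"))

con.execute("CREATE TABLE watched (id INTEGER)")
con.execute("""CREATE TRIGGER tg_watch AFTER INSERT ON watched
               BEGIN
                 SELECT night_watcher();
               END""")

con.execute("INSERT INTO watched VALUES (1)")  # fires the trigger
print(notified)  # ['insert seen']
```

In a real setup the callback would signal a condition variable, write to a pipe, etc., instead of appending to a list.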

HTH

> 
> Thank you for your insights,
> 
> Wout.


---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Any change to make this query better?

2017-10-21 Thread Eduardo Morras
On Fri, 20 Oct 2017 14:21:38 -0400
"jose isaias cabrera" <jic...@barrioinvi.net> wrote:

> 
> Greetings!
> 
> This takes about 1.5 minutes to run with sqlite v3.20.1 with about
> 200K records
> 
> sqlite> explain query plan
>...> SELECT
>...>   O.XtraF AS PortalID,
>...>   O.ProjID,
>...>   O.A_No AS GTXNo,
>...>   O.proj AS ProjName,
>...>   O.lang AS Target,
>...>   (SELECT max(edate) from LSOpenJobs AS I WHERE I.ProjID =
> O.ProjID AND
> 
>...>I.PSubClass = 'QuoteAppr' ) AS QuoteAvailable,
>...>   (SELECT min(edate) from LSOpenJobs AS E WHERE E.ProjID =
> O.ProjID AND
> 
>...>PSubClass = 'Delivery') AS DeliveryDate,
>...>   sum(O.Xtra8) AS PriceUSD,
>...>   0 AS PriceCAD,
>...>   sum(O.ProjFund) AS TransferCost,
>...>   O.XtraE AS Department,
>...>   O.XtraA AS BillTo,
>...>   O.pmuk AS Contact,
>...>   '-' AS Notes1,
>...>   '-' AS Notes2
>...>   from LSOpenJobs AS O WHERE lower(cust) = 'xerox' AND
>...> Xtra9 LIKE  '2017-09-%'
>...>  GROUP BY ProjID,lang HAVING sum(ProjFund) >
> 0;
> 0|0|0|SCAN TABLE LSOpenJobs AS O USING INDEX OjPid
> 0|0|0|USE TEMP B-TREE FOR GROUP BY
> 0|0|0|EXECUTE CORRELATED SCALAR SUBQUERY 1
> 1|0|0|SEARCH TABLE LSOpenJobs AS I USING INDEX OjPid (ProjID=?)
> 0|0|0|EXECUTE CORRELATED SCALAR SUBQUERY 2
> 2|0|0|SEARCH TABLE LSOpenJobs AS E USING INDEX OjPid (ProjID=?)
> sqlite>
> 

You could try indexing on (ProjID, PSubClass, lower(cust)).

All the work happens on the same table, LSOpenJobs, which is effectively
joined with itself through the correlated subqueries:

SELECT max(edate) from LSOpenJobs AS I WHERE I.ProjID = O.ProjID AND
I.PSubClass = 'QuoteAppr'

You could also try a WITH clause around the main query and sub-select from it
to get those two values (max(edate) and min(edate)).

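One way the rewrite could look, on a cut-down version of the schema (only the columns needed here; the conditional-aggregate CTE is one possible way to collapse the two correlated subqueries into a single grouped scan):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE LSOpenJobs (ProjID INTEGER, PSubClass TEXT, edate TEXT)")
con.executemany("INSERT INTO LSOpenJobs VALUES (?, ?, ?)", [
    (1, "QuoteAppr", "2017-09-01"), (1, "QuoteAppr", "2017-09-05"),
    (1, "Delivery",  "2017-09-10"), (1, "Delivery",  "2017-09-08")])

# The suggested index (restricted to the columns present in this sketch):
con.execute("CREATE INDEX idx_pid ON LSOpenJobs (ProjID, PSubClass, edate)")

# One grouped scan instead of two correlated scalar subqueries per output row:
rows = con.execute("""
    WITH dates AS (
      SELECT ProjID,
             max(CASE WHEN PSubClass = 'QuoteAppr' THEN edate END) AS QuoteAvailable,
             min(CASE WHEN PSubClass = 'Delivery'  THEN edate END) AS DeliveryDate
      FROM LSOpenJobs
      GROUP BY ProjID)
    SELECT * FROM dates""").fetchall()
print(rows)  # [(1, '2017-09-05', '2017-09-08')]
```

The real query would join this CTE back to the main GROUP BY instead of selecting from it directly.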
HTH
---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] When is db size an issue?

2017-10-01 Thread Eduardo Morras
On Wed, 27 Sep 2017 08:41:25 -0400
"Jason T. Slack-Moehrle" <slackmoeh...@gmail.com> wrote:

> Hello All,
> 
> Off and on for the last few years I have been writing an e-mail
> client to scratch a personal itch. I store the mail in SQLite and
> attachments on the file system. However, I recently brought in all of
> my mail for the last 15 years from mbox format. Now, my database size
> is over 10gb. I'm not seeing any real performance issues and my
> queries are executing nice and fast during search.
> 
> However, does anyone have any thoughts about the size? Should I be
> concerned? Is there a theoretical limit I should keep in the back of
> my mind?

I developed and use a filesystem over a sqlite db, focused on file sizes
<1MB and a WORM (write-once-read-many) access pattern. Some sqlite dbs
are 500GB-1TB (a few even bigger) on mechanical disks without speed
problems.

They are easy to back up, some use FTS, and the filesystem dbs can be
attached/detached. One piece of advice: increase the sqlite internal cache;
don't use its default value.

> 
> Jason



---   ---
Eduardo Morras <emorr...@yahoo.es>



Re: [sqlite] performance issue on a read only database

2017-06-13 Thread Eduardo Morras
On Tue, 13 Jun 2017 11:53:05 +0200
rv.gauth...@free.fr wrote:

> Hi all,
> 
> we are using sqlite3 on an embedded application.
> The database is opened in read only mode and can't be modified by the 
> application.
> 
> I noticed that the first SELECT statement after a BEGIN TRANSACTION 
> takes at least 14 ms.
> All subsequent queries in the same transaction are taking near 0 ms.

The first SELECT makes sqlite do all the work needed to make your
query ACID-compliant (acquire locks, malloc RAM, generate structs,
open files, etc...)

> 
> If I omit the BEGIN TRANSACTION, all queries are taking at least 14
> ms.

Because each query is automatically wrapped in BEGIN TRANSACTION /
query / END TRANSACTION, so the work to make your query ACID is done on
every query, not only on the first one.
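A small sketch of the difference (with an in-memory db the gap is tiny; on a real file with locking, the per-query setup is what adds the ~14 ms):

```python
import sqlite3
import time

con = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

# Autocommit: each SELECT is wrapped in its own implicit transaction.
t0 = time.perf_counter()
for _ in range(100):
    con.execute("SELECT count(*) FROM t").fetchone()
auto = time.perf_counter() - t0

# One explicit transaction: the setup cost is paid once, on the first SELECT.
t0 = time.perf_counter()
con.execute("BEGIN")
for _ in range(100):
    con.execute("SELECT count(*) FROM t").fetchone()
con.execute("COMMIT")
grouped = time.perf_counter() - t0

print(auto, grouped)  # the gap is far larger on an on-disk database
```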

> I tried also to do a unique BEGIN TRANSACTION after opening the 
> database. This fixes the timing issue, but my memory increases 
> constantly.

... Do you know what a never-ending transaction implies? In your case it's a
read-only db and I don't think it would be a problem, but in normal use cases
it could be catastrophic.

> Is there a way (pragma, compile flags, ...) to gain these 14 ms for
> the first query ?

Try opening with a URI and check the 'immutable' parameter
(https://www.sqlite.org/draft/c3ref/open.html); perhaps you can gain
some ms, but it's dangerous, I warn you.

> 
> Thanks.
> 
> 
> Hervé Gauthier.
> 
> 


---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] SQLite in memory

2017-05-18 Thread Eduardo Morras
On Wed, 17 May 2017 22:18:19 -0700
Gabriele Lanaro <gabriele.lan...@gmail.com> wrote:

> Hi, I'm trying to assess if the performance of my application is
> dependent on disk access from sqlite.
> 
> To rule this out I wanted to make sure that the SQLite DB is
> completely accessed from memory and there are no disk accesses.
> 
> Is it possible to obtain this effect by using pragmas such as
> cache_size?
> 
> Another solution is to copy the existing db to a :memory: db but I'd
> like to achieve the same effect without doing so (because it will
> require substantial modification of the application). For the sake of
> argument, let's image that using :memory: db is not an option.
> 
> Also using a ramdisk is not an option because I don't have root
> access to the machine.

What OS are you using?

You can use these tips to make the app less dependent on disk I/O access:

a) change where temporary tables (materialized views, subqueries, temp
tables) and transient indices are stored, so that RAM is always used
(pragma temp_store=2),

b) increase the cache size, the more the better (the OS disk cache is shared
with other processes and is slower); if the cache is equal to or bigger than
your db, it will all fit in RAM,

c) set the config option SQLITE_CONFIG_STMTJRNL_SPILL=-1 if you use statement
journal files (check http://www.sqlite.org/tempfiles.html#stmtjrnl),

d) use wal mode to avoid *-shm files, and set wal checkpointing,

e) use the mmap_size pragma to minimize I/O (check
http://www.sqlite.org/mmap.html; it has disadvantages too)
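The pragma-based tips above, as they might be issued from code (all values are illustrative; tune them to your RAM):

```python
import os
import sqlite3
import tempfile

con = sqlite3.connect(os.path.join(tempfile.mkdtemp(), "app.db"))

con.execute("PRAGMA temp_store = 2")            # (a) temp objects in RAM
con.execute("PRAGMA cache_size = -1048576")     # (b) ~1 GiB cache (KiB units)
con.execute("PRAGMA journal_mode = WAL")        # (d) WAL mode
con.execute("PRAGMA wal_autocheckpoint = 10000")
con.execute("PRAGMA mmap_size = 1073741824")    # (e) 1 GiB memory map

ts = con.execute("PRAGMA temp_store").fetchone()[0]
print(ts)  # 2
```

SQLITE_CONFIG_STMTJRNL_SPILL (c) is a compile/sqlite3_config-level setting and is not reachable from a pragma.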


> Thanks,
> 
> Gabriele

HTH

---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Developing a SQLite3 DB remotely

2017-03-24 Thread Eduardo Morras
On Thu, 23 Mar 2017 13:45:58 -0400
Stephen Chrzanowski <pontia...@gmail.com> wrote:

> Right up front, I'm fully aware about the How To Corrupt document
> (Believe me, I've preached about network access in this forum), and
> with the development tools I have in Windows vs the destination OS
> and the purpose of the DB, I'm asking for other peoples experience on
> remote developing a database.
> 
> The SQLite editor of choice for me is SQLite Expert Pro (SEP).  The
> remote system is a Linux based OS.  The databases job is to keep
> track of jobs, hosts, last completed, priorities of the jobs, etc.
> The Linux machine is going to be running a BASH script that runs in
> an infinite loop, periodically poking the database to decide what to
> run next based on a schedule.  There will be frequent sleep periods
> between SQL calls.
> 
> While I'm developing the database, the infinite looping in the bash
> script isn't going to exist.  The script runs, does its thing (To
> start, just ECHO what I want it to do), update the database on
> successful completion, then check for the next job if any are
> available.  When the scripts are done running, I want to re-run a
> query in the SEP to confirm what I've done in the BASH script did
> what it was supposed to do.
> 
> The question for the experienced multi-machine & multi-OS DB
> designers, has anyone ever run into a problem where EXTREMELY LIGHT
> WEIGHT use of the database causes corruption?  What would be a
> recommended way to setup the connections for a DEV-only arena where
> the below paragraph describes?
> 
> By EXTREMELY LIGHT WEIGHT use, I mean I *DO* guarantee that although
> I have one permanent open file handle to the database via SEP, and
> that Linux OS will only open a handle  periodically while I'm writing
> the script, multiple accesses of reading or writing to the DB at the
> exact same time just will not happen.  Once development stops, it'll
> be just this one BASH script that will ever touch the database.

I'd try it this way:

a) write a file with the query on the server
b) a bash script passes it to the sqlite3 command-line interface, which
writes the output to another text file
c) read the file with the result
d) delete the files

You can grow or complicate these steps as you need.
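The a)-d) loop, sketched in Python for portability (a real deployment would pipe the query file through the sqlite3 CLI in bash; all paths and table names are invented):

```python
import os
import sqlite3
import tempfile

workdir = tempfile.mkdtemp()
db = os.path.join(workdir, "jobs.db")
init = sqlite3.connect(db)
init.executescript(
    "CREATE TABLE jobs (name TEXT, done INTEGER);"
    "INSERT INTO jobs VALUES ('backup', 0);")
init.close()

qfile = os.path.join(workdir, "query.sql")
outfile = os.path.join(workdir, "out.txt")

# a) the client drops a query file on the server
with open(qfile, "w") as f:
    f.write("SELECT name FROM jobs WHERE done = 0;")

# b) the server-side script runs it and writes the output file
con = sqlite3.connect(db)
with open(qfile) as f, open(outfile, "w") as out:
    for row in con.execute(f.read()):
        out.write("|".join(map(str, row)) + "\n")
con.close()

# c) the client reads the result, then d) both files are deleted
with open(outfile) as f:
    result = f.read()
os.remove(qfile)
os.remove(outfile)
print(result)  # backup
```

Only one process ever holds the database open at a time, which sidesteps the network-filesystem corruption concerns.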

For the network server code, repository has an example server 
http://www.sqlite.org/src/artifact/a2615049954cbb9c and timeline at 
http://www.sqlite.org/src/finfo?name=src/test_server.c 


---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] SQLite 3.18.0 pre-release snapshot

2017-03-07 Thread Eduardo Morras
On Mon, 6 Mar 2017 18:52:48 -0500
Richard Hipp <d...@sqlite.org> wrote:

> On 3/6/17, Simon Slavin <slav...@bigfraud.org> wrote:
> >
> >> See
> >> https://www.sqlite.org/draft/pragma.html#pragma_optimize for
> >> additional information.
> >
> > I'm sure this is extremely far-future-looking, but a default mask
> > of 254 (0xfe) might be better than the stated default of 14 (0x0e).
> 
> Default mask changed to 0xfffe, which allows for up to 14 new
> default-on optimizations and up to 48 new default-off optimizations.

Could a trigger be fired on optimize? This way we could add database/schema 
specific optimizations (Delete all rows in table tab where column value is 
NULL, for example)

Thanks for the great work.

> -- 
> D. Richard Hipp
> d...@sqlite.org


---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Read-only access which does not block writers

2016-11-24 Thread Eduardo Morras
On Thu, 24 Nov 2016 08:54:47 -0500
Richard Hipp <d...@sqlite.org> wrote:

> On 11/24/16, Florian Weimer <fwei...@redhat.com> wrote:
> > I'd like to replace the use of Berkeley DB in RPM with SQLite.
> >
> > The scenario is special in the follow way.  There is no database
> > server, all access goes directly to the database.  Unprivileged
> > users without write access to the RPM database are expected to run
> > read-only queries against the database.  Privileged users
> > (basically, root) is expected to use locking to exclude concurrent
> > writers.  But read-only users should not be able to stop
> > acquisition of a write lock.
> >
> > Is there a way to do this with SQLite?
> 
> The readers can open the database using URI filenames
> (https://www.sqlite.org/uri.html) with query parameters "mode=ro" and
> "locking=0".  That will prevent the readers from blocking the writer.
> But, if a write happens in the middle of a read, the reader might see
> inconsistent data and report SQLITE_CORRUPT.  This is harmless in the
> sense that the database file is not really corrupt (the reader is
> merely seeing parts of the files from two different points in time)
> and subsequent reads should still work.  If you are unlucky, a write
> that happens at the same time as a read might cause the reader to
> return incorrect results, so the reader can never be 100% sure that
> the answer it gets back is correct.

If the writer safely forces a schema change that is no real change to the
schema, only an increase of the schema_version pragma, the readers can
check that condition at each sqlite3_step.

I don't know how to trigger that cleanly, and changing the pragma value
directly is discouraged.
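A sketch of a reader watching for the bump (Python sqlite3; only the mode=ro URI parameter is shown, the locking parameter is left out, and file names are invented):

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "rpm.db")
writer = sqlite3.connect(db)
writer.execute("CREATE TABLE pkgs (name TEXT)")
writer.commit()

# A reader opened read-only through a URI filename.
reader = sqlite3.connect("file:%s?mode=ro" % db, uri=True)
seen = reader.execute("PRAGMA schema_version").fetchone()[0]

# The writer changes the schema; sqlite bumps schema_version automatically.
writer.execute("CREATE TABLE extra (x INTEGER)")
writer.commit()

now = reader.execute("PRAGMA schema_version").fetchone()[0]
print(now > seen)  # True: the reader detects the change and can re-validate
```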


> 
> How important is it to you that the reader always get a correct
> answer?
> -- 
> D. Richard Hipp
> d...@sqlite.org

---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Hung query question

2016-11-18 Thread Eduardo Morras
On Fri, 18 Nov 2016 19:20:06 +
David Raymond <david.raym...@tomtom.com> wrote:

> I've got a query that I've tried a few times here that seems like
> it's hanging up on something and I'm wondering if it's just some
> brain dead thing I'm forgetting or doing wrong.
> 
> I've got a database with a bunch of records, and am trying to
> populate a table in another database with coordinate extremes for
> each state. Using the CLI I open up the side database, attach the
> main one and run
> 
> insert into coordExtremes select State, min(Latitude), max(Latitude),
> min(Longitude), max(Longitude) from foo.bar group by State;
> 
> Twice I've left that running overnight and it's still been sitting
> there not completed the next day when I came in. I gave up on using
> the CLI to do it and wrote a quick Python script to scan through the
> whole big table once and keep track of the extremes, and it finished
> in all of 15 minutes. So I'm left scratching my head as to what's up.
> If anyone can think of something or sees where I'm being an idiot,
> please let me know.

Is python using a bigger cache? Increase the cache with a pragma in the CLI.

Set it to 100-500MB for the foo db; there's no need to waste cache on the
destination db (where you insert the data). Change foo to the real origin
db name:

pragma foo.cache_size=-500000

Why not use an R*Tree virtual table?  http://www.sqlite.org/rtree.html
---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Import 5000 xml files in a sqlite database file

2016-10-23 Thread Eduardo Morras
On Sat, 22 Oct 2016 19:26:42 +0200
bob_sql...@web.de wrote:

> Hi,
> 
> I have more than 5000 xml files. All files have the same
> xml-structure. 
> 
> Each file has different values (timestamps, numbers and strings). I
> would like to put all these values in a sqlite database tabke, all in
> one table. => Import the data values into a sqlite database table.
> 
> Can you please tell me a software program, that can do this quickly?

You can convert them to json and use sqlite3 json capabilities.
Duckduckgo search engine takes me to:

http://web-notes.wirehopper.com/2013/12/06/linux-command-line-convert-xml

http://openlife.cc/blogs/2013/november/translating-reliably-between-xml-and-json-xml2json

Which describes some apps to convert from xml to json. Some are in
javascript, python, php, perl and others are online converters.
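With the files converted, the load itself is simple; a sketch with one stand-in XML document (structure invented, and it assumes sqlite was built with the JSON functions):

```python
import json
import sqlite3
import xml.etree.ElementTree as ET

# Stand-in for one of the 5000 identically structured XML files.
xml_doc = "<record><ts>2016-10-22</ts><num>42</num><name>bob</name></record>"

# Flatten the XML into one JSON object per file/row.
rec = {child.tag: child.text for child in ET.fromstring(xml_doc)}

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE docs (body TEXT)")  # one JSON document per row
con.execute("INSERT INTO docs VALUES (?)", (json.dumps(rec),))

# sqlite3's json functions pull the values back out at query time.
row = con.execute(
    "SELECT json_extract(body, '$.name'), json_extract(body, '$.num') FROM docs"
).fetchone()
print(row)  # ('bob', '42')
```

Since every XML element's text lands as a JSON string here, values come back as text; cast or convert if you need typed columns.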


> Thank you for your answers.
> 
> Best regards 
> 
> Bob


---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Virtual table

2016-09-18 Thread Eduardo Morras
On Wed, 31 Aug 2016 10:34:05 -0300
Maria de Jesus Philadelpho <jesus.ph...@gmail.com> wrote:

> Hi,
> 
> I implement the SQLite extension, virtcsv, which allows attaching a
> CSV file as a virtual table.  At a command line everything works just
> fine, why I can't see the result of a select * from table using the
> SQLitestudio 3.0.7?

Because SQLiteStudio must load the sqlite csv virtual table extension before
use, or be compiled with it.

Read the SQLiteStudio manual on how to achieve that; I don't use SQLiteStudio.

Perhaps you can get better answers from the SQLiteStudio developers.

> See the attached snapshot.

The sqlite mailing list doesn't support attachments.

> regards,
> 
> Maria Azevedo
> --


---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Convert mysql to sqlite

2016-09-10 Thread Eduardo Morras
On Sat, 10 Sep 2016 14:24:48 -0700
Scott Doctor <sc...@scottdoctor.com> wrote:

> I have a database with a few tables and about 140MB of data in 
> it that exists as a MySQL database. Using MySQL workbench I can 
> export the data, but it dumps as a SQL dump with its flavor of 
> sql. I want to convert this database into a sqlite database. 
> Nothing fancy in it. No triggers, procedures, or foreign keys. 
> Tables are normal structure with a pk field and some text and 
> integer fields in each table.
> 
> Anyone know of a utility to directly convert from MySQL to sqlite?
> 

You can export as csv and import them in sqlite.
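A sketch of the import side with Python's csv module (the sqlite3 CLI's .mode csv / .import does the equivalent; the sample data is invented):

```python
import csv
import io
import sqlite3

# Stand-in for a CSV file exported from MySQL Workbench.
dump = io.StringIO("id,name,score\n1,alice,10\n2,bob,20\n")

reader = csv.reader(dump)
header = next(reader)  # use the header row as the column list

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (%s)" % ", ".join(header))
con.executemany("INSERT INTO t VALUES (?, ?, ?)", reader)

total = con.execute("SELECT sum(score) FROM t").fetchone()[0]
print(total)  # 30
```

Declare proper column types in the CREATE TABLE if you want sqlite's type affinity rather than everything arriving as text.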

> -
> Scott Doctor
> sc...@scottdoctor.com
> -----

---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Best way to "split" a large DB into two DBs

2016-09-07 Thread Eduardo Morras
On Wed, 7 Sep 2016 18:08:50 +0200
Dominique Devienne <ddevie...@gmail.com> wrote:

> Imagine there's a 25GB SQLite DB file in version v1 of an application.
> 
> Because the way this DB is structured, with large to very large
> blobs, this leads to problems for the app. That DB has 35 tables, but
> 5 of those represent 95% of the DB size (because of those blobs).
> 
> So for v2 of the app, there's a new design where the 30 "lightweight"
> tables move into a new DB file (with additional new tables), and the 5
> "blob heavy" tables remain in the existing DB file (but the 30 "moved"
> tables should be dropped from it).
> 
> Initial design was to copy the DB file (app is "shutdown", so no
> connection to that DB file).
> Drop the 5 big tables. Vaccum it (the new DB). Add the new tables.
> But the 25GB copy was deemed too long in that case (several minutes).
> (after all, copying 95% of 25GB to discard all those GBs is
> inefficient)
> 
> So alternate design was to create the schema empty in the new DB file,
> attach the old one, and insert the data from the 30 tables into the
> new empty (with identical schema/structure) tables. But that's also
> very slow apparently.
> 
> Even though it's not my app, I know there are indexes and triggers on
> those tables (but the triggers are mostly on deletes, so don't really
> apply here), and I suggested adding those after the selects, but I
> doubt it's going to make a dramatic difference time-wise.
> 
> Conceptually, the ideal situation would be to just copy the DB header,
> sqlite_master page(s) (dropping the 5 big tables definitions and
> related indexes/triggers from sqlite_master), and only the 5% of
> pages related to the other tables (and related objects).  (chaining
> of pages and root pages in sqlite_master have to be updated of
> course). It's almost like a form of VACCUM, except it's not in-place
> and works on a subset of the tables. IO-wise, that has the potential
> to be 20x faster I imagine.
> 
> But of course there's no such "out-of-place" "partial" VACCUM...
> 
> Then I though maybe .backup, which also works at the page level (I
> believe), perhaps can subset what tables to backup. But no luck there
> either. backup works for the whole DB, not a subset.
> 
> Am I missing other ways to achieve this "split" efficiently?
> Any chance the backup API could group table-specific (and related
> objects) alternate form?

Not answering your question directly, but here are some comments/tricks on how to 
boost sqlite3 performance in your scenario.

First, compile the latest sqlite3 version with these options enabled:

SQLITE_DIRECT_OVERFLOW_READ
SQLITE_DEFAULT_AUTOVACUUM=2

They are described at https://www.sqlite.org/compile.html#direct_overflow_read 
and https://www.sqlite.org/compile.html#omit_autovacuum; don't omit
autovacuum, but set it to 2 (incremental). 

For an explanation of why setting autovacuum to incremental works, see
http://marc.info/?l=sqlite-users=136265346522617=4

If your blobs are large (bigger than 500KB), set the page size to 64KB.

Declare the blob column as the last column of the table.

Compile with the dbstat (SQLITE_ENABLE_DBSTAT_VTAB) option if you want statistics 
about your blob db. You can get information about fragmentation, etc.; see
https://www.sqlite.org/dbstat.html#section_3

If your use case is write-heavy, don't use WAL: modifications (create/update/delete) 
are not atomic across attached dbs in WAL mode. If your use case is read-heavy, use 
WAL.

I use (and develop) a DIY filesystem over sqlite and have a similar scenario 
with hundreds of GB of blobs on multiple databases. Those compile options, the 
tricks above, and splitting metadata from data made a huge improvement.
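The "attach and copy" split can be sketched with Python's stdlib sqlite3 module; the file names and table are illustrative (in-memory databases stand in for old.db and new.db), and the point is the order of steps: create the bare tables, bulk-copy, and only then add indexes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")          # plays the role of the old 25GB db
conn.execute("CREATE TABLE light (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO light (name) VALUES ('a'), ('b')")

conn.execute("ATTACH ':memory:' AS newdb")  # plays the role of the new db file
# Create the target table WITHOUT indexes/triggers, bulk-copy in one statement,
# then rebuild the index afterwards.
conn.execute("CREATE TABLE newdb.light (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO newdb.light SELECT * FROM light")
conn.execute("CREATE INDEX newdb.light_name ON light(name)")
conn.commit()
copied = conn.execute("SELECT COUNT(*) FROM newdb.light").fetchone()[0]
```

Wrapping the copy in a single transaction (as the implicit transaction above does) matters far more than most other knobs.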

> Any advice would be appreciated. Thanks, --DD

HTH

---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] sqlite 3.13.0 does not use indexes as 3.8.11.1 does

2016-07-26 Thread Eduardo Morras
On Tue, 26 Jul 2016 10:37:12 +0200
Alessandro Fardin <alessandro.far...@gavazziacbu.it> wrote:

> After updating from sqlite  3.8.11.1 to sqlite 3.13.0.
> 
> The Query planner with the same SELECT statement on same table with
> the same indexes does not use index at all, but parse the entire
> table. Of course this causes a dramatically slow down of the
> application.
> 
> As temporary  work around we have have added to the query the INDEXED
> BY energy_d_dateTimeIdx statement.
> 
> In sqlite 3.8.11.1 the select was issued by using the  
> energy_d_dateTimeIdx index
> 
> Follows the shema of the table and indexes.
> CREATE INDEX IF NOT EXISTS energy_d_dateTimeIdx 
> ON hst_energy_d (recdate ASC,rectime ASC,idinstrum ASC,enflag ASC);
> CREATE INDEX IF NOT EXISTS hst_energy_d_index_timestamp 
> ON hst_energy_d (timestamp ASC);
> CREATE INDEX IF NOT EXISTS hst_energy_d_index_pupdate 
> ON hst_energy_d (pupdate ASC);
> 

Please run the SQL command ANALYZE after index creation and check again.

> And now the SELECT STATEMENT
> 
> 
> SELECT 
> pupdate,idinstrum,Vlnsys,Vl1n,Vl2n,Vl3n,Vllsys,Vl1l2,Vl2l3,Vl3l1,
> Al1,Al2,Al3,kWsys,kWl1,kWl2,kWl3,kWhac,
> kWhacn,kvarsys,kvarl1,kvarl2,kvarl3,kvarhn,kvarh,
> kvarhacC,kvarhacL,kVAsys,kVAl1,kVAl2 ,kVAl3,PSeq,
> THDAl1,THDAl2,THDAl3,THDVl1n,THDVl2n,THDVl3n,
> kWhl1 ,kWhl2 ,kWhl3 ,counter1,counter2,counter3,Hz,An,Hour,Hourn,
> Alsys,kvarhl1,kvarhl2,kvarhl3,kvarhnl1,kvarhnl2,kvarhnl3,kWhnl1,kWhnl2,
> kWhnl3,kVAh,kVAhl1,kVAhl2,kVAhl3,
> PFsys,PFl1,PFl2,PFl3,Wdmd,vardmd,VAdmd
> FROM hst_energy_d 
> WHERE enflag = 0 ORDER BY recdate DESC, rectime DESC LIMIT 1;

You are querying with ORDER BY ... DESC, but the indexes are created ASC;
convert them to DESC, run ANALYZE, and retry.
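Both suggestions together look like this in a sketch with Python's stdlib sqlite3 module (the table and columns are simplified stand-ins for hst_energy_d):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE energy (recdate INTEGER, rectime INTEGER, enflag INTEGER)")
conn.executemany("INSERT INTO energy VALUES (?, ?, 0)",
                 [(d, t) for d in range(3) for t in range(3)])
# Index columns declared DESC so they match the ORDER BY ... DESC of the query.
conn.execute("CREATE INDEX energy_dt ON energy (recdate DESC, rectime DESC)")
conn.execute("ANALYZE")  # refresh the planner's statistics after (re)creating indexes
row = conn.execute(
    "SELECT recdate, rectime FROM energy WHERE enflag = 0 "
    "ORDER BY recdate DESC, rectime DESC LIMIT 1").fetchone()
```

With the DESC index in place the planner can satisfy the ORDER BY ... LIMIT 1 by reading one index entry instead of sorting the whole table.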

HTH


---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Problem with savepoints

2016-07-06 Thread Eduardo Morras
On Wed, 6 Jul 2016 18:10:34 +0200
Chris Brody <chris.br...@gmail.com> wrote:

> >
> > Just for my information, what is the purpose of this temporary
> > file?  I see
> >> that -journal file is always stored to disk.
> >>
> >
> > It's a statement journal:
> >
> >   https://www.sqlite.org/tempfiles.html#stmtjrnl
> >
> > Recent changes mean that the first 64KiB of a statement journal are
> > always stored in main-memory, and the temp file only created after
> > the file grows larger than 64KiB. Which is probably why you did not
> > see the problem immediately.
> 
> 
> Personally I wish SQLite would fail upon open or perhaps on first
> write if it cannot write the temporary file. I think this would make
> it easier for programmers to detect and deal with this kind of issue.
> Maybe make it an explicit compile-time option to wait until the
> temporary file is larger than 64KiB before opening the temp file.
> Just a thought. 

This is set up in src/global.c, lines 171-173, in version 3.13.0:

   171  #ifndef SQLITE_STMTJRNL_SPILL 
   172  # define SQLITE_STMTJRNL_SPILL (64*1024)
   173  #endif

You can predefine it at build time with the value in bytes you want, or set it 
to -1 to always keep statement journals in memory.



---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Bad db feature request

2016-06-30 Thread Eduardo Morras
On Thu, 30 Jun 2016 10:12:05 +0100
Paul Sanderson <sandersonforens...@gmail.com> wrote:

> The OP wanted something to check his systems for corruption - if the
> SQLite dev team don't want to add checksums then the OP could possibly
> solve his own problem by adding them to his own internal version.
> 
> Extension may have been a bad choice of word - he can modify/compile
> his own version of SQLite add checksums - and also add a corruption
> engine if he wants.
> Paul

You're right.

A simple trigger on insert/update that calculates a CRC or hash of the other 
columns' content in the row and stores the value in an extra column can do the 
trick at row level. It will lower performance, an undesired but expected 
effect.
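A sketch of that trigger idea with Python's stdlib sqlite3 module. SQLite has no built-in hash function, so a user-defined row_hash() function (an assumption of this sketch, not a SQLite built-in) is registered on the connection and called from the triggers:

```python
import hashlib
import sqlite3

def row_hash(*cols):
    # Hash the row's column values joined with a separator.
    return hashlib.sha256("|".join(map(str, cols)).encode()).hexdigest()

conn = sqlite3.connect(":memory:")
conn.create_function("row_hash", -1, row_hash)
conn.executescript("""
CREATE TABLE t (id INTEGER PRIMARY KEY, payload TEXT, chk TEXT);
CREATE TRIGGER t_ins AFTER INSERT ON t BEGIN
  UPDATE t SET chk = row_hash(NEW.id, NEW.payload) WHERE id = NEW.id;
END;
-- 'UPDATE OF payload' only fires when payload is in the SET list,
-- so the trigger's own UPDATE of chk does not re-fire it.
CREATE TRIGGER t_upd AFTER UPDATE OF payload ON t BEGIN
  UPDATE t SET chk = row_hash(NEW.id, NEW.payload) WHERE id = NEW.id;
END;
""")
conn.execute("INSERT INTO t (payload) VALUES ('hello')")
stored, = conn.execute("SELECT chk FROM t WHERE id = 1").fetchone()
```

A periodic scan comparing chk against a recomputed hash would then detect the kind of corruption the OP was asking about.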

---   ---
Eduardo Morras <emorr...@yahoo.es>


[sqlite] Using LSM and Memory Mapped files -- Question about LSM_CONFIG_MMAP

2016-05-20 Thread Eduardo Morras
On Thu, 19 May 2016 22:12:48 -0500
"Kasajian, Kenneth"  wrote:

> I am interested in using the LSM
> (https://www.sqlite.org/src4/doc/trunk/www/lsmusr.wiki)
> 
> The characteristics of my application is that I would like to enable
> the memory-mapped file feature so that I can offload data in memory
> to disk temporarily.I don't care about data-recover after my
> application shuts down or it crashes.   In fact, the application will
> create a new database each time it starts up.
> 
> The reason to do this is to be able to address hundreds of gigs of
> data in a key/value store method.   We would it put it in RAM but
> that would require a lot of RAM.By putting the data on disk, we
> can work with very large sets of data.
> 
> I care a lot about performance.  If I had the RAM, all of this would
> be in RAM (non-persistent and transien)
> 
> My question is, if I have LSM_CONFIG_MMAP enabled, and on a 64-bit
> system I believe it's enabled by default, what happens if I also turn
> off LSM_CONFIG_SAFETY and LSM_CONFIG_USE_LOG.   Would that not make
> things a lot faster?
> 
> I don't need data to be written to disk right away..   In fact, I
> would be okay if data were only written to disk when there the memory
> that is occupied by the data has to be reused - because persistence
> is not a factor.
> 
> Anyone use LSM for this type of a use case?

IIRC you can create a :memory: database with sqlite4. Doesn't that fit your 
needs?

If you look at the Sqlite4 timeline, you'll see development is a bit frozen. You 
can try the lsm-vtab branch (http://www.sqlite.org/src/timeline?n=100=lsm-vtab) for 
sqlite3; you need fossil-scm to download and compile it, but I don't know whether 
it currently works (last updated in Feb 2016).

HTH

---   ---
Eduardo Morras 


[sqlite] Fastest way to backup/copy database?

2016-05-04 Thread Eduardo Morras
On Wed, 04 May 2016 11:44:17 +0100
"Rob Willett"  wrote:

> Hi,
> 
> We think we know the answer to this, but we'll ask the question 
> anyway.
> 
> We're trying to backup a 10GB live running database 
> "as-fast-as-we-possibly-can" without stopping updates coming in. The 
> updates come every 2-3 mins, and write a chunk of data in. We can't 
> really stop the database updates, well we can but we don't want to.
> 
> 1. We had a quick look to see if we could copy the sqlite file over
> in the short interval between updates but sadly cp simply wasn't fast 
> enough. We get around 3GB copied before an update happens, which 
> basically renders the cp useless.
> 
> 2. If we use the command line sqlite  .dump >
>  it works, but its very slow.
> 
> 3. Using the Sqlite C API works but is also very slow.
> 
> 4. We don't have the option of an LVM snapshot as the file system is 
> in a Container .
> 
> So is there any other method of doing a quick snapshot? Failing that, 
> our solution will be to stop any updates for the duration of the cp 
> command, and then restart the process afterwards. Its not the end of
> the world but it would have to be done out of normal working hours.
> 
> This is going to become a bigger problem for us as the database will 
> only get bigger so any advice welcomed.

If you only want the data, you can attach/open a new db file, create the schema 
without indexes, select all data from the tables and insert it into the new db 
tables. Since you don't write the indexes, it should be faster. If you need the 
indexes, you can create them later.
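A sketch of that "copy data first, build indexes later" flow with Python's stdlib sqlite3 module; the two in-memory connections stand in for the live db file and the snapshot file:

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, msg TEXT)")
src.execute("CREATE INDEX log_msg ON log(msg)")   # exists in the live db
src.executemany("INSERT INTO log (msg) VALUES (?)",
                [("m%d" % i,) for i in range(100)])
src.commit()

dst = sqlite3.connect(":memory:")
dst.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, msg TEXT)")  # no index yet
# Stream the rows straight from the source cursor into the destination.
dst.executemany("INSERT INTO log VALUES (?, ?)",
                src.execute("SELECT id, msg FROM log"))
dst.execute("CREATE INDEX log_msg ON log(msg)")   # rebuild the index afterwards
dst.commit()
n = dst.execute("SELECT COUNT(*) FROM log").fetchone()[0]
```

Building the index in one pass at the end is typically cheaper than maintaining it row by row during the copy.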


> Thanks
> 
> Rob
---   ---
Eduardo Morras 


[sqlite] Is there something like PHPAdmin for SQLite

2016-04-20 Thread Eduardo Morras
On Wed, 20 Apr 2016 22:21:04 +0200
Cecil Westerhof  wrote:

> 2016-04-20 18:27 GMT+02:00 R Smith :
> 
> > If you are happy to log in to the server, we could suggest a few
> > great tools - but please note that there is nothing wrong with
> > MySQL as far as web development goes. There is no need to change to
> > SQLite, what is MySQL doing wrong? A major consideration is that
> > MySQL offers user log-ins per DB (so you can give access to clients
> > per project for instance).
> >
> >
> > SQLite is awesome for local storage, but not every job requires
> > such a hammer.
> 
> 
> ?He told me he liked the idea of SQLite, but had to use MySQL,
> because he needed to give his client a web interface. So if there is
> a web interface (and there is), then he is not forced to use MySQL.
> And of-course he decides, not me. ;-)

Firefox has an extension to work with sqlite databases. Install it, open a 
sqlite3 db file and work. It uses sqlite3 3.9 or so (I don't remember now).

> -- 
> Cecil Westerhof


---   ---
Eduardo Morras 


[sqlite] Broken database after experiments with fts.

2016-03-26 Thread Eduardo Morras
On Sat, 26 Mar 2016 16:37:18 +0200
John Found  wrote:

> Why cannot drop the table test?
> 
> sqlite> begin transaction;
> sqlite> create virtual table test using fts5;
> Error: vtable constructor failed: test
> sqlite> commit;
> sqlite> 
> sqlite> drop table test;
> Error: vtable constructor failed: test
> 
> sqlite> .tables
> test  test_content  test_docsize
> test_config   test_data test_idx
> 
> $sqlite3 --version
> 3.11.1 2016-03-03 16:17:53 f047920ce16971e573bc6ec9a48b118c9de2b3a7

Because the table test was not created; you got an error when creating the 
virtual table. Check that your sqlite3 shell has FTS5 compiled in. You can do 
that by executing "PRAGMA compile_options;".
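The same check from code, sketched with Python's stdlib sqlite3 module (which build options appear, including ENABLE_FTS5, depends on your particular SQLite build):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Each row of the pragma is one compile-time option of the linked SQLite.
opts = [row[0] for row in conn.execute("PRAGMA compile_options")]
has_fts5 = any("FTS5" in o for o in opts)
```

If has_fts5 is False, any `CREATE VIRTUAL TABLE ... USING fts5` will fail exactly as in the transcript above.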


> -- 
> http://fresh.flatassembler.net
> http://asm32.info
> John Found 


---   ---
Eduardo Morras 


[sqlite] Encrypt the SQL query

2016-02-27 Thread Eduardo Morras
On Thu, 25 Feb 2016 14:01:31 +0800
 wrote:

> Hi,
> 
> In my C++ program, I will invoke SQLite to execute SQL queries. But
> these queries are just stored as normal string constants in C++ so it
> is easy to be decoded via reverse engineering method. Does SQLite
> provide a good way to encrypt the SQL query strings while does not
> affect the performance when executing the queries?

No, it doesn't. And as others have said, it's not possible. But you can take 
another route: if you create a new Lemon parser for a different query language, 
more obfuscated than SQL, you'll force the reverse engineer to learn this new 
query language. You can automate the conversion from your actual SQL queries to 
NewLang queries, and even change NewLang on each compile with different Lemon 
parser code. And yes, it's very complex, and the time invested is (I think) time 
lost.

> Thanks


---   ---
Eduardo Morras 


[sqlite] Why is a separate journal file needed ?

2016-02-25 Thread Eduardo Morras
On Wed, 24 Feb 2016 12:53:48 +
Simon Slavin  wrote:

> In case you're new to this group, please understand that I'm
> experienced with SQLite and I understand how it works and how it uses
> journal files.
> 
> Can someone explain to me -- or point at a web page which does -- why
> a separate journal file is needed.  Why can't the information which
> SQLite stores in a journal file be put in the database file ?  Given
> how SQLite structures its database files this would presumably mean
> that the main database would have journal pages as well as table and
> index pages.  Or that there would be 'stable' table and index pages
> and 'journal' table and index pages. 

There are two bad side effects of putting the journal inside the database file 
(at the end, in the middle, at the beginning, it doesn't matter).

a) If you add the journal at the end of the db file and insert new data, the 
data will probably land after the journal. When checkpointing or committing, the 
journal pages become empty pages, quickly filled with new data on the next 
commit, making internal data fragmentation worse.

b) It modifies the filesystem i-nodes and the directory information of the db 
file. These are delicate structures and, on a crash or a filesystem problem, 
there is a bigger chance of corrupting them if you are continuously modifying 
them. Where is the journal data if you or the fs can't find the end of the db 
file? Better to modify i-nodes and directory metadata only when needed.

Of course, b) is unlikely, but a) is a real pita.

> Simon.

---   ---
Eduardo Morras 


[sqlite] query Benchmark

2016-02-15 Thread Eduardo Morras
On Mon, 15 Feb 2016 10:57:29 +0100
Michele Pradella  wrote:

> Sorry, you are right, the test I did was with % not with a * (just a 
> cut and paste error).
> Anyway it's tricky because I have to tell sqlite which index to use
> in LIKE to fast search but I do not have to tell the index if start
> with % or _ because otherwise the query is not executed.. I'll handle
> it...

I have read the thread and still don't know what your LIKE expression is. If you 
know something about the string, you can use these tricks:

a) If you search for '%552', any string that ends with '552', you can create an 
index over the reversed plate string and use that index.

b) If you search for '_552_' where the number of '_' before and after '552' is 
always the same (so '552' begins at the same position on every plate), create an 
index similar to case a), but rotating the letters of the string instead of 
reversing it.

You know your data structure and the queries; perhaps other data transforms can 
produce better indexes.
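Trick a) can be sketched with Python's stdlib sqlite3 module; the plates table and values are made up. Storing the reversed plate turns the suffix match '%552' into the indexable prefix match '255%':

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE plates (plate TEXT, plate_rev TEXT)")
for p in ("AB552", "XY123", "C552"):
    # Keep the reversed string alongside the original (a trigger could do this too).
    conn.execute("INSERT INTO plates VALUES (?, ?)", (p, p[::-1]))
conn.execute("CREATE INDEX plates_rev ON plates(plate_rev)")
hits = [r[0] for r in conn.execute(
    "SELECT plate FROM plates WHERE plate_rev LIKE '255%' ORDER BY plate")]
```

Note that for SQLite to actually use the index for the LIKE prefix, the LIKE optimization conditions must hold (e.g. case_sensitive_like or a NOCASE collation on the column); the result is correct either way.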

> Selea s.r.l.
> 
> 
> Michele Pradella R

---   ---
Eduardo Morras 


[sqlite] whish list for 2016

2016-01-13 Thread Eduardo Morras
On Tue, 12 Jan 2016 21:58:01 +0100
Christian Schmitz  wrote:

> 
> > On 20.12.2015 at 19:12, Big Stone  wrote:
> > 
> > Hi All,
> > 
> > To prepare for 2016 greetings moment, here is my personnal whish
> > list
> 
> Unless I missed something, I may suggest
> 
> * moveprev
> * movefirst
> * movelast
> * move to record in cursor with given index  

Insert the query result into a temporary table or view and work with that. The 
order of the result rows will change if the query hasn't got an "ORDER BY".

Depending on which language you are programming in, you can store the query 
result in a table (Lua), a dictionary (Python, Erlang and others), or a specific 
struct for your query in C. 

You can use the deprecated get_table and free_table to get a similar recordset 
struct. 
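The temporary-table route can be sketched with Python's stdlib sqlite3 module; materialize the query once, then loop over it as many times as you like (with an ORDER BY on each pass, per the caveat above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nums (n INTEGER)")
conn.executemany("INSERT INTO nums VALUES (?)", [(i,) for i in range(5)])
# Run the expensive query once into a TEMP table (dropped when the connection closes).
conn.execute("CREATE TEMP TABLE result AS SELECT n FROM nums WHERE n % 2 = 0")
first_pass = [r[0] for r in conn.execute("SELECT n FROM result ORDER BY n")]
second_pass = [r[0] for r in conn.execute("SELECT n FROM result ORDER BY n")]
```

This gives the "movefirst/movelast" style of cursor navigation the wish asked for, at the cost of one materialization.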

 HTH

> Looping over a recordset twice is often useful.
> 
> Sincerely
> Christian


---   ---
Eduardo Morras 


[sqlite] SQLite remote access over WAN

2016-01-04 Thread Eduardo Morras
On Mon, 04 Jan 2016 22:52:56 +0100
ajm at zator.com wrote:

> Hi list:
> 
> I've built a Windows desktop app using MS Visual C++ and SQLite. It
> works fair well in local mode, but now I need made it work over a
> TCP/IP network.
> 
> My question is if some in this list can suggest a library to that
> purpose, on the basis that I have not concerns on questions like
> concurrency, access permissions or encriptation. Preferable a light
> weigth open source C/C++ library. Not .NET, MFC or other external
> dependencies, although Boost Asio may be Ok. (already I'm using it)

I have used libevent (http://libevent.org/), ZMQ (http://zeromq.org/), nanomsg 
(http://nanomsg.org/) and plain standard libc in different projects. Depending 
on what you need, where your app will run, the workload you'll have, "the 
marketing wise boys" opinion, etc., you should choose one or another.

> Any sugestion or comment are wellcome.
> 
> --
> Adolfo J. Millán

---   ---
Eduardo Morras 


[sqlite] whish list for 2016

2016-01-03 Thread Eduardo Morras

I left the list for some days and found this subject discussed and finished!!

Happy New Year 2016 to every colister and sqlite developer.

I add my list and a tiny explanation.

- Lock tables/indexes on cache (make them unevictable)

When a query uses a big table, other tables/indexes used in the query, or the 
tables/indexes of other queries, are dropped from the sqlite cache. Locking 
content in the cache could boost those queries that "fight" with a big table. 
Marking a table as non-cacheable could do the same in my use case.

As a working example, copied from my own code, a sqlite filesystem that stores 
file content as a blob or as an external file:

CREATE TABLE `file` (
    `fid`           INTEGER,
    `size`          INTEGER NOT NULL,
    `permission`    INTEGER NOT NULL DEFAULT 700,
    `lock`          INTEGER NOT NULL DEFAULT 0,
    `blob_id`       INTEGER NOT NULL,
    `owner_id`      INTEGER NOT NULL,
    `group_id`      INTEGER NOT NULL,
    `name`          TEXT NOT NULL,
    `external_path` TEXT NOT NULL,
    PRIMARY KEY(fid)
);

CREATE TABLE `fsblob` (
    `bid`           INTEGER,
    `size`          INTEGER NOT NULL,
    `fid`           INTEGER NOT NULL,
    `compressed`    INTEGER NOT NULL DEFAULT 0,
    `content`       BLOB,
    PRIMARY KEY(bid)
);


Each time a user accesses a file, the "file" table (and others like "directory" 
and "users") is evicted from the cache (the fsblob table fills it) and needs to 
be reread and reparsed from disk for the next query. The problem is worse when 
multiple files are accessed. Making the cache bigger works up to some file size, 
but locking the "important" tables in the cache is the correct (I think) fix. As 
a workaround, I used multiple databases, one with the fs metadata, attaching 
others with the file content (fsblob tables), but dropped it because of other 
technical problems.

- Access table row by fileposition

In a read-only scenario (no writer and/or no WAL mode), read and parse the row 
content if we know its absolute position in the database file, or the page 
number plus the offset within the page.

- Lock constant tables on file (stick table rows to page and/or fileposition)

This way, the file position of a row is fixed or easily calculated, and hash 
indexes can be created/used. Right now I'm using a virtual table, derived from 
Dr. Hipp's CSV virtual table of years ago, that creates an in-memory hash index 
of an external CSV file, but it implies using my own sqlite3 build with the 
virtual table code, plus an additional file alongside the database.

Additionally, if we can lock constant tables (declared as such by the developer) 
at the beginning of the database file, the mmap feature can cache the 
constant/static data from those tables. In my app, that means the file, user, 
group, fs and other tiny tables (changes are rare) could be mmapped.

- Vacuum/backup reorder tables in database file

If we could tell vacuum or backup the order we want the tables to be in the 
database file, then in some scenarios (like mine ;) ) moving tables to the 
beginning (constant data) or to the end (a growing timestamp log table, for 
example) could speed up queries.

- Vacuum/backup adds free pages for table/index grow

As far as I know, vacuum and backup write the database tables, indexes and 
metadata without free pages. Adding free pages at the end of the pages that 
store a table or index, and reserving those pages for the use of that table or 
index, could hold back internal fragmentation.


There are other ideas but... hey, 2017 is getting closer ;)

Thanks to Dr. Richard Hipp, Dan Kennedy and Joe Mistachkin for their 
exceptional work, and to the other colisters for their time and help.

---   ---
Eduardo Morras 


[sqlite] Feature Request: Support for MMAP beyond 2GB

2015-10-31 Thread Eduardo Morras
On Fri, 30 Oct 2015 12:01:15 -0700
Roger Binns  wrote:

> https://sqlite.org/mmap.html
> 
> SQLite supports memory mapping databases, but only does so for the
> first 2GB of the file.  My databases are around 4 to 20GB, and
> completely fit within RAM on my 64 bit systems.  The 2GB mmap limit
> means that only a portion of the file benefits from the improved
> performance, and much effort is wasted copying the data around bits of
> memory.

This limit is set at compile time with the SQLITE_MAX_MMAP_SIZE compile option. 
I don't see anything in pager.c that prevents it being bigger than 2GB; it's a 
64-bit integer. Try compiling with -DSQLITE_MAX_MMAP_SIZE=21474836480 and use a 
copy or backup, not the original database.

> Roger

---   ---
Eduardo Morras 


[sqlite] Multiple connections to in-memory DB and virtual tables

2015-10-06 Thread Eduardo Morras
On Tue, 6 Oct 2015 17:39:20 +
Hick Gunter  wrote:

> AFAICT the FROM clause is superflous, as the function has no
> (supported) way of detecting which table(s) the FROM clause contains.
> What is your "reindex_virtual_table()" function (I assume it is a
> user implemented function) supposed to do?

I wrote it thinking it was self-explanatory, but now I think I wrote it too fast.

reindex_virtual_table() is a function defined inside the virtual table code that 
forces an internal reindex of its data. 

In FTS3/4, for example, you can do an "INSERT INTO 
fts_virtual_table(fts_virtual_table) VALUES('rebuild');" to reindex the FTS 
virtual table.

And I say I wrote it too fast because a virtual table doesn't need to have an 
internal index; it depends on what it is for and on its implementation.

> -----Original Message-----
> From: Eduardo Morras [mailto:emorrasg at yahoo.es]
> Sent: Tuesday, 06 October 2015 19:08
> To: sqlite-users at mailinglists.sqlite.org
> Subject: Re: [sqlite] Multiple connections to in-memory DB and
> virtual tables
> 
> On Tue, 6 Oct 2015 15:39:08 +0100
> Simon Slavin  wrote:
> 
> > There are also things Virtual Tables can't do.  For instance you
> > cannot index a Virtual Table using SQL commands.
> 
> Does "SELECT reindex_virtual_table() FROM virtual_table" count as SQL
> command only?
> 
> 
> ---   ---
> Eduardo Morras 
> 
> 
> ___
>  Gunter Hick
> Software Engineer
> Scientific Games International GmbH
> FN 157284 a, HG Wien
> Klitschgasse 2-4, A-1130 Vienna, Austria
> Tel: +43 1 80100 0
> E-Mail: hick at scigames.at
> 
> 
> 


---   ---
Eduardo Morras 


[sqlite] Multiple connections to in-memory DB and virtual tables

2015-10-06 Thread Eduardo Morras
On Tue, 6 Oct 2015 15:39:08 +0100
Simon Slavin  wrote:

> There are also things Virtual Tables can't do.  For instance you
> cannot index a Virtual Table using SQL commands.

Does "SELECT reindex_virtual_table() FROM virtual_table" count as SQL command 
only?


---   ---
Eduardo Morras 


[sqlite] Detect if db is already opened by another process?

2015-09-23 Thread Eduardo Morras
On Wed, 23 Sep 2015 18:01:47 +0200
Michael Schlenker  wrote:

> Hi,
> 
> i just wondered if there is an API to detect if a sqlite database file
> is already opened by another process.
> 
> I can make the assumptions that:
> 
> 1. WAL mode is in use
> 2. Linux and Windows only
> 3. No network filesystems
> 4. I only care if the access is done by another SQLite library,
>not simple open() calls.
> 
> I didn't see any explicit API to check for this, but assumed there
> might be some way to find out via the .shm files?
> 
> Usecase is a server process that keeps an SQLite DB open while it is
> running and a commandline tool that manipulates the same DB file for
> maintenance tasks. The maintenance tool should not change the DB if
> the server is running.
> 
> I could of course do explicit locking via other means to mediate
> access (actually i do that now, but it is not really elegant), but if
> there is an SQLite API way to do it, it would be nicer.
> 
> Any good hints?

If the other process is yours, you can use PRAGMA user_version to store a 
reference count of the number of times the db is opened, increasing it each time 
you open it and decreasing it each time you close it.
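A sketch of that reference-count idea with Python's stdlib sqlite3 module. This only works between cooperating processes that all follow the convention, and a real implementation would guard the read-modify-write with a transaction or lock; the helper names are made up:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")

def open_db(p):
    conn = sqlite3.connect(p)
    v = conn.execute("PRAGMA user_version").fetchone()[0]
    # PRAGMA values cannot be bound as parameters, so format the integer in.
    conn.execute("PRAGMA user_version = %d" % (v + 1))
    return conn

def close_db(conn):
    v = conn.execute("PRAGMA user_version").fetchone()[0]
    conn.execute("PRAGMA user_version = %d" % (v - 1))
    conn.close()

a = open_db(path)
b = open_db(path)
openers = b.execute("PRAGMA user_version").fetchone()[0]
close_db(b)
close_db(a)
```

A crash between open and close leaves the counter stale, which is the usual weakness of this scheme.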

> 
> Michael

---   ---
Eduardo Morras 


[sqlite] Native sqlite4 on FreeBSD

2015-09-13 Thread Eduardo Morras
On Fri, 11 Sep 2015 17:12:50 +0300
Valentin Davydov  wrote:

> On Thu, Sep 10, 2015 at 01:30:37PM +0200, Eduardo Morras wrote:
> > 
> > Use gmake to compile.
> 
> It didn't work either. Finally I've just installed some brand new
> linux on a nearby virtual machine, made there make -f
> Makefile.linux-gcc and thoroughly repeated it's output line-by-line
> on my FreeBSD while replacing gcc by cc. Eventually it compiled well
> (with almost the same set of warnings) and the result seems to work
> (at least within my own coverage).
> 
> Thanks to the high general coding style, there were only two
> idiosyncrasies to be corrected: lack of the system-wide malloc.h
> (which is replaced by unistd.h in FreeBSD) and yet another lack of
> the fdatasync() syscall, which is already carefully screened from the
> sources by a special symbol called __ANDROID__ ;-)

No, malloc is declared in libc's stdlib.h.

The fdatasync() syscall is a linuxism and doesn't exist in FreeBSD; use fsync(). 
Note that FreeBSD's UFS2 filesystem with soft updates takes care of separating 
the metadata and data parts of a file write/update/delete.

> By the way, clang kindly revealed a couple of forgotten "unsigned" in
> the sources, which could (and did in the past) produce some nice
> bugs. Attention to developers.
> 
> > Note that there isn't a port,
> 
> I know. Despite the code being frozen for almost a full year, nobody
> wants to take a trouble of maintaining FreeBSD port (perhaps me too).

It's experimental code; I played with it before, but don't use it in any 
project. A port makes no sense for now, I think.

> Valentin Davydov.


---   ---
Eduardo Morras 


[sqlite] Improving SQLite performance over a network

2015-09-10 Thread Eduardo Morras
On Thu, 10 Sep 2015 18:24:32 + (UTC)
Mike McWhinney  wrote:

> Hello,
> I am using SQLite in a Windows environment. The file storing the
> database is on a Windows 2008 server. All other file access to this
> server is reasonably fast. However, when doing simple SELECTS and
> INSERTS/UPDATES it is noticeably slower on a network. I am talking
> about 50 records. Now I have read that setting some of the PRAGMAS
> can improve speed, but at the cost of data loss should power fail or
> some other hardware condition occur. I am willing to risk this if the
> performance of the SQLite database access can be increased
> considerably. Here is what I am doing in code (This is C#)
> 
> string connectionString = ""URI=file:mydb.db; Default Timeout=15;
> PRAGMA busy_timeout = 600" SQLiteConnection sqConn = new
> SQLiteConnection(connectionString);sqConn.Open();
> 
> 
> Then immediately after opening the database, I'm running this code
> 
> string sqlPragma = "PRAGMA cache_size = 16000; PRAGMA synchronous =
> OFF; PRAGMA journal_mode = MEMORY;"; SQLiteCommand sqlite_command =
> OMConnection.sqConn.CreateCommand(); sqlite_command.CommandText =
> sqlPragma; sqlite_command.ExecuteNonQuery();
> 
> 
> This sets up the pragmas for the cache size, synchronous mode,
> journal mode.
> 
> I do not notice any difference in performance with the pragmas vs.
> without them. Am I doing something wrong? Are there any other
> documented PRAGMAS which might help improve performance or are there
> any other measures I can use to get faster performance over a network
> file vs. local file? 

Try "PRAGMA temp_store = 2;" In one project I use sqlite3 as network db and it 
is, besides cache_size and page_size, the most important pragma for improve 
performance.
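Those three pragmas together, sketched with Python's stdlib sqlite3 module; the values are starting points to tune for your workload, not universal answers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
PRAGMA temp_store = 2;      -- keep temp tables/indexes in memory, not on the share
PRAGMA cache_size = 16000;  -- page-cache size, in pages
PRAGMA page_size = 4096;    -- must be set before the first table is created
""")
temp_store = conn.execute("PRAGMA temp_store").fetchone()[0]
```

On a network filesystem, keeping temporary objects out of the remote file is usually what makes temp_store the biggest win.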

> Thanks
> Mike


---   ---
Eduardo Morras 


[sqlite] sql query

2015-09-10 Thread Eduardo Morras
On Thu, 10 Sep 2015 13:17:03 -0400
Héctor Fiandor  wrote:

> Dear members:
> 
> I am trying to use a SQL statement like this:
> 
>   fdm.tNegSerAct.SQL:='SELECT * FROM NegSerAct 
> 
>   WHERE codNegSerAct >=desde AND codNegSerAct <=hasta 
> 
>   ORDER BY codNegSerAct';
>   but the trouble is that the variables 'desde' and 'hasta' are
> strings and fixed previously.
>  I have learned the SQLdb Tutorial2 but the offered solution doesn't
> work.
>
>  I will appreciate any help in the solution of this problem.

It looks like you're using Lazarus/FreePascal for it, but there's no difference 
from C code: the 'desde' and 'hasta' variables must appear as parameter names 
that the sqlite3_bind() functions can recognize and replace with the variable 
values. Check this page, http://www.sqlite.org/c3ref/bind_blob.html, as a 
reference for sqlite3_bind(). Valid parameter forms are '?', '?NNN', ':VVV', 
'@VVV' and '$VVV', where '?' is for the single-variable case, 'NNN' is a number 
from 1 to 999 and 'VVV' is an alphanumeric name. 
In your case, use ':desde' and ':hasta'.
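The same binding, sketched with Python's stdlib sqlite3 module (the same ':name' parameter syntax the C bind_blob page documents); the table and values below are made up to mirror the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE NegSerAct (codNegSerAct TEXT)")
conn.executemany("INSERT INTO NegSerAct VALUES (?)",
                 [("A10",), ("B20",), ("C30",)])
# Named parameters :desde and :hasta are bound from the dict at execute time.
rows = conn.execute(
    "SELECT * FROM NegSerAct "
    "WHERE codNegSerAct >= :desde AND codNegSerAct <= :hasta "
    "ORDER BY codNegSerAct",
    {"desde": "A10", "hasta": "B99"}).fetchall()
```

In the FreePascal TSQLQuery case the equivalent is `Params.ParamByName('desde').AsString := ...` after writing `:desde` in the SQL text.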

>  
> 
>  Thanks in advance
> 
>  
> 
> Ing. Héctor Fiandor
> 
> hfiandor at ceniai.inf.cu
> 
>  
> 


---   ---
Eduardo Morras 


[sqlite] Native sqlite4 on FreeBSD

2015-09-10 Thread Eduardo Morras
On Thu, 10 Sep 2015 14:22:24 +0300
Valentin Davydov  wrote:

> Hi, all!
> 
> How to build native sqlite4 (preferrably statically linked shell only,
> without any extensions) on a recent version of FreeBSD? It has clang 
> instead of gcc as a default system compiler, so makefiles bundled with
> sqlite sources don't work.

Use gmake to compile. Note that there isn't a port; sqlite4 is in a 
development state and not a final product.

> Valentin Davydov.

---   ---
Eduardo Morras 


[sqlite] Using collation instead a virtual table

2015-09-10 Thread Eduardo Morras
On Wed, 9 Sep 2015 09:54:28 -0400
Igor Tandetnik  wrote:

> On 9/9/2015 6:54 AM, Eduardo Morras wrote:
> > Yes, the comparison operators work correctly and the b-tree binary
> > search should give correct answers, the only tweak is in xCompare,
> > that returns 0 when left(x) and right($value) expressions distance
> > is lower than a threshold.
> 
> A comparison like this would not generally be a proper collation. The 
> equivalence relation it induces is not transitive - it's possible to 
> have A == B and B == C but A != C (when A is "close enough" to B and
> B is "close enough" to C, but A and C are just far enough from each
> other).

Yes, I noted that in my first mail, but I don't think it's a problem because:

a) xCompare(A,C) returns A<C or A>C (-1/+1),
b) the query always returns only one value, 
c) there is no ORDER BY in the query,
d) the hashed data are far enough apart / disperse, and I expect no new additions.

I'll compare the collation vs the virtual table with the test cases; if it 
fails or is slower I'll mark the collation branch as closed.
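For what it's worth, the threshold-based comparator is easy to prototype before committing to a C implementation. A sketch with Python's stdlib sqlite3, where near() treats integer-encoded hashes within a made-up THRESHOLD as equal — the non-transitivity Igor points out lives exactly in this function:

```python
import sqlite3

THRESHOLD = 2

def near(a, b):
    # Values whose numeric distance is below the threshold compare equal;
    # otherwise they order normally. NOT transitive, as discussed.
    a, b = int(a), int(b)
    if abs(a - b) < THRESHOLD:
        return 0
    return -1 if a < b else 1

conn = sqlite3.connect(":memory:")
conn.create_collation("near", near)
conn.execute("CREATE TABLE hashes (h TEXT)")
conn.executemany("INSERT INTO hashes VALUES (?)", [("10",), ("50",), ("90",)])

# '=' under the collation matches the stored value within THRESHOLD of 51.
rows = conn.execute(
    "SELECT h FROM hashes WHERE h = '51' COLLATE near").fetchall()
print(rows)  # [('50',)]
```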

Thanks

> -- 
> Igor Tandetnik

---   ---
Eduardo Morras 


[sqlite] Using collation instead a virtual table

2015-09-09 Thread Eduardo Morras
On Tue, 8 Sep 2015 15:42:28 -0400
Richard Hipp  wrote:

> On 9/8/15, Eduardo Morras  wrote:
> >
> >
> > Hello,
> >
> > I have a virtual table that implements query perceptual hashing data
> > [1]. Now I'm thinking about converting the virtual table
> > implementation in a collation on a normal sqlite3 table, but
> > collation requieres that '=','<' and '>' be well defined by obeying
> > the rules cited on create_collation() page[2]. Sometimes, rule 2
> > may not be true, but I always query for '=',
> 
> Yes, but under the hood, SQLite never does an == query on the b-trees
> even if you ask for a == query in the SQL.  Instead, the b-trees are
> queried using one of >, >=, <, or <=.  A query of the form:
> 
>  x=$value
> 
> Gets translated (at the b-tree layer) into
> 
>  x>=$value AND x<=$value
> 
> So it is *very* important that the comparison operators all work
> correctly on your collating sequence function.  If they don't, then
> SQLite will give incorrect answers.

Yes, the comparison operators work correctly and the b-tree binary search 
should give correct answers; the only tweak is in xCompare, which returns 0 
when the distance between the left (x) and right ($value) expressions is lower 
than a threshold. I'll begin with the implementation and test cases; I expect 
it to be faster than the virtual table.

Thanks

---   ---
Eduardo Morras 


[sqlite] Using collation instead a virtual table

2015-09-08 Thread Eduardo Morras


Hello, 

I have a virtual table that implements querying perceptual hashing data[1]. 
Now I'm thinking about converting the virtual table implementation into a 
collation on a normal sqlite3 table, but a collation requires that '=', '<' 
and '>' be well defined, obeying the rules cited on the create_collation() 
page[2]. Sometimes rule 2 may not hold, but I always query for '=', never need 
to sort the query output, and the result is unique.
If I calculate the perceptual hash of an input, I want to get the closest 
(minimal distance) hash in the table when comparing for equality. Can I use a 
collation in this case?

[1] Perceptual hashing: a hash function where similar input data gets equal or 
similar hashes. 
[2] http://www.sqlite.org/c3ref/create_collation.html
1. If A==B then B==A.
2. If A==B and B==C then A==C.
3. If A<B then B>A.
4. If A<B and B<C then A<C.


[sqlite] Performance problems on windows

2015-08-28 Thread Eduardo Morras
On Fri, 28 Aug 2015 12:55:00 +
Jakub Zakrzewski  wrote:

> Hi,
> 
> just and update here.
> It seems that the newer libraries perform worse when we use multiple
> worker threads. I don't know why and I don't have time to investigate
> it any further. We'll stay with the old 3.7.16.2 for now.

Perhaps the newer version has different compile options or pragma values. A 
fast check is executing "pragma compile_options"; post the output here so we 
can see if some option could cause it. I don't know of a pragma that lists all 
pragma values the way compile_options lists the compile options, but it would 
be a nice addition for debugging purposes.
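For a quick comparison between two builds, the pragma can be dumped like this (Python's stdlib sqlite3 shown for brevity; the option list depends entirely on how the linked library was compiled):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Each row holds one compile-time option of the linked SQLite library.
opts = [row[0] for row in conn.execute("PRAGMA compile_options")]
for o in opts:
    print(o)
```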

> Thank you for your assistance. 
> 
> --
> Gruesse,
> Jakub
> 

---   ---
Eduardo Morras 


[sqlite] SQlite database access over wireless network

2015-08-24 Thread Eduardo Morras
On Mon, 24 Aug 2015 16:43:40 + (UTC)
Mike McWhinney  wrote:

> Hello,
> I am trying to troubleshoot a problem that I'm having with a program
> using SQLite over a wireless network. I know that wireless networks
> can have their own sets of issues that cause database access problems.
> What is happening is that something that takes 1 or 2 seconds on a
> wired network will take 15-20 seconds over wireless. 
> 
> I am using System.Data.SQLite, which is based on version SQLite
> 3.8.8.3 (version 1.0.96.0) with C# Visual Studio. The program is
> installed and runs off a network. The database is also on a network.
> The reads are somewhat slower running on a wired network than when
> run on a local machine. However, when run over wireless it is
> significantly slower. These are mostly reads on fairly simple
> queries. 
> 
> Are there any pragmas that may be used to reduce the latency or to
> allow the reads to process faster across a network? Or are there any
> other tips or tricks that may be used to speed up access? This is a
> multi-user database with about 2-10 users reading/writing data. Most
> of the time, the client application sits idle. The SQLite
> connections are opened only on demand, then closed after I/O
> operations. However this opening and closing is kept to a minimum.
> Thanks, Mike

As Simon Slavin and R. Smith told you, it's not a scenario (WiFi plus 
concurrent r/w access) where Sqlite will work well. Each time sqlite needs to 
create a temp file, lock a file, delete a file, etc., it must wait for the 
network and the remote OS. That said, if you really want to work with sqlite, 
don't use a direct System.Data.SQLite connection. Instead, send the queries 
(text queries) to a process that runs sqlite locally, queues the queries, 
executes them in order and returns the results to each user. It's a 
medium-complexity project and some corner cases have to be considered. I'd try 
postgres instead.
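The suggested design — one local process owning the database, with queries queued through it — can be sketched in miniature. In this sketch (Python stdlib; a thread stands in for the separate process, and DB_PATH, Request and run are made-up names), all access is serialized through a single connection:

```python
import queue
import sqlite3
import threading

# One worker owns the SQLite connection; everyone else submits text queries
# through a queue and receives results on a per-request reply queue. A real
# deployment would put a socket in front of this.
DB_PATH = ":memory:"

class Request:
    def __init__(self, sql, params=()):
        self.sql, self.params = sql, params
        self.reply = queue.Queue(maxsize=1)

requests = queue.Queue()

def worker():
    conn = sqlite3.connect(DB_PATH)   # connection lives in this thread only
    while True:
        req = requests.get()
        if req is None:               # shutdown sentinel
            conn.close()
            return
        try:
            req.reply.put(conn.execute(req.sql, req.params).fetchall())
        except sqlite3.Error as exc:
            req.reply.put(exc)        # hand errors back to the caller

threading.Thread(target=worker, daemon=True).start()

def run(sql, params=()):
    req = Request(sql, params)
    requests.put(req)
    result = req.reply.get()
    if isinstance(result, Exception):
        raise result
    return result

run("CREATE TABLE t (x)")
run("INSERT INTO t VALUES (?)", (42,))
rows = run("SELECT x FROM t")
print(rows)  # [(42,)]
requests.put(None)
```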



---   ---
Eduardo Morras 


[sqlite] Wierd Locking problem

2015-08-24 Thread Eduardo Morras
On Mon, 24 Aug 2015 16:03:24 +0200
Luc Andre  wrote:

> Hi All,
> 
> Our web servers use a daily updated sqlite database file.
> 
> The file is opened by multiple (apache/PHP) threads but always in
> read only mode:
> 
> $db = new SQLite3(DB_FILE, SQLITE3_OPEN_READONLY);
> 
> The file itself has no write access.
> 
> -rw-r--r-- 1 root root 225759232 Aug 24 13:43 db.sqlite
> 
> But sometimes we get PHP warnings:
> 
> PHP Notice: SQLite3Stmt::execute(): Unable to execute statement:
> database is locked
> SQLite3::prepare(): Unable to prepare statement: 5, database is locked
> SQLite3::querySingle(): Unable to execute statement: database is
> locked
> 
> We can not understand how a read only file can get locked.

Some hints (some of them from the documentation):

a) If you use WAL mode then you can't open it in read-only mode; it will be 
opened in read-write mode.
b) If you have triggers that modify, insert or delete data, you'll get a 
locked db if one attempts to fire while another is running.
c) If you use WAL mode and the WAL checkpoint isn't in passive mode, you'll 
get a busy db.
d) Check performance in serialized threading mode (2) instead of multi-thread 
mode (1).
e) If a journal size limit is set, whichever journal mode, a db lock happens.
f) Some pragmas lock the db: updating user_version, cache_spill=on, 
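A genuinely read-only open can be demonstrated with a URI filename; this sketch (Python stdlib sqlite3, rollback-journal mode, temp file path is illustrative) shows that writes are refused outright, which is the behaviour the PHP code above expects:

```python
import os
import sqlite3
import tempfile

# Create a db file normally, then reopen it read-only via a URI filename.
path = os.path.join(tempfile.mkdtemp(), "db.sqlite")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE t (x)")
rw.commit()
rw.close()

ro = sqlite3.connect("file:" + path + "?mode=ro", uri=True)
refused = False
try:
    ro.execute("INSERT INTO t VALUES (1)")
except sqlite3.OperationalError:
    refused = True   # "attempt to write a readonly database"
print(refused)  # True
ro.close()
```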


> Any hint / fix ?
> 
> Regards,
> 
> Luc


---   ---
Eduardo Morras 


[sqlite] When sqlite3_close() returns SQL_BUSY

2015-08-22 Thread Eduardo Morras
On Sat, 22 Aug 2015 05:07:55 -0500
Jeff M  wrote:

> The doc page for SQLITE_BUSY covers SQLITE_BUSY when there are
> multiple connections to the database.
> 
>   SQLite Result Codes (SQLITE_BUSY)
>   http://www.sqlite.org/rescode.html#busy
> 
> But, I have only one connection.  I believe the case where
> SQLITE_BUSY is returned by sqlite_close() due to unfinalized prepared
> statements should be mentioned there.

Perhaps you forgot to call sqlite3_finalize() on at least one query in your 
code.

> Jeff


---   ---
Eduardo Morras 


[sqlite] libtclsqlite3 assistance

2015-08-22 Thread Eduardo Morras
On Fri, 21 Aug 2015 20:19:38 -0700
jungle Boogie  wrote:

> Hi Dr. H,
> On 21 August 2015 at 14:23, Richard Hipp  wrote:
> > On 8/21/15, jungle Boogie  wrote:
> >>
> >> Is it possible to compile in libtclsqlite3?
> >>
> >
> > Dunno.  But you can compile the libtclsqlite3.so yourself:
> >
> > make tclsqlite3.c
> > gcc -fPIC -shared -I. -o libtclsqlite3.so tclsqlite3.c
> 
> 
> Made it fine but compiling not so much:
> 
> /usr/local/bin/gcc48 -fPIC -shared -I. -o libtclsqlite3.so
> tclsqlite3.c tclsqlite3.c:162240:17: fatal error: tcl.h: No such file
> or directory
>  #include "tcl.h"
>  ^
> compilation terminated.
> 
> I have tcl.h here:
> /usr/local/include/tcl8.6/generic/tcl.h
> /usr/local/include/tcl8.6/tcl.h

Add the include path to /usr/local/include/tcl8.6:

gcc -fPIC -shared -I. -I/usr/local/include/tcl8.6 -o libtclsqlite3.so 
tclsqlite3.c

---   ---
Eduardo Morras 


[sqlite] pragma temp_store_directory is deprecated, what is the alternative?

2015-07-31 Thread Eduardo Morras
On Thu, 30 Jul 2015 23:29:49 +
Howard Kapustein  wrote:

> >There cannot be a fully portable way, because path specifications
> >are not portable
> Which begs the question, why isn't there an xGetTempFilename VFS
> function in sqlite3_vfs?
> 
> Wouldn't that simplify things? Have the VFS handle it apropos, with
> the default VFS' in SQLite (or an older VFS lacking it) use the
> current hunt sequences for compat

It will not fix the other reason to mark it as deprecated: it's not thread 
safe. If one thread changes the temp directory, other threads that use temp 
files will fail and may corrupt the db file. 

>   - Howard

---   ---
Eduardo Morras 


[sqlite] create temporary virtual table

2015-07-21 Thread Eduardo Morras
On Mon, 20 Jul 2015 17:34:06 -0500
Andy Rahn  wrote:

> I see there is no way to create a temporary virtual table.

It depends on how the virtual table is implemented. Your virtual table's 
xCreate function can create temporary tables; sqlite3 drops them on the 
sqlite3_close() call. Or are you asking about a particular virtual table 
extension, like fts3 or rtree?

> One idea I had was to create a second, in-memory db and attach that,
> then create the virtual table over there.
>
> Currently I have it working where as soon as I open the db, I use
> sqlite_master to find any left over virtual tables from before and
> drop them.
>
> Any other ideas?

Modify the code of your virtual table, or ask the sqlite3 developers if it's 
one of the virtual tables provided with sqlite3.

>  - Andy

---   ---
Eduardo Morras 


[sqlite] Schema-less JSON SQLite DB?

2015-07-16 Thread Eduardo Morras
On Wed, 15 Jul 2015 17:22:36 +0200
Sergej Jurečko  wrote:

> An idea I've had a while ago was to implement functions for json
> documents (using sqlite3_create_function_v2)
> 
> Json would be stored in a text column. You would need to define 2
> functions:
> - docget(document,key1,subval1,subval2,...)
> - docset(document,key1,subval1,subval2,...,val)
> 
> Last parameter of docset is value you wish to set.
> 
> So for instance one would write
> INSERT INTO mytable VALUES (1,'{key : 10, subdoc : {subkey : "a"}}');
> SELECT id,doc FROM mytable WHERE docget(doc,'key') > 10;
> SELECT id,doc FROM mytable WHERE docget(doc,'key','subdoc','subkey')
> = 'a'; UPDATE doc FROM mytable WHERE id=1 SET docset(doc,'key',12);
> 
> One could even implement indexes on keys within documents using
> additional tables.
> 
> What do you guys think? Is it stupid, could it be improved?

Perhaps Dr. Richard Hipp's UnQL implementation should be resurrected, or at 
least used as a good/bad example. It has an extension to SQL for querying 
unstructured data and a layer on top of sqlite to store JSON data. It's 
abandonware now, AFAIK.

http://unql.sqlite.org/index.html/wiki?name=UnQL
http://www.dataversity.net/unql-a-standardized-query-language-for-nosql-databases/
http://www.couchbase.com/press-releases/unql-query-language
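The proposal can also be prototyped today with a user-defined function. A minimal sketch in Python's stdlib sqlite3 (this docget handles only the lookup path, not docset; the table and data mirror the example in the quoted mail):

```python
import json
import sqlite3

def docget(document, *keys):
    # Walk the JSON document down the given key path; None if missing.
    node = json.loads(document)
    for k in keys:
        if not isinstance(node, dict) or k not in node:
            return None
        node = node[k]
    return node

conn = sqlite3.connect(":memory:")
conn.create_function("docget", -1, docget)   # -1 = variadic, as proposed
conn.execute("CREATE TABLE mytable (id, doc)")
conn.execute("INSERT INTO mytable VALUES (1, ?)",
             ('{"key": 10, "subdoc": {"subkey": "a"}}',))

top = conn.execute(
    "SELECT id FROM mytable WHERE docget(doc, 'key') > 5").fetchall()
nested = conn.execute(
    "SELECT id FROM mytable"
    " WHERE docget(doc, 'subdoc', 'subkey') = 'a'").fetchall()
print(top, nested)  # [(1,)] [(1,)]
```

A scalar function like this cannot use an index, so every row's document is parsed per query; the "additional tables" idea in the quoted mail is what would make it indexable.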


> 
> Sergej


---   ---
Eduardo Morras 


[sqlite] convert a PostgreSQL to sqlite

2015-07-06 Thread Eduardo Morras
On Sun, 5 Jul 2015 14:44:00 +0200
 wrote:

> Is there a way (free, open source, Ubuntu) to convert a
> PostgreSQL-database with data in it to a sqlite database?

Check the VirtualPG virtual table extension, 
https://www.gaia-gis.it/fossil/virtualpg/index

It permits creating and filling a virtual table in sqlite with data from a 
PostgreSQL server.


---   ---
Eduardo Morras 


[sqlite] static sqlite database

2015-06-10 Thread Eduardo Morras
On Wed, 10 Jun 2015 14:13:29 +
Igor Stassiy  wrote:

> Thanks for all your replies. Please let me clarify:
> 
> I need to do certain range operations efficiently and I prefer SQL
> functionality, plus SQLite gives a cross platform file storage
> format. The table having 3.2 megabytes is just an example. I am
> storing much more (orders of 10^5 of points) however the effect is
> similar.
> 
> When I store several tables side by side the effect seems to be worse
> (I assume SQLite leaves the possibility for inserts and hence needs
> to have some space to do them efficiently, please correctly if I am
> wrong). SQLite forums suggest to store tables in separate files to
> avoid vacuum space in between the tables, are there any other methods
> to save space?

Have you looked at the R-Tree extension? https://www.sqlite.org/rtree.html

It creates a table and indexes to store and query latitude and longitude 
efficiently. 

Or SpatiaLite? https://en.wikipedia.org/wiki/SpatiaLite 
http://www.gaia-gis.it/gaia-sins/

It's open source under the MPL.

A page is dedicated to only one thing: table A, table B, internal sqlite3 
data, index C, whatever. A page can't store data from one table together with 
data from another table or index. This means that if your page size (default 
1024 bytes) is not chosen wisely (e.g. 64 KB = 65536 bytes) and your table 
uses only a few bytes (say 128 bytes), the rest of the page 
(65536-128 = 65408 bytes) sits empty, reserved for future table growth. There 
are page header and tail data you must count too, so it's a bit less. Note 
that since you want to store one big table with data, the space lost to an 
unwise page size is minimal. 

You can have empty pages inside the database because you deleted data or 
dropped an index or a table, but in your static db that shouldn't happen. In 
any case, a vacuum and analyze solve the issue.

I use the trick of storing tables in separate db files and attaching them to 
the main db for speed on mechanical hard disks, because putting those files on 
different disks maximizes the IOPS. Autovacuum and proper db maintenance 
should minimize the vacuum space between tables.
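The free-page behaviour described above can be observed directly (Python stdlib sqlite3; the page size, row size and row count are arbitrary):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("PRAGMA page_size = 4096")   # must be set before the first table

cur.execute("CREATE TABLE t (x)")
cur.executemany("INSERT INTO t VALUES (?)",
                [("x" * 100,) for _ in range(1000)])
conn.commit()

cur.execute("DELETE FROM t")             # rows gone, pages go to the free list
conn.commit()
before = cur.execute("PRAGMA freelist_count").fetchone()[0]
cur.execute("VACUUM")                    # rebuild the file, dropping free pages
after = cur.execute("PRAGMA freelist_count").fetchone()[0]
print(before, after)                     # e.g. "27 0" — build dependent
```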

HTH

---   ---
Eduardo Morras 


[sqlite] User-defined types -- in Andl

2015-06-08 Thread Eduardo Morras
On Mon, 8 Jun 2015 15:28:11 +1000
 wrote:

> Thanks for pointing it out, but I knew that the best way to show off a
> language is with examples. That's why there are nine sample Andl
> scripts comprising dozens of individual examples in the Samples
> folder. My guess is if that you're asking me to write examples, the
> real lesson is that I didn't make them easy enough to find.
> 
> I have a formal grammar, but I don't expect anyone to read that. More
> and better examples is the way to go.

No, a big bold no. If I want to implement your language in some product, I 
need the formal grammar.

Learning by example means learning the white rules (the dos); I need to know 
the black rules too (the don'ts) to get full knowledge of the language.

> Regards
> David M Bennett FACS
> 
> Andl - A New Database Language - andl.org


---   ---
Eduardo Morras 


[sqlite] Replace an open database

2015-05-28 Thread Eduardo Morras
On Wed, 27 May 2015 19:07:50 +0200
Dominique Devienne  wrote:

> On Wed, May 27, 2015 at 6:59 PM, Wade, William 
> wrote:
> 
> > (including his own uuid, indicating that this is logically a new
> > file). When existing readers get around to reading again, they will
> > check that uuid, and handle the change in writers "gracefully."
> >
> 
> You can perhaps abuse the Application-Id [1] as your uuid to check.
> But you could also store a real uuid in a 1-row well-known table.
> Just a thought (on that specific point). --DD

Better the user_version pragma[2]. Application-Id identifies your sqlite3 db 
file with your application, e.g. via the unix file(1) program or similar.

> [1] https://www.sqlite.org/pragma.html#pragma_application_id
[2] https://www.sqlite.org/pragma.html#pragma_schema_version

If your new db has an identical schema, don't modify the schema version. If 
you do, readers must call sqlite3_reset() and sqlite3_prepare() again.
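A sketch of the user_version approach (Python stdlib sqlite3; the value 7 is arbitrary):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
v0 = conn.execute("PRAGMA user_version").fetchone()[0]

# A writer bumps user_version as its own "this is logically a new file"
# marker; readers poll it cheaply and re-prepare when it changes.
conn.execute("PRAGMA user_version = 7")
v1 = conn.execute("PRAGMA user_version").fetchone()[0]
print(v0, v1)  # 0 7
```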

---   ---
Eduardo Morras 


[sqlite] Virtual Table query - why isn't SQLite using my indexes?

2015-05-16 Thread Eduardo Morras
On Fri, 15 May 2015 18:34:33 +
Eric Hill  wrote:

> Thanks for your reply.
> 
> I went as high as using (number of rows)^4, resulting in a cost of
> ~440 trillion for the unindexed case, along with setting the cost to
> 1 for the indexed case, and it still won't use my index.
> 
> I'd like to step out of my xBestFilter implementation into SQLite
> code to see if I can tell what is going on there, but when I step
> out, the call stack knows where I'm supposed to be, but the debugger
> does not find the right line in sqlite3.c.  I have built sqlite3.c
> simply by adding the amalgamation to my Microsoft Visual Studio 2013 C
> ++ project.  Is there something I can do to make the debugger work?
> 
> I will postpone index creation until the call to xFilter, I reckon,
> once I work out these other issues.  Thanks for the tip!

Perhaps you can disable the unindexed path, forcing use of the index always. 
This way your virtual table always uses your index.

What I don't know is: 

Sqlite3 can use only one index per query; I don't know if indexes from virtual 
tables count toward this limit. I suppose not, because it should be an 
internal decision of the virtual table implementation how to return the 
selected data the fastest way. 

If it does count toward the index limit, perhaps Sqlite3 found a faster index 
when joining your virtual table data with the other table's data.

Perhaps I misread something in the documentation.

Dr. Hipp, can you explain whether virtual table indexes count toward this 
limit?

> Eric

---   ---
Eduardo Morras 


[sqlite] Regarding SQLITE_PRIVATE

2015-05-15 Thread Eduardo Morras
On Fri, 15 May 2015 17:13:32 +0530
Sairam Gaddam  wrote:

> On Fri, May 15, 2015 at 4:46 PM, Hick Gunter  wrote:
> 
> > The keyword "static" before a function name limits its visibility
> > to the current source file.
> >
> > But many of the PRIVATE functions are not declared static like the
> > "sqlite3VdbePrintOp"
> function.
> If they do declare, can i know where they did that?

In the amalgamation you can search for these defines:

#define SQLITE_PRIVATE static
#define SQLITE_API extern

Some lines up, you'll find in which .h file a given function is declared.

By default all functions in C have external linkage (if you don't add static, 
they are extern), so if you declare them in a .h file they can be called from 
other .c files. If you don't declare them in a .h but at the top of the .c 
file where they are implemented, they can't be called from other .c files (at 
least not without declaring them yourself). If you declare them as static, you 
can't call them from any other .c file.

HTH

---   ---
Eduardo Morras 


[sqlite] Please confirm what I THINK I know about blobs

2015-05-11 Thread Eduardo Morras
On Mon, 11 May 2015 14:00:41 +
"Drago, William @ CSG - NARDA-MITEQ"  wrote:


> Correct me if I'm wrong, but the only way to read comments is by
> reading back the entire CREATE TABLE string, correct?

Yes, you need to parse it. You search for '--' and '\n'; the chars between 
them are the comment. 

I thought you needed it only for documentation purposes and not for 
automatic/dynamic programming.

> Thanks,

---   ---
Eduardo Morras 


[sqlite] Please confirm what I THINK I know about blobs

2015-05-09 Thread Eduardo Morras
On Sat, 09 May 2015 06:09:41 -0400
William Drago  wrote:

> All,
> 
> Say you encounter a blob in a database. There's no way to 
> tell if that blob carries bytes, floats, doubles, etc, correct?
> 
> Assuming the above is true, then is it always prudent to 
> store some metadata along with your blobs so that they can 
> be identified in the future?
> 
> Example table:
> 
> ModelNo TEXT (e.g. SO-239)
> SerialNo TEXT (e.g. 101)
> VSWR BLOB (e.g. x'feab12c...')
> VSWR_Type TEXT (e.g. double)
> 
> 
> Does this make sense?

You can use SQL comments in CREATE TABLE; those comments aren't deleted from 
the SQLITE_MASTER table, which you can query like a normal table.

CREATE TABLE blob_table (
 ModelNo TEXT,  -- e.g. S0-239
 SerialNo TEXT, -- e.g. 101
 VSWR BLOB  -- double, e.g. x'feab12c'
);

SELECT sql from sqlite_master where type='table' AND tbl_name='blob_table';

will return

CREATE TABLE blob_table (
 ModelNo TEXT,  -- e.g. S0-239
 SerialNo TEXT, -- e.g. 101
 VSWR BLOB  -- double, e.g. x'feab12c'
)

(Note that ';' is deleted)

I checked it in the Firefox SQLite Manager extension.
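The round trip can also be verified in a few lines (Python stdlib sqlite3, using the table from above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE blob_table (
 ModelNo TEXT,  -- e.g. S0-239
 SerialNo TEXT, -- e.g. 101
 VSWR BLOB      -- double, e.g. x'feab12c'
)""")

# sqlite_master stores the CREATE statement verbatim, comments included.
sql = conn.execute(
    "SELECT sql FROM sqlite_master"
    " WHERE type = 'table' AND tbl_name = 'blob_table'").fetchone()[0]
print("-- double" in sql)  # True
```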

> Thanks,
> -Bill


---   ---
Eduardo Morras 


[sqlite] SQLite queries

2015-05-07 Thread Eduardo Morras
On Thu, 7 May 2015 11:25:33 -0500
John McKown  wrote:

> On Thu, May 7, 2015 at 10:01 AM, Stephen Chrzanowski
>  wrote:
> 
> > Although I can understand the sarcasm you're sending out, a
> > client/server infrastructure would be an interesting task for
> > SQLite to do, but to answer the OP, no, SQLite isn't inherently
> > designed to be a client/server.
> >
> 
> And not really that difficult. Embed SQLite into some "server code"
> to which a client connects, probably via TCPIP or some other IPC
> method. The client connects to the server, which uses the embedded
> SQLite to access the appropriate SQLite data base file. I would guess
> that the server could even be multi-threaded with each thread
> controlling a separate SQLite data base file.

Do you mean creating a NewSQL shared-nothing distributed database with 
SQLite3, using something like nanomsg or ZeroMQ to distribute workload and 
messages, and graph theory to manage the whole headless distributed server?

Don't try to do that. It's far more complex than you think, and even Pandora 
wouldn't open it. I see now that Stephen answered and cites some of the 
problems you may encounter in his "There are MANY things to be taken care 
of..." paragraph.

---   ---
Eduardo Morras 


[sqlite] Possible bug with locking/retying

2015-05-03 Thread Eduardo Morras
On Sun, 3 May 2015 01:18:11 +0100
Simon Slavin  wrote:

> In searching for something else I came across this:
> 
> <http://beets.radbox.org/blog/sqlite-nightmare.html>
> 
> I don't like the fact that it's undated.  For all I know this is
> about a three year old bug which has long since been patched.
> 
> I understand the description, but not what SQLite does internally,
> and I don't know whether it was ever reported to the SQLite dev
> team.  Would someone like to take a look ?

I think it's a misuse of Sqlite and not a real bug. He adds code to ensure 
only one thread accesses the database in a multithreaded application. From the 
description, I infer he uses -DSQLITE_THREADSAFE=2 (multithread), where 
-DSQLITE_THREADSAFE=1 (serialized) would solve the problem better, because the 
code that ensures only one thread accesses the database in a multithreaded app 
is then sqlite's own code; no need to reinvent the wheel.

Please pay attention to my first two words, "I think": this is my opinion 
about the problem as described, not a real solution to other problems.

> Simon.
---   ---
Eduardo Morras 


[sqlite] Thoughts about enhancing "PRAGMA quick_check" speed

2015-05-01 Thread Eduardo Morras
On Fri, 01 May 2015 17:58:06 +0200
Jean-Marie CUAZ  wrote:

> Hello,
> 
> We use Sqlite + Tcl/Tk in a payroll application and everything works 
> great at high speed !
> 
> In this application, "PRAGMA quick_check" are processed in a few
> places as a security measure :
> - before processing backup/restore operations.
> - during the process of login into the application.
> 
> The size of a database file for an instance of our application vary
> from 100 MB to 700 MB.
> 
> With a "PRAGMA quick_check" processed when login into the
> application, the waiting time can reach one minute on an average
> computer (3 Ghz + 7200 rpm HD).
> 
> So I ask this naive question : except if already done, could it be 
> possible to leverage information gathered by Sqlite with "PRAGMA 
> threads" to process "PRAGMA quick_check" in a multi-threaded manner ?

Have you measured whether it's CPU-bound or IO-bound? If it's limited by IO, 
using a bigger cache or activating mmap will improve that.

Also, if you only see it at login (I suppose login is done once at app start), 
the system cache doesn't hold the db yet, while when doing it at 
backup/restore the file system cache has it. Try to do something like this:

%cat /path/to/db.file > /dev/null

> Thank you
> 
> Jean-Marie
> 


---   ---
Eduardo Morras 


[sqlite] Fwd: Is there an ODBC driver for SQLite

2015-05-01 Thread Eduardo Morras
On Fri, 1 May 2015 11:28:48 -0400
Igor Korot  wrote:

>   Hi, ALL,
> Is there an ODBC driver for SQLite DB?

http://www.ch-werner.de/sqliteodbc/

> I'm trying to see if it will be possible to have an application
> written in dBase + Paradox
> to be converted to dBase + SQLite.
> 
> So, if there is such a driver I would be very interested in requesting
> it and evaluate it.
> 
> Also it would be nice to know whether this driver is OK to use legally
> in the commercial app.
> 
> Thank you for any info you can provide.

---   ---
Eduardo Morras 


[sqlite] Segfault during FTS index creation from huge data

2015-04-30 Thread Eduardo Morras
On Fri, 3 Apr 2015 18:16:18 +0300
Artem  wrote:

> Hi!
> 
> The situation is like that. There's a SQLite database with around 3
> billion records. Each record consists of a certain CHAR field and
> several other additional fields with different types. The file size
> is approx. 340 GB. The maximum content length in the doc field is 256
> symbols, the content is in Russian.


You can extend the fts3/4 tokenizers to recognize Russian stop words[1] and 
exclude them from the FTS index. I don't know Russian, but in English examples 
of stop words are 'a', 'the', 'of', etc.

See https://www.sqlite.org/fts3.html#section_8_1 to implement your own 
tokenizer, or extend the unicode one to exclude your stop words. A fast hack 
could be to add code at the end of the icuNext[2] function (file 
ext/fts3/fts3_icu.c) and check whether the token is in your stop word list; if 
it is, skip the token [3] (ppToken is a pointer to the current string), 
something like this:

233 *piEndOffset = pCsr->aOffset[iEnd];
234 *piPosition = pCsr->iToken++;
235 if ( token_is_stop_word(ppToken,nByte) ){
236   *ppToken = Pointer_to_empty_string;
237   *pnBytes = 0;
238   *piStartOffset = pCsr->aOffset[iStart+nByte];
239   *piEndOffset = pCsr->aOffset[iEnd+nByte];
240   *piPosition = pCsr->iToken--;
241 }
242 return SQLITE_OK;

N.B. It's a fast hack and I haven't compiled, run or checked it against the 
full Sqlite3 documentation. 

There are list of stop words available[4][5] on internet.

[1] https://en.wikipedia.org/wiki/Stop_words
[2] 
http://www.sqlite.org/src/info/e319e108661147bcca8dd511cd562f33a1ba81b5?txt=1&ln=177
[3] 
http://www.sqlite.org/src/info/e319e108661147bcca8dd511cd562f33a1ba81b5?txt=1&ln=235
[4] https://code.google.com/p/stop-words/ (Warning!! GPLv3 code)
[5] http://www.ranks.nl/stopwords/russian (Warning!! Unknow licence)

> Thank you.

HTH

---   ---
Eduardo Morras 


[sqlite] Segfault during FTS index creation from huge data

2015-04-29 Thread Eduardo Morras
On Wed, 29 Apr 2015 12:00:02 +0200
Dominique Pellé  wrote:

> I'd suggest to try reproducing the problem on Linux after disabling
> memory overcommit  (at least temporarily for the experimentation):
> 
> * If it returns SQLITE_NOMEM without segfault, then it's not a bug
>   but a limitation of SQLite which uses lots of memory for FTS, or
>   a leak in the application.
> * If it still segfaults, then there is a bug in SQLite or in the
> application.

A fast check of the code inside malloc.c shows this warning inside the void 
*sqlite3Malloc(u64 n) function. I don't know if FTS uses sqlite3_malloc() 
directly or this one.

/* A memory allocation of a number of bytes which is near the maximum
** signed integer value might cause an integer overflow inside of the
** xMalloc().  Hence we limit the maximum size to 0x7fffff00, giving
** 255 bytes of overhead.  SQLite itself will never use anything near
** this amount.  The only way to reach the limit is with sqlite3_malloc() */



> 
> Regards
> Dominique


---   ---
Eduardo Morras 


[sqlite] Segfault during FTS index creation from huge data

2015-04-29 Thread Eduardo Morras
On Tue, 28 Apr 2015 16:49:46 -0600
Scott Robison  wrote:

> On Tue, Apr 28, 2015 at 4:27 PM, Artem  wrote:
> 
> > > That's not a segfault, though, is it.
> >
> > When I did the same in linux version of SQLite - I saw
> > the "Segmentation Fault" error.
> >
> 
> I never saw a segfault in my case, though I never tried anything on
> any posix style system. It was strictly Windows.

I don't know if it's been asked yet, but do you use a 64-bit Windows or a 
32-bit version?

> -- 
> Scott Robison

---   ---
Eduardo Morras 


[sqlite] Regarding testing

2015-04-27 Thread Eduardo Morras
On Mon, 27 Apr 2015 18:06:31 +0530
Sairam Gaddam  wrote:

> Yeah I read that link previously but how do i get all those test
> cases?

You can buy the professional tests at 
http://www.hwaci.com/sw/sqlite/prosupport.html , Mr. Hipp's company behind 
Sqlite.

Or you can trust them, or create the tests you need for your use case.

---   ---
Eduardo Morras 


[sqlite] Destroy all evidence of a database

2015-04-23 Thread Eduardo Morras
On Wed, 22 Apr 2015 20:07:45 +0100
Simon Slavin  wrote:

> 
> On 22 Apr 2015, at 7:18pm, Scott Hess  wrote:
> 
> > The only way SQLite can get to the disk is using the vfs, so if the
> > vfs encrypts things, all of the files (main db, temp db, journal,
> > everything) will be encrypted.
> 
> Guys.  Guys.  Guys.  My app doesn't have access to any level below
> standard file system calls.  This is a highly secure system.  Any
> calls which talk directly to hardware (e.g. turn the caps lock light
> on, access SMART diagnostics, try to count the number of displays)
> will fail because my app isn't allowed to do that stuff.  Any attempt
> from my app to mount anything will fail.  My app has access to just
> GUI and files.  I don't have to worry about the security setup at OS
> level, merely not leave files about with sensitive information in
> them.

You can reference-count the number of files it creates and deletes. If it 
creates more files than it deletes, you have a problem. It doesn't involve big 
changes in the sqlite VFS code and could even be implemented in the sqlite3 
core as a debug feature.

> 
> Simon.

---   ---
Eduardo Morras 


[sqlite] json_* functions in sqlite

2015-04-22 Thread Eduardo Morras
On Tue, 21 Apr 2015 18:09:33 -0700
Ashish Sharma  wrote:

> Hi
> 
> Many times I store JSON data in sqlite. It will be useful if sqlite
> came with functions which understand JSON. Presto has a nice set
> https://prestodb.io/docs/current/functions/json.html
> 
> I have two questions
> 
> 1. Will sqlite overlords consider adding this to sqlite core?
> 2. If so, what steps are needed for this?

Check unql.sqlite.org. It's abandonware AFAIK, but you can reuse part of the 
code. Stephan Beal, a fellow list member, has cson 
(http://fossil.wanderinghorse.net/wikis/cson/?page=cson); you can use that too.

Also, you can write a module/functions that work with BLOBs and TEXTs. Check how 
sqlar works (http://www.sqlite.org/sqlar/doc/trunk/README.md)


> Thanks
> Ashish
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras 


[sqlite] Destroy all evidence of a database

2015-04-22 Thread Eduardo Morras
On Wed, 22 Apr 2015 13:50:43 +0100
Simon Slavin  wrote:

> Dear folks,

> 
> Assuming no hardware/OS faults is it possible for any other
> SQLite-created files to still exist ?  Journal ?  Temp index ?
> Shared memory ?  Anything ?

a) If the app crashes, it may create a dump file containing SQLite cache information.
b) If memory is low and depending on the OS, part of the app's code/data may 
remain in swap for a while after the app quits.
c) On Windows, antivirus software may lock a temp file and prevent it from being deleted.
d) If you use ZFS or a similar filesystem, a snapshot of the filesystem is taken every 
10-30 secs; you can go back in time and recover the filesystem state from 2 days 
ago (up to 6 months IIRC)

> I have read <https://www.sqlite.org/tempfiles.html>.
> 
> Simon.

---   ---
Eduardo Morras 


[sqlite] SELECT performance vs PRIMARY KEY/UNIQUE constraints

2015-04-20 Thread Eduardo Morras
On Mon, 20 Apr 2015 12:01:59 +0200
Nicolas Boullis  wrote:

> Hi,
> 
> On Mon, Apr 20, 2015 at 11:18:56AM +0200, Clemens Ladisch wrote:
> > If that does not help, we'll have to look at the actual queries (and
> > their EXPLAIN QUERY PLAN output).
> 
> That would certainly help, but I would have to ask for permission to 
> make this information public, or to anonymize even the names of the 
> tables and columns.

Knowing:

a) the kind of data (do you use blobs or big text columns?)
b) whether the data was inserted in primary-key order or randomly
c) whether you have a multi-gigabyte db
d) the pragma configuration 
e) the column order (put blobs and text at the end of the table's columns)

may let us help you more without exposing internal information.

Also remember not to put the primary key in multicolumn indexes, and upgrade your 
sqlite3 to a newer version (from 3.7.13 to 3.8.9)

> 
> Thanks for your help,
> 
> -- 
> Nicolas Boullis


---   ---
Eduardo Morras 


[sqlite] Query times vary between 0.2 s and 30 s for very similar queries - how to trick the optimizer?

2015-03-21 Thread Eduardo Morras
On Wed, 18 Mar 2015 14:43:26 +0100
"Mario M. Westphal"  wrote:

> I'm using 3.8.8.1 on Windows via the "C" interface.
> 
> I work with SQLite for several years with good success. And I know
> that no optimizer ever will be perfect. But I've just stumbled upon a
> case where a very similar query requires between 0.2 seconds and a
> whopping 30 seconds.
> 
> I've simplified things as much as possible and included the create
> instructions and queries below.

<...snip...>

> The question is: When JOINing large tables with a temporary table,
> how to ensure that the optimizer can work optimal? Running ANALYZE
> with a temporary table probably does not work, and ANALYZE takes
> about 1 minute on this database so this is not feasible for each
> query.
> 
> I'm glad to have found an apparently working solution (IN instead of
> JOIN) but I wonder if this could be somehow automated by the
> optimizer? Or maybe this is a worst-case for the optimizer?
> 

You can run ANALYZE on any table; try:

ANALYZE temp_table;
SELECT 

You can also stress-test IN to find where its limit is. I 
think/suppose/suspect that in this case there is no win in using a 
temporary table instead of an IN; the temp table should only become faster when 
it has more columns used in the WHERE clause or an additional join restriction.

---   ---
Eduardo Morras 


[sqlite] Sqlite3 tutorial and reference manuals?

2015-02-22 Thread Eduardo Morras
On Sun, 22 Feb 2015 13:10:25 -0500
russ lyttle  wrote:

> Contiki-os sounds interesting, especially if it can be adapted to run
> sqlite3 on an Arduino UNO or MEGA (with attached SD card). Has anyone
> attempted such a thing?
> 
> The "natural" language for the Raspberry PI is Python. For Arduino it
> is C++. I've confirmed that both PERL and Python drop too many
> messages when using my naif DB with sqlite3, R-PI model A, and Digi
> xbee s2 radios.

Sqlite3 works under contiki-os, but (a big BUT) it must be a bare-bones 
-Os compilation, omitting all advanced features (threading, FTS, extensions, 
foreign keys, etc...). You need to write a VFS too.

---   ---
Eduardo Morras 


[sqlite] Sqlite3 tutorial and reference manuals?

2015-02-22 Thread Eduardo Morras
On Sat, 21 Feb 2015 23:04:53 -0500
russ lyttle  wrote:

> Thanks.
> I doing building automation using Raspberry Pi, Arduino, and zigbee
> devices. I started with Python, but that is too slow, so I would like
> to move to C++. Dropped messages are not acceptable, and response to
> user actions needs to be "timely".
> 
> I'll hold futher questions until after looking at the recommendations
> my by you kind people.

For zigbee devices I use contiki-os. I suggest you use C only if your 
devices are low power. 

---   ---
Eduardo Morras 


[sqlite] Appropriate Uses For SQLite

2015-02-19 Thread Eduardo Morras
On Thu, 19 Feb 2015 08:31:13 +0100
Olivier  wrote:

> Hello all,
> 
> https://www.sqlite.org/whentouse.html :
> "The amount of web traffic that SQLite can handle depends on how
> heavily the website uses its database. Generally speaking, any site
> that gets fewer than 100K hits/day should work fine with SQLite. The
> 100K hits/day figure is a conservative estimate, not a hard upper
> bound. SQLite has been demonstrated to work with 10 times that amount
> of traffic.
> 
> The SQLite website (https://www.sqlite.org/) uses SQLite itself, of 
> course, and as of this writing (2015) it handles about 400K to 500K
> HTTP requests per day, about 15-20% of which are dynamic pages
> touching the database. Each dynamic page does roughly 200 SQL
> statements. This setup runs on a single VM that shares a physical
> server with 23 others and yet still keeps the load average of below
> 0.1 most of the time."
> 
> --
> 
> it would be interesting to put *all* sqlite.org pages in the
> database, even if it is useless. This would test with 500K HTTP
> requests per day. It will then be possible to modify this paragraph
> and indicate that Sqlite smoothly manages the 500K HTTP requests per
> day of this website, thus about 100 000K SQL statements per day.
> 
> And why not test with writing on each visit, and even every page
> visit? If Sqlite accept the charge, it would be impressive. it would
> also demonstrate the interest of WAL mode.
> 
> With the evolution of Sqlite and materials evolution (SSD, 
> microprocessors ...), it might be possible.


You can test Drupal with SQLite, IIRC since Drupal 7. Create a site, or use a 
demonstration site, and run an HTTP benchmark against it.

There are other CMSes that can use SQLite as their db (Serendipity, Joomla, 
MediaWiki) that you can play and test with.

> 
> Olivier

---   ---
Eduardo Morras 


[sqlite] Behaviour when subquery behind comparison operator in where-clause returns empty result

2015-02-18 Thread Eduardo Morras
On Wed, 18 Feb 2015 10:23:16 +0100
Clemens Ladisch  wrote:

> gunnar wrote:
> > When I execute a query that has a subquery behind a comparison
> > operator in its where-clause and the subquery returns an empty
> > result, then the result of the complete query is exactly how I want
> > it: also an empty result. Can I assume that this behaviour will
> > stay the same in next versions of sqlite?
> >
> > SELECT *
> > FROM ordercallback
> > WHERE account=@ACCOUNT
> >   AND cb_seq_num>(
> > SELECT cb_seq_num
> > FROM ordercallback
> > WHERE cb_uuid=@CBUUID);
> 
> AFAIK the SQL standard requires such a subquery to return exactly one
> result.
> 
> SQLite allows any number of results; if there are more than one, it
> uses the first one; if there is none, it uses NULL instead.  This
> behaviour is not expected to change because it would break too many
> programs that already rely on it.
> 
> If you want to make this more compatible with other databases, handle
> these cases explicitly:
> 
> ... WHERE cb_seq_num > (
>   SELECT cb_seq_num
>   FROM ordercallback
>   WHERE cb_uuid=@CBUUID
>   UNION ALL
>   SELECT NULL   -- at least one result
>   LIMIT 1)  -- at most one result

Shouldn't it add an ORDER BY cb_seq_num to get the lowest one?

...
WHERE cb_uuid=@CBUUID
ORDER BY cb_seq_num ASC
UNION ALL
...
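The behavior Clemens describes above is easy to see in practice. A minimal sketch (Python's bundled sqlite3; the table layout and the placeholder values standing in for @ACCOUNT/@CBUUID are invented for illustration): when the scalar subquery matches nothing, SQLite substitutes NULL, and the outer comparison against NULL filters out every row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ordercallback (cb_uuid TEXT, account TEXT, cb_seq_num INTEGER);
INSERT INTO ordercallback VALUES ('u1', 'acc', 10), ('u2', 'acc', 20);
""")

# No row matches the subquery -> SQLite substitutes NULL, and
# cb_seq_num > NULL evaluates to NULL, so the outer query returns no rows.
empty = conn.execute("""
    SELECT * FROM ordercallback
    WHERE account = 'acc'
      AND cb_seq_num > (SELECT cb_seq_num FROM ordercallback
                        WHERE cb_uuid = 'no-such-uuid')
""").fetchall()
print(len(empty))  # 0
```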
> 
> 
> Regards,
> Clemens

---   ---
Eduardo Morras 


Re: [sqlite] Performance increase between 3.7 and 3.8

2015-02-03 Thread Eduardo Morras
On Tue, 3 Feb 2015 06:39:02 -0700 (MST)
Jan Slodicka <j...@resco.net> wrote:

> Eduardo Morras-2 wrote
> > A ~8000MB db with app example data. More than 1000 query-corp
> > created as part of test driven development of the app. We have
> > precalculated the correct results in tables and its number of rows.
> > No write, only read queries and don't use other improvements like
> > partial indexes. Queries run in sequential order at a time.
> 
> That means that (complex) queries are substantially faster in v3.8.
> That's important for me, too.

Yes, the 'next generation query planner' helps a lot in queries with more than 
4 tables, whether temp or normal tables.

> 
> Thanks.


---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Performance increase between 3.7 and 3.8

2015-02-03 Thread Eduardo Morras
On Tue, 3 Feb 2015 02:43:00 -0700 (MST)
Jan Slodicka <j...@resco.net> wrote:

> Eduardo Morras-2 wrote
> > I use a big test db to assure new versions of sqlite works
> > properly. With 3.7.15.2 it takes 43 minutes, with 3.8.8.2 on same
> > hardware 27 minutes, it's 16/0.43 = 37% less or 27/0.43 = 63%
> > improve.
> 
> Thanks, Eduardo. Could I ask you for a rough characterization of the
> test performed?

A ~8000MB db with example app data, and a corpus of more than 1000 queries created 
as part of test-driven development of the app. We have precalculated the correct 
results, and their row counts, in tables. No writes, only read queries, and we don't 
use other improvements like partial indexes. The queries run sequentially, one at 
a time.

Some queries test our own SQL functions and a virtual table; no improvements 
are expected for them.

The test was done with the same application version in debug mode, recompiled 
with 3.7.15.2, on the same computer. All tests passed.

---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Performance increase between 3.7 and 3.8

2015-02-02 Thread Eduardo Morras
On Mon, 2 Feb 2015 10:08:36 -0700 (MST)
Jan Slodicka <j...@resco.net> wrote:

> I know the reports about huge performance increase achieved within
> the last year. (Compliments for that.) However, those numbers ignore
> processor architecture and I/O. My question is a different one.
> 
> What speed difference do you perceive in real-world applications?
> 
> I know that there can't be any answer valid for everybody and for
> every situation...
> 
> What about your experience? Do you perceive better performance due to
> the last SQLite updates?

I use a big test db to verify that new versions of sqlite work properly. With 
3.7.15.2 it takes 43 minutes, with 3.8.8.2 on the same hardware 27 minutes: 
16/43 ≈ 37% less time, i.e. 27/43 ≈ 63% of the original run time.

---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] sqlite3 performance on select count very slow for 16 GB file

2015-01-24 Thread Eduardo Morras
On Tue, 20 Jan 2015 12:12:00 +
"Parakkal, Navin S (Software Engineer)" <navin.parak...@hp.com> wrote:

> Hello,
> 
>I've few questions about sqlite3 , the database it creates.
> Actually I'm finding lot of differences in performance.
> 
>  My story:
> I have this sqlite3 database called  hp.db which is like 100+
> million  records for table1. The size of hp.db on Linux x64 (CentOS
> 7) is like 16 GB. When I do a select count(*) on hp_table1 it takes
> more than 5 mins which is quite a huge time. This file is static for
> now ie we copied it from a production server for analysis. Now I
> create a index by create index nvnhpindex on hp_table1
> (column1,column2) . The primary key of the table hp_table1 is
> (column1,column2). It takes around some time (maybe 20 minutes or
> less , I went for lunch and came back , really didn't note the time).
> Now I do select count(*) on hp_table1 , it takes around 15 secs. This
> is what we want our objective. We want it to be fast. The create
> index has increased the size of hp.db to 18 GB. This is OK with us
> and the customers.
> 
> The problem is this is not a static database. We keep inserting data
> (insert rows every 10 secs or like atleast 1 minute ) and
> occassionally delete rows (like once in 5 days). This is a 24x7
> system.
> 
> 
> So to identify the problem , I created a empty_database similar to
> hp.db with no rows. I created a index on column1,column2 on
> empty_table1 inside empty_database.
> 
> Now I inserted the rows from csv (this CSV was created by .mode csv,
> output myhp.csv, select * from hp_table1).
> 
> The size of database is around 18GB (empty_database) with rows. Now I
> do a select count(*) on empty_table1 (actually it contains lots of
> rows like 100M+ records ) and it takes more than 5 mins. 5 mins is
> too much of a time for us to bear. The customer wants the information
> within a minute.
> 
> 
> Can you please help in resolving this problem ? We are planning to
> deploy this across 1+ nodes on Linux x64 on one customer and many
> other customers are going in the similar direction. 
> 
> How do we go about resolving this ie what should we do to create a
> table with sub minute access for 100-500 million . How do we create
> the indexes ? Any other performance incentives.

Use a trigger on insert and a trigger on delete that maintain a value in 
another table holding the current count(*). The table can be temporary if you want 
and stay in memory, but then you should do one real count(*) at application startup 
to seed it.

> Some say we should buy/use Oracle but I just am holding onto sqlite3
> assuming it would help me solve our problem.

You can use PostgreSQL, and spend part of the Oracle licence cost on better 
hardware and a dinner for the team.

> Regards,
> Navin
> 


---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Advice needed for optimization options

2015-01-24 Thread Eduardo Morras
On Mon, 19 Jan 2015 13:30:24 -0500
Andy Jewell <a...@alexa.com> wrote:

> I have a single threaded application, which at its heart is
> 
> while (true) {
>   open a database // any one of thousands, so I can’t just keep
> it open do a select
>   close the database
> }
> 
> With that part unchanged, does anyone have any optimization
> suggestions? I am compiling the amalgamation myself, so I’m open to
> both compile time and run time options. I don’t need to worry about
> other processes updating the database. I’m not looking for SQL advice
> at this time. 

If your application is single threaded, you can move the open/close out of the 
while(true). You say that you can't keep the database open because you work with 
thousands of databases, but you can keep them all open, each with its own db pointer 
(sqlite3 *); there's no limit on that. There is a limit (30/62) only if you 
try to ATTACH databases to an already-open db.

> Openning the database with SQLITE_OPEN_READONLY was an enourmous
> help. Explicitly turning off threading didn’t seem to help much.
> Changing my compiler optimization level from -O2 to -O3 (gcc 4.4)
> made it larger but no faster.
> 
> 
> many thanks,
> adj
> 


---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Error: database disk image is malformed

2015-01-05 Thread Eduardo Morras
On Sun, 4 Jan 2015 17:31:14 +0800
Neo Anderson <neo_in_mat...@msn.com> wrote:

> I have a customer database which appears corrupt. When I run sqlite3
> mydb and PRAGMA integrity_check I got (literally, no other lines):
> 
> Error: database disk image is malformed
> 
> However, I can .dump and .read to create a new database. The new
> database works fine and the most surprising result is the old and new
> database files are exactly of the same size. But running cmp old.db
> new.db gave me:
> 
> differ: char 27, line 1
> 
> My question is why .dump works but sqlite3 thinks the file is
> corrupt. Attached please find old.db and new.db in a zip package.
> Hope this can help improve sqlite.
> 

Char 27 is in the SQLite file header: it's part of the file change counter. When 
you .dump, the counter in your old.db is increased by 1; that's the difference. 
Both files have the same corruption.

The mailing list has attachments turned off, so they're deleted before the mail 
is redistributed.
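This can be checked directly against the file format: the change counter is a big-endian 32-bit integer at byte offset 24 of the 100-byte database header, so cmp's 1-based "char 27" falls inside it. A small sketch (the demo path and table are invented) reading the counter and showing a committed write bumping it:

```python
import os
import sqlite3
import struct
import tempfile

path = os.path.join(tempfile.mkdtemp(), "t.db")

def change_counter(db_path):
    # Big-endian 32-bit counter at header offsets 24..27;
    # cmp's 1-based "char 27" is one of those four bytes.
    with open(db_path, "rb") as f:
        return struct.unpack(">I", f.read(100)[24:28])[0]

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x)")
conn.commit()
conn.close()

before = change_counter(path)

conn = sqlite3.connect(path)
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()
conn.close()

print(change_counter(path) > before)  # True: committed writes bump the counter
```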

---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] VACUUM requires 6.7 times space ?

2015-01-05 Thread Eduardo Morras
On Mon, 5 Jan 2015 14:42:28 -0600
Jay Kreibich <j...@kreibi.ch> wrote:

> 
> 
> On Jan 5, 2015, at 12:25 PM, Jim Wilcoxson <pri...@gmail.com> wrote:
> 
> > Simon - instead of using vacuum, it's much faster to create a new
> > database from the old one, then rename it.  It's easy to do this in
> > Python using iterdump(), or you can connect to the new (empty)
> > database, do your create table statements, attach the old database
> > as olddb, then do:
> > 
> > insert into table1 select * from olddb.table1;
> > insert into table2 select 8 from olddb.table2;
> > 
> > This also lets you do the create table stmts w/o indexes, and add
> > the indexes after the inserts.  Not sure if that is faster or not
> > for your data.
> 
> If you look at code for VACUUM, that’s more or less what it does…
> only it is very smart about it, properly preserving ROWID values, as
> well as exact table definitions, sequences, analytics, and all the
> meta-data in the database.

The sqlite3 Backup API should work too, and covers the vacuum-to-file proposal.

You can also, from the sqlite3 shell, .dump the database to an .sql file, or pipe 
the output to gzip or xz and use the compressed file to import again. You don't need 
maximum compression to minimize the I/O; -6 for gzip and -3 for xz is enough.
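A minimal sketch of the Backup API route, using Python's `Connection.backup` wrapper around `sqlite3_backup_init()/step()/finish()` (in-memory databases here purely for illustration; a real compacting copy would target a fresh file and then rename it over the old one):

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.executescript("CREATE TABLE t (x); INSERT INTO t VALUES (1), (2), (3);")

# Copy everything into a fresh database; like VACUUM, the destination
# is written compactly, without the source's free pages.
dst = sqlite3.connect(":memory:")
src.backup(dst)

count = dst.execute("SELECT count(*) FROM t").fetchone()[0]
print(count)  # 3
```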

> 
>   -j


---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Counting rows

2014-12-12 Thread Eduardo Morras
On Thu, 11 Dec 2014 15:19:26 +
Simon Slavin <slav...@bigfraud.org> wrote:

> In my table which had about 300 million (sic.) rows I did this
> 
> SELECT count(*) FROM myTable;
> 
> to count the number of rows.  After half an hour it was still
> processing and I had to kill it.
> 
> I know that the internal structure of a table means that this number
> isn't simple to produce.  But is there really no faster way ?  This
> table is going to have about six times that amount soon.  I really
> can't count the rows in less than a few hours ?

Try 

SELECT count(myTable.indexed_column) FROM myTable;

where indexed_column is a column of your table that is the leftmost NOT NULL column 
in an index, for example the rowid.

count(*) will look at every row in the table's pages. count(myTable.indexed_column), 
with the restrictions above, should use the index, which in turn should touch far 
fewer pages, minimizing the I/O.


> Simon.

---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Bugreport - slowdown in sqlite after the ANALYZE statement

2014-12-09 Thread Eduardo Morras
On Tue, 9 Dec 2014 10:38:34 -0500
"James K. Lowden" <jklow...@schemamania.org> wrote:

> On Tue, 09 Dec 2014 12:06:20 +0100
> Jan Staněk <jsta...@redhat.com> wrote:
> 
> >  INSERT INTO CoreCache (ModelID, ItemID)
> >  SELECT
> ... 
> >  ORDER BY Year
> 
> Why ORDER BY on INSERT?  Does it work better?  I would expect the
> unnecessary sort to be pure overhead.  

If you insert in index order, the index update phase is faster because 
it doesn't need to rebalance the b-tree as often after each insert.


---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Using Sqlite3 as a Change Time Recording Data Store in Glusterfs

2014-11-25 Thread Eduardo Morras
On Sun, 23 Nov 2014 14:34:23 -0500 (EST)
Joseph Fernandes <josfe...@redhat.com> wrote:

> Ok I get it now. Yes we are using a single db connection object, But
> few questions, 
> 1) how would making sqlite3 single thread that improve the
> performance? Is there a special advantage in this mode than the
> multithread one?

The locking code isn't compiled in, so sqlite3 doesn't spend time acquiring 
mutexes. The threading overhead in a single-thread workload on a multithread build 
is minimal, but it's there wasting time.

> 2) Will it not block (wait or give a SQLITE_BUSY) the multiple thread
> of the app (in our case glusterfs IO threads) as now they have to
> wait for the single thread to complete the task?

Sqlite3 can manage only one writer (INSERT/UPDATE/DELETE) and multiple readers; 
the whole db is locked while a writer writes. It doesn't matter whether sqlite3 is in 
multithread mode or not.

You must implement, in the thread that owns the sqlite3 connection, the control 
logic between writing threads. Your threads send data to the sqlite3 thread and it 
decides, by your rules, which writes to apply to the db and in what order.

When I did this, I found particular cases in my data-management logic that made 
it faster, e.g. batched writes sent by several threads updating the same row 
multiple times, so that only the last one was applied, or applying them all in one 
transaction, which is faster than applying them one by one.
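The dedicated-writer pattern can be sketched like this (Python stand-in for the glusterfs I/O threads; the `hits` table, the sentinel protocol, and the collapse-to-last-value rule via `INSERT OR REPLACE` are invented for illustration):

```python
import os
import queue
import sqlite3
import tempfile
import threading

DB = os.path.join(tempfile.mkdtemp(), "ctr.db")
jobs = queue.Queue()

def writer():
    # The only thread that ever touches the sqlite3 connection.
    conn = sqlite3.connect(DB)
    conn.execute("CREATE TABLE hits (path TEXT PRIMARY KEY, mtime REAL)")
    while True:
        job = jobs.get()
        if job is None:            # shutdown sentinel
            conn.commit()
            conn.close()
            return
        # Repeated updates to the same row collapse to the last value,
        # and everything since the last commit rides in one transaction.
        conn.execute("INSERT OR REPLACE INTO hits VALUES (?, ?)", job)

t = threading.Thread(target=writer)
t.start()
for i in range(100):               # worker threads would do this part
    jobs.put(("file%d" % (i % 10), float(i)))
jobs.put(None)
t.join()

check = sqlite3.connect(DB)
print(check.execute("SELECT count(*) FROM hits").fetchone()[0])  # 10
```

One hundred queued writes end up as ten rows, each holding only its last mtime, which is exactly the batching win described above.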

> ~Joe


---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Using Sqlite3 as a Change Time Recording Data Store in Glusterfs

2014-11-23 Thread Eduardo Morras
On Sat, 22 Nov 2014 11:01:45 -0500 (EST)
Joseph Fernandes <josfe...@redhat.com> wrote:

> Thanks Eduardo.
> 
> Answers inline JOE>>
> 

> a)If you use Sqlite in single thread, compile it without thread
> support.
> JOE>> We are running it in multithread mode, as the database will be
> JOE>> fed by multiple File IO thread. 

Yes and no: your app can be multithreaded, but if only one thread holds the sqlite3 
pointer, you can use single-thread sqlite3. This way you have multiple worker 
threads and one dedicated sqlite3 thread that receives the data to write and the 
data queries.

---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Using Sqlite3 as a Change Time Recording Data Store in Glusterfs

2014-11-22 Thread Eduardo Morras
On Fri, 21 Nov 2014 14:01:39 -0500 (EST)
Joseph Fernandes <josfe...@redhat.com> wrote:

> Hi There,
> 
> 1) We are trying to use sqlite3 in Glusterfs as a Change Time
> Recording Data Store i.e using sqlite3 db to record any modification
> attempts that happens on the glusterfs per file inode, So that it can
> be used to indicate the hotness of the file in the cluster.
> 
> 2) We have developed a prototype that inserts/updates inode records
> in sqlite3 db(one instance of the db file per glusterfs storage unit
> or as we call it a brick) to record the modification that are
> happening on the inode. These insert/updates are in the IO path of
> the file i.e any data or metadata change time will be record inline,
> w.r.t the IO, in the DB.
> 
> 3) We only save the modification time on the to the db, so that 
> we can easily query for "What files have change during a specific
> period of time" or "What files have not change during a specific
> period of time". The query is done by a scanner that runner on each
> storage unit or brick on a regular interval and chooses HOT or COLD
> files as suggested by the DB for data maintenance operations.
> 
> 4) Therefore, we are looking at a datastore that can give us a very
> quick write(almost zero latency, as the recording is done inline
> w.r.t file IO) and that as good data querying facilities(Slight
> latency in the read is fine but the fresh of that record data should
> be spot on).
> 
> 5) Please find our DB setting,
> Journal mode : WAL
> SYNC MODE : NORMAL 
> Cache Size : 1000 - 4096 Pages (Default Page Size)
> AutoCheck-Pointing: 1000 - 100 Pages 
> 
> We wanted to known the following
> 1) How could we improve the performance on the write side so that we
> have minimal latency?

I'll wait for the OTA extension Mr. Hipp is developing. It will (if I 
understood correctly) reorder the commit operations to get faster I/O throughput.

Some tricks:
a) If you use SQLite from a single thread, compile it without thread support.
b) Adjust the page size to the disk cluster size (8KB IIRC), or to a multiple of 
it that lets an integer number of rows fit in a page.
c) Disable autovacuum
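Tricks (b) and (c) are run-time pragmas and must be issued before the database gets any content (trick (a) is a compile-time option, -DSQLITE_THREADSAFE=0). A minimal sketch, with an invented demo file:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "tuned.db")
conn = sqlite3.connect(path)

# Both pragmas only take effect if set before the first table is created
# (or before a VACUUM rebuilds the file).
conn.execute("PRAGMA page_size = 8192")   # (b) match the disk cluster size
conn.execute("PRAGMA auto_vacuum = 0")    # (c) 0 = NONE, autovacuum off
conn.execute("CREATE TABLE hits (x)")
conn.commit()

print(conn.execute("PRAGMA page_size").fetchone()[0])    # 8192
print(conn.execute("PRAGMA auto_vacuum").fetchone()[0])  # 0
```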



> 2) Will ther be any write performance hit when the number of records
> in the DB increase?
> 
> Thanks in advance
> 
> ~Joe


---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Locking errors on network

2014-11-10 Thread Eduardo Morras
On Mon, 10 Nov 2014 20:33:04 +0200
RSmith <rsm...@rsweb.co.za> wrote:

> 
> On 2014/11/10 20:22, Mike McWhinney wrote:
> > So SQLite shouldn't be used at all on a network?  Aren't there any
> > other provisions to handled the locking errors if/when they occur?
> 
> It is not about SQLite, it is about the Networking systems lying
> about whether a file is locked or not. No RDBMS can trust the
> network, but the client-server types do not care since they control
> the locking and do not depend on the OS / file status. SQLite however
> depends on it and as such cannot accurately (or timeously I should
> say) verify such status via a Network. On a local drive this is never
> a problem.
> 
> If you need Networking or User-control, please use a client-server
> type database.
> 
> There is one Client-Server implementation of SQLite (SQLightening I
> think) but it is neither free nor easy to convert to. You can write
> your own server too, but the best bet is using MySQL or PostGres in
> these cases.

You can create your own sqlite server (I did, and use it, with nanomsg for the 
client-server communication); it's medium-hard, and works even for tiny, 
near-embedded hardware.

A good file to start with, as I did, is in the SQLite repository; check 
http://www.sqlite.org/src/artifact/a2615049954cbb9cfb4a62e18e2f0616e4dc38fe 
a.k.a. src/test_server.c

But, as others point out, you should use a real client/server RDBMS; my preference 
is PostgreSQL.

HTH

---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Search query alternatives.

2014-10-16 Thread Eduardo Morras
On Thu, 16 Oct 2014 09:05:51 +1100
Michael Falconer <michael.j.falco...@gmail.com> wrote:

> Hi all,
> 
> first off I must start with an apology. I know I'm sort of doing the
> wrong thing here as this question is NOT related to sqlite. It is a
> general SQL question but I ask it here because I have great respect
> for the answers and discussions I have seen on this forum over many
> years. I rarely post myself as there are always several contributors
> who beat me to the answer and often their response is far better than
> mine would have been. I'm not a code leper, I don't need the actual
> SQL just the method really, though a short code example would be well
> received for illustration.
>
> Any thoughts? Suggestions? Missiles? Good approach, bad approach, or
> completely off the grid? I do use sqlite quite a bit, but not on this
> particular project.

I'd point you to the SQLite closure extension. It may show you some ideas for tree 
implementations and parent/child relations in SQL/SQLite.

http://www.sqlite.org/src/artifact/636024302cde41b2bf0c542f81c40c624cfb7012
http://www.sqlite.org/src/finfo?name=ext/misc/closure.c
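For comparison, plain SQL can already answer the same parent/child questions with a recursive CTE (SQLite 3.8.3+), without loading the extension. A minimal sketch with an invented `node` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE node (id INTEGER PRIMARY KEY, parent INTEGER REFERENCES node(id));
INSERT INTO node VALUES (1, NULL), (2, 1), (3, 1), (4, 2), (5, 4);
""")

# All descendants of node 1, walking the parent/child edges level by level.
rows = conn.execute("""
    WITH RECURSIVE under(id) AS (
        SELECT id FROM node WHERE parent = 1
        UNION ALL
        SELECT node.id FROM node JOIN under ON node.parent = under.id
    )
    SELECT id FROM under ORDER BY id
""").fetchall()
print([r[0] for r in rows])  # [2, 3, 4, 5]
```

The closure extension trades this per-query recursion for a materialized, indexed transitive closure, which pays off when the same tree is queried often.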

HTH
 
> 
> -- 
> Regards,
>  Michael.j.Falconer.


---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Window functions?

2014-08-28 Thread Eduardo Morras
On Wed, 27 Aug 2014 23:04:40 +0200
Petite Abeille <petite.abei...@gmail.com> wrote:

> 
> On Aug 27, 2014, at 10:57 PM, Eduardo Morras <emorr...@yahoo.es>
> wrote:
> 
> > Sorry, don't understand why others will throw an exception in the
> > group by, perhaps I'm misunderstanding the group by, but that
> > should work on others engines.
> 
> Because not all expressions are accounted for, i.e.:
> 
> "not a GROUP BY expression
> 
> Cause: The GROUP BY clause does not contain all the expressions in
> the SELECT clause. SELECT expressions that are not included in a
> group function, such as AVG, COUNT, MAX, MIN, SUM, STDDEV, or
> VARIANCE, must be listed in the GROUP BY clause.
> 
> Action: Include in the GROUP BY clause all SELECT expressions that
> are not group function arguments.”

Yep, it's true; I didn't notice it and asked too early.

> 
> Try it. See what happen.

It bangs, as expected, in PostgreSQL 9.2; adding the other selected columns 
solves it.

Thanks.

---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Window functions?

2014-08-27 Thread Eduardo Morras
On Wed, 27 Aug 2014 21:17:05 +0200
Petite Abeille <petite.abei...@gmail.com> wrote:

> 
> On Aug 26, 2014, at 2:09 AM, Keith Medcalf <kmedc...@dessus.com>
> wrote:
> 
> >  select id, category_id, name, min(price) as minprice
> >from cat_pictures
> > group by category_id;
> > 
> > Done.  And no need for any windowing functions …
> 
> This peculiar behavior is very unique to SQLite. Most reasonable SQL
> engines will throw an exception when confronted with the above.
> SQLite calls it a feature. I personally see it as a misfeature.
> ( Ditto with tagging an implicit limit 1  to scalar queries. Anyway. )

Sorry, I don't understand why other engines would throw an exception on the 
GROUP BY; perhaps I'm misunderstanding the GROUP BY, but that should work on 
other engines.

> On the other hand, one could look at the current ‘group by’ behavior
> as exhibited by SQLite as a precursor to a proper, more formalize,
> handling of analytic functions…. :)



---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Severe performance degradation between 3.8.4.3 and

2014-08-22 Thread Eduardo Morras
On Fri, 22 Aug 2014 19:14:02 +0200
"Mario M. Westphal" <m...@mwlabs.de> wrote:

> Thanks, Richard
> 
>  
> 
> After swapping back to the latest SQLite version and running an
> Analyze on the sample databases, performance is up to the same level
> as before (maybe even a bit faster). Very good.
> 
>  
> 
> I will send out a recommendation to my users to run the weekly
> diagnostics routine immediately to restore performance.

Next time you can do it automatically. Set PRAGMA user_version on the db to 
your app version; on opening the db, check whether it is current, and if not 
run ANALYZE, apply schema changes or whatever you need, then update 
user_version.
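A minimal sketch of that pattern with Python's stdlib sqlite3 module (open_db and APP_VERSION are hypothetical names, not from the thread):

```python
import sqlite3

APP_VERSION = 3  # bump on each release that needs ANALYZE or migrations

def open_db(path):
    """Open the database, running ANALYZE once per app upgrade."""
    con = sqlite3.connect(path)
    (seen,) = con.execute("PRAGMA user_version").fetchone()
    if seen != APP_VERSION:
        con.execute("ANALYZE")  # refresh the query planner's statistics
        # schema migrations for versions seen+1 .. APP_VERSION go here
        # (PRAGMA values cannot be bound with ?, hence the formatting)
        con.execute("PRAGMA user_version = %d" % APP_VERSION)
        con.commit()
    return con
```

The user_version pragma is a 4-byte integer stored in the database header, so the check costs nothing on subsequent opens.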




---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Only allow child record if another field in parent is false.

2014-08-01 Thread Eduardo Morras
On Fri, 1 Aug 2014 09:26:14 +1200
Richard Warburton <rich...@skagerraksoftware.com> wrote:

> Hi,
> 
> Consider:
> 
> CREATE TABLE enrolment (
> id  INTEGER PRIMARY KEY,
> child   INT NOT NULL REFERENCES child (id),
>  start   INT NOT NULL,
> leaver  INT NOT NULL DEFAULT 0 CHECK (leaver BETWEEN 0 AND 1) -- BOOL
> );
> 
> CREATE TABLE enrolmentItem (
> id  INTEGER PRIMARY KEY,
> enrolment   INT NOT NULL REFERENCES enrolment (id) ON DELETE CASCADE,
>  day INT NOT NULL,
> start   INT NOT NULL,
> end INT NOT NULL CHECK (start < end)
> );
> 
> I'm looking for an elegant way to prevent enrolments having
> enrolmentItems if leaver is set to 1.
> This way, you can't change the leaver field if enrolmentItems are
> attached, and you can't add an enrolmentItem if leaver is set to 1.
> 
> I think 2 triggers could work, but it seems overkill.  I could have a
> leaver field in enrolmentItem referencing leaver in enrolment and put
> a check against that, but I'd be storing unneeded data.
> 
> Ideally, I'd like a check in enrolmentItem that can examine a
> different field in the referenced enrolment record.
> 
> Suggestions?

Check 
http://www.sqlite.org/src/artifact/636024302cde41b2bf0c542f81c40c624cfb7012 for 
a parent-child relationship example. It's for an in-table relation, but the 
in-code documentation is excellent; you can "steal" some ideas from it.
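For the record, a sketch of the two-trigger approach mentioned above, run through Python's stdlib sqlite3 (table definitions trimmed to the columns relevant to the constraint; trigger names are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.executescript("""
    CREATE TABLE enrolment(
        id     INTEGER PRIMARY KEY,
        leaver INT NOT NULL DEFAULT 0 CHECK (leaver BETWEEN 0 AND 1)
    );
    CREATE TABLE enrolmentItem(
        id        INTEGER PRIMARY KEY,
        enrolment INT NOT NULL REFERENCES enrolment(id) ON DELETE CASCADE
    );
    -- reject new items when the parent enrolment is a leaver
    CREATE TRIGGER no_items_for_leavers BEFORE INSERT ON enrolmentItem
    WHEN (SELECT leaver FROM enrolment WHERE id = NEW.enrolment) = 1
    BEGIN SELECT RAISE(ABORT, 'enrolment is a leaver'); END;
    -- reject setting leaver while items are still attached
    CREATE TRIGGER no_leaver_with_items BEFORE UPDATE OF leaver ON enrolment
    WHEN NEW.leaver = 1
         AND EXISTS (SELECT 1 FROM enrolmentItem WHERE enrolment = OLD.id)
    BEGIN SELECT RAISE(ABORT, 'enrolment still has items'); END;
""")
con.execute("INSERT INTO enrolment(id, leaver) VALUES (1, 1)")
try:
    con.execute("INSERT INTO enrolmentItem(enrolment) VALUES (1)")
except sqlite3.IntegrityError as exc:
    print(exc)  # the RAISE(ABORT, ...) message from the trigger
```

Two triggers is arguably the least intrusive option: no redundant leaver copy in enrolmentItem, and the rule lives next to the schema.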

> 
> Thanks.


---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Building for vxWorks

2014-07-08 Thread Eduardo Morras
On Mon, 7 Jul 2014 12:44:54 +
Andy Ling <andy.l...@quantel.com> wrote:

> Building the original file with the flags.
> 
> -DHAVE_UTIME -DSQLITE_OMIT_LOAD_EXTENSION
> -DSQLITE_ENABLE_LOCKING_STYLE
> 
> I get the following errors.
> 
> sqlite3.c: In function 'posixFchown':
> sqlite3.c:24249: warning: implicit declaration of function 'geteuid'
> sqlite3.c:24249: warning: implicit declaration of function 'fchown'
> sqlite3.c: At top level:
> sqlite3.c:24306: error: 'pread' undeclared here (not in a function)


Are you trying to use -DSQLITE_OMIT* with the amalgamation sqlite3.c or with 
the full sqlite3 sources? Most of the -DSQLITE_OMIT* options won't work if you 
use the amalgamation.



---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Problem with many connections

2014-07-02 Thread Eduardo Morras
On Wed, 02 Jul 2014 15:50:58 +0100
Grzegorz Sikorski <g.sikor...@kelvatek.com> wrote:

> > 1020 simultaneus connections? Perhaps iOS exhausted del max. file
> > descriptor per process.
> 1020 is the number of open operations. Number of connections opened
> at the same time is much lower (2-3).

If your connections use ORDER BY, you may need another temporary file. If you 
don't define an INDEX and SQLite decides one is needed, add one more fd. The 
same goes for temporary tables and other constructs. Add those to the db and 
WAL fds held by each connection and you reach the limit.

When you say connections, do you mean a call to sqlite3_open_v2? Or a cycle of 
sqlite3_prepare_v2/step/finalize?
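As an aside, a sketch of the open-on-demand pattern in Python's stdlib sqlite3 (run_transaction is a hypothetical helper, not from the thread); the point is that a connection left open holds all of its descriptors until it is closed:

```python
import sqlite3

def run_transaction(path, sql, params=()):
    """Open on demand, always close: a connection that is never closed
    keeps its file descriptors (db, -wal, -shm, temp files) alive."""
    con = sqlite3.connect(path)
    try:
        with con:            # commits on success, rolls back on error
            con.execute(sql, params)
    finally:
        con.close()          # releases every descriptor immediately
```

Wrapping the close in finally matters: an exception mid-transaction otherwise leaks the connection, and a few hundred of those is enough to hit a per-process fd limit.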

---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Problem with many connections

2014-07-02 Thread Eduardo Morras
On Wed, 02 Jul 2014 12:02:27 +0100
Grzegorz Sikorski <g.sikor...@kelvatek.com> wrote:

> Hi,
> 
> I am not sure if my previous email had reached the list, so I just
> repeat it:
> 
> I am developing sharding database using SQLite3 for embedded
> application. My code works fine up to about 1020 connections to the
> database. After around this number, I get an error "unable to open
> database file". I double checked, permissions are OK and I think I
> always properly close all connections and never open the same file
> twice (I normally keep one connection opened for whole application
> lifetime and open others on demand for each transaction). I found
> this
> topic:http://stackoverflow.com/questions/22801987/sqlite3-unable-to-open-database-file-ios
> and I am not sure if there is any reason why keeping opened
> connection in whole application lifetime is really something I should
> do? I would prefer to open database only when it is needed, to avoid
> risk of file corruption on power loss. Is there any known issue with
> multiple open/close operations?

1020 simultaneous connections? Perhaps iOS exhausted the maximum number of 
file descriptors per process.

> 
> Regards,
> Greg


---   ---
Eduardo Morras <emorr...@yahoo.es>

