Re: [sqlite] Table was deleted on macOS

2019-10-23 Thread Eduardo Morras
On Tue, 15 Oct 2019 21:11:05 +
"t...@qvgps.com"  wrote:

> Hi Group,
> 
> here is a very strange and rare fault which one of our users
> experienced on macOS .
> 
> He was working in our app for a while creating a lot of new data.
> So over several hours many INSERT INTO mytable 
> Without any errors.
> 
> Then he did a query SELECT COUNT(*) FROM mytable WHERE  (deleted=0 OR
> deleted IS NULL) AND IdCat=2 and an exception occurs:
> "no such table: mytable"
> 
> The client sent us the database and the table "mytable" was really
> gone. The database also seems to be fine, no errors.
> How can this happen?
> In our app is no DROP TABLE command, what else can cause a table to
> be deleted?
> 
> We have a few hundred copies of our app out, and this is the first
> time this error happens. It's on macOS and the SQLite version is
> 3024000 (3.24.0).

Did he close the transaction? If he did

BEGIN
CREATE TABLE .
INSERT .

INSERT

SELECT COUNT(*)

then the table never existed outside the transaction, and on close the whole
transaction is rolled back. I say "he", but it could also be the application
used to manage the db that forgot the COMMIT after the INSERTs.
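A minimal sketch of this failure mode (Python's sqlite3 module for illustration; the table and column names follow the original report): a table created inside a transaction that is never committed vanishes when the connection closes.

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Autocommit mode so we control the transaction explicitly.
con = sqlite3.connect(path, isolation_level=None)
con.execute("BEGIN")
con.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, deleted INTEGER)")
con.execute("INSERT INTO mytable (deleted) VALUES (0)")
# No COMMIT: SQLite rolls the whole open transaction back on close.
con.close()

con = sqlite3.connect(path)
try:
    con.execute("SELECT COUNT(*) FROM mytable")
except sqlite3.OperationalError as e:
    print(e)  # no such table: mytable
```

Every INSERT succeeds while the connection is open, which matches the reported symptoms exactly.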

> 
> 
> Thanks
> Tom
> 
> 
> 
> /
> **   Flemming Software Development CC
> **   Thomas Flemming
> **   PO Box 81244
> **   Windhoek, Namibia
> **   http://www.quovadis-gps.com
> **   mail  t...@qvgps.com<mailto:t...@qvgps.com>
> **   +264 (0)81 3329923Nam mobile
> **   +49  (0)175 7313081   D mobile
> **   +49  (0)6182 8492599  D office
> ***/
> 
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras 


Re: [sqlite] SQLite build on Risc-V

2019-06-03 Thread Carlos Eduardo de Paula
I'm on a SiFive Unleashed board running Debian Linux on Kernel 4.19.
-- 
Sent from iPhone


[sqlite] SQLite build on Risc-V

2019-05-31 Thread Carlos Eduardo de Paula
I tried to build SQLite from sources on Risc-V architecture but the
./configure script fails.

Replacing config.guess and config.sub with the ones from automake 1.16
package fixes the problem and SQLite builds successfully.

Carlos

-- 

*Carlos Eduardo de Paula*
m...@carlosedp.com
http://carlosedp.com
http://twitter.com/carlosedp
Linkedin <http://br.linkedin.com/in/carlosedp>



Re: [sqlite] Row locking sqlite3

2019-03-27 Thread Eduardo Morras
On Fri, 22 Mar 2019 09:25:24 -0500
Peng Yu  wrote:

> Hi,
> 
> I see that sqlite3 still does not support row locking. This package
> tries to resolve this problem. But it does not have a standard build
> process for Linux.
> 
> https://github.com/sqlumdash/sqlumdash/
> 
> Are there other packages similar to sqlite3 but support row locking?

How many rows do you need to lock?

In some projects I use a hidden column with a single letter that flags the
row's state, and queries filter on that flag. You can set a simple 'L' flag
on the rows you want locked and add a WHERE clause so writers don't touch them.

I use it to soft-delete rows; this helps fight internal fragmentation, but
adds the cost of including the column in all indexes. At idle times, delete
all the flagged rows or recreate the table with only the live rows.

Of course, if you plan to lock almost all rows, reverse the logic. If you need
to lock and unlock quickly, use another method, or PostgreSQL.

You can create a virtual table that adds this feature pretty easily.
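A small sketch of the flag idea (Python's sqlite3 for illustration; the table and column names are made up): writers exclude "locked" rows with a WHERE clause.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT, "
            "flag TEXT NOT NULL DEFAULT '')")
con.executemany("INSERT INTO items (payload) VALUES (?)",
                [("a",), ("b",), ("c",)])

# "Lock" row 2 by flagging it.
con.execute("UPDATE items SET flag = 'L' WHERE id = 2")

# Writers filter on the flag, so the locked row is left untouched.
con.execute("UPDATE items SET payload = upper(payload) WHERE flag <> 'L'")

print(con.execute("SELECT payload FROM items ORDER BY id").fetchall())
# [('A',), ('b',), ('C',)]
```

Note this is advisory, application-level locking, not a database-enforced lock: every writer has to cooperate by including the flag filter.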

> Thanks.
> 
> -- 
> Regards,
> Peng


---   ---
Eduardo Morras 


Re: [sqlite] Is there a way to select using cid?

2019-03-25 Thread Eduardo Morras
On Sat, 23 Mar 2019 12:57:55 +0100 (CET)
Eric  wrote:

> On Fri, 22 Mar 2019 16:34:20 -0500, Peng Yu 
> wrote:
> > There are cid's for each table. Is there a way to use "select" with
> > cid's instead of their names? Thanks.
> > 
> > $ sqlite3 dbfile <<EOF
> > create table test (id integer primary key, value text);
> > insert into test (id, value) values (1, 'abc');
> > .mode column
> > .headers on
> > select * from pragma_table_info('test');
> > EOF
> > cid         name        type        notnull     dflt_value  pk
> > ----------  ----------  ----------  ----------  ----------  ----------
> > 0           id          integer     0                       1
> > 1           value       text        0                       0
> 
> Why on earth would you want to do that?

To write obfuscated code, perhaps for a contest, trojan development, etc.

> Eric
> -- 
> ms fnd in a lbry


---   ---
Eduardo Morras 


Re: [sqlite] Performance of writing blobs

2018-06-12 Thread Eduardo Morras
On Tue, 12 Jun 2018 14:13:33 +0200
Dominique Devienne  wrote:

> 
> You're right of course. Thank you Clemens.
> 
> With synchronous = OFF, which suits my use-case here, the commit-time
> just vanishes,
> and even out-performs HDF5 now (see below). I might still prefer HDF5,
> mainly because
> the N-d typed arrays are self-describing and thus visible using
> Hdf5View for example, instead
> of completely opaque in SQLite, but now I know SQLite in non-durable
> mode is as fast or even
> faster for these smallish blob sizes than HDF5. I'll need to
> double-check for larger sizes
> (individual blobs get upward of 1GB in size), with my large 31GB, 200K
> blobs, 1.2M entities project.

I'm joining this thread late, but here are some comments; I use SQLite in
environments similar to yours:

- If you need more speed than synchronous=OFF gives, use PRAGMA
journal_mode=MEMORY or journal_mode=OFF. They aren't safe, but they are faster,

- As you noted, don't preallocate a zero-blob; insert the blob directly,

- Don't delete/update blobs; instead, mark the row as deprecated/invalid
and insert a new one,

- If you have blobs larger than 2GB you'll need to split them; SQLite
can't work with blobs larger than 2GB,

- Use the misc/unionvtab extension if you need a safe environment. It's a
bit tricky to use, but you easily get a RAIDB0/JBODB0 (a raid 0/jbod db
split across many files), and it's faster,

- Compile SQLite with SQLITE_DIRECT_OVERFLOW_READ; with this, your blob
reads don't pollute the SQLite page cache,

- Compress the blobs with the misc/compress extension while writing: the less
you write, the faster you read. See the sqlar project (http://sqlite.org/sqlar).
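A sketch of the split-and-compress idea (Python for illustration, with stdlib zlib standing in for the misc/compress extension; the table layout and chunk size are made up):

```python
import sqlite3, zlib

CHUNK = 1 << 20  # 1 MiB per chunk; pick what suits your blob sizes

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE blobs (name TEXT, seq INTEGER, data BLOB, "
            "PRIMARY KEY (name, seq))")

def store(name, payload):
    # Split so no single blob nears SQLite's size limit; compress on write.
    for seq, off in enumerate(range(0, len(payload), CHUNK)):
        con.execute("INSERT INTO blobs VALUES (?, ?, ?)",
                    (name, seq, zlib.compress(payload[off:off + CHUNK])))

def load(name):
    rows = con.execute(
        "SELECT data FROM blobs WHERE name = ? ORDER BY seq", (name,))
    return b"".join(zlib.decompress(r[0]) for r in rows)

store("array", b"x" * 3_000_000)
assert load("array") == b"x" * 3_000_000
```

For highly compressible scientific arrays the write savings can dominate; for already-compressed data, skip the zlib step and keep only the chunking.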

HTH

---   ---
Eduardo Morras 


Re: [sqlite] Sqlite delete slow in 4GB

2018-06-06 Thread Eduardo
On Wed, 6 Jun 2018 14:33:12 +0300
Aydin Ozgur Yagmur  wrote:

> I was applying the changes. After moving blob to last column suggested by
> Hick and eduardo, 500 MB delete took 40-45 seconds.
> I was already attempted to change page_size & cache size but i had not seen
> any significant improvement, so i didnt apply this now.

When you change the page_size you must VACUUM the db. If you don't, the
change in page_size is a no-op.

The change in cache_size will speed up access to blobs, in a basic way.
There are better ways to do so, but they are a bit more complex.

For example, a 4GB cache_size will lazily pull the whole db file into memory.

> could you please explain positive effects of "autovacuum=incremental"
> solution? I did not see too much positive comment on it.

Incremental autovacuum adds SQLite metadata (pointer-map pages), giving
better read access to blobs bigger than your page_size. The trick is to
never call PRAGMA incremental_vacuum. That is, you set incremental mode but
never actually use it.
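The page_size point is easy to verify (a Python sketch; the file name is arbitrary): the pragma only takes effect once the existing file is rebuilt.

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "blobs.db")
# Autocommit mode, so VACUUM never runs inside an implicit transaction.
con = sqlite3.connect(path, isolation_level=None)
con.execute("CREATE TABLE t (x)")        # file now exists with the default page size

con.execute("PRAGMA page_size = 65536")  # no effect yet on an existing file
con.execute("VACUUM")                    # the rebuild applies the new page size
print(con.execute("PRAGMA page_size").fetchone()[0])  # 65536
```

Dropping the VACUUM leaves the database at its original page size, which is why the pragma alone looks like it "does nothing".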

> Thanks for your help,

-- 
Eduardo 


Re: [sqlite] Sqlite delete slow in 4GB

2018-06-06 Thread Eduardo
On Wed, 6 Jun 2018 12:06:15 +0300
Aydin Ozgur Yagmur  wrote:

> Thank you very quick response.
> 
> We have already planned to change file system. But this problem seem not
> related with the file size, because we did not hit the limit and when i try
> to delete same data in ,for example, 2 GB-sized db, i encounter with the
> same problem. This is my table and index definitions.

Why didn't you apply the changes suggested by others in the other thread?

> Thanks for your help.
> 
> CREATE TABLE "ANHXT" (
>   "id" integer primary key autoincrement,
>   "ANH_AD" text,
>   "ANH_DBGMHWID" text,
>   "ANH_TYPE" integer,
>   "ANH_INDEXNO" int64_t)
> CREATE TABLE "PRCXT" (
>   "id" integer primary key autoincrement,
>   "ANP_SEGMENTNO" integer not null,
>   "ANP_VALUE" blob,
>   "ANH_PRC_id" bigint,
>   constraint "fk_ANHPRC_ANH_PRC" foreign key ("ANH_PRC_id") references
>  "ANHXT" ("id") on update cascade on delete cascade deferrable
> initially deferred)
> CREATE UNIQUE INDEX UQC_ANH_TYPE on ANHXT(ANH_TYPE)
> CREATE UNIQUE INDEX UQC_ANP_SEGMENTNO_ANAHTARID on PRCXT(ANP_SEGMENTNO, ANH_PRC_id)
> CREATE INDEX findex on PRCXT(ANH_PRC_id)


Move the blob to the last column:

 CREATE TABLE "PRCXT" (
   "id" integer primary key autoincrement,
   "ANP_SEGMENTNO" integer not null,
   "ANH_PRC_id" bigint,
   "ANP_VALUE" blob,
   constraint "fk_ANHPRC_ANH_PRC" foreign key ("ANH_PRC_id") references
  "ANHXT" ("id") on update cascade on delete cascade deferrable
 initially deferred)


even better, use a table specific for the blob:

  CREATE TABLE the_blob (
id integer primary key,
content blob
  )

Best of all, use two databases: one for the metadata (the ANHXT and PRCXT
tables) and another for the blobs. Open the metadata database and ATTACH the
blob database.

If you use a foreign key constraint to point to the blob table, don't use WAL
mode. If you want/need WAL mode, use a plain integer column holding the blob
table's id and manage updates and deletes yourself.

Set these pragmas for the blob schema before creation:

pragma the_blob.auto_vacuum = incremental;
pragma the_blob.page_size = 65536;
pragma the_blob.cache_size = 1024; -- must be set on every open/attach too. 64MB

Depending on your insert/update/delete ratios, there are other patterns that 
may fit better.
 
> On Wed, Jun 6, 2018 at 11:42 AM, Karl Billeter  wrote:
> 
> > On Wed, Jun 06, 2018 at 09:27:57AM +0300, Aydin Ozgur Yagmur wrote:
> >
> > > I have been using fat32 file system. I have a database which has 4 GB
> > size.
> >
> > Er... I'm surprised there aren't more problems due to 4GB being the max
> > file
> > size supported by fat32.  Any chance to change it to exFAT?
> >

Don't use FAT32. Use your native fs: NTFS, UFS2, HFS+, HAMMER2, ext4, etc.

> > K

-- 
Eduardo 


Re: [sqlite] Sqlite delete too slow in 4 GB database

2018-06-04 Thread Eduardo
On Sun, 3 Jun 2018 22:59:05 -0700 (MST)
ayagmur  wrote:

> I have been using ext2 file system. I have a database which has 4 GB size.
> Database consist of 1 parent table and 1 child table. Parent table has 10
> rows and child table has 4000 rows. 1 row of child table has 1 MB size. when
> I delete a row in parent table, deletion cascades 1MB-sized child records.
> (pragma foreign_keys is on) When I try to delete 100 MB data by cascade (1
> parent record - 100 child records) it takes too long time (almost 10 minute)
> to complete, and the duration increase/decrease by size of data (100 Mb: 10
> minute, 300 MB: 30 minute,etc).I tried some pragma commands (synchronous,
> temp_store, journal_mode) suggested by others posts and i also tried to add
> index on foreign key, but those does not help solve my problem.(Actually,
> after adding index on foreign key, 1 MB data deletion became faster/st, but
> 100 MB data deletion duration did not change) Can you give me please any
> suggestion to increase deletion performance?

Don't hard-delete. You have two options for soft-deleting (I use both on my
big filesystem built over SQLite):

a) Add a new column to the child table, placed before the data blob column,
set to 0 while the row is active and 1 when it is deleted. From time to time,
select the active rows and move them to a fresh table.

b) The parent holds the foreign keys to the children; just set those foreign
keys to 0 or to the new content. From time to time, select all the child
foreign keys in the parent table and move the rows they point to into a
fresh table. Disable the foreign_keys pragma.

Now I use b) with the parent (fs metadata) in one db file and the child (file
content) in another; it's faster and doesn't require an additional column on
the child.

Clean the child table/db whenever you want; I've never needed to on mine so far.
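A sketch of option a) (Python; the table layout is made up to match the parent/child description): flip a flag instead of paying for the cascading DELETE, then compact at idle time.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE child (
    id INTEGER PRIMARY KEY,
    deleted INTEGER NOT NULL DEFAULT 0,  -- flag sits before the blob column
    data BLOB)""")
con.executemany("INSERT INTO child (data) VALUES (?)",
                [(b"a",), (b"b",), (b"c",)])

# Soft-delete: an O(1) flag update instead of a multi-minute cascade.
con.execute("UPDATE child SET deleted = 1 WHERE id = 2")

# At idle time, compact by copying the live rows into a fresh table.
con.execute("CREATE TABLE child_new AS SELECT * FROM child WHERE deleted = 0")
print(con.execute("SELECT COUNT(*) FROM child_new").fetchone()[0])  # 2
```

The deletion cost moves from the user-facing path to a background step (drop the old table, rename the new one) that can run whenever the system is quiet.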

-- 
Eduardo 


[sqlite] Request to add define SQLITE_PREPARE_NONE

2018-06-02 Thread Eduardo Morras

Hello Dr. Hipp,

There is a "#define SQLITE_PREPARE_PERSISTENT 0x01" line in the sqlite3
source code for the sqlite3_prepare_v3() flags.

Could you add a SQLITE_PREPARE_NONE define, meaning no flags set?

Thanks

---   ---
Eduardo Morras 


Re: [sqlite] About storage of large amounts of image data

2018-05-08 Thread Eduardo Morras
On Tue, 8 May 2018 14:08:05 +0100
Mike Clark <cyberherbal...@gmail.com> wrote:

>  Hi List!
> 
> I'm developing a project that deals with image files and am
> considering storing all the images in the SQLite database itself,
> rather than (or in addition to) the file system.  Since the
> prospective users will probably be dealing with hundreds of gigabytes
> in their use of the project, I am wondering if this is an effective
> or efficient use of SQLite -- or safe, because of the risk of data
> corruption.
> 
> I know the documentation says that SQLite can handle up to 140 TB (do
> we know of anyone who is doing this?), so hundreds of gigs is clearly
> doable.
> 
> Is it advisable, however?

Completely. I use a SQLite-backed fs on some projects with terabyte db sizes
and zero problems. One has a write-file-once, update-occasionally,
lots-of-reads (no deletes) workload. Read speed is faster than XFS and UFS2.

You need to tune SQLite for big databases and direct blob reads (check the
page Dr. Hipp linked for compile options).

 
> (Sent this about 2 wks ago, but it never posted to the list and no
> moderator response either.)

---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] JDBC driver experience

2018-04-19 Thread Eduardo Morras
On Thu, 19 Apr 2018 09:37:20 -0700
"dmp" <da...@dandymadeproductions.com> wrote:

> Currently I trying to complete a plugin for Ajqvue
> that transfers data from others databases to SQLite.

Hi, I made an ODBC virtual table to achieve this. It's closed source, but
the "difficulty" of developing one from scratch is easy-to-medium.

You pass in the connection string, user and password, and all queries to the
vtab are redirected to the other DBMS. I began with the csv file virtual
table and went from there.

Check the SQL/MED standard and the PostgreSQL docs for syntax.


---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] How many AUTOINCREMENT tables are in your schema?

2018-03-22 Thread Eduardo Morras
On Fri, 16 Mar 2018 11:37:24 -0400
Richard Hipp <d...@sqlite.org> wrote:

> This is a survey, the results of which will help us to make SQLite
> faster.
> 
> How many tables in your schema(s) use AUTOINCREMENT?

Across all my projects: 4 (very old projects).

I don't use it on current projects.

> I just need a single integer, the count of uses of the AUTOINCREMENT
> in your overall schema.  You might compute this using:
> 
>sqlite3 yourfile.db '.schema --indent' | grep -i autoincrement |
> wc -l
> 
> Private email to me is fine.  Thanks for participating in this survey!
> -- 
> D. Richard Hipp
> d...@sqlite.org


---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] How to optimise a somewhat-recursive query?

2018-03-21 Thread Eduardo
On Wed, 21 Mar 2018 17:39:45 +
Jonathan Moules <jonathan-li...@lightpear.com> wrote:

> So, I'm back to being stuck on this.
> I have inserted 500,000 random urls (no extra lookups - still just 
> 1000), and now the query (as per the below reply) is back to being 
> somewhat slow (I'm using sqlite 3.15) at about 0.6s per request.

After the huge insert, did you run ANALYZE?
 
> After a couple of hours of investigation, it's only slow when there is 
> either no ORDER BY, or if I use DESC (which is what I need). If I use 
> ORDER BY u.url ASC - it's near instantaneous.
> 
> I've tried every possible combination of indexes I can think up, 
> including of course with url_id DESC. I've also removed the ORDER BY in 
> the view (and set it to DESC as well), but that made absolutely no 
> difference.

Do your indexes have the primary key or rowid as their last column?
 
> I'm a little confused as to why I'm seeing this behaviour - my limited 
> understanding of the query-planner and reading 
> https://sqlite.org/queryplanner.html - suggests that at least when using 
> indexes, it'll simply scan an index backwards if that looks like it'll 
> help. I appreciate this is a result-set not an index, but in this case 
> could not the query planner realise that reading through the results 
> backwards would be faster than whatever it's doing?
> 
> And for that matter, shouldn't no ORDER BY be at least the same speed as 
> ORDER BY u.url_id ASC?

What does an integrity check return?

Try a REINDEX and an ANALYZE; all cases should then be similarly fast (or slow).

> Thoughts welcome; Thanks!
> Jonathan
> 
> 
> On 2018-03-19 00:24, Jonathan Moules wrote:
> > Thanks Simon and Quan.
> > I'm not sure it's the view itself per-se - It takes 0.000s (time too 
> > small to measure) for just the full View to be run on this dataset.
> >
> > It turns out the problem is simpler than that and no data changes are 
> > needed. I did consider Quan Yong Zhai's option and gave it a try, but 
> > couldn't fathom out the necessary SQL to get what I wanted (it's 
> > getting late).
> >
> > Instead prompted by the replies here, I've changed the query very 
> > slightly to the below which solves the problem:
> >
> > SELECT
> > u.url, l.error_code
> > FROM
> > urls u
> > LEFT JOIN
> > lookups l
> > USING(url_id)
> > LEFT JOIN
> > (select * from v_most_recent_lookup_per_url where url_id in (
> > select url_id from urls where url = 'example.com'
> > )) recent
> > -- This is the important bit
> > -- Here we use the most recent lookup url_id to link to the 
> > source_seed_id, so we only find its children
> > -- Or alternatively itself
> > ON u.source_seed_id = recent.url_id
> > OR u.url_id = recent.url_id
> > WHERE
> > -- For JSON-spider at least, Generic's are guaranteed to be 
> > generic pages.
> > l.is_generic_flag = 1
> > AND
> > -- Must be "or equal to" so we can get the lookup of the very 
> > base url.
> > l.retrieval_datetime >= recent.retrieval_datetime
> > AND
> > DATETIME(recent.retrieval_datetime) > DATETIME('now', '-5 days')
> > ORDER BY
> > u.url_id DESC
> > LIMIT 1;
> >
> >
> > To save readers having to compare manually, the difference is this: I 
> > turned the "recent" alias item from the View into a subquery (still 
> > using the view), and then moved the "where url = example.com" part in 
> > to there.
> > The query is now literally two orders of magnitude faster, from 0.2s 
> > to 0.004s. No new indexes or anything, just that change.
> >
> > Hopefully this will scale to full datasets; if it doesn't I may have 
> > to consider the other suggestions, but for now this is a 
> > minimum-effort solution.
> >
> > I'm not actually sure what SQLite was doing in the previous query to
> > make it take so long, so I imagine there was some hideous recursing
> > going on or something.
> >
> > Scope for optimisation?
> >
> > Thanks again,
> > Jonathan
> 
> 


-- 
Eduardo <eduardo.mor...@mobelservices.com>


Re: [sqlite] Why some options (ex. SQLITE_ENABLE_EXPLAIN_COMMENTS, SQLITE_ENABLE_DBPAGE_VTAB) are permanently enabled in Makefile.am, but are documented as user-defined?

2018-03-13 Thread Eduardo
On Tue, 13 Mar 2018 00:07:41 -0700
Yuri <y...@rawbw.com> wrote:

> Makefile.am has:
> AM_CFLAGS = @THREADSAFE_FLAGS@ @DYNAMIC_EXTENSION_FLAGS@ @FTS5_FLAGS@ 
> @JSON1_FLAGS@ @ZLIB_FLAGS@ @SESSION_FLAGS@ -DSQLITE_ENABLE_FTS3 
> -DSQLITE_ENABLE_RTREE
> sqlite3_CFLAGS = $(AM_CFLAGS) -DSQLITE_ENABLE_EXPLAIN_COMMENTS 
> -DSQLITE_ENABLE_DBPAGE_VTAB -DSQLITE_ENABLE_STMTVTAB 
> -DSQLITE_ENABLE_DBSTAT_VTAB
> 
> But all of these options (with -D) are documented here 
> http://www.sqlite.org/compile.html as user-settable.

Yes, they are user-settable; by default they are set ON in the amalgamation
build. Perhaps documenting them as "user-settable and user-unsettable" would help?


> 
> Yuri
> 

-- 
Eduardo <eduardo.mor...@mobelservices.com>


Re: [sqlite] Question about Practicality of Embedding SQLite on Cortex-M4 Processor

2018-03-05 Thread Eduardo
On Fri, 2 Mar 2018 19:46:17 +
"Obrien, John J" <jobri...@gatech.edu> wrote:

> Hello,
> 
> 
> My team is working on a project that involves transmitting sensor data from a 
> data logger module to a mobile application via Bluetooth. I am interested in 
> finding a relatively fast, reliable way to store the data that was collected 
> by the data logger. Since we aren't guaranteed to always have a reliable 
> Bluetooth connection between the mobile app and the data logger, we will need 
> a way to mark which records are synchronized with the mobile application and 
> which still haven't been sent up, so that the data logger can continue to 
> collect data even when the mobile application is out of range. We collect 
> data continuously at 4Hz, so I anticipate that we will have far too much data 
> to use a flat file and manually traverse each record prior whenever the 
> mobile application requests data.
> 
> 
> I am very new to embedded programming (almost no prior experience) but have 
> prior mobile application development experience. My first thought is to store 
> the data in a SQLite database table and include one column called 
> "IsSynchronized" that can store a boolean value to indicate which datapoints 
> have been synced. When the mobile application requests data, the records 
> where "IsSynchronized == false" will be transmitted to the mobile app. Once 
> successful, the mobile app will let the data logger know that the 
> transmission succeeded and set the IsSynchronized column to true for each 
> synchronized record. This is how I would do it if the data were traveling 
> from a mobile app to a server, but I don't know if this is a good idea for an 
> embedded database to a mobile application.
> 
> 
> Our data logger uses an Atmel-SAM4S microprocessor. We have 8GB of flash 
> memory, so storage isn't an issue. Our RAM is very limited; we only have 
> 160KB. We are working with an external vendor to design the data logger. They 
> have developed an custom, handrolled operating system, but since it contains 
> no Virtual File System, we aren't sure if SQLite will be an option for us. 
> I've heard of other folks using uClinux or other Unix based Operating Systems 
> on the SAM4S to accommodate SQLite 
> (http://sqlite.1065341.n5.nabble.com/VFS-for-an-MCU-internal-flash-td83079.html),
>  but I don't know how practical this is.
> 
> 
> My question is this:
> 
>   1.  First, am I on the wrong track by pursuing an embedded database to 
> solve my data synchronization issue?
>   2.  Are there other embedded database alternatives that might make more 
> sense in my scenario. I've heard a lot about NoSQL DBs like BerkleyDB and 
> UnQlite, but I don't know is they are practical for my case either.
>   3.  If we do need to implement a VFS, does it make sense to use a 
> unix-based embedded operating system? If so, are there any recommendations as 
> to which one may be a good fit for my hardware
>   4.  Last, are we wasting our time with the SAM4S processor to solve this 
> problem? We would like to take advantage of existing code, the nice 
> power-consumption characteristics of the SAM4S, and would rather not have to 
> go through the time and expense of upgrading our processor, but, if our 
> processor choice is a non-starter, it would be good to know sooner rather 
> than later.
> 
> To summarize, my question is regarding what direction I should ask the 
> hardware vendor to take. Does it make sense for them to spend time optimizing 
> the SAM4S for SQLite or should we consider another approach?
> 


For the first: yes, but does it need to be a "real" db? If the queries on
your data structure (a table, in SQL) will use '=' only, and not '>', '<' or
anything more complex, a simple data structure (list, double-ended queue,
whatever) will be faster and lighter.

For the second: I use SQLite stripped of everything I don't need, adding back
what I want via virtual tables or functions. UnQLite derives from SQLite
(there were two projects with the same name); BerkeleyDB is easy to use but
bigger/heavier than SQLite.

For the third: I used Contiki OS for embedded work (10 or more years ago)
with SQLite; check it out. If you need something more Linux-like, some
friends use Zephyr, but I don't know which processors it supports.

For the fourth: it depends on what you are trying to solve. If you only want
to send IoT data, I'd use Contiki with nothing else: two queues (or lists),
one for unsynchronized and one for synchronized data; moving a struct from
the first to the second will do the work. Don't think big in the embedded
world.
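A sketch of that two-queue idea (Python for illustration only; real SAM4S firmware would use static C structures, not a heap): records move from the unsynced to the synced queue only when the transfer is acknowledged, so a dropped Bluetooth link simply leaves them queued.

```python
from collections import deque

unsynced, synced = deque(), deque()

def log_sample(sample):
    unsynced.append(sample)          # data-logger side: keeps appending at 4 Hz

def transmit(send):
    # Runs while the phone is in range; on each ack, move the record across.
    while unsynced:
        if not send(unsynced[0]):    # send failed: link dropped, stop here
            break
        synced.append(unsynced.popleft())

for s in range(5):
    log_sample(s)
transmit(lambda r: r < 3)            # pretend the link drops after 3 records
print(list(unsynced), list(synced))  # [3, 4] [0, 1, 2]
```

This replaces the proposed IsSynchronized column: membership in a queue is the sync state, with no database or VFS required.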



-- 
Eduardo <eduardo.mor...@mobelservices.com>


Re: [sqlite] Header corruption

2018-02-08 Thread Eduardo
On Wed, 7 Feb 2018 17:29:54 +
Deon Brewis <de...@outlook.com> wrote:

> Oh yeah, I don’t think this is a SQLITE bug or anything. 
> 
> I think something in our code is writing to memory after freed. I'm just 
> trying to track it down at the point that it happens. We've tried all 
> Profiling tools on both OSX and Windows without luck, so my next step is 
> trying to find the writing thread at the point of corruption.
> 

Profiling won't help; it isn't the right tool. Use the Xcode analyzer, or
from the command line, scan-build and scan-view. If you can, use the latest
version of the clang static analyzer.

In your project directory, type:

%mkdir review
%scan-build -o review/ -V make

The scan-build HTML output is in the review/ directory. To view it again, use:

%scan-view review/


> Dan Kennedy's suggestion seems like that would we that way to do that.
> 
> - Deon

-- 
Eduardo <eduardo.mor...@mobelservices.com>


Re: [sqlite] [EXTERNAL] question about covering index

2018-02-07 Thread Eduardo
On Wed, 7 Feb 2018 09:17:27 +
Hick Gunter <h...@scigames.at> wrote:

> SQLite can only use a covering index whose prefix satifies the WHERE and/or 
> ORDER BY clause(es).
> 
> WHERE x=1
> ORDER BY y
> 
> The WHERE constraint can be handled by an index that starts off with x.
> The ORDER BY can be handled by an index that starts off with y.
> 
> SQLite *may* realise that an index on (x,y) satisfies both conditions (within 
> the fixed x values, y values are already ordered). In that case you would 
> require the _id field to make it a convering index (x,y,_id).
> 

The _id field is always appended at the end of every index; it's the integer
primary key (the rowid). Your index internally will be (x, y, _id, _id).
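The implicit rowid in the index is easy to check with EXPLAIN QUERY PLAN (a Python sketch; the exact plan wording varies between SQLite versions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (_id INTEGER PRIMARY KEY, x INTEGER, y INTEGER)")
con.execute("CREATE INDEX t_xy ON t(x, y)")  # the rowid (_id) rides along

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT _id FROM t WHERE x = 1 ORDER BY y").fetchall()
# The detail column mentions a COVERING INDEX even though _id was never
# declared in t_xy: the rowid is already stored in every index entry.
print(plan[0][-1])
```

So an index on (x, y) alone covers queries that also need the rowid column.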


-- 
Eduardo <eduardo.mor...@mobelservices.com>


Re: [sqlite] Feature request for the Shell Tool: .mode json

2018-01-22 Thread Eduardo
On Sun, 21 Jan 2018 05:54:13 +
Simon Slavin <slav...@bigfraud.org> wrote:

> Feature request for the Shell Tool: ".mode json".
> 

Others have pointed to libraries that export to JSON, so I'll point to the
one I use: libucl https://github.com/vstakhov/libucl

Using the generation functions [1] you can convert C structs and types to UCL
and export them to any supported format: JSON, compact JSON, YAML and
nginx-like config files.

License: BSD 2-Clause "Simplified" License

> Simon.

[1] 
https://github.com/vstakhov/libucl/blob/master/doc/api.md#generation-functions-1

-- 
Eduardo <eduardo.mor...@mobelservices.com>


Re: [sqlite] Speed issue of SELECT in my application

2018-01-18 Thread Eduardo Morras
On Wed, 17 Jan 2018 17:59:22 +
Simon Slavin <slav...@bigfraud.org> wrote:

> Folks.  Nick published a figure of 60ms for his search.  That's not
> unusually slow.  There was no request to shave every last millisecond
> off that figure.  There wasn't even a statement that it was too
> slow.  No need to scare the guy by mentioning twenty complications of
> SQLite which may be irrelevant.

... I read "Speed issue..." in the Subject, but you're right:
60ms is fast enough on a common deployment configuration.

> If Nick needs to save more time he'll need to post more details of
> what he's doing.
> 
> Simon.

P.S. I sent the mail from my current contractor mail, sorry for that.

---   ---
Eduardo Morras <emorr...@yahoo.es>


Re: [sqlite] Speed issue of SELECT in my application

2018-01-17 Thread Eduardo
On Wed, 17 Jan 2018 10:48:10 +0200
R Smith <ryansmit...@gmail.com> wrote:

> Hang on a second - True, there is nothing to be done to improve the 
> speed of that specific Query - BUT - there is plenty to be done to 
> improve the speed of the database for your specific use case. The usual 
> suspects that comes to mind are:
> 
> 1. Use a faster Journal mode (Have to weigh speed against 
> power-loss-data-integrity-protection),
> 2. Use an in-memory DB (especially if you only reading from it)
> 3. Start a transaction and hold the DB read locks for the duration of 
> your application (again, if it won't need writing)
> 4. Ensure there are no other time-consuming bits of code in the sqlite 
> api RESET()-STEP() loop. (And if you are accessing it through a wrapper, 
> stop that and use the api directly).
> 
> We could and would probably give more useful direct advice if we know a 
> lot more about your specific use case, platforms, access-method and setup.

What about db normalization?

Change:

CREATE TABLE t1 (
a INTEGER PRIMARY KEY AUTOINCREMENT, 
b INTEGER NOT NULL UNIQUE, 
c INTEGER NOT NULL, 
d INTEGER, 
e TEXT, 
f INTEGER, 
g INTEGER, 
h TEXT, 
i INTEGER
);

To: 

CREATE TABLE t1 (
a INTEGER PRIMARY KEY AUTOINCREMENT, 
b INTEGER NOT NULL UNIQUE, 
c INTEGER NOT NULL, 
d INTEGER,  
f INTEGER, 
g INTEGER, 
i INTEGER,
e INTEGER,
h INTEGER,
FOREIGN KEY (e) REFERENCES t2(j),
FOREIGN KEY (h) REFERENCES t2(j)
);

CREATE TABLE t2 (
j INTEGER PRIMARY KEY,
k TEXT
);

You don't need the e and h text columns in t1 any more, only k in t2. Table t1 rows
become roughly fixed-size (not exactly true) and the table much smaller; t2 stores each
text only once, even if some e and some h originally had the same content.

You can set pragma foreign_keys = ON to make sqlite check the constraint, or
disable it with pragma foreign_keys = OFF if you trust your data input, i.e. e and h
always hold values present in t2.j. 
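As a quick sanity check, the normalized layout can be exercised from Python's sqlite3 module. This is a sketch with the t1 columns cut down to the ones that matter here (b, e, h); the text_id helper is my own invention, not part of the schema above.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")   # enforce the constraints
con.executescript("""
CREATE TABLE t2 (
    j INTEGER PRIMARY KEY,
    k TEXT UNIQUE
);
CREATE TABLE t1 (
    a INTEGER PRIMARY KEY AUTOINCREMENT,
    b INTEGER NOT NULL UNIQUE,
    e INTEGER,
    h INTEGER,
    FOREIGN KEY (e) REFERENCES t2(j),
    FOREIGN KEY (h) REFERENCES t2(j)
);
""")

def text_id(con, text):
    # Store each distinct string once in t2 and return its key.
    con.execute("INSERT OR IGNORE INTO t2(k) VALUES (?)", (text,))
    return con.execute("SELECT j FROM t2 WHERE k = ?", (text,)).fetchone()[0]

# e and h happen to carry the same text: only one t2 row is created.
same = "shared label"
con.execute("INSERT INTO t1(b, e, h) VALUES (1, ?, ?)",
            (text_id(con, same), text_id(con, same)))

row = con.execute("""
    SELECT te.k, th.k FROM t1
    JOIN t2 AS te ON te.j = t1.e
    JOIN t2 AS th ON th.j = t1.h
""").fetchone()
t2_count = con.execute("SELECT count(*) FROM t2").fetchone()[0]
print(row, t2_count)  # ('shared label', 'shared label') 1
```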

HTH

-- 
Eduardo <eduardo.mor...@mobelservices.com>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Move to Github!!?

2017-12-26 Thread Eduardo
On Tue, 26 Dec 2017 10:27:39 -0500
"J. King" <jk...@jkingweb.ca> wrote:

> I use Git, but I'm not attached to it. I run my own publicly-accessible 
> remote (using Gitea), but that would be completely replaceable with Fossil 
> (which I am very impressed by).
> 
> What's not so easy to replace is the Git integration in my editor (Visual 
> Studio Code) which allows me to easily perform basic operation like commit, 
> push, pull, and rebase. 
> 
> Even just to the ability to review diffs and perform checkins in my editor 
> would be enough, but I doubt it will happen anytime soon, if ever, and I 
> don't have the expertise required to hack it on myself. 
> 
> Until the landscape changes (or someone can suggest suitable Windows 
> software), I will continue to admire Fossil from afar.
> 


When I used VS2015 in the past, I followed these steps:

Create a new menu entry for Fossil:
a) Menu Tools->Customize, "Commands" tab, "Add New Menu" button  // This creates 
a new menu on the tool bar
b) Select the "New Menu" created, click the "Modify Selection" button and change 
its name to "Fossil"
c) Move it up or down and click the "Close" button when finished

Create "External Tools"
a) Menu Tools->External Tools..., 
b) Click the "Add" button
c) Fill in the data: "Title=Commit", "Command=C:\Fossil\fossil.exe", "Argument=commit 
-m", "Initial Directory=$(ProjectDir)"
d) Check the check boxes "Use Output window" and "Prompt for arguments"
e) Note the entry number of the external command, e.g. whether it's the 3rd or 7th.

Add the created external tool to the Fossil menu
a) Menu Tools->Customize, "Commands" tab,
b) Check the "Menu bar" radio button and select "Fossil" from the drop-down list 
(the default value is "Menu Bar")
c) The "Controls" display should be empty for the first fossil command, 
d) Click the "Add Command" button, select "Tools" from the "Categories" list on the 
left and "External Command XX", where XX is the number you noted,
e) Click "Modify Selection" to change the name to "Fossil Commit"


Add commands as needed, but don't change their order in the "Tools" menu; the number 
you noted corresponds to the "Fossil" menu entry, and if you rearrange them 
you'll call the wrong external tool.

HTH, but the command prompt is easier for me.

-- 
Eduardo <eduardo.mor...@mobelservices.com>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [EXTERNAL] Re: Atomic DELETE index optimisation?

2017-12-18 Thread Eduardo
On Mon, 18 Dec 2017 07:21:50 -0700 (MST)
Dinu <dinumar...@gmail.com> wrote:

> Hick Gunter wrote
> > If you are running an SQLite version 3.11.0 or newer, only pages that are
> > changed for the first time in the transaction are copied to the WAL file,
> > thus extending it.
> > 
> > While progressing the transaction, the rate of "newly changed" to "changed
> > again" pages will shift towards re-reading and rewriting the WAL file copy
> > of a page; this will tend to slow down the rate at which the WAL file is
> > growing, even at a constant delete rate, until every page has been updated
> > at least once, and then stop growing until all the remaining deletes have
> > been processed.
> 
> Running 3.19; thanks for the explanation, this never occured to me and makes
> perfect sense.
> 

Is it compiled with the SQLITE_ENABLE_UPDATE_DELETE_LIMIT option? If yes, you can
delete, say, 1E5 (100,000) rows each round. The index and metadata would then fit
in the sqlite cache and keep the WAL file small, making it faster than one big delete.
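For builds without SQLITE_ENABLE_UPDATE_DELETE_LIMIT (which includes most prebuilt libraries, such as Python's), the same round-by-round deletion can be emulated with a rowid subquery; a sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE big (id INTEGER PRIMARY KEY, payload TEXT)")
con.executemany("INSERT INTO big(payload) VALUES (?)",
                (("x",) for _ in range(10_000)))
con.commit()

BATCH = 1_000   # per-round size; scale up (the post suggests ~1E5) for real tables
rounds = 0
while True:
    # With SQLITE_ENABLE_UPDATE_DELETE_LIMIT this would simply be:
    #   DELETE FROM big LIMIT ?
    cur = con.execute(
        "DELETE FROM big WHERE rowid IN (SELECT rowid FROM big LIMIT ?)",
        (BATCH,))
    con.commit()        # each round commits, so the WAL/journal stays small
    if cur.rowcount == 0:
        break
    rounds += 1

remaining = con.execute("SELECT count(*) FROM big").fetchone()[0]
print(remaining, rounds)  # 0 10
```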

-- 
Eduardo <eduardo.mor...@mobelservices.com>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] "BIGINT" doesn't act like INTEGER on table creation [Bug]

2017-11-27 Thread Eduardo
On Thu, 23 Nov 2017 17:35:08 -0700
"Keith Medcalf" <kmedc...@dessus.com> wrote:

> 
> That is correct.  You cannot spell "INTEGER PRIMARY KEY", which is an 
> explicit alias for the rowid, as anything other than that one particular 
> correct spelling.
> 
> Personally, I have never found a use for the AUTOINCREMENT option.  Why is 
> it being used?

If you declare an INTEGER PRIMARY KEY alone, sqlite may reuse rowid numbers
from deleted rows. Using AUTOINCREMENT forces sqlite to use a rowid bigger than
any inserted before(1), so you can use it to tell whether one row is older
or newer than another; without AUTOINCREMENT you can't. You can get similar
behavior with a trigger, for example for TEXT columns, or automatically with a
DATE column defaulting to the current time.

(1) You can change the integer primary key to whatever value you want.
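The rowid-reuse difference is easy to demonstrate from Python's sqlite3 module (the table names here are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE plain    (id INTEGER PRIMARY KEY,               v TEXT);
CREATE TABLE monotone (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT);
""")
for t in ("plain", "monotone"):
    con.executemany(f"INSERT INTO {t}(v) VALUES (?)", [("a",), ("b",), ("c",)])
    con.execute(f"DELETE FROM {t} WHERE id = 3")   # drop the newest row

plain_id = con.execute("INSERT INTO plain(v) VALUES ('d')").lastrowid
mono_id  = con.execute("INSERT INTO monotone(v) VALUES ('d')").lastrowid
print(plain_id, mono_id)  # 3 4 -- the plain table reuses the freed rowid
```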
 

> The fact that there's a Highway to Hell but only a Stairway to Heaven says a
> lot about anticipated traffic volume.

 See you there then? ;)
 

-- 
Eduardo <eduardo.mor...@mobelservices.com>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [OT] Updating sqlite in PHP windows

2017-11-21 Thread Eduardo
On Mon, 20 Nov 2017 14:33:53 +
Simon Slavin <slav...@bigfraud.org> wrote:

> On 20 Nov 2017, at 11:06am, Eduardo <eduardo.mor...@mobelservices.com> wrote:
> 
> > Or better, a recipe that works to compile sqlite3 on php5.6.x?
> 
> This is the best-looking page I’ve found, but I have never tried it on 
> Windows.
> 
> <https://stackoverflow.com/questions/948899/how-to-enable-sqlite3-for-php#949325>

Thanks Simon. It seems that PHP 5.6 is EOL and all updates (except security) 
happen on the 7.1 and 7.2 branches. 

See the answers I got from the PHP staff: 7.2 will ship SQLite 3.21 with the FTS 
and JSON extensions enabled (if I understand correctly).

https://bugs.php.net/bug.php?id=75544

> Simon.


-- 
Eduardo <eduardo.mor...@mobelservices.com>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] [OT] Updating sqlite in PHP windows

2017-11-20 Thread Eduardo

Hello, and sorry for the slightly off-topic question, but I can't find useful answers.

Firstly, the decision to use Windows Server and PHP 5.6 (with WampServer) is
not mine; I must use them. As a Unix developer, Windows is harsh for me.

The current PHP 5.6.32 (26 Oct 2017) ships SQLite 3.8, but I want to use the JSON
and FTS5 extensions. Almost all the answers I get from other sources are "don't use
sqlite3, use this 'other cool technology (TM)' ". The only valid answer I found
is to recompile PHP with the latest sqlite3 version, using the VC11 compiler (others
won't work). 

I tried to compile with VS2015 but the dll doesn't work with PHP 5.6 (php5.6
doesn't detect/show sqlite3 in phpinfo). The dll works with other apps dynamically
linking to it.

cl sqlite3.c -O2 -DSQLITE_ENABLE_FTS5 -DSQLITE_ENABLE_JSON1 -link -dll 
-out:sqlite3.dll

Does anyone have a modern sqlite3.dll for PHP 5.6 with the JSON and FTS5 extensions
enabled? Is there any semi-official safe site where I can download it? 

Or better, a recipe that works to compile sqlite3 for PHP 5.6.x?

Thanks 

-- 
Eduardo <eduardo.mor...@mobelservices.com>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Most efficient way to detect on-disk change

2017-11-07 Thread Eduardo Morras
On Tue, 07 Nov 2017 18:07:42 +
Wout Mertens <wout.mert...@gmail.com> wrote:

> I'm working with a db that's only written to in transations, and each
> transaction increases a db-global version counter.
> 
> This means that I can cache all reads, unless the version changed.
> 
> What would be the most efficient way to make sure I *never* serve
> stale data?
> 
> Right now everything's a single process, so it's really easy, just
> clear the cache on every write. However, I want to be prepared for
> the near future where I will have multiple processes using this db
> file.
> 
> I'm thinking that to detect writes, this might be a safe approach:
> 
> Before serving any cached read, check the timestamp on the wal file.
> If it changed, read the global version. If it changed, clear the
> cache. Otherwise, serve the cached read.
> 
> Is it safe to assume that all writes would mean change of the wal file
> timestamp?
> More importantly, is it faster to check the timestamp or would a
> prepared query for the version actually be faster (and safer)?
> 
> Also, I'm using WAL right now, but I wonder if that's really useful
> given the single-writer-at-a-time?

You can define triggers on insert, update and delete that fire a user-defined 
function to warn your other threads or an external process (I use a similar 
setup on an AS400/DB2). 
Something like this:

CREATE TRIGGER tg_night_watcher_insert AFTER INSERT ON table_name_to_watch_up
 BEGIN
  SELECT your_night_watcher();
 END;
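A minimal end-to-end sketch of this trigger approach, using Python's sqlite3 module to register the user-defined function (the in-process list stands in for whatever real notification mechanism you use):

```python
import sqlite3

notifications = []

con = sqlite3.connect(":memory:")
# Register the SQL-callable watcher; its name must match the trigger body.
con.create_function("your_night_watcher", 0,
                    lambda: notifications.append("row inserted"))

con.executescript("""
CREATE TABLE watched (id INTEGER PRIMARY KEY, v TEXT);
CREATE TRIGGER tg_night_watcher_insert AFTER INSERT ON watched
BEGIN
    SELECT your_night_watcher();
END;
""")

con.execute("INSERT INTO watched(v) VALUES ('hello')")
print(notifications)  # ['row inserted']
```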

HTH

> 
> Thank you for your insights,
> 
> Wout.
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Any change to make this query better?

2017-10-21 Thread Eduardo Morras
On Fri, 20 Oct 2017 14:21:38 -0400
"jose isaias cabrera" <jic...@barrioinvi.net> wrote:

> 
> Greetings!
> 
> This takes about 1.5 minutes to run with sqlite v3.20.1 with about
> 200K records
> 
> sqlite> explain query plan
>...> SELECT
>...>   O.XtraF AS PortalID,
>...>   O.ProjID,
>...>   O.A_No AS GTXNo,
>...>   O.proj AS ProjName,
>...>   O.lang AS Target,
>...>   (SELECT max(edate) from LSOpenJobs AS I WHERE I.ProjID =
> O.ProjID AND
> 
>...>I.PSubClass = 'QuoteAppr' ) AS QuoteAvailable,
>...>   (SELECT min(edate) from LSOpenJobs AS E WHERE E.ProjID =
> O.ProjID AND
> 
>...>PSubClass = 'Delivery') AS DeliveryDate,
>...>   sum(O.Xtra8) AS PriceUSD,
>...>   0 AS PriceCAD,
>...>   sum(O.ProjFund) AS TransferCost,
>...>   O.XtraE AS Department,
>...>   O.XtraA AS BillTo,
>...>   O.pmuk AS Contact,
>...>   '-' AS Notes1,
>...>   '-' AS Notes2
>...>   from LSOpenJobs AS O WHERE lower(cust) = 'xerox' AND
>...> Xtra9 LIKE  '2017-09-%'
>...>  GROUP BY ProjID,lang HAVING sum(ProjFund) >
> 0;
> 0|0|0|SCAN TABLE LSOpenJobs AS O USING INDEX OjPid
> 0|0|0|USE TEMP B-TREE FOR GROUP BY
> 0|0|0|EXECUTE CORRELATED SCALAR SUBQUERY 1
> 1|0|0|SEARCH TABLE LSOpenJobs AS I USING INDEX OjPid (ProjID=?)
> 0|0|0|EXECUTE CORRELATED SCALAR SUBQUERY 2
> 2|0|0|SEARCH TABLE LSOpenJobs AS E USING INDEX OjPid (ProjID=?)
> sqlite>
> 

You could try indexing by (ProjID, PSubClass, lower(cust)).

All the work happens on the same table, LSOpenJobs, which is effectively joined
with itself: 

SELECT max(edate) from LSOpenJobs AS I WHERE I.ProjID = O.ProjID AND 
I.PSubClass = 'QuoteAppr'

You can wrap the main query in a WITH clause and subselect from it to get those two 
values (max(edate) and min(edate)) in a single pass.
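The real LSOpenJobs schema isn't shown, so here is a sketch of that WITH rewrite against a cut-down stand-in table; the point is that one grouped pass replaces the two correlated subqueries:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE jobs (ProjID INT, PSubClass TEXT, edate TEXT)")
con.executemany("INSERT INTO jobs VALUES (?, ?, ?)", [
    (1, "QuoteAppr", "2017-09-01"), (1, "QuoteAppr", "2017-09-05"),
    (1, "Delivery",  "2017-09-20"), (1, "Delivery",  "2017-09-15"),
])

row = con.execute("""
WITH dates AS (            -- one scan of the table instead of two per row
    SELECT ProjID,
           max(CASE WHEN PSubClass = 'QuoteAppr' THEN edate END) AS quote_avail,
           min(CASE WHEN PSubClass = 'Delivery'  THEN edate END) AS delivery
    FROM jobs
    GROUP BY ProjID         -- min()/max() ignore the NULLs from the CASE
)
SELECT ProjID, quote_avail, delivery FROM dates
""").fetchone()
print(row)  # (1, '2017-09-05', '2017-09-15')
```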

HTH
---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] When is db size an issue?

2017-10-01 Thread Eduardo Morras
On Wed, 27 Sep 2017 08:41:25 -0400
"Jason T. Slack-Moehrle" <slackmoeh...@gmail.com> wrote:

> Hello All,
> 
> Off and on for the last few years I have been writing an e-mail
> client to scratch a personal itch. I store the mail in SQLite and
> attachments on the file system. However, I recently brought in all of
> my mail for the last 15 years from mbox format. Now, my database size
> is over 10gb. I'm not seeing any real performance issues and my
> queries are executing nice and fast during search.
> 
> However, does anyone have any thoughts about the size? Should I be
> concerned? Is there a theoretical limit I should keep in the back of
> my mind?

I developed and use a filesystem over sqlite db, focused on filesizes
<1MB and worm (write-once-read-many) access pattern. Some sqlite dbs
have 500GB-1TB (few even bigger) on mechanical disks without speed
problems.

Easy to backup, use fts on someones and can attach-deattach filesystems
db. One advice, increase sqlite internal cache, don't use its default
value.

> 
> Jason



---   ---
Eduardo Morras <emorr...@yahoo.es>

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] performance issue on a read only database

2017-06-13 Thread Eduardo Morras
On Tue, 13 Jun 2017 11:53:05 +0200
rv.gauth...@free.fr wrote:

> Hi all,
> 
> we are using sqlite3 on an embedded application.
> The database is opened in read only mode and can't be modified by the 
> application.
> 
> I noticed that the first SELECT statement after a BEGIN TRANSACTION 
> takes at least 14 ms.
> All subsequent queries in the same transaction are taking near 0 ms.

The first SELECT tells SQLite to do all the work needed to make your
query ACID-compliant (acquire locks, malloc RAM, generate structs,
open files, etc...)

> 
> If I omit the BEGIN TRANSACTION, all queries are taking at least 14
> ms.

Because each query is automatically wrapped in BEGIN TRANSACTION /
query / END TRANSACTION, and the work to make your query ACID-compliant is done
on every query, not just the first one.

> I tried also to do a unique BEGIN TRANSACTION after opening the 
> database. This fixes the timing issue, but my memory increases 
> constantly.

... Do you know what a transaction is? In your case it's a read-only db
and I don't think it would be a problem, but in normal use cases it could
be catastrophic.

> Is there a way (pragma, compile flags, ...) to gain these 14 ms for
> the first query ?

Try opening with a URI and check the 'immutable' parameter
https://www.sqlite.org/draft/c3ref/open.html ; perhaps you can gain
some ms, but it's dangerous, I warn you.
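In Python the immutable parameter can be passed through a URI filename; a sketch (safe only because nothing writes to the file while it is open):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "ro.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE t (v TEXT)")
rw.execute("INSERT INTO t VALUES ('hello')")
rw.commit()
rw.close()

# immutable=1 tells SQLite the file can never change while open, so it
# skips locking and change detection entirely; dangerous if anyone writes.
ro = sqlite3.connect(f"file:{path}?immutable=1", uri=True)
val = ro.execute("SELECT v FROM t").fetchone()[0]
print(val)  # hello
```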

> 
> Thanks.
> 
> 
> Hervé Gauthier.
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite in memory

2017-05-18 Thread Eduardo Morras
On Wed, 17 May 2017 22:18:19 -0700
Gabriele Lanaro <gabriele.lan...@gmail.com> wrote:

> Hi, I'm trying to assess if the performance of my application is
> dependent on disk access from sqlite.
> 
> To rule this out I wanted to make sure that the SQLite DB is
> completely accessed from memory and there are no disk accesses.
> 
> Is it possible to obtain this effect by using pragmas such as
> cache_size?
> 
> Another solution is to copy the existing db to a :memory: db but I'd
> like to achieve the same effect without doing so (because it will
> require substantial modification of the application). For the sake of
> argument, let's image that using :memory: db is not an option.
> 
> Also using a ramdisk is not an option because I don't have root
> access to the machine.

What OS are you using?

You can use these tips to make the app less dependent on disk I/O access:

a) change where temporary tables (materialized views, subqueries, temp
tables) and transient indices are stored so that RAM is always used (pragma
temp_store=2),
 
b) increase the cache size, the more the better (the OS disk cache is shared
with other processes and is slower); if the cache is equal to or bigger than
your db, it will fit entirely in RAM, 

c) set config SQLITE_CONFIG_STMTJRNL_SPILL=-1 if you use statement
journal files (check http://www.sqlite.org/tempfiles.html#stmtjrnl), 

d) use wal mode to avoid rollback-journal files, and set wal_checkpointing, 

e) use the mmap_size pragma to minimize I/O (check
http://www.sqlite.org/mmap.html ; it has disadvantages too)
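Most of these settings can be applied per connection; a sketch using Python's sqlite3 module (the sizes are arbitrary examples, not recommendations):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")
con = sqlite3.connect(path)

con.execute("PRAGMA temp_store = 2")         # (a) temp tables/indices in RAM
con.execute("PRAGMA cache_size = -200000")   # (b) ~200 MB page cache (negative = KiB)
con.execute("PRAGMA journal_mode = WAL")     # (d) WAL instead of rollback journal
con.execute("PRAGMA mmap_size = 268435456")  # (e) ask for 256 MB of memory-mapped I/O

for p in ("temp_store", "cache_size", "journal_mode", "mmap_size"):
    print(p, "=", con.execute(f"PRAGMA {p}").fetchone()[0])
```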


> Thanks,
> 
> Gabriele

HTH

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Developing a SQLite3 DB remotely

2017-03-24 Thread Eduardo Morras
On Thu, 23 Mar 2017 13:45:58 -0400
Stephen Chrzanowski <pontia...@gmail.com> wrote:

> Right up front, I'm fully aware about the How To Corrupt document
> (Believe me, I've preached about network access in this forum), and
> with the development tools I have in Windows vs the destination OS
> and the purpose of the DB, I'm asking for other peoples experience on
> remote developing a database.
> 
> The SQLite editor of choice for me is SQLite Expert Pro (SEP).  The
> remote system is a Linux based OS.  The databases job is to keep
> track of jobs, hosts, last completed, priorities of the jobs, etc.
> The Linux machine is going to be running a BASH script that runs in
> an infinite loop, periodically poking the database to decide what to
> run next based on a schedule.  There will be frequent sleep periods
> between SQL calls.
> 
> While I'm developing the database, the infinite looping in the bash
> script isn't going to exist.  The script runs, does its thing (To
> start, just ECHO what I want it to do), update the database on
> successful completion, then check for the next job if any are
> available.  When the scripts are done running, I want to re-run a
> query in the SEP to confirm what I've done in the BASH script did
> what it was supposed to do.
> 
> The question for the experienced multi-machine & multi-OS DB
> designers, has anyone ever run into a problem where EXTREMELY LIGHT
> WEIGHT use of the database causes corruption?  What would be a
> recommended way to setup the connections for a DEV-only arena where
> the below paragraph describes?
> 
> By EXTREMELY LIGHT WEIGHT use, I mean I *DO* guarantee that although
> I have one permanent open file handle to the database via SEP, and
> that Linux OS will only open a handle  periodically while I'm writing
> the script, multiple accesses of reading or writing to the DB at the
> exact same time just will not happen.  Once development stops, it'll
> be just this one BASH script that will ever touch the database.

I'd try it this way:

a) write a file with the query on the server
b) the bash script passes it to the sqlite3 command-line interface, which writes 
the output to another text file
c) read the file with the result
d) delete the files

You can grow or complicate the steps as you need.

For network server code, the repository has an example server at 
http://www.sqlite.org/src/artifact/a2615049954cbb9c and its timeline at 
http://www.sqlite.org/src/finfo?name=src/test_server.c 


---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite 3.18.0 pre-release snapshot

2017-03-07 Thread Eduardo Morras
On Mon, 6 Mar 2017 18:52:48 -0500
Richard Hipp <d...@sqlite.org> wrote:

> On 3/6/17, Simon Slavin <slav...@bigfraud.org> wrote:
> >
> >> See
> >> https://www.sqlite.org/draft/pragma.html#pragma_optimize for
> >> additional information.
> >
> > I'm sure this is extremely far-future-looking, but a default mask
> > of 254 (0xfe) might be better than the stated default of 14 (0x0e).
> 
> Default mask changed to 0xfffe, which allows for up to 14 new
> default-on optimizations and up to 48 new default-off optimizations.

Could a trigger be fired on optimize? That way we could add database/schema-specific 
optimizations (delete all rows in table tab where a column value is 
NULL, for example).

Thanks for the great work.

> -- 
> D. Richard Hipp
> d...@sqlite.org


---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Read-only access which does not block writers

2016-11-24 Thread Eduardo Morras
On Thu, 24 Nov 2016 08:54:47 -0500
Richard Hipp <d...@sqlite.org> wrote:

> On 11/24/16, Florian Weimer <fwei...@redhat.com> wrote:
> > I'd like to replace the use of Berkeley DB in RPM with SQLite.
> >
> > The scenario is special in the follow way.  There is no database
> > server, all access goes directly to the database.  Unprivileged
> > users without write access to the RPM database are expected to run
> > read-only queries against the database.  Privileged users
> > (basically, root) is expected to use locking to exclude concurrent
> > writers.  But read-only users should not be able to stop
> > acquisition of a write lock.
> >
> > Is there a way to do this with SQLite?
> 
> The readers can open the database using URI filenames
> (https://www.sqlite.org/uri.html) with query parameters "mode=ro" and
> "locking=0".  That will prevent the readers from blocking the writer.
> But, if a write happens in the middle of a read, the reader might see
> inconsistent data and report SQLITE_CORRUPT.  This is harmless in the
> sense that the database file is not really corrupt (the reader is
> merely seeing parts of the files from two different points in time)
> and subsequent reads should still work.  If you are unlucky, a write
> that happens at the same time as a read might cause the reader to
> return incorrect results, so the reader can never be 100% sure that
> the answer it gets back is correct.

If the writer safely forces a schema change (which is no real change to the
schema, only an increase of the schema_version pragma), the readers can
check for that condition at sqlite3_step.

I don't know how to trigger it, and changing the pragma value directly is
discouraged.


> 
> How important is it to you that the reader always get a correct
> answer?
> -- 
> D. Richard Hipp
> d...@sqlite.org

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Hung query question

2016-11-18 Thread Eduardo Morras
On Fri, 18 Nov 2016 19:20:06 +
David Raymond <david.raym...@tomtom.com> wrote:

> I've got a query that I've tried a few times here that seems like
> it's hanging up on something and I'm wondering if it's just some
> brain dead thing I'm forgetting or doing wrong.
> 
> I've got a database with a bunch of records, and am trying to
> populate a table in another database with coordinate extremes for
> each state. Using the CLI I open up the side database, attach the
> main one and run
> 
> insert into coordExtremes select State, min(Latitude), max(Latitude),
> min(Longitude), max(Longitude) from foo.bar group by State;
> 
> Twice I've left that running overnight and it's still been sitting
> there not completed the next day when I came in. I gave up on using
> the CLI to do it and wrote a quick Python script to scan through the
> whole big table once and keep track of the extremes, and it finished
> in all of 15 minutes. So I'm left scratching my head as to what's up.
> If anyone can think of something or sees where I'm being an idiot,
> please let me know.

Is python using a bigger cache? Increase the cache with a pragma in the CLI. 

Set it to 100-500MB for the foo db; there's no need to waste cache on the
destination db (where you insert the data). Change foo to the real origin db name.

pragma foo.cache_size=-500000

Why not use an R*Tree virtual table?  http://www.sqlite.org/rtree.html
---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Import 5000 xml files in a sqlite database file

2016-10-23 Thread Eduardo Morras
On Sat, 22 Oct 2016 19:26:42 +0200
bob_sql...@web.de wrote:

> Hi,
> 
> I have more than 5000 xml files. All files have the same
> xml-structure. 
> 
> Each file has different values (timestamps, numbers and strings). I
> would like to put all these values in a sqlite database tabke, all in
> one table. => Import the data values into a sqlite database table.
> 
> Can you please tell me a software program, that can do this quickly?

You can convert them to JSON and use sqlite3's JSON capabilities.
The DuckDuckGo search engine takes me to:

http://web-notes.wirehopper.com/2013/12/06/linux-command-line-convert-xml

http://openlife.cc/blogs/2013/november/translating-reliably-between-xml-and-json-xml2json

which describe some apps to convert from XML to JSON. Some are in
JavaScript, Python, PHP or Perl, and others are online converters.
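If the XML structure is simple, the JSON detour can also be skipped entirely: Python's standard library can parse each file and insert the values directly. A sketch with invented tag names (for real files, loop over them with ET.parse(path)):

```python
import sqlite3
import xml.etree.ElementTree as ET

# One of the ~5000 files, inlined here; the tag names are invented.
doc = """<records>
  <record><ts>2016-10-01</ts><num>42</num><label>alpha</label></record>
  <record><ts>2016-10-02</ts><num>7</num><label>beta</label></record>
</records>"""

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (ts TEXT, num INTEGER, label TEXT)")

root = ET.fromstring(doc)   # for real files: ET.parse(path).getroot()
rows = [(r.findtext("ts"), int(r.findtext("num")), r.findtext("label"))
        for r in root.iter("record")]
con.executemany("INSERT INTO data VALUES (?, ?, ?)", rows)
con.commit()
cnt = con.execute("SELECT count(*) FROM data").fetchone()[0]
print(cnt)  # 2
```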


> Thank you for your answers.
> 
> Best regards 
> 
> Bob
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Virtual table

2016-09-18 Thread Eduardo Morras
On Wed, 31 Aug 2016 10:34:05 -0300
Maria de Jesus Philadelpho <jesus.ph...@gmail.com> wrote:

> Hi,
> 
> I implement the SQLite extension, virtcsv, which allows attaching a
> CSV file as a virtual table.  At a command line everything works just
> fine, why I can't see the result of a select * from table using the
> SQLitestudio 3.0.7?

Because SQLiteStudio must load the SQLite CSV virtual table extension before
use, or be compiled with it.

Read the SQLiteStudio manual on how to achieve that; I don't use SQLiteStudio.

Perhaps you can get better answers from the SQLiteStudio developers.

> See the attached snapshot.

The SQLite mailing list doesn't support attachments.

> regards,
> 
> Maria Azevedo
> --
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Convert mysql to sqlite

2016-09-10 Thread Eduardo Morras
On Sat, 10 Sep 2016 14:24:48 -0700
Scott Doctor <sc...@scottdoctor.com> wrote:

> I have a database with a few tables and about 140MB of data in 
> it that exists as a MySQL database. Using MySQL workbench I can 
> export the data, but it dumps as a SQL dump with its flavor of 
> sql. I want to convert this database into a sqlite database. 
> Nothing fancy in it. No triggers, procedures, or foreign keys. 
> Tables are normal structure with a pk field and some text and 
> integer fields in each table.
> 
> Anyone know of a utility to directly convert from MySQL to sqlite?
> 

You can export the tables as CSV and import them into sqlite.
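A sketch of the import side in Python, assuming the MySQL table was exported as a headered CSV file (the file layout here is invented):

```python
import csv
import io
import sqlite3

# Stand-in for a CSV file exported from MySQL (hypothetical layout);
# for a real file use: open("dump.csv", newline="")
csv_data = io.StringIO("id,name,qty\n1,widget,10\n2,gadget,3\n")

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, qty INTEGER)")
# DictReader yields one dict per row, matching the named placeholders;
# SQLite's type affinity converts the '10' strings to integers on insert.
con.executemany("INSERT INTO items VALUES (:id, :name, :qty)",
                csv.DictReader(csv_data))
con.commit()

n = con.execute("SELECT count(*) FROM items").fetchone()[0]
print(n)  # 2
```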

> -
> Scott Doctor
> sc...@scottdoctor.com
> -

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Best way to "split" a large DB into two DBs

2016-09-07 Thread Eduardo Morras
On Wed, 7 Sep 2016 18:08:50 +0200
Dominique Devienne <ddevie...@gmail.com> wrote:

> Imagine there's a 25GB SQLite DB file in version v1 of an application.
> 
> Because the way this DB is structured, with large to very large
> blobs, this leads to problems for the app. That DB has 35 tables, but
> 5 of those represent 95% of the DB size (because of those blobs).
> 
> So for v2 of the app, there's a new design where the 30 "lightweight"
> tables move into a new DB file (with additional new tables), and the 5
> "blob heavy" tables remain in the existing DB file (but the 30 "moved"
> tables should be dropped from it).
> 
> Initial design was to copy the DB file (app is "shutdown", so no
> connection to that DB file).
> Drop the 5 big tables. Vaccum it (the new DB). Add the new tables.
> But the 25GB copy was deemed too long in that case (several minutes).
> (after all, copying 95% of 25GB to discard all those GBs is
> inefficient)
> 
> So alternate design was to create the schema empty in the new DB file,
> attach the old one, and insert the data from the 30 tables into the
> new empty (with identical schema/structure) tables. But that's also
> very slow apparently.
> 
> Even though it's not my app, I know there are indexes and triggers on
> those tables (but the triggers are mostly on deletes, so don't really
> apply here), and I suggested adding those after the selects, but I
> doubt it's going to make a dramatic difference time-wise.
> 
> Conceptually, the ideal situation would be to just copy the DB header,
> sqlite_master page(s) (dropping the 5 big tables definitions and
> related indexes/triggers from sqlite_master), and only the 5% of
> pages related to the other tables (and related objects).  (chaining
> of pages and root pages in sqlite_master have to be updated of
> course). It's almost like a form of VACCUM, except it's not in-place
> and works on a subset of the tables. IO-wise, that has the potential
> to be 20x faster I imagine.
> 
> But of course there's no such "out-of-place" "partial" VACCUM...
> 
> Then I though maybe .backup, which also works at the page level (I
> believe), perhaps can subset what tables to backup. But no luck there
> either. backup works for the whole DB, not a subset.
> 
> Am I missing other ways to achieve this "split" efficiently?
> Any chance the backup API could group table-specific (and related
> objects) alternate form?

Not answering your question, but some comments/tricks about how to boost sqlite3 
performance in your scenario.

First, compile last sqlite3 version with these options on

SQLITE_DIRECT_OVERFLOW_READ
SQLITE_DEFAULT_AUTOVACUUM=2

They are described at https://www.sqlite.org/compile.html#direct_overflow_read 
and https://www.sqlite.org/compile.html#omit_autovacuum; don't omit
autovacuum, but set it to 2 (incremental). 

For an explanation of why setting autovacuum to incremental works, see
http://marc.info/?l=sqlite-users&m=136265346522617&w=4

If your blob size is high, bigger than 500KB, set page size to 64KB.

Create the blob column as the last column.

Compile with the dbstat (SQLITE_ENABLE_DBSTAT_VTAB) option if you want statistics 
about your blob db. You can get information about fragmentation, etc.; see
https://www.sqlite.org/dbstat.html#section_3

If your use case is write-heavy, don't use WAL: modifications (CUD) are not 
atomic when using attached dbs in WAL mode. If your use case is read-heavy, use 
WAL.

I use (and develop) a DIY filesystem over sqlite and have a similar scenario 
with hundreds of GB of blobs in multiple databases. Those compile options and 
tricks, plus splitting metadata from data, made a huge improvement.
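For reference, the ATTACH-based copy the original poster described looks like this in Python; it is the slow-but-simple path, shown only to make the mechanics concrete (table and file names invented):

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
old_path = os.path.join(tmp, "old.db")
new_path = os.path.join(tmp, "new.db")

# Old DB holding a lightweight table that should move to the new file.
old = sqlite3.connect(old_path)
old.execute("CREATE TABLE meta (id INTEGER PRIMARY KEY, name TEXT)")
old.executemany("INSERT INTO meta(name) VALUES (?)", [("a",), ("b",)])
old.commit()
old.close()

new = sqlite3.connect(new_path)
new.execute("CREATE TABLE meta (id INTEGER PRIMARY KEY, name TEXT)")
new.execute("ATTACH DATABASE ? AS olddb", (old_path,))
new.execute("INSERT INTO meta SELECT * FROM olddb.meta")
new.commit()                      # DETACH must run outside a transaction
new.execute("DETACH DATABASE olddb")
n = new.execute("SELECT count(*) FROM meta").fetchone()[0]
print(n)  # 2
```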

> Any advice would be appreciated. Thanks, --DD

HTH

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite 3.13.0 does not use indexes as 3.8.11.1 does

2016-07-26 Thread Eduardo Morras
On Tue, 26 Jul 2016 10:37:12 +0200
Alessandro Fardin <alessandro.far...@gavazziacbu.it> wrote:

> After updating from sqlite  3.8.11.1 to sqlite 3.13.0.
> 
> The query planner with the same SELECT statement on the same table with
> the same indexes does not use the index at all, but scans the entire
> table. Of course this causes a dramatic slowdown of the
> application.
> 
> As a temporary workaround we have added to the query the INDEXED
> BY energy_d_dateTimeIdx statement.
> 
> In sqlite 3.8.11.1 the select was issued by using the  
> energy_d_dateTimeIdx index
> 
> Follows the shema of the table and indexes.
> CREATE INDEX IF NOT EXISTS energy_d_dateTimeIdx 
> ON hst_energy_d (recdate ASC,rectime ASC,idinstrum ASC,enflag ASC);
> CREATE INDEX IF NOT EXISTS hst_energy_d_index_timestamp 
> ON hst_energy_d (timestamp ASC);
> CREATE INDEX IF NOT EXISTS hst_energy_d_index_pupdate 
> ON hst_energy_d (pupdate ASC);
> 

Please run the SQL command ANALYZE after index creation and check again.

> And now the SELECT STATEMENT
> 
> 
> SELECT 
> pupdate,idinstrum,Vlnsys,Vl1n,Vl2n,Vl3n,Vllsys,Vl1l2,Vl2l3,Vl3l1,
> Al1,Al2,Al3,kWsys,kWl1,kWl2,kWl3,kWhac,
> kWhacn,kvarsys,kvarl1,kvarl2,kvarl3,kvarhn,kvarh,
> kvarhacC,kvarhacL,kVAsys,kVAl1,kVAl2 ,kVAl3,PSeq,
> THDAl1,THDAl2,THDAl3,THDVl1n,THDVl2n,THDVl3n,
> kWhl1 ,kWhl2 ,kWhl3 ,counter1,counter2,counter3,Hz,An,Hour,Hourn,
> Alsys,kvarhl1,kvarhl2,kvarhl3,kvarhnl1,kvarhnl2,kvarhnl3,kWhnl1,kWhnl2,
> kWhnl3,kVAh,kVAhl1,kVAhl2,kVAhl3,
> PFsys,PFl1,PFl2,PFl3,Wdmd,vardmd,VAdmd
> FROM hst_energy_d 
> WHERE enflag = 0 ORDER BY recdate DESC, rectime DESC LIMIT 1;

You are querying with ORDER BY ... DESC, but the indexes are created ASC.
Convert them to DESC, run ANALYZE, and retry.
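A sketch of the suggested fix with a cut-down stand-in for the hst_energy_d table (columns reduced and the rows invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE hst_energy_d(recdate TEXT, rectime TEXT, idinstrum INT, enflag INT);
-- Index direction now matches the ORDER BY of the query:
CREATE INDEX energy_d_dateTimeIdx
    ON hst_energy_d (recdate DESC, rectime DESC, idinstrum ASC, enflag ASC);
INSERT INTO hst_energy_d VALUES
    ('2016-07-01','10:00',1,0),
    ('2016-07-02','11:00',1,0);
ANALYZE;  -- refresh the planner statistics after (re)creating indexes
""")
row = con.execute("""
    SELECT recdate FROM hst_energy_d
    WHERE enflag = 0
    ORDER BY recdate DESC, rectime DESC LIMIT 1
""").fetchone()
```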

HTH


---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Problem with savepoints

2016-07-06 Thread Eduardo Morras
On Wed, 6 Jul 2016 18:10:34 +0200
Chris Brody <chris.br...@gmail.com> wrote:

> >
> > Just for my information, what is the purpose of this temporary
> > file?  I see
> >> that -journal file is always stored to disk.
> >>
> >
> > It's a statement journal:
> >
> >   https://www.sqlite.org/tempfiles.html#stmtjrnl
> >
> > Recent changes mean that the first 64KiB of a statement journal are
> > always stored in main-memory, and the temp file only created after
> > the file grows larger than 64KiB. Which is probably why you did not
> > see the problem immediately.
> 
> 
> Personally I wish SQLite would fail upon open or perhaps on first
> write if it cannot write the temporary file. I think this would make
> it easier for programmers to detect and deal with this kind of issue.
> Maybe make it an explicit compile-time option to wait until the
> temporary file is larger than 64KiB before opening the temp file.
> Just a thought. 

This is set up in src/global.c, line 172 in version 3.13.0:

   171  #ifndef SQLITE_STMTJRNL_SPILL 
   172  # define SQLITE_STMTJRNL_SPILL (64*1024)
   173  #endif

You can predefine it when compiling sqlite3.c (e.g. -DSQLITE_STMTJRNL_SPILL=...) with 
the value in bytes you want, or set it to -1 to always keep statement journals in 
memory. At run time, sqlite3_config(SQLITE_CONFIG_STMTJRNL_SPILL, nBytes) changes the 
same threshold.



---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bad db feature request

2016-06-30 Thread Eduardo Morras
On Thu, 30 Jun 2016 10:12:05 +0100
Paul Sanderson <sandersonforens...@gmail.com> wrote:

> The OP wanted something to check his systems for corruption - if the
> SQLite dev team don't want to add checksums then the OP could possibly
> solve his own problem by adding them to his own internal version.
> 
> Extension may have been a bad choice of word - he can modify/compile
> his own version of SQLite add checksums - and also add a corruption
> engine if he wants.
> Paul

You're right.

A simple trigger on insert/update that calculates a CRC or hash of the other 
columns' content in the row and stores it in a dedicated column can do the trick at 
row level. It will lower performance; an undesired but expected effect.
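A sketch of such a trigger using Python's sqlite3 module. The table, the column names, and the row_hash function are invented for illustration; the hash itself is computed by a function registered from the application:

```python
import hashlib
import sqlite3

con = sqlite3.connect(":memory:")
# Application-registered hash over the row's payload columns.
con.create_function(
    "row_hash", 2,
    lambda a, b: hashlib.sha256(f"{a}|{b}".encode()).hexdigest())
con.executescript("""
CREATE TABLE t(a TEXT, b TEXT, chk TEXT);
CREATE TRIGGER t_ai AFTER INSERT ON t BEGIN
  UPDATE t SET chk = row_hash(NEW.a, NEW.b) WHERE rowid = NEW.rowid;
END;
-- Only fires on changes to the payload columns, so updating chk
-- inside the trigger body does not re-trigger it.
CREATE TRIGGER t_au AFTER UPDATE OF a, b ON t BEGIN
  UPDATE t SET chk = row_hash(NEW.a, NEW.b) WHERE rowid = NEW.rowid;
END;
""")
con.execute("INSERT INTO t(a, b) VALUES ('x', 'y')")
stored, = con.execute("SELECT chk FROM t").fetchone()
```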

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Using LSM and Memory Mapped files -- Question about LSM_CONFIG_MMAP

2016-05-20 Thread Eduardo Morras
On Thu, 19 May 2016 22:12:48 -0500
"Kasajian, Kenneth"  wrote:

> I am interested in using the LSM
> (https://www.sqlite.org/src4/doc/trunk/www/lsmusr.wiki)
> 
> The characteristics of my application is that I would like to enable
> the memory-mapped file feature so that I can offload data in memory
> to disk temporarily.I don't care about data-recover after my
> application shuts down or it crashes.   In fact, the application will
> create a new database each time it starts up.
> 
> The reason to do this is to be able to address hundreds of gigs of
> data in a key/value store method.   We would it put it in RAM but
> that would require a lot of RAM.By putting the data on disk, we
> can work with very large sets of data.
> 
> I care a lot about performance.  If I had the RAM, all of this would
> be in RAM (non-persistent and transien)
> 
> My question is, if I have LSM_CONFIG_MMAP enabled, and on a 64-bit
> system I believe it's enabled by default, what happens if I also turn
> off LSM_CONFIG_SAFETY and LSM_CONFIG_USE_LOG.   Would that not make
> things a lot faster?
> 
> I don't need data to be written to disk right away..   In fact, I
> would be okay if data were only written to disk when there the memory
> that is occupied by the data has to be reused - because persistence
> is not a factor.
> 
> Anyone use LSM for this type of a use case?

IIRC you can create a :memory: database with sqlite4. Doesn't that fit your 
needs?

If you look at the SQLite4 timeline, you'll see development is a bit frozen. You 
can try the lsm-vtab branch (http://www.sqlite.org/src/timeline?n=100&r=lsm-vtab) for 
sqlite3; you need fossil-scm to download and compile it, but I don't know whether it 
currently works (last updated in Feb 2016).

HTH

---   ---
Eduardo Morras 


[sqlite] Fastest way to backup/copy database?

2016-05-04 Thread Eduardo Morras
On Wed, 04 May 2016 11:44:17 +0100
"Rob Willett"  wrote:

> Hi,
> 
> We think we know the answer to this, but we'll ask the question 
> anyway.
> 
> We're trying to backup a 10GB live running database 
> "as-fast-as-we-possibly-can" without stopping updates coming in. The 
> updates come every 2-3 mins, and write a chunk of data in. We can't 
> really stop the database updates, well we can but we don't want to.
> 
> 1. We had a quick look to see if we could copy the sqlite file over
> in the short interval between updates but sadly cp simply wasn't fast 
> enough. We get around 3GB copied before an update happens, which 
> basically renders the cp useless.
> 
> 2. If we use the command line sqlite  .dump >
>  it works, but it's very slow.
> 
> 3. Using the Sqlite C API works but is also very slow.
> 
> 4. We don't have the option of an LVM snapshot as the file system is 
> in a Container .
> 
> So is there any other method of doing a quick snapshot? Failing that, 
> our solution will be to stop any updates for the duration of the cp 
> command, and then restart the process afterwards. It's not the end of
> the world but it would have to be done out of normal working hours.
> 
> This is going to become a bigger problem for us as the database will 
> only get bigger so any advice welcomed.

If you only want the data, you can attach/open a new db file, create the schema 
without indexes, select all data from the tables and insert it into the new db's 
tables. Since you don't write the indexes, it should be faster. If you need the 
indexes, you can create them later.
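A sketch of that copy in Python (table name and data invented); the same idea works inside one connection with ATTACH 'copy.db' AS copy and INSERT INTO copy.t SELECT ...:

```python
import sqlite3

src = sqlite3.connect(":memory:")  # stands in for the live 10GB database
src.executescript("""
CREATE TABLE t(id INTEGER PRIMARY KEY, payload TEXT);
CREATE INDEX t_payload_idx ON t(payload);
INSERT INTO t(payload) VALUES ('a'), ('b'), ('c');
""")

dst = sqlite3.connect(":memory:")  # in practice: the new backup file
# Recreate the schema without the indexes, then bulk-copy the rows.
dst.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, payload TEXT)")
dst.executemany("INSERT INTO t VALUES (?, ?)",
                src.execute("SELECT id, payload FROM t"))
dst.commit()
dst.execute("CREATE INDEX t_payload_idx ON t(payload)")  # optional, afterwards
```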


> Thanks
> 
> Rob
---   ---
Eduardo Morras 


[sqlite] Is there something like PHPAdmin for SQLite

2016-04-20 Thread Eduardo Morras
On Wed, 20 Apr 2016 22:21:04 +0200
Cecil Westerhof  wrote:

> 2016-04-20 18:27 GMT+02:00 R Smith :
> 
> > If you are happy to log in to the server, we could suggest a few
> > great tools - but please note that there is nothing wrong with
> > MySQL as far as web development goes. There is no need to change to
> > SQLite, what is MySQL doing wrong? A major consideration is that
> > MySQL offers user log-ins per DB (so you can give access to clients
> > per project for instance).
> >
> >
> > SQLite is awesome for local storage, but not every job requires
> > such a hammer.
> 
> 
> He told me he liked the idea of SQLite, but had to use MySQL,
> because he needed to give his client a web interface. So if there is
> a web interface (and there is), then he is not forced to use MySQL.
> And of course he decides, not me. ;-)

Firefox has an extension to work with SQLite databases. Install it, open a 
sqlite3 db file and work. It uses sqlite3 3.9-something (I don't remember now).

> -- 
> Cecil Westerhof
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras 


[sqlite] Broken database after experiments with fts.

2016-03-26 Thread Eduardo Morras
On Sat, 26 Mar 2016 16:37:18 +0200
John Found  wrote:

> Why cannot drop the table test?
> 
> sqlite> begin transaction;
> sqlite> create virtual table test using fts5;
> Error: vtable constructor failed: test
> sqlite> commit;
> sqlite> 
> sqlite> drop table test;
> Error: vtable constructor failed: test
> 
> sqlite> .tables
> test  test_content  test_docsize
> test_config   test_data test_idx
> 
> $sqlite3 --version
> 3.11.1 2016-03-03 16:17:53 f047920ce16971e573bc6ec9a48b118c9de2b3a7

Because the table test was not created; you got an error when creating the virtual 
table. Check that your sqlite3 shell has FTS5 compiled in. You can do that by 
executing "PRAGMA compile_options".
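The same check can be done programmatically; a sketch using Python's sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Each row is one compile-time option of the linked SQLite library.
opts = [row[0] for row in con.execute("PRAGMA compile_options")]
has_fts5 = "ENABLE_FTS5" in opts  # FTS5 is usable only if compiled in
```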


> -- 
> http://fresh.flatassembler.net
> http://asm32.info
> John Found 
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras 


[sqlite] Encrypt the SQL query

2016-02-27 Thread Eduardo Morras
On Thu, 25 Feb 2016 14:01:31 +0800
 wrote:

> Hi,
> 
> In my C++ program, I will invoke SQLite to execute SQL queries. But
> these queries are just stored as normal string constants in C++ so it
> is easy to be decoded via reverse engineering method. Does SQLite
> provide a good way to encrypt the SQL query strings while does not
> affect the performance when executing the queries?

No, it doesn't. And as others have said, it's not possible. But you can take another 
route: if you create a new lemon parser entry for a different query language, more 
obfuscated than SQL, you'll force the reverse engineer to learn this new 
query language. You can automate the conversion from the actual SQL queries to 
NewLang queries, and even change NewLang on each compile with different lemon 
parser code. And yes, it's too complex and the time invested (I think) is time 
lost.

> Thanks


---   ---
Eduardo Morras 


[sqlite] Why is a separate journal file needed ?

2016-02-25 Thread Eduardo Morras
On Wed, 24 Feb 2016 12:53:48 +
Simon Slavin  wrote:

> In case you're new to this group, please understand that I'm
> experienced with SQLite and I understand how it works and how it uses
> journal files.
> 
> Can someone explain to me -- or point at a web page which does -- why
> a separate journal file is needed.  Why can't the information which
> SQLite stores in a journal file be put in the database file ?  Given
> how SQLite structures its database files this would presumably mean
> that the main database would have journal pages as well as table and
> index pages.  Or that there would be 'stable' table and index pages
> and 'journal' table and index pages. 

There are two bad side effects of putting the journal file inside the database 
(at the end, in the middle, at the beginning; it doesn't matter where).

a) If you add the journal at the end of the db file and insert new data, the data 
will probably land after the journal. When checkpointing or committing, the journal 
pages become empty pages, quickly filled with new data on the next commit, 
making internal data fragmentation worse.

b) It continuously modifies the filesystem i-nodes and the directory information 
for the db file. These are delicate structures and, on a crash or a filesystem 
problem, there is a bigger chance of corrupting them if you are continuously 
modifying them. Where is the journal data if you or the fs can't find the end of 
the db file? It is better to modify i-nodes and directory metadata only when needed.

Of course, b) is unlikely, but a) is a real pita.

> Simon.

---   ---
Eduardo Morras 


[sqlite] query Benchmark

2016-02-15 Thread Eduardo Morras
On Mon, 15 Feb 2016 10:57:29 +0100
Michele Pradella  wrote:

> Sorry, you are right: the test I did was with %, not with a *. Just a
> cut and paste error.
> Anyway it's tricky because I have to tell sqlite which index to use
> in LIKE to get a fast search, but I must not name the index if the
> pattern starts with % or _ because otherwise the query is not
> executed. I'll handle it...

I have read the thread and still don't know what your LIKE expression to 
search is. If you know something about the string, you can use these tricks:

a) If you search for '%552', any string that ends with '552', you can create an 
index over the reversed plate string and use that index.

b) If you search for '_552_' where the number of '_' before and after '552' is 
always the same (so '552' begins at the same position on every plate), create 
a similar index to case a), but rotating the letters of the string instead of 
reversing it.

You know your data structure and the queries; perhaps other data transforms may 
create better indexes.
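A sketch of trick a) in Python, keeping the reversed string in a second, indexed column (plate values and names are invented). GLOB is used because its case-sensitive prefix match can use a BINARY-collated index:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE plates(plate TEXT, plate_rev TEXT);
CREATE INDEX plates_rev_idx ON plates(plate_rev);
""")
# The reversed copy is maintained by the application here; a trigger
# calling a registered reverse() function would work as well.
for p in ("AB552", "XY552", "AB553"):
    con.execute("INSERT INTO plates VALUES (?, ?)", (p, p[::-1]))
# "ends with 552" becomes a prefix search on the reversed column.
hits = [r[0] for r in con.execute(
    "SELECT plate FROM plates WHERE plate_rev GLOB '255*'")]
```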

> Selea s.r.l.
> 
> 
> Michele Pradella R

---   ---
Eduardo Morras 


[sqlite] whish list for 2016

2016-01-13 Thread Eduardo Morras
On Tue, 12 Jan 2016 21:58:01 +0100
Christian Schmitz  wrote:

> 
> > Am 20.12.2015 um 19:12 schrieb Big Stone :
> > 
> > Hi All,
> > 
> > To prepare for 2016 greetings moment, here is my personnal whish
> > list
> 
> Unless I missed something, I may suggest
> 
> * moveprev
> * movefirst
> * movelast
> * move to record in cursor with given index  

Insert the query result into a temporary table or view and work with it. The order 
of the result rows will change between runs if the query hasn't got an ORDER BY.

Depending on the language you are programming in, you can store the query result in 
a table (Lua), a dictionary (Python, Erlang and others), or create a specific struct 
for your query in C. 

You can also use the deprecated sqlite3_get_table() and sqlite3_free_table() to get 
a similar recordset struct. 
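A client-side sketch of the same idea in Python: materialize the result once, then move over it in any direction (the table and values are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE t(v INT);
INSERT INTO t VALUES (3), (1), (2);
""")
# Materialize the result once; the list then supports movefirst/movelast/
# moveprev-style navigation without re-running the query.
rows = con.execute("SELECT v FROM t ORDER BY v").fetchall()
first, last = rows[0], rows[-1]
for row in reversed(rows):  # a second pass, backwards
    pass
```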

 HTH

> Looping over a recordset twice is often useful.
> 
> Sincerely
> Christian


---   ---
Eduardo Morras 


[sqlite] SQLite remote access over WAN

2016-01-04 Thread Eduardo Morras
On Mon, 04 Jan 2016 22:52:56 +0100
ajm at zator.com wrote:

> Hi list:
> 
> I've built a Windows desktop app using MS Visual C++ and SQLite. It
> works fair well in local mode, but now I need made it work over a
> TCP/IP network.
> 
> My question is if some in this list can suggest a library to that
> purpose, on the basis that I have not concerns on questions like
> concurrency, access permissions or encriptation. Preferable a light
> weigth open source C/C++ library. Not .NET, MFC or other external
> dependencies, although Boost Asio may be Ok. (already I'm using it)

I used libevent (http://libevent.org/), ZMQ (http://zeromq.org/), nanomsg 
(http://nanomsg.org/) and the plain standard libc in different projects. Depending 
on what you need, where your app will run, the workload you'll have, "the 
marketing-wise boys'" opinion, etc., you should choose one or another.

> Any suggestion or comment is welcome.
> 
> --
> Adolfo J. Millán

---   ---
Eduardo Morras 


[sqlite] whish list for 2016

2016-01-03 Thread Eduardo Morras

I left the list for some days and found this subject discussed and finished!!

Happy New Year 2016 to every colister and sqlite developer.

I add my list and a tiny explanation.

- Lock tables/indexes on cache (make them unevictable)

When a query uses a big table, other tables/indexes used in that query, or in 
other queries, are dropped from the SQLite cache. Locking content in the cache 
could boost those queries that "fight" with a big table. Marking a table as 
non-cacheable could do the same in my use case.

As a working example, copied from my own code, a SQLite filesystem that stores file 
content as a blob or as an external file:

CREATE TABLE `file` (  \
`fid`   INTEGER,   \
`size`  INTEGER NOT NULL,  \
`permission`INTEGER NOT NULL DEFAULT 700,  \
`lock`  INTEGER NOT NULL DEFAULT 0,\
`blob_id`   INTEGER NOT NULL,  \
`owner_id`  INTEGER NOT NULL,  \
`group_id`  INTEGER NOT NULL,  \
`name`  TEXT NOT NULL, \
`external_path` TEXT NOT NULL, \
PRIMARY KEY(fid)   \
); 

CREATE TABLE `fsblob` (\
`bid`   INTEGER,   \
`size`  INTEGER NOT NULL,  \
`fid`   INTEGER NOT NULL,  \
`compressed`INTEGER NOT NULL DEFAULT 0,\
`content`   BLOB,  \
PRIMARY KEY(bid)   \
); \
   \

 Each time a user accesses a file, the "file" table (and others like "directory" 
and "users") is evicted from the cache (the fsblob table fills the cache) and needs 
to be reread and reparsed from disk for the next query. The problem is worse when 
multiple files are accessed. Making the cache bigger works up to some file size, 
but locking the "important" tables in the cache is the correct (I think) fix. As a 
workaround, I used multiple databases, one with the fs metadata, attaching others 
with the file content (fsblob tables), but dropped it because of other technical 
problems.

- Access table row by fileposition

In a read-only scenario (no writer and/or no WAL mode), read and parse the row 
content if we know its absolute position in the database file, or the page number 
plus the offset within the page.

- Lock constant tables on file (stick table rows to page and/or fileposition)

This way, the file position of a row is fixed or easily calculated, and hash 
indexes can be created/used. Right now I'm using a virtual table, derived from Dr. 
Hipp's CSV virtual table of years ago, that creates an in-memory hash index of an 
external CSV file, but it requires my own sqlite3 build with the virtual table code 
and an additional file next to the database.

Additionally, if we can lock constant tables (declared as such by the developer) 
at the beginning of the database file, the mmap feature can cache the 
constant/static data from those tables. In my app this means the file, user, group, 
fs and other tiny tables (changes are rare) could be mmapped.

- Vacuum/backup reorder tables in database file

If we could tell vacuum or backup the order we want the tables to be in the 
database file, then in some scenarios (like mine ;) ) moving tables to the 
beginning (constant data) or the end (a growing timestamped log table, f.ex.) could 
speed up queries.
- Vacuum/backup adds free pages for table/index grow

As far as I know, vacuum and backup write the database tables, indexes and metadata 
without free pages. Adding free pages at the end of the pages that store a 
table or index, and reserving those pages for the growth of that table or index, 
could hold back internal fragmentation.


There are others ideas but... hey 2017 is closer ;)

Thanks to Dr. Richard Hipp, Dan Kennedy and Joe Mistachkin for their 
exceptional work, and to the other colisters for their time and help.

---   ---
Eduardo Morras 


[sqlite] Feature Request: Support for MMAP beyond 2GB

2015-10-31 Thread Eduardo Morras
On Fri, 30 Oct 2015 12:01:15 -0700
Roger Binns  wrote:

> https://sqlite.org/mmap.html
> 
> SQLite supports memory mapping databases, but only does so for the
> first 2GB of the file.  My databases are around 4 to 20GB, and
> completely fit within RAM on my 64 bit systems.  The 2GB mmap limit
> means that only a portion of the file benefits from the improved
> performance, and much effort is wasted copying the data around bits of
> memory.

This limit is set at compile time with the SQLITE_MAX_MMAP_SIZE compile option. I 
don't see anything in pager.c preventing it from being bigger than 2GB; it's a 
64-bit integer. Try compiling with -DSQLITE_MAX_MMAP_SIZE=21474836480 and use a copy 
or backup, not the original database.

> Roger

---   ---
Eduardo Morras 


[sqlite] Multiple connections to in-memory DB and virtual tables

2015-10-06 Thread Eduardo Morras
On Tue, 6 Oct 2015 17:39:20 +
Hick Gunter  wrote:

> AFAICT the FROM clause is superflous, as the function has no
> (supported) way of detecting which table(s) the FROM clause contains.
> What is your "reindex_virtual_table()" function (I assume it is a
> user implemented function) supposed to do?

I wrote it thinking it was self-explanatory, but now I think I wrote it too fast.

reindex_virtual_table() is a function defined inside the virtual table code that 
forces an internal reindex of its data. 

In FTS3/4, for example, you can do an "INSERT INTO 
fts_virtual_table(fts_virtual_table) VALUES('rebuild');" to reindex the FTS 
virtual table.

And I say I wrote it too fast because a virtual table doesn't need to have an 
internal index; it depends on what it is for and its implementation.

> -----Original Message-----
> From: Eduardo Morras [mailto:emorrasg at yahoo.es]
> Sent: Tuesday, 06 October 2015 19:08
> To: sqlite-users at mailinglists.sqlite.org
> Subject: Re: [sqlite] Multiple connections to in-memory DB and
> virtual tables
> 
> On Tue, 6 Oct 2015 15:39:08 +0100
> Simon Slavin  wrote:
> 
> > There are also things Virtual Tables can't do.  For instance you
> > cannot index a Virtual Table using SQL commands.
> 
> Does "SELECT reindex_virtual_table() FROM virtual_table" count as SQL
> command only?
> 
> 
> ---   ---
> Eduardo Morras 
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> 
> 
> ___
>  Gunter Hick
> Software Engineer
> Scientific Games International GmbH
> FN 157284 a, HG Wien
> Klitschgasse 2-4, A-1130 Vienna, Austria
> Tel: +43 1 80100 0
> E-Mail: hick at scigames.at
> 
> This communication (including any attachments) is intended for the
> use of the intended recipient(s) only and may contain information
> that is confidential, privileged or legally protected. Any
> unauthorized use or dissemination of this communication is strictly
> prohibited. If you have received this communication in error, please
> immediately notify the sender by return e-mail message and delete all
> copies of the original communication. Thank you for your cooperation.
> 
> 
> _______
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras 


[sqlite] Multiple connections to in-memory DB and virtual tables

2015-10-06 Thread Eduardo Morras
On Tue, 6 Oct 2015 15:39:08 +0100
Simon Slavin  wrote:

> There are also things Virtual Tables can't do.  For instance you
> cannot index a Virtual Table using SQL commands.

Does "SELECT reindex_virtual_table() FROM virtual_table" count as an SQL 
command?


---   ---
Eduardo Morras 


[sqlite] Detect if db is already opened by another process?

2015-09-23 Thread Eduardo Morras
On Wed, 23 Sep 2015 18:01:47 +0200
Michael Schlenker  wrote:

> Hi,
> 
> i just wondered if there is an API to detect if a sqlite database file
> is already opened by another process.
> 
> I can make the assumptions that:
> 
> 1. WAL mode is in use
> 2. Linux and Windows only
> 3. No network filesystems
> 4. I only care if the access is done by another SQLite library,
>not simple open() calls.
> 
> I didn't see any explicit API to check for this, but assumed there
> might be some way to find out via the .shm files?
> 
> Usecase is a server process that keeps an SQLite DB open while it is
> running and a commandline tool that manipulates the same DB file for
> maintenance tasks. The maintenance tool should not change the DB if
> the server is running.
> 
> I could of course do explicit locking via other means to mediate
> access (actually i do that now, but it is not really elegant), but if
> there is an SQLite API way to do it, it would be nicer.
> 
> Any good hints?

If the other process is yours, you can use PRAGMA user_version to store a 
reference count of the number of times the db is opened, increasing it each time 
you open it and decreasing it each time you close it.
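A sketch of that counter in Python. Note this is cooperative and not crash-safe (a crash leaves the counter stale), and the shared in-memory URI is only so the example is self-contained:

```python
import sqlite3

URI = "file:counted?mode=memory&cache=shared"  # shared: both opens see one db

def open_counted(uri):
    con = sqlite3.connect(uri, uri=True)
    n, = con.execute("PRAGMA user_version").fetchone()
    con.execute("PRAGMA user_version = %d" % (n + 1))  # pragmas can't be bound
    return con

def close_counted(con):
    n, = con.execute("PRAGMA user_version").fetchone()
    con.execute("PRAGMA user_version = %d" % (n - 1))
    con.close()

a = open_counted(URI)
b = open_counted(URI)
count, = a.execute("PRAGMA user_version").fetchone()      # both still open
close_counted(b)
remaining, = a.execute("PRAGMA user_version").fetchone()  # back down by one
```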

> 
> Michael

---   ---
Eduardo Morras 


[sqlite] Native sqlite4 on FreeBSD

2015-09-13 Thread Eduardo Morras
On Fri, 11 Sep 2015 17:12:50 +0300
Valentin Davydov  wrote:

> On Thu, Sep 10, 2015 at 01:30:37PM +0200, Eduardo Morras wrote:
> > 
> > Use gmake to compile.
> 
> It didn't work either. Finally I've just installed some brand new
> linux on a nearby virtual machine, made there make -f
> Makefile.linux-gcc and thoroughly repeated it's output line-by-line
> on my FreeBSD while replacing gcc by cc. Eventually it compiled well
> (with almost the same set of warnings) and the result seems to work
> (at least within my own coverage).
> 
> Thanks to the high general coding style, there were only two
> idiosyncrasies to be corrected: lack of the system-wide malloc.h
> (which is replaced by unistd.h in FreeBSD) and yet another lack of
> the fdatasync() syscall, which is already carefully screened from the
> sources by a special symbol called __ANDROID__ ;-)

No, on FreeBSD malloc() is declared in the libc header stdlib.h.

The fdatasync() syscall is a Linuxism and doesn't exist in FreeBSD; use fsync(). 
Note that FreeBSD's UFS2 filesystem with soft updates takes care of separating the 
metadata and data parts of a file write/update/delete.

> By the way, clang kindly revealed a couple of forgotten "unsigned" in
> the sources, which could (and did in the past) produce some nice
> bugs. Attention to developers.
> 
> > Note that there isn't a port,
> 
> I know. Despite the code being frozen for almost a full year, nobody
> wants to take a trouble of maintaining FreeBSD port (perhaps me too).

It's experimental code; I played with it before, but don't use it in any project. 
A port makes no sense for now, I think.

> Valentin Davydov.


---   ---
Eduardo Morras 


[sqlite] Improving SQLite performance over a network

2015-09-10 Thread Eduardo Morras
On Thu, 10 Sep 2015 18:24:32 + (UTC)
Mike McWhinney  wrote:

> Hello,
> I am using SQLite in a Windows environment. The file storing the
> database is on a Windows 2008 server. All other file access to this
> server is reasonably fast. However, when doing simple SELECTs and
> INSERTs/UPDATEs it is noticeably slower on a network. I am talking
> about 50 records. Now I have read that setting some of the PRAGMAs
> can improve speed, but at the cost of data loss should a power failure
> or some other hardware condition occur. I am willing to risk this if
> the performance of the SQLite database access can be increased
> considerably. Here is what I am doing in code (this is C#):
> 
> string connectionString = "URI=file:mydb.db; Default Timeout=15;
> PRAGMA busy_timeout = 600";
> SQLiteConnection sqConn = new SQLiteConnection(connectionString);
> sqConn.Open();
> 
> Then immediately after opening the database, I'm running this code:
> 
> string sqlPragma = "PRAGMA cache_size = 16000; PRAGMA synchronous =
> OFF; PRAGMA journal_mode = MEMORY;";
> SQLiteCommand sqlite_command = OMConnection.sqConn.CreateCommand();
> sqlite_command.CommandText = sqlPragma;
> sqlite_command.ExecuteNonQuery();
> 
> This sets up the pragmas for the cache size, synchronous mode and
> journal mode.
> 
> I do not notice any difference in performance with the pragmas vs.
> without them. Am I doing something wrong? Are there any other
> documented PRAGMAs which might help improve performance, or are there
> any other measures I can use to get faster performance over a network
> file vs. a local file?

Try "PRAGMA temp_store = 2;". In one project I use sqlite3 as a network db and it 
is, besides cache_size and page_size, the most important pragma for improving 
performance.
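The pragmas in question, as a minimal sketch in Python:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# temp_store = 2 (MEMORY) keeps temporary tables and sort areas off the
# slow network filesystem; tune cache_size and page_size alongside it.
con.execute("PRAGMA temp_store = 2")
con.execute("PRAGMA cache_size = 16000")
ts, = con.execute("PRAGMA temp_store").fetchone()
```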

> Thanks
> Mike
> ___ sqlite-users mailing
> list sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras 


[sqlite] sql query

2015-09-10 Thread Eduardo Morras
On Thu, 10 Sep 2015 13:17:03 -0400
Héctor Fiandor wrote:

> Dear members:
> 
> I am trying to use a SQL statement like this:
> 
>   fdm.tNegSerAct.SQL:='SELECT * FROM NegSerAct 
> 
>   WHERE codNegSerAct >=desde AND codNegSerAct <=hasta 
> 
>   ORDER BY codNegSerAct';
>   but the trouble is that the variables 'desde' and 'hasta' are
> strings and fixed previously.
>  I have learned the SQLdb Tutorial2 but the offered solution doesn't
> work.
>
>  I will appreciate any help in the solution of this problem.

It looks like you're using Lazarus/FreePascal for it, but there's no difference 
from C code. The 'desde' and 'hasta' variables must appear as parameter names that 
the sqlite3_bind_*() functions can recognize and replace with the variable values. 
Check http://www.sqlite.org/c3ref/bind_blob.html as a reference. Valid parameter 
forms are '?', '?NNN', ':VVV', '@VVV' and '$VVV', where '?' is the anonymous form, 
'NNN' is a number from 1 to 999 and 'VVV' is an alphanumeric name. In your case, 
use ':desde' and ':hasta'.
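The same binding shown in Python's sqlite3 module (the table contents are invented), to illustrate the named-parameter mechanics:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE NegSerAct(codNegSerAct TEXT);
INSERT INTO NegSerAct VALUES ('A01'), ('B05'), ('C09');
""")
# :desde and :hasta are filled in at execute time; the SQL text never changes.
rows = con.execute(
    "SELECT codNegSerAct FROM NegSerAct "
    "WHERE codNegSerAct >= :desde AND codNegSerAct <= :hasta "
    "ORDER BY codNegSerAct",
    {"desde": "A01", "hasta": "B99"},
).fetchall()
```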

>  
> 
>  Thanks in advance
> 
>  
> 
> Ing. Héctor Fiandor
> 
> hfiandor at ceniai.inf.cu
> 
>  
> 
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras 


[sqlite] Native sqlite4 on FreeBSD

2015-09-10 Thread Eduardo Morras
On Thu, 10 Sep 2015 14:22:24 +0300
Valentin Davydov  wrote:

> Hi, all!
> 
> How to build native sqlite4 (preferrably statically linked shell only,
> without any extensions) on a recent version of FreeBSD? It has clang 
> instead of gcc as a default system compiler, so makefiles bundled with
> sqlite sources don't work.

Use gmake to compile. Note that there isn't a port; sqlite4 is in a development 
state and not a final product.

> Valentin Davydov.

---   ---
Eduardo Morras 


[sqlite] Using collation instead a virtual table

2015-09-10 Thread Eduardo Morras
On Wed, 9 Sep 2015 09:54:28 -0400
Igor Tandetnik  wrote:

> On 9/9/2015 6:54 AM, Eduardo Morras wrote:
> > Yes, the comparison operators work correctly and the b-tree binary
> > search should give correct answers, the only tweak is in xCompare,
> > that returns 0 when left(x) and rigth($value) expressions distance
> > is lower than a threshold.
> 
> A comparison like this would not generally be a proper collation. The 
> equivalence relation it induces is not transitive - it's possible to 
> have A == B and B == C but A != C (when A is "close enough" to B and
> B is "close enough" to C, but A and C are just far enough from each
> other).

Yes, in my first mail I noted that, but I don't think it's a problem because:

a) xCompare(A,C) still returns A<C or A>C (-1/+1),
b) the query always returns only one value, 
c) there is no ORDER BY in the query,
d) the hashed data are far enough apart / disperse, and I expect no new additions.

I'll compare the collation vs the virtual table with the test cases; if it fails or 
is slower I'll mark the collation branch as closed.

Thanks

> -- 
> Igor Tandetnik

---   ---
Eduardo Morras 


[sqlite] Using collation instead a virtual table

2015-09-09 Thread Eduardo Morras
On Tue, 8 Sep 2015 15:42:28 -0400
Richard Hipp  wrote:

> On 9/8/15, Eduardo Morras  wrote:
> >
> >
> > Hello,
> >
> > I have a virtual table that implements query perceptual hashing data
> > [1]. Now I'm thinking about converting the virtual table
> > implementation in a collation on a normal sqlite3 table, but
> > collation requieres that '=','<' and '>' be well defined by obeying
> > the rules cited on create_collation() page[2]. Sometimes, rule 2
> > may not be true, but I always query for '=',
> 
> Yes, but under the hood, SQLlite never does an == query on the b-trees
> even if you ask for a == query in the SQL.  Instead, the b-trees are
> queried using one of >, >=, <, or <=.  A query of the form:
> 
>  x=$value
> 
> Gets translated (at the b-tree layer) into
> 
>  x>=$value AND x<=$value
> 
> So it is *very* important that the comparison operators all work
> correctly on your collating sequence function.  If they don't, then
> SQLite will give incorrect answers.

Yes, the comparison operators work correctly and the b-tree binary search 
should give correct answers; the only tweak is in xCompare, which returns 0 when 
the distance between the left (x) and right ($value) expressions is lower than a 
threshold. I'll begin with the implementation and test cases; I expect it to be 
faster than the virtual table.

Thanks

---   ---
Eduardo Morras 


[sqlite] Using collation instead a virtual table

2015-09-08 Thread Eduardo Morras


Hello, 

I have a virtual table that implements querying perceptual-hash data[1]. Now 
I'm thinking about converting the virtual table implementation into a collation 
on a normal sqlite3 table, but a collation requires that '=', '<' and '>' be well 
defined, obeying the rules cited on the create_collation() page[2]. Sometimes 
rule 2 may not be true, but I always query for '=', never need sorted query 
output, and the result is unique.
If I calculate the perceptual hash of an input, I want to get the closest 
(minimal-distance) hash in the table when testing for equality. Can I use a 
collation in this case?

[1] Perceptual hashing: a hash function for which similar input data has equal or 
similar hashes. 
[2] http://www.sqlite.org/c3ref/create_collation.html
1. If A==B then B==A.
2. If A==B and B==C then A==C.
3. If A<B then B>A.
4. If A<B and B<C then A<C.


[sqlite] Performance problems on windows

2015-08-28 Thread Eduardo Morras
On Fri, 28 Aug 2015 12:55:00 +
Jakub Zakrzewski  wrote:

> Hi,
> 
> just and update here.
> It seems that the newer libraries perform worse when we use multiple
> worker threads. I don't know why and I don't have time to investigate
> it any further. We'll stay with the old 3.7.16.2 for now.

Perhaps the newer version has different compile options or pragma values. As a 
fast check, execute "PRAGMA compile_options" and post the output here, so we can 
see whether some option could cause it. I don't know of a pragma that lists all 
pragma values the way PRAGMA compile_options does, but it would be a nice addition 
for debugging purposes.
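
For reference, dumping the build options takes only a few lines; a sketch with 
Python's bundled sqlite3 (the option list naturally varies between builds):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Each row of PRAGMA compile_options is one SQLITE_* option the library
# was built with, minus the "SQLITE_" prefix.
opts = [row[0] for row in conn.execute("PRAGMA compile_options")]
print(sqlite3.sqlite_version)
for opt in opts:
    print(opt)  # e.g. THREADSAFE=1
```

Comparing this output between the fast and slow builds is the quickest way to 
spot a configuration difference.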

> Thank you for your assistance. 
> 
> --
> Gruesse,
> Jakub
> 

---   ---
Eduardo Morras 


[sqlite] SQlite database access over wireless network

2015-08-24 Thread Eduardo Morras
On Mon, 24 Aug 2015 16:43:40 + (UTC)
Mike McWhinney  wrote:

> Hello,
> I am trying to troubleshoot a problem that I'm having with a program
> using SQLite over a wireless network. I know that wireless networks
> can have their own sets of issues that cause database access problems.
> What is happening is that something that takes 1 or 2 seconds on a
> wired network will take 15-20 seconds over wireless. 
> 
> I am using System.Data.SQLite, which is based on SQLite version
> 3.8.8.3 (version 1.0.96.0) with C# Visual Studio. The program is
> installed and runs off a network. The database is also on a network.
> The reads are somewhat slower running on a wired network than when
> run on a local machine. However, when run over wireless it is
> significantly slower. These are mostly reads on fairly simple
> queries. 
> 
> Are there any pragmas that may be used to reduce the latency or to
> allow the reads to process faster across a network? Or are there any
> other tips or tricks that may be used to speed up access? This is a
> multi-user database with about 2-10 users reading/writing data. Most
> of the time, the client application sits idle. The SQLite
> connections are opened only on demand, then closed after I/O
> operations. However this opening and closing is kept to a minimum.
> Thanks
> Mike

As Simon Slavin and R. Smith told you, this is not a scenario (WiFi plus 
concurrent r/w access) in which SQLite will work well. Each time SQLite needs to 
create a temp file, lock a file, delete a file, etc., it must wait for the network 
and the remote OS. That said, if you really want to work with SQLite, don't use a 
direct System.Data.SQLite connection. Instead, send the queries (as text) to a 
process that runs SQLite locally, queues the queries, executes them in order, and 
returns the results to each user. It's a medium-complexity project, and some 
corner cases must be considered. I'd try Postgres instead.
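
The "local process that queues and executes queries" idea reduces, at its core, 
to a single worker that owns the connection. A minimal in-process sketch in 
Python (a stand-in for the real C#/network setup; a production version would add 
the networking, error handling, and transaction control):

```python
import queue
import sqlite3
import threading

def worker(db_path, requests):
    # Sole owner of the SQLite connection: executes requests strictly
    # in arrival order, so clients never contend for file locks.
    conn = sqlite3.connect(db_path)
    while True:
        item = requests.get()
        if item is None:          # shutdown sentinel
            break
        sql, params, reply = item
        try:
            reply.put(conn.execute(sql, params).fetchall())
        except sqlite3.Error as exc:
            reply.put(exc)
    conn.close()

requests = queue.Queue()
threading.Thread(target=worker, args=(":memory:", requests)).start()

def run(sql, params=()):
    # Client side: submit the SQL text plus a private reply queue.
    reply = queue.Queue()
    requests.put((sql, params, reply))
    result = reply.get()
    if isinstance(result, Exception):
        raise result
    return result

run("CREATE TABLE t (x)")
run("INSERT INTO t VALUES (?)", (1,))
rows = run("SELECT x FROM t")
print(rows)  # -> [(1,)]
requests.put(None)                # stop the worker
```

Over a network the same shape applies, with the queue replaced by a socket 
protocol.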



---   ---
Eduardo Morras 


[sqlite] Wierd Locking problem

2015-08-24 Thread Eduardo Morras
On Mon, 24 Aug 2015 16:03:24 +0200
Luc Andre  wrote:

> Hi All,
> 
> Our web servers use a daily updated sqlite database file.
> 
> The file is opened by multiple (apache/PHP) threads but always in
> read only mode:
> 
> $db = new SQLite3(DB_FILE, SQLITE3_OPEN_READONLY);
> 
> The file itself has no write access.
> 
> -rw-r--r-- 1 root root 225759232 Aug 24 13:43 db.sqlite
> 
> But sometimes we get PHP warnings:
> 
> PHP Notice: SQLite3Stmt::execute(): Unable to execute statement:
> database is locked
> SQLite3::prepare(): Unable to prepare statement: 5, database is locked
> SQLite3::querySingle(): Unable to execute statement: database is
> locked
> 
> We can not understand how a read only file can get locked.

Some hints (some of them from the documentation):

a) If you use WAL mode, you can't open the database in read-only mode; it will be 
opened in read-write mode.
b) If you have triggers that modify, insert, or delete data, you'll get a locked 
db if one attempts to fire while another is running.
c) If you use WAL mode and the WAL checkpoint isn't in passive mode, you'll get a 
busy db.
d) Check performance in serialized threading mode (2) instead of multi-thread 
mode (1).
e) If a journal size limit is set, whatever the journal mode, a db lock happens.
f) Some pragmas lock the db, e.g. updating user_version, or cache_spill=on.
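
Regarding hint (a), read-only intent can be made explicit by opening the file via 
a URI with mode=ro and verifying that writes are refused. A sketch with Python's 
sqlite3 (the file name is made up, and a POSIX-style path is assumed):

```python
import os
import sqlite3
import tempfile

# Create a small database file, then reopen it strictly read-only.
path = os.path.join(tempfile.mkdtemp(), "db.sqlite")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()
conn.close()

ro = sqlite3.connect("file:" + path + "?mode=ro", uri=True)
rows = ro.execute("SELECT x FROM t").fetchall()
write_rejected = False
try:
    ro.execute("INSERT INTO t VALUES (2)")   # must fail on a ro connection
except sqlite3.OperationalError:
    write_rejected = True
print(rows, write_rejected)  # -> [(1,)] True
```

PHP's SQLITE3_OPEN_READONLY flag plays the same role as mode=ro here.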


> Any hint / fix ?
> 
> Regards,
> 
> Luc
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras 


[sqlite] When sqlite3_close() returns SQL_BUSY

2015-08-22 Thread Eduardo Morras
On Sat, 22 Aug 2015 05:07:55 -0500
Jeff M  wrote:

> The doc page for SQLITE_BUSY covers SQLITE_BUSY when there are
> multiple connections to the database.
> 
>   SQLite Result Codes (SQLITE_BUSY)
>   http://www.sqlite.org/rescode.html#busy
> 
> But, I have only one connection.  I believe the case where
> SQLITE_BUSY is returned by sqlite_close() due to unfinalized prepared
> statements should be mentioned there.

Perhaps you forgot to call sqlite3_finalize() on at least one prepared statement 
in your code.

> Jeff


---   ---
Eduardo Morras 


[sqlite] libtclsqlite3 assistance

2015-08-22 Thread Eduardo Morras
On Fri, 21 Aug 2015 20:19:38 -0700
jungle Boogie  wrote:

> Hi Dr. H,
> On 21 August 2015 at 14:23, Richard Hipp  wrote:
> > On 8/21/15, jungle Boogie  wrote:
> >>
> >> Is it possible to compile in libtclsqlite3?
> >>
> >
> > Dunno.  But you can compile the libtclsqlite3.so yourself:
> >
> > make tclsqlite3.c
> > gcc -fPIC -shared -I. -o libtclsqlite3.so tclsqlite3.c
> 
> 
> Made it fine but compiling not so much:
> 
> /usr/local/bin/gcc48 -fPIC -shared -I. -o libtclsqlite3.so
> tclsqlite3.c tclsqlite3.c:162240:17: fatal error: tcl.h: No such file
> or directory
>  #include "tcl.h"
>  ^
> compilation terminated.
> 
> I have tcl.h here:
> /usr/local/include/tcl8.6/generic/tcl.h
> /usr/local/include/tcl8.6/tcl.h

Add /usr/local/include/tcl8.6 to the include path:

gcc -fPIC -shared -I. -I/usr/local/include/tcl8.6 -o libtclsqlite3.so 
tclsqlite3.c

---   ---
Eduardo Morras 


[sqlite] pragma temp_store_directory is deprecated, what is the alternative?

2015-07-31 Thread Eduardo Morras
On Thu, 30 Jul 2015 23:29:49 +
Howard Kapustein  wrote:

> >There cannot be a fully portable way, because path specifications
> >are not portable
> Which begs the question, why isn't there an xGetTempFilename VFS
> function in sqlite3_vfs?
> 
> Wouldn't the simplify things? Have the VFS handle it apropos, with
> the default VFS' in SQLite (or an older VFS lacking it) use the
> current hunt sequences for compat

It will not fix the other reason it was marked deprecated: it's not thread 
safe. If one thread changes the temp directory, other threads that use temp 
files will fail and may corrupt the db file. 

>   - Howard

---   ---
Eduardo Morras 


[sqlite] create temporary virtual table

2015-07-21 Thread Eduardo Morras
On Mon, 20 Jul 2015 17:34:06 -0500
Andy Rahn  wrote:

> I see there is no way to create a temporary virtual table.

It depends on how the virtual table is implemented. Your virtual table code's 
xCreate function can create temporary tables; sqlite3 drops them on the 
sqlite3_close() call. Or are you asking about a particular virtual table 
extension, like fts3 or rtree?

> One idea I had was to create a second, in-memory db and attach that,
> then create the virtual table over there.
>
> Currently I have it working where as soon as I open the db, I use
> sqlite_master to find any left over virtual tables from before and
> drop them.
>
> Any other ideas?

Modify the code of your virtual table, or ask the sqlite3 developers if it's one 
of the virtual tables provided with sqlite3.

>  - Andy

---   ---
Eduardo Morras 


[sqlite] Schema-less JSON SQLite DB?

2015-07-16 Thread Eduardo Morras
On Wed, 15 Jul 2015 17:22:36 +0200
Sergej Jurečko  wrote:

> An idea I've had a while ago was to implement functions for json
> documents (using sqlite3_create_function_v2)
> 
> Json would be stored in a text column. You would need to define 2
> functions:
> - docget(document,key1,subval1,subval2,...)
> - docset(document,key1,subval1,subval2,...,val)
> 
> Last parameter of docset is value you wish to set.
> 
> So for instance one would write
> INSERT INTO mytable VALUES (1,'{key : 10, subdoc : {subkey : "a"}}');
> SELECT id,doc FROM mytable WHERE docget(doc,"key") > 10;
> SELECT id,doc FROM mytable WHERE docget(doc,"key","subdoc","subkey")
> = "a"; UPDATE doc FROM mytable WHERE id=1 SET docset(doc,"key",12);
> 
> One could even implement indexes on keys within documents using
> additional tables.
> 
> What do you guys think? Is it stupid, could it be improved?

Perhaps Dr. Hipp's UnQL implementation should be resurrected, or at least used 
as a good/bad example. It has an extension to SQL to query unstructured data 
and a layer on top of sqlite to store JSON data. It's abandonware now, AFAIK.

http://unql.sqlite.org/index.html/wiki?name=UnQL
http://www.dataversity.net/unql-a-standardized-query-language-for-nosql-databases/
http://www.couchbase.com/press-releases/unql-query-language
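
Independently of UnQL, the docget function from the quoted proposal can be 
prototyped today with an application-defined function. A minimal Python sketch 
(the name docget mirrors the proposal; it is not a SQLite built-in):

```python
import json
import sqlite3

def docget(document, *path):
    # Walk the JSON document along the given key path.
    node = json.loads(document)
    for key in path:
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    # SQL functions must return scalars; re-serialize nested values.
    return json.dumps(node) if isinstance(node, (dict, list)) else node

conn = sqlite3.connect(":memory:")
conn.create_function("docget", -1, docget)   # -1 = variable arg count
conn.execute("CREATE TABLE mytable (id INTEGER, doc TEXT)")
conn.execute("INSERT INTO mytable VALUES (1, ?)",
             ('{"key": 10, "subdoc": {"subkey": "a"}}',))
rows = conn.execute(
    "SELECT id FROM mytable WHERE docget(doc, 'subdoc', 'subkey') = 'a'"
).fetchall()
print(rows)  # -> [(1,)]
```

The same pattern works via sqlite3_create_function_v2() in C; indexing keys 
would still need the additional tables the proposal mentions.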


> 
> Sergej


---   ---
Eduardo Morras 


[sqlite] convert a PostgreSQL to sqlite

2015-07-06 Thread Eduardo Morras
On Sun, 5 Jul 2015 14:44:00 +0200
 wrote:

> Is there a way (free, open source, Ubuntu) to convert a
> PostgreSQL-database with data in it to a sqlite database?

Check the VirtualPG virtual table extension: 
https://www.gaia-gis.it/fossil/virtualpg/index

It lets you create and fill a virtual table in sqlite with data from a 
PostgreSQL server.


---   ---
Eduardo Morras 


[sqlite] static sqlite database

2015-06-10 Thread Eduardo Morras
On Wed, 10 Jun 2015 14:13:29 +
Igor Stassiy  wrote:

> Thanks for all your replies. Please let me clarify:
> 
> I need to do certain range operations efficiently and I prefer SQL
> functionality, plus SQLite gives a cross platform file storage
> format. The table having 3.2 megabytes is just an example. I am
> storing much more (orders of 10^5 of points) however the effect is
> similar.
> 
> When I store several tables side by side the effect seems to be worse
> (I assume SQLite leaves the possibility for inserts and hence needs
> to have some space to do them efficiently, please correctly if I am
> wrong). SQLite forums suggest to store tables in separate files to
> avoid vacuum space in between the tables, are there any other methods
> to save space?

Have you looked at the R-Tree extension? https://www.sqlite.org/rtree.html

It creates a table and indexes to store and query latitude and longitude 
efficiently. 

Or spatialite? https://en.wikipedia.org/wiki/SpatiaLite 
http://www.gaia-gis.it/gaia-sins/

It's opensource under MPL.

A page is dedicated to only one thing: table A, table B, internal sqlite3 data, 
index C, whatever. A page can't store data from one table together with data from 
another table or index. This means that if your page size (default 1024 bytes) is 
not chosen wisely (e.g. 64KB = 65536 bytes) and your table uses only a few bytes 
of it (say 128 bytes), the rest of the page stays empty (65536-128 = 65408 bytes) 
for future table growth. There are page header and tail data you must count too, 
so it's a bit less. Note that you want to store a big table with data, so the 
space lost to an unwise page size is minimal. 

You can have empty pages inside the database because you deleted data or dropped 
an index or a table, but in your static db that shouldn't happen. In any case, a 
VACUUM and ANALYZE solve the issue.

I use the trick of storing tables in separate db files and attaching them to the 
main db for speed on mechanical hard disks, because putting those files on 
different disks maximizes the IOPS. Autovacuum and proper db maintenance should 
minimize the vacuum space between tables.
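
The page size is set per database and only takes effect while the database is 
still empty; both it and the number of unused (freelist) pages are visible via 
pragmas. A sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# page_size can only be changed before the database has any content.
conn.execute("PRAGMA page_size = 4096")
conn.execute("CREATE TABLE t (x)")
conn.commit()
page_size = conn.execute("PRAGMA page_size").fetchone()[0]
free_pages = conn.execute("PRAGMA freelist_count").fetchone()[0]
print(page_size, free_pages)  # -> 4096 0
```

On a long-lived file, a growing freelist_count is the signal that VACUUM would 
reclaim space.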

HTH

---   ---
Eduardo Morras 


[sqlite] User-defined types -- in Andl

2015-06-08 Thread Eduardo Morras
On Mon, 8 Jun 2015 15:28:11 +1000
 wrote:

> Thanks for pointing it out, but I knew that the best way to show off a
> language is with examples. That's why there are nine sample Andl
> scripts comprising dozens of individual examples in the Samples
> folder. My guess is if that you're asking me to write examples, the
> real lesson is that I didn't make them easy enough to find.
> 
> I have a formal grammar, but I don't expect anyone to read that. More
> and better examples is the way to go.

No, a big bold No. If I want to implement your language in some product, I need 
the formal grammar.

Learning by example means learning the white rules (the dos); I need to know the 
black rules too (the don'ts) to get full knowledge of the language.

> Regards
> David M Bennett FACS
> 
> Andl - A New Database Language - andl.org


---   ---
Eduardo Morras 


[sqlite] Replace an open database

2015-05-28 Thread Eduardo Morras
On Wed, 27 May 2015 19:07:50 +0200
Dominique Devienne  wrote:

> On Wed, May 27, 2015 at 6:59 PM, Wade, William 
> wrote:
> 
> > (including his own uuid, indicating that this is logically a new
> > file). When existing readers get around to reading again, they will
> > check that uuid, and handle the change in writers "gracefully."
> >
> 
> You can perhaps abuse the Application-Id [1] as your uuid to check.
> But you could also store a real uuid in a 1-row well-known table.
> Just a thought (on that specific point). --DD

Better to use the user_version pragma[2]. The Application-Id identifies your 
sqlite3 db file as belonging to your application, e.g. to the Unix file(1) 
program or similar.

> [1] https://www.sqlite.org/pragma.html#pragma_application_id
[2] https://www.sqlite.org/pragma.html#pragma_schema_version

If your new db has an identical schema, don't modify the schema version. If you 
do, readers must call sqlite3_reset() and sqlite3_prepare() again.
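
The user_version suggested here is a 32-bit integer stored in the database 
header; it can be read and written without touching the schema, so a writer can 
stamp a "generation number" that readers compare on each pass. A sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Writer side: stamp the new generation into the header.
conn.execute("PRAGMA user_version = 42")
# Reader side: compare against the value seen last time to detect
# that this is logically a new file.
version = conn.execute("PRAGMA user_version").fetchone()[0]
print(version)  # -> 42
```

Unlike application_id, user_version is explicitly reserved for application use.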

---   ---
Eduardo Morras 


[sqlite] Virtual Table query - why isn't SQLite using my indexes?

2015-05-16 Thread Eduardo Morras
On Fri, 15 May 2015 18:34:33 +
Eric Hill  wrote:

> Thanks for your reply.
> 
> I went as high as using (number of rows)^4, resulting in a cost of
> ~440 trillion for the unindexed case, along with setting the cost to
> 1 for the indexed case, and it still won't use my index.
> 
> I'd like to step out of my xBestFilter implementation into SQLite
> code to see if I can tell what is going on there, but when I step
> out, the call stack knows where I'm supposed to be, but the debugger
> does not find the right line in sqlite3.c.  I have built sqlite3.c
> simply by adding the amalgamation to my Microsoft Visual Studio 2013 C
> ++ project.  Is there something I can do to make the debugger work?
> 
> I will postpone index creation until the call to xFilter, I reckon,
> once I work out these other issues.  Thanks for the tip!

Perhaps you can disable the unindexed path, forcing SQLite to always use the 
index. That way your virtual table always uses your index.

What I don't know is: 

SQLite can use only one index per table in a query; I don't know whether indexes 
inside virtual tables count toward this limit. I suppose not, because it should 
be an internal virtual-table implementation decision how to return the selected 
data the fastest way. 

If it counts toward the index limit, perhaps SQLite found a faster index when 
joining your virtual table data with the other table's data.

Perhaps I misread something in the documentation.

Dr. Hipp, can you explain whether a virtual table's index counts toward this limit?

> Eric

---   ---
Eduardo Morras 


[sqlite] Regarding SQLITE_PRIVATE

2015-05-15 Thread Eduardo Morras
On Fri, 15 May 2015 17:13:32 +0530
Sairam Gaddam  wrote:

> On Fri, May 15, 2015 at 4:46 PM, Hick Gunter  wrote:
> 
> > The keyword "static" before a function name limits its visibility
> > to the current source file.
> >
> > But many of the PRIVATE functions are not declared static like the
> > "sqlite3VdbePrintOp"
> function.
> If they do declare, can i know where they did that?

In the amalgamation you can search for these defines:

#define SQLITE_PRIVATE static
#define SQLITE_API extern

Some lines up, you can find which .h file they were declared in.

By default, all functions in C have external linkage (if you don't declare them 
static, they are extern), so if you declare them in a .h file they can be called 
from other .c files. If you declare them only at the top of the .c file where 
they are implemented, other .c files have no declaration through which to call 
them. If you declare them static, they can't be called from any other .c file 
at all.

HTH

---   ---
Eduardo Morras 


[sqlite] Please confirm what I THINK I know about blobs

2015-05-11 Thread Eduardo Morras
On Mon, 11 May 2015 14:00:41 +
"Drago, William @ CSG - NARDA-MITEQ"  wrote:


> Correct me if I'm wrong, but the only way to read comments is by
> reading back the entire CREATE TABLE string, correct?

Yes, you need to parse it. Search for '--' and '\n'; the characters between them 
are the comment. 
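
That parse is essentially a one-liner with a regular expression; a sketch over a 
table like the one in this thread:

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE blob_table (
 ModelNo TEXT,  -- e.g. SO-239
 VSWR BLOB      -- double
)""")
# sqlite_master stores the CREATE TABLE text verbatim, comments included.
sql = conn.execute(
    "SELECT sql FROM sqlite_master WHERE type='table' AND name='blob_table'"
).fetchone()[0]
comments = re.findall(r"--\s*(.*)", sql)
print(comments)  # -> ['e.g. SO-239', 'double']
```

To map each comment back to its column, split the CREATE text on lines or commas 
first and apply the same pattern per fragment.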

I thought you needed it only for documentation purposes, not for 
automatic/dynamic programming.

> Thanks,

---   ---
Eduardo Morras 


[sqlite] Please confirm what I THINK I know about blobs

2015-05-09 Thread Eduardo Morras
On Sat, 09 May 2015 06:09:41 -0400
William Drago  wrote:

> All,
> 
> Say you encounter a blob in a database. There's no way to 
> tell if that blob carries bytes, floats, doubles, etc, correct?
> 
> Assuming the above is true, then is it always prudent to 
> store some metadata along with your blobs so that they can 
> be identified in the future?
> 
> Example table:
> 
> ModelNo TEXT (e.g. SO-239)
> SerialNo TEXT (e.g. 101)
> VSWR BLOB (e.g. x'feab12c...')
> VSWR_Type TEXT (e.g. double)
> 
> 
> Does this make sense?

You can use SQL comments in CREATE TABLE; those comments aren't deleted from the 
SQLITE_MASTER table, which you can query like a normal table.

CREATE TABLE blob_table (
 ModelNo TEXT,  -- e.g. S0-239
 SerialNo TEXT, -- e.g. 101
 VSWR BLOB  -- double, e.g. x'feab12c'
);

SELECT sql from sqlite_master where type='table' AND tbl_name='blob_table';

will return

CREATE TABLE blob_table (
 ModelNo TEXT,  -- e.g. S0-239
 SerialNo TEXT, -- e.g. 101
 VSWR BLOB  -- double, e.g. x'feab12c'
)

(Note that ';' is deleted)

I checked it in the Firefox SQLite Manager extension.

> Thanks,
> -Bill


---   ---
Eduardo Morras 


[sqlite] SQLite queries

2015-05-07 Thread Eduardo Morras
On Thu, 7 May 2015 11:25:33 -0500
John McKown  wrote:

> On Thu, May 7, 2015 at 10:01 AM, Stephen Chrzanowski
>  wrote:
> 
> > Although I can understand the sarcasm you're sending out, a
> > client/server infrastructure would be an interesting task for
> > SQLite to do, but to answer the OP, no, SQLite isn't inherently
> > designed to be a client/server.
> >
> 
> And not really that difficult. Embed SQLite into some "server code"
> to which a client connects, probably via TCP/IP or some other IPC
> method. The client connects to the server, which uses the embedded
> SQLite to access the appropriate SQLite data base file. I would guess
> that the server could even be multi-threaded with each thread
> controlling a separate SQLite data base file.

Do you mean creating a NewSQL shared-nothing distributed database with SQLite3, 
using something like nanomsg or ZeroMQ to distribute workload and messages, and 
graph theory to manage the whole headless distributed server?

Don't try to do that. It's far more complex than you think, and even Pandora 
wouldn't open it. I see now that Stephen answered and cites some of the problems 
you may encounter in his "There are MANY things to be taken care of..." paragraph.

---   ---
Eduardo Morras 


[sqlite] Possible bug with locking/retying

2015-05-03 Thread Eduardo Morras
On Sun, 3 May 2015 01:18:11 +0100
Simon Slavin  wrote:

> In searching for something else I came across this:
> 
> <http://beets.radbox.org/blog/sqlite-nightmare.html>
> 
> I don't like the fact that it's undated.  For all I know this is
> about a three year old bug which has long since been patched.
> 
> I understand the description, but not what SQLite does internally,
> and I don't know whether it was ever reported to the SQLite dev
> team.  Would someone like to take a look ?

I think it's a misuse of Sqlite and not a real bug. He adds code to ensure that 
only one thread accesses the database in a multithreaded application. From the 
description, I infer he uses -DSQLITE_THREADSAFE=2 (multithread), where 
-DSQLITE_THREADSAFE=1 (serialized) would solve the problem better, because the 
code that ensures only one thread accesses the database in a multithreaded app 
is then sqlite's own code; there's no need to reinvent the wheel.

Please pay attention to my first two words, "I think": this is my opinion about 
the problem as described, not a guaranteed solution to other problems.

> Simon.
---   ---
Eduardo Morras 


[sqlite] Thoughts about enhancing "PRAGMA quick_check" speed

2015-05-01 Thread Eduardo Morras
On Fri, 01 May 2015 17:58:06 +0200
Jean-Marie CUAZ  wrote:

> Hello,
> 
> We use Sqlite + Tcl/Tk in a payroll application and everything works 
> great at high speed !
> 
> In this application, "PRAGMA quick_check" are processed in a few
> places as a security measure :
> - before processing backup/restore operations.
> - during the process of login into the application.
> 
> The size of a database file for an instance of our application vary
> from 100 MB to 700 MB.
> 
> With a "PRAGMA quick_check" processed when login into the
> application, the waiting time can reach one minute on an average
> computer (3 Ghz + 7200 rpm HD).
> 
> So I ask this naive question : except if already done, could it be 
> possible to leverage information gathered by Sqlite with "PRAGMA 
> threads" to process "PRAGMA quick_check" in a multi-threaded manner ?

Have you measured whether it's CPU-bound or I/O-bound? If it's limited by I/O, 
using a bigger cache or activating mmap will improve it.

Also, if you only see it at login (I suppose login is done once at app start), 
the system cache doesn't hold the db yet, while when doing it at backup/restore 
the file-system cache already has it. Try something like this:

%cat /path/to/db.file > /dev/null
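
For reference, quick_check is an ordinary pragma that returns the single row 
'ok' on a healthy database, so it is easy to time in isolation and compare 
warm-cache vs. cold-cache runs. A small sketch:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

start = time.perf_counter()
result = conn.execute("PRAGMA quick_check").fetchone()[0]
elapsed = time.perf_counter() - start
print(result)  # -> ok
print("quick_check took %.4f s" % elapsed)
```

Running this twice against the real 100-700 MB file (once cold, once after the 
`cat` trick above) shows how much of the minute is pure I/O.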

> Thank you
> 
> Jean-Marie
> 
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras 


[sqlite] Fwd: Is there an ODBC driver for SQLite

2015-05-01 Thread Eduardo Morras
On Fri, 1 May 2015 11:28:48 -0400
Igor Korot  wrote:

>   Hi, ALL,
> Is there an ODBC driver for SQLite DB?

http://www.ch-werner.de/sqliteodbc/

> I'm trying to see if it will be possible to have an application
> written in dBase + Paradox
> to be converted to dBase + SQLite.
> 
> So, if there is such a driver I would be very interested in requesting
> it and evaluate it.
> 
> Also it would be nice to know whether this driver is OK to use legally
> in the commercial app.
> 
> Thank you for any info you can provide.

---   ---
Eduardo Morras 


[sqlite] Segfault during FTS index creation from huge data

2015-04-30 Thread Eduardo Morras
On Fri, 3 Apr 2015 18:16:18 +0300
Artem  wrote:

> Hi!
> 
> The situation is like that. There's a SQLite database with around 3
> billion records. Each record consists of a certain CHAR field and
> several other additional fields with different types. The file size
> is approx. 340 gb. The maximum content length in the doc field is 256
> symbols, the content is in Russian.


You can extend the fts3/4 tokenizers to recognize Russian stop words[1] and 
exclude them from the FTS index. I don't know Russian, but in English, examples 
of stop words are: 'a', 'the', 'of', etc.

See https://www.sqlite.org/fts3.html#section_8_1 to implement your own 
tokenizer, or extend the unicode one to exclude your stop words. A fast hack 
could be to add code at the end of the icuNext[2] function (file 
ext/fts3/fts3_icu.c) that checks whether the token is in your stop-word list 
and, if so, skips the token [3] (it's a pointer to the current string), 
something like this:

233 *piEndOffset = pCsr->aOffset[iEnd];
234 *piPosition = pCsr->iToken++;
235 if ( token_is_stop_word(ppToken,nByte) ){
236   *ppToken = Pointer_to_empty_string;
237   *pnBytes = 0;
238   *piStartOffset = pCsr->aOffset[iStart+nByte];
239   *piEndOffset = pCsr->aOffset[iEnd+nByte];
240   *piPosition = pCsr->iToken--;
241 }
242 return SQLITE_OK;

N.B. It's a fast hack; I haven't compiled or run it, or checked it against the 
full Sqlite3 documentation.

There are list of stop words available[4][5] on internet.

[1] https://en.wikipedia.org/wiki/Stop_words
[2] 
http://www.sqlite.org/src/info/e319e108661147bcca8dd511cd562f33a1ba81b5?txt=1&ln=177
[3] 
http://www.sqlite.org/src/info/e319e108661147bcca8dd511cd562f33a1ba81b5?txt=1&ln=235
[4] https://code.google.com/p/stop-words/ (Warning!! GPLv3 code)
[5] http://www.ranks.nl/stopwords/russian (Warning!! Unknow licence)

> Thank you.

HTH

---   ---
Eduardo Morras 


[sqlite] Segfault during FTS index creation from huge data

2015-04-29 Thread Eduardo Morras
On Wed, 29 Apr 2015 12:00:02 +0200
Dominique Pellé  wrote:

> I'd suggest to try reproducing the problem on Linux after disabling
> memory overcommit  (at least temporarily for the experimentation):
> 
> * If it returns SQLITE_NOMEM without segfault, then it's not a bug
>   but a limitation of SQLite which uses lots of memory for FTS, or
>   a leak in the application.
> * If it still segfaults, then there is a bug in SQLite or in the
> application.

A fast check of the code inside malloc.c shows this warning inside the void 
*sqlite3Malloc(u64 n) function. I don't know whether FTS uses sqlite3_malloc() 
directly or this one.

/* A memory allocation of a number of bytes which is near the maximum
** signed integer value might cause an integer overflow inside of the
** xMalloc().  Hence we limit the maximum size to 0x7fffff00, giving
** 255 bytes of overhead.  SQLite itself will never use anything near
** this amount.  The only way to reach the limit is with sqlite3_malloc() */



> 
> Regards
> Dominique


---   ---
Eduardo Morras 


[sqlite] Segfault during FTS index creation from huge data

2015-04-29 Thread Eduardo Morras
On Tue, 28 Apr 2015 16:49:46 -0600
Scott Robison  wrote:

> On Tue, Apr 28, 2015 at 4:27 PM, Artem  wrote:
> 
> > > That's not a segfault, though, is it.
> >
> > When I did the same in linux version of SQLite - I saw
> > the "Segmentation Fault" error.
> >
> 
> I never saw a segfault in my case, though I never tried anything on
> any posix style system. It was strictly Windows.

Don't know if it's been asked yet, but are you using 64-bit Windows or a 32-bit 
version?

> -- 
> Scott Robison

---   ---
Eduardo Morras 


[sqlite] Regarding testing

2015-04-27 Thread Eduardo Morras
On Mon, 27 Apr 2015 18:06:31 +0530
Sairam Gaddam  wrote:

> Yeah I read that link previously but how do i get all those test
> cases?

You can buy the professional tests at 
http://www.hwaci.com/sw/sqlite/prosupport.html , from Mr. Hipp's company behind 
Sqlite.

Or you can trust them, or create the tests you need for your use case.

---   ---
Eduardo Morras 


[sqlite] Destroy all evidence of a database

2015-04-23 Thread Eduardo Morras
On Wed, 22 Apr 2015 20:07:45 +0100
Simon Slavin  wrote:

> 
> On 22 Apr 2015, at 7:18pm, Scott Hess  wrote:
> 
> > The only way SQLite can get to the disk is using the vfs, so if the
> > vfs encrypts things, all of the files (main db, temp db, journal,
> > everything) will be encrypted.
> 
> Guys.  Guys.  Guys.  My app doesn't have access to any level below
> standard file system calls.  This is a highly secure system.  Any
> calls which talk directly to hardware (e.g. turn the caps lock light
> on, access SMART diagnostics, try to count the number of displays)
> will fail because my app isn't allowed to do that stuff.  Any attempt
> from my app to mount anything will fail.  My app has access to just
> GUI and files.  I don't have to worry about the security setup at OS
> level, merely not leave files about with sensitive information in
> them.

You can reference-count the number of files it creates and deletes. If it 
creates more files than it deletes, you have a problem. It doesn't involve big 
changes in the sqlite VFS code, and it could even be implemented in the sqlite3 
core as a debug feature.

> 
> Simon.

---   ---
Eduardo Morras 


[sqlite] json_* functions in sqlite

2015-04-22 Thread Eduardo Morras
On Tue, 21 Apr 2015 18:09:33 -0700
Ashish Sharma  wrote:

> Hi
> 
> Many times I store JSON data in sqlite. It will be useful if sqlite
> came with functions which understand JSON. Presto has a nice set
> https://prestodb.io/docs/current/functions/json.html
> 
> I have two questions
> 
> 1. Will sqlite overlords consider adding this to sqlite core?
> 2. If so, what steps are needed for this?

Check unql.sqlite.org. It's abandonware AFAIK, but you can use part of the 
code. Stephan Beal, a co-lister, has cson 
(http://fossil.wanderinghorse.net/wikis/cson/?page=cson); you can use it too.

Also, you can write a module/functions to work with BLOBs and TEXTs. Check how 
sqlar works (http://www.sqlite.org/sqlar/doc/trunk/README.md).


> Thanks
> Ashish
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras 


[sqlite] Destroy all evidence of a database

2015-04-22 Thread Eduardo Morras
On Wed, 22 Apr 2015 13:50:43 +0100
Simon Slavin  wrote:

> Dear folks,

> 
> Assuming no hardware/OS faults is it possible for any other
> SQLite-created files to still exist ?  Journal ?  Temp index ?
> Shared memory ?  Anything ?

a) If the app crashes, it may create a dump file with sqlite cache information.
b) If memory is low, and depending on the OS, part of the app's code/data may 
reside in swap for a while after the app quits.
c) On Windows, antivirus software may lock a temp file and not allow deleting it.
d) If you use ZFS or a similar filesystem, a snapshot of the file system is 
taken every 10-30 secs; you can go back in time and recover the filesystem state 
from 2 days ago (up to 6 months IIRC).

> I have read <https://www.sqlite.org/tempfiles.html>.
> 
> Simon.

---   ---
Eduardo Morras 


[sqlite] SELECT performance vs PRIMARY KEY/UNIQUE constraints

2015-04-20 Thread Eduardo Morras
On Mon, 20 Apr 2015 12:01:59 +0200
Nicolas Boullis  wrote:

> Hi,
> 
> On Mon, Apr 20, 2015 at 11:18:56AM +0200, Clemens Ladisch wrote:
> > If that does not help, we'll have to look at the actual queries (and
> > their EXPLAIN QUERY PLAN output).
> 
> That would certainly help, but I would have to ask for permission to 
> make this information public, or to anonymize even the names of the 
> tables and columns.

Knowing:

a) the kind of data (do you use BLOBs or big TEXT columns?)
b) whether the data was inserted in primary-key order or randomly
c) whether you have a multi-gigabyte db
d) the pragma configuration
e) the column order (put BLOBs and TEXT at the end of the table's columns)

may help us assist you further without exposing internal information.

Also, remember not to use the primary key in multicolumn indexes, and upgrade your 
sqlite3 to a newer version (from 3.7.13 to 3.8.9).

> 
> Thanks for your help,
> 
> -- 
> Nicolas Boullis
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras 


[sqlite] Query times vary between 0.2 s and 30 s for very similar queries - how to trick the optimizer?

2015-03-21 Thread Eduardo Morras
On Wed, 18 Mar 2015 14:43:26 +0100
"Mario M. Westphal"  wrote:

> I'm using 3.8.8.1 on Windows via the "C" interface.
> 
> I have worked with SQLite for several years with good success. And I
> know that no optimizer will ever be perfect. But I've just stumbled
> upon a case where a very similar query requires between 0.2 seconds
> and a whopping 30 seconds.
> 
> I've simplified things as much as possible and included the create
> instructions and queries below.

<...snip...>

> The question is: When JOINing large tables with a temporary table,
> how to ensure that the optimizer can work optimal? Running ANALYZE
> with a temporary table probably does not work, and ANALYZE takes
> about 1 minute on this database so this is not feasible for each
> query.
> 
> I'm glad to have found an apparently working solution (IN instead of
> JOIN) but I wonder if this could be somehow automated by the
> optimizer? Or maybe this is a worst-case for the optimizer?
> 

You can run ANALYZE on any table; try:

ANALYZE temp_table;
SELECT 

You can also stress-test IN to find where its limit lies. I suspect that in this 
case there is no win from using a temporary table instead of an IN; the temp 
table should be faster when it has more columns used in the WHERE clause or 
additional join restrictions.

---   ---
Eduardo Morras 


[sqlite] Sqlite3 tutorial and reference manuals?

2015-02-22 Thread Eduardo Morras
On Sun, 22 Feb 2015 13:10:25 -0500
russ lyttle  wrote:

> Contiki-os sounds interesting, especially if it can be adapted to run
> sqlite3 on an Arduino UNO or MEGA (with attached SD card). Has anyone
> attempted such a thing?
> 
> The "natural" language for the Raspberry PI is Python. For Arduino it
> is C++. I've confirmed that both PERL and Python drop too many
> messages when using my naif DB with sqlite3, R-PI model A, and Digi
> xbee s2 radios.

Sqlite3 works under contiki-os, but (a big BUT) it must be a bare-bones sqlite3 
build compiled with -Os, omitting all advanced features (threading, FTS, extensions, 
foreign keys, etc...). You also need to create a VFS.

---   ---
Eduardo Morras 


[sqlite] Sqlite3 tutorial and reference manuals?

2015-02-22 Thread Eduardo Morras
On Sat, 21 Feb 2015 23:04:53 -0500
russ lyttle  wrote:

> Thanks.
> I doing building automation using Raspberry Pi, Arduino, and zigbee
> devices. I started with Python, but that is too slow, so I would like
> to move to C++. Dropped messages are not acceptable, and response to
> user actions needs to be "timely".
> 
> I'll hold futher questions until after looking at the recommendations
> my by you kind people.

For zigbee devices I use contiki-os. I suggest using C only if your 
devices are low-power.

---   ---
Eduardo Morras 


[sqlite] Appropriate Uses For SQLite

2015-02-19 Thread Eduardo Morras
On Thu, 19 Feb 2015 08:31:13 +0100
Olivier  wrote:

> Hello all,
> 
> https://www.sqlite.org/whentouse.html :
> "The amount of web traffic that SQLite can handle depends on how
> heavily the website uses its database. Generally speaking, any site
> that gets fewer than 100K hits/day should work fine with SQLite. The
> 100K hits/day figure is a conservative estimate, not a hard upper
> bound. SQLite has been demonstrated to work with 10 times that amount
> of traffic.
> 
> The SQLite website (https://www.sqlite.org/) uses SQLite itself, of 
> course, and as of this writing (2015) it handles about 400K to 500K
> HTTP requests per day, about 15-20% of which are dynamic pages
> touching the database. Each dynamic page does roughly 200 SQL
> statements. This setup runs on a single VM that shares a physical
> server with 23 others and yet still keeps the load average below
> 0.1 most of the time."
> 
> --
> 
> it would be interesting to put *all* sqlite.org pages in the
> database, even if it is useless. This would test with 500K HTTP
> requests per day. It will then be possible to modify this paragraph
> and indicate that Sqlite smoothly manages the 500K HTTP requests per
> day of this website, thus about 100 000K SQL statements per day.
> 
> And why not test with a write on each visit, or even on every page
> view? If SQLite can take the load, it would be impressive. It would
> also demonstrate the value of WAL mode.
> 
> With the evolution of SQLite and of the hardware (SSDs,
> microprocessors...), it might be possible.


You can test Drupal with SQLite; IIRC it's Drupal 7. Create a site, or use a 
demonstration site, and run an HTTP benchmark tool against it.

There are other CMSes that can use SQLite as their database (Serendipity, Joomla, 
MediaWiki) that you can play and test with.

> 
> Olivier

---   ---
Eduardo Morras 


[sqlite] Behaviour when subquery behind comparison operator in where-clause returns empty result

2015-02-18 Thread Eduardo Morras
On Wed, 18 Feb 2015 10:23:16 +0100
Clemens Ladisch  wrote:

> gunnar wrote:
> > When I execute a query that has a subquery behind a comparison
> > operator in its where-clause and the subquery returns an empty
> > result, then the result of the complete query is exactly how I want
> > it: also an empty result. Can I assume that this behaviour will
> > stay the same in next versions of sqlite?
> >
> > SELECT *
> > FROM ordercallback
> > WHERE account=@ACCOUNT
> >   AND cb_seq_num>(
> > SELECT cb_seq_num
> > FROM ordercallback
> > WHERE cb_uuid=@CBUUID);
> 
> AFAIK the SQL standard requires such a subquery to return exactly one
> result.
> 
> SQLite allows any number of results; if there are more than one, it
> uses the first one; if there is none, it uses NULL instead.  This
> behaviour is not expected to change because it would break too many
> programs that already rely on it.
> 
> If you want to make this more compatible with other databases, handle
> these cases explicitly:
> 
> ... WHERE cb_seq_num > (
>   SELECT cb_seq_num
>   FROM ordercallback
>   WHERE cb_uuid=@CBUUID
>   UNION ALL
>   SELECT NULL   -- at least one result
>   LIMIT 1)  -- at most one result

Shouldn't it add an ORDER BY cb_seq_num to get the lowest one?

...
WHERE cb_uuid=@CBUUID
ORDER BY cb_seq_num ASC
UNION ALL
...
> 
> 
> Regards,
> Clemens

---   ---
Eduardo Morras 


Re: [sqlite] Performance increase between 3.7 and 3.8

2015-02-03 Thread Eduardo Morras
On Tue, 3 Feb 2015 06:39:02 -0700 (MST)
Jan Slodicka <j...@resco.net> wrote:

> Eduardo Morras-2 wrote
> > A ~8000MB db with example app data. More than 1000 queries, a
> > corpus created as part of test-driven development of the app. We
> > have precalculated the correct results in tables, along with their
> > row counts. No writes, only read queries, and we don't use other
> > improvements like partial indexes. Queries run sequentially, one
> > at a time.
> 
> That means that (complex) queries are substantially faster in v3.8.
> That's important for me, too.

Yes, the 'next generation query planner' helps a lot in queries involving more than 
4 tables, whether temp or normal.

> 
> Thanks.


---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Performance increase between 3.7 and 3.8

2015-02-03 Thread Eduardo Morras
On Tue, 3 Feb 2015 02:43:00 -0700 (MST)
Jan Slodicka <j...@resco.net> wrote:

> Eduardo Morras-2 wrote
> > I use a big test db to make sure new versions of sqlite work
> > properly. With 3.7.15.2 it takes 43 minutes, with 3.8.8.2 on the
> > same hardware 27 minutes; that's 16/43 = 37% less, or 27/43 = 63%
> > of the original time.
> 
> Thanks, Eduardo. Could I ask you for a rough characterization of the
> test performed?

A ~8000MB db with example app data. More than 1000 queries, a corpus created as 
part of test-driven development of the app. We have precalculated the correct 
results in tables, along with their row counts. No writes, only read queries, and 
we don't use other improvements like partial indexes. Queries run sequentially, 
one at a time.

Some queries test our own SQL functions and a virtual table; no improvements are 
expected for them.

The test was done with the same application version in debug mode, recompiled 
with 3.7.15.2, on the same computer. All tests passed.

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Performance increase between 3.7 and 3.8

2015-02-02 Thread Eduardo Morras
On Mon, 2 Feb 2015 10:08:36 -0700 (MST)
Jan Slodicka <j...@resco.net> wrote:

> I know the reports about huge performance increase achieved within
> the last year. (Compliments for that.) However, those numbers ignore
> processor architecture and I/O. My question is a different one.
> 
> What speed difference do you perceive in real-world applications?
> 
> I know that there can't be any answer valid for everybody and for
> every situation...
> 
> What about your experience? Do you perceive better performance due to
> the last SQLite updates?

I use a big test db to make sure new versions of sqlite work properly. With 
3.7.15.2 it takes 43 minutes, with 3.8.8.2 on the same hardware 27 minutes; 
that's 16/43 = 37% less, or 27/43 = 63% of the original time.

---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite3 performance on select count very slow for 16 GB file

2015-01-24 Thread Eduardo Morras
On Tue, 20 Jan 2015 12:12:00 +
"Parakkal, Navin S (Software Engineer)" <navin.parak...@hp.com> wrote:

> Hello,
> 
>I've few questions about sqlite3 , the database it creates.
> Actually I'm finding lot of differences in performance.
> 
>  My story:
> I have this sqlite3 database called  hp.db which is like 100+
> million  records for table1. The size of hp.db on Linux x64 (CentOS
> 7) is like 16 GB. When I do a select count(*) on hp_table1 it takes
> more than 5 mins which is quite a huge time. This file is static for
> now ie we copied it from a production server for analysis. Now I
> create a index by create index nvnhpindex on hp_table1
> (column1,column2) . The primary key of the table hp_table1 is
> (column1,column2). It takes around some time (maybe 20 minutes or
> less , I went for lunch and came back , really didn't note the time).
> Now I do select count(*) on hp_table1 , it takes around 15 secs. This
> is what we want our objective. We want it to be fast. The create
> index has increased the size of hp.db to 18 GB. This is OK with us
> and the customers.
> 
> The problem is this is not a static database. We keep inserting data
> (insert rows every 10 secs or like atleast 1 minute ) and
> occassionally delete rows (like once in 5 days). This is a 24x7
> system.
> 
> 
> So to identify the problem , I created a empty_database similar to
> hp.db with no rows. I created a index on column1,column2 on
> empty_table1 inside empty_database.
> 
> Now I inserted the rows from csv (this CSV was created by .mode csv,
> output myhp.csv, select * from hp_table1).
> 
> The size of database is around 18GB (empty_database) with rows. Now I
> do a select count(*) on empty_table1 (actually it contains lots of
> rows like 100M+ records ) and it takes more than 5 mins. 5 mins is
> too much of a time for us to bear. The customer wants the information
> within a minute.
> 
> 
> Can you please help in resolving this problem ? We are planning to
> deploy this across 1+ nodes on Linux x64 on one customer and many
> other customers are going in the similar direction. 
> 
> How do we go about resolving this ie what should we do to create a
> table with sub minute access for 100-500 million . How do we create
> the indexes ? Any other performance incentives.

Use a trigger on insert and a trigger on delete that maintain the current 
COUNT(*) value in another table. That table can be temporary and stay in memory 
if you want, but then you must do one full COUNT(*) at application startup.

> Some say we should buy/use Oracle but I just am holding onto sqlite3
> assuming it would help me solve our problem.

You could use PostgreSQL, spending part of the Oracle licence cost on better 
hardware and a dinner for the team.

> Regards,
> Navin
> 
> _______
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


---   ---
Eduardo Morras <emorr...@yahoo.es>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

