Re: [sqlite] Partial indexes on JSON properties?

2016-10-04 Thread Richard Hipp
On 10/4/16, Keith Medcalf  wrote:

> This raises another question.  Is there any way to mark a function as
> in-between volatile and deterministic?  Currently, if the deterministic flag
> is not set, the optimizer assumes that the function is truly volatile (it is
> called on every reference, even for duplicated arguments).  What about
> functions that are deterministic (enough) for a statement/transaction but
> are not deterministic enough to be used in an index?
>
> An example would be a function which calls the Windows CheckTokenMembership
> function.  This function returns whether or not the current process token
> contains the provided SID (group/user).  Functions like this are
> deterministic enough for the query optimizer to treat as deterministic, but
> volatile enough that they should not be used to build anything persistent
> (such as a conditional index).
>

Another example is sqlite_version() which is always the same for any
database connection, but might change when you upgrade your
sqlite3.dll, and so cannot be used in an index expression or in the
WHERE clause of a partial index.

Such functions are internally identified as "Slow Change" functions
using the internal flag SQLITE_FUNC_SLOCHNG.  But that flag cannot
currently be set using the public API.
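For illustration only (not part of the original message): the public API exposes just the all-or-nothing SQLITE_DETERMINISTIC flag. In Python's stdlib sqlite3 wrapper this corresponds to the deterministic argument; the function name and values below are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# deterministic=True sets SQLITE_DETERMINISTIC on the underlying
# sqlite3_create_function call; there is no public equivalent of the
# internal SQLITE_FUNC_SLOCHNG ("slow change") flag discussed above.
con.create_function("half", 1, lambda x: x / 2, deterministic=True)

print(con.execute("SELECT half(10)").fetchone()[0])  # 5.0
```

Functions registered this way may be used in index expressions, which is exactly why a "slow change" function must not claim full determinism.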

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Partial indexes on JSON properties?

2016-10-04 Thread Keith Medcalf

On Monday, 3 October, 2016 12:30, Richard Hipp wrote:
> On 10/1/16, Jens Alfke  wrote:
> > the WHERE clause in a CREATE INDEX statement
> > explicitly disallows function calls. Is this limitation something
> > that might be lifted soon?
 
> Deterministic SQL functions are now allowed in partial index WHERE
> clauses, as of a few minutes ago. The current "Prerelease Snapshot"
> (https://www.sqlite.org/download.html) supports this capability.
> Please try it out and report any problems.  Thanks.

This raises another question.  Is there any way to mark a function as
in-between volatile and deterministic?  Currently, if the deterministic flag
is not set, the optimizer assumes that the function is truly volatile (it is
called on every reference, even for duplicated arguments).  What about
functions that are deterministic (enough) for a statement/transaction but are
not deterministic enough to be used in an index?

An example would be a function which calls the Windows CheckTokenMembership
function.  This function returns whether or not the current process token
contains the provided SID (group/user).  Functions like this are deterministic
enough for the query optimizer to treat as deterministic, but volatile enough
that they should not be used to build anything persistent (such as a
conditional index).
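For reference, the announced capability (a deterministic function in a partial-index WHERE clause, here the built-in json_extract()) can be sketched as follows; the table and column names are hypothetical, and Python's stdlib sqlite3 is used for brevity:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE docs(id INTEGER PRIMARY KEY, body TEXT)")

# json_extract() is deterministic, so SQLite (3.15+ with JSON support)
# accepts it in the WHERE clause of a partial index.
con.execute("""CREATE INDEX docs_users ON docs(id)
               WHERE json_extract(body, '$.type') = 'user'""")

con.executemany("INSERT INTO docs(body) VALUES (?)",
                [('{"type":"user"}',), ('{"type":"group"}',)])

n = con.execute("""SELECT count(*) FROM docs
                   WHERE json_extract(body, '$.type') = 'user'""").fetchone()[0]
print(n)  # 1
```

Only rows matching the WHERE clause are entered into the index, which is what makes non-deterministic functions unsafe there.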







Re: [sqlite] How/Where to check if a table exists and read it ?

2016-10-04 Thread Domingo Alvarez Duarte

Hello Richard !

Thanks for reply !

I found the second place that was also enforcing the restriction on
referencing objects in other databases.


Now it seems to work. I'll leave the initialization to user-level code for
now; once the usage stabilizes I'll revisit it to see if it is worth moving
that code into my custom sqlite3.


Cheers !

My changes:

  /*
  **   PRAGMA use_attached_dbs
  **   PRAGMA use_attached_dbs = ON/OFF
  **
  ** The first form reports the current setting for the
  ** use_attached_dbs flag.  The second form changes the use_attached_dbs
  ** flag setting and reports the new value.
  */
  case PragTyp_USE_ATTACHED_DBS: {
int b = -1;
if( zRight ){
  b = sqlite3GetBoolean(zRight, 0);
  sqlite3_limit(db, SQLITE_LIMIT_USE_ATTACHED_DBS, b);
}
b = sqlite3_limit(db, SQLITE_LIMIT_USE_ATTACHED_DBS, -1);
returnSingleInt(v, "use_attached_dbs", b);
break;
  }



Table *sqlite3LocateTableItem(
  Parse *pParse,
  u32 flags,
  struct SrcList_item *p
){
  const char *zDb;
  int use_attached_dbs = sqlite3_limit(pParse->db,
      SQLITE_LIMIT_USE_ATTACHED_DBS, -1);  /* my changes */

  assert( p->pSchema==0 || p->zDatabase==0 );
  if( p->pSchema && !use_attached_dbs ){  /* my changes */
int iDb = sqlite3SchemaToIndex(pParse->db, p->pSchema);
zDb = pParse->db->aDb[iDb].zDbSName;
  }else{
zDb = p->zDatabase;
  }
  return sqlite3LocateTable(pParse, flags, p->zName, zDb);
}


int sqlite3FixSrcList(
  DbFixer *pFix,   /* Context of the fixation */
  SrcList *pList   /* The Source list to check and modify */
){
  int i, use_attached_dbs;
  const char *zDb;
  struct SrcList_item *pItem;

  use_attached_dbs = sqlite3_limit(pFix->pParse->db,
      SQLITE_LIMIT_USE_ATTACHED_DBS, -1);  /* my changes */

  if( NEVER(pList==0) ) return 0;
  zDb = pFix->zDb;
  for(i=0, pItem=pList->a; i<pList->nSrc; i++, pItem++){
if( pFix->bVarOnly==0 ){
      if( pItem->zDatabase && !use_attached_dbs
       && sqlite3StrICmp(pItem->zDatabase, zDb) ){  /* my changes */

sqlite3ErrorMsg(pFix->pParse,
"%s %T cannot reference objects in database %s",
pFix->zType, pFix->pName, pItem->zDatabase);
return 1;
  }
  sqlite3DbFree(pFix->pParse->db, pItem->zDatabase);
  pItem->zDatabase = 0;
  pItem->pSchema = pFix->pSchema;
}
#if !defined(SQLITE_OMIT_VIEW) || !defined(SQLITE_OMIT_TRIGGER)
if( sqlite3FixSelect(pFix, pItem->pSelect) ) return 1;
if( sqlite3FixExpr(pFix, pItem->pOn) ) return 1;
#endif
  }
  return 0;
}



On 04/10/16 17:30, Richard Hipp wrote:

On 10/4/16, Domingo Alvarez Duarte  wrote:

The problem is that I haven't yet found the point where I should intercept
"openDatabase", where I plan to check if there is a table named, for
example, "use_attached_dbs", then attach the databases listed in that
table, and then reread the schema to make the views work.

The place to do so is probably at the end of "openDatabase"; can someone
shed some light here?


The sqlite3Init() routine found at
https://www.sqlite.org/src/artifact/b1140c3d0cf59bc8?ln=355





Re: [sqlite] How/Where to check if a table exists and read it ?

2016-10-04 Thread Richard Hipp
On 10/4/16, Domingo Alvarez Duarte  wrote:
>>
> The problem is that I haven't yet found the point where I should intercept
> "openDatabase", where I plan to check if there is a table named, for
> example, "use_attached_dbs", then attach the databases listed in that
> table, and then reread the schema to make the views work.
>
> The place to do so is probably at the end of "openDatabase"; can someone
> shed some light here?
>

The sqlite3Init() routine found at
https://www.sqlite.org/src/artifact/b1140c3d0cf59bc8?ln=355

-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] How/Where to check if a table exists and read it ?

2016-10-04 Thread Domingo Alvarez Duarte

Hello !

I'm trying to make changes to sqlite to allow multi-database databases
(I mean a database that is only used to anchor attached databases
and to store multi-database views/triggers).


I already managed to add a new pragma "use_attached_databases=ON/OFF" to
allow views/triggers to reference attached databases, and I have it
working (I mean I can create views/triggers that reference other
databases and they are accepted).


The problem is that when I open a database that contains those
views/triggers it shows errors saying:


Error: malformed database schema (v1) - view v1 cannot reference objects 
in database db1


The problem is that I haven't yet found the point where I should intercept
"openDatabase", where I plan to check if there is a table named, for
example, "use_attached_dbs", then attach the databases listed in that
table, and then reread the schema to make the views work.


The place to do so is probably at the end of "openDatabase"; can someone
shed some light here?


P.S.: This is for a custom use of sqlite3, so please don't bother to
reply just to say why I should not be doing this.


Cheers !



Re: [sqlite] Database malformed after 6000 Inserts?

2016-10-04 Thread Simon Slavin

On 4 Oct 2016, at 6:30pm, Roger Binns  wrote:

> On 04/10/16 03:11, Werner Kleiner wrote:
>> ... after 6000 records.
>> 
>> Is there a limitation with huge inserts?
> 
> While there may be "Lite" in the name, SQLite copes very well with
> "huge" stuff.
> 
> That means many many gigabytes in database sizes, many many millions of
> rows, up to 2GB per row etc.  6,000 records probably fits in caches, and
> isn't remotely close to huge as far as SQLite is concerned.

But note also that you can mess yourself up.  So if you think you're going to 
speed things up by setting journal_mode = MEMORY, and you have 8GB of memory 
free, and you do a transaction which takes up 40GB then you have only yourself 
to blame.
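For scale, a sketch of the safe pattern for a batch of this size (default rollback journal, one explicit transaction around the batch); the table is hypothetical, shown here via Python's stdlib sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # a file path works the same way
con.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, v TEXT)")

# One transaction around many inserts: fast, and the rollback journal
# keeps the database recoverable if the process dies mid-batch.
with con:
    con.executemany("INSERT INTO t(v) VALUES (?)",
                    (("row %d" % i,) for i in range(6000)))

print(con.execute("SELECT count(*) FROM t").fetchone()[0])  # 6000
```

6,000 rows per transaction is well within what SQLite handles routinely, as the replies above note.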

Simon.


Re: [sqlite] Database malformed after 6000 Inserts?

2016-10-04 Thread Roger Binns
On 04/10/16 03:11, Werner Kleiner wrote:
> ... after 6000 records.
> 
> Is there a limitation with huge inserts?

While there may be "Lite" in the name, SQLite copes very well with
"huge" stuff.

That means many many gigabytes in database sizes, many many millions of
rows, up to 2GB per row etc.  6,000 records probably fits in caches, and
isn't remotely close to huge as far as SQLite is concerned.

Roger






Re: [sqlite] Pencils down for SQLite 3.15.0

2016-10-04 Thread R Smith

Draft documentation typo:

This page: https://www.sqlite.org/draft/deterministic.html

Under 1. Restrictions on the use of non-deterministic functions
Last paragraph starts with:
"In the cases above, the valued returned by the function is recorded in 
the index b-tree."


I'm assuming this to be the intended correct sentence:
"In the cases above, the *values* returned by the function *are* 
recorded in the index b-tree"




On 2016/10/04 6:21 PM, Richard Hipp wrote:

Our intent is bug-fix changes only on trunk from now until the 3.15.0
release.  Please test the beta available in the "Prerelease Snapshot"
section of the download page:  https://www.sqlite.org/download.html

Release notes: https://www.sqlite.org/draft/releaselog/current.html

Release check-list: https://www.sqlite.org/checklists/315/index

We will begin testing soon.  When the release check-list goes all
green, we will cut the release.  If you have any questions or concerns
please raise them immediately.  Thanks.





[sqlite] Pencils down for SQLite 3.15.0

2016-10-04 Thread Richard Hipp
Our intent is bug-fix changes only on trunk from now until the 3.15.0
release.  Please test the beta available in the "Prerelease Snapshot"
section of the download page:  https://www.sqlite.org/download.html

Release notes: https://www.sqlite.org/draft/releaselog/current.html

Release check-list: https://www.sqlite.org/checklists/315/index

We will begin testing soon.  When the release check-list goes all
green, we will cut the release.  If you have any questions or concerns
please raise them immediately.  Thanks.

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Fastest way to backup a "live" database

2016-10-04 Thread Clemens Ladisch
Eric Grange wrote:
> If all else fails, I could also suspend DB writes during backups (suspending
> DB reads would be more problematic).

With WAL, the backup reader does not block writers.

>> Use the backup API, and copy everything in one step.
>> (The restart-on-write feature should not be necessary with WAL.)
>
> That was what I thought initially, but I can only explain the multi-hours
> backups with it: usually the backup API takes 4-5 minutes. It is just once
> in a while that a very long backup occurs.
>
>> It calls sqlite3_backup_step() with a size of 100 pages.
>
> Ok, so I guess the huge cache is overkill with the default CLI!

No, this is what makes the backup restart.  With a step size of -1,
it would never restart.

Use a tool like the one below to do the backup in one step, without
restarts.


Regards,
Clemens
--

#include <stdio.h>
#include <sqlite3.h>

int main(int argc, char *argv[])
{
sqlite3 *src = NULL;
sqlite3 *dst = NULL;
sqlite3_backup *backup;
int rc;
int ok = 0;

if (argc != 3) {
fputs("I want two file names: source, destination\n", stderr);
goto error;
}

rc = sqlite3_open_v2(argv[1], &src, SQLITE_OPEN_READONLY, NULL);
if (rc != SQLITE_OK) {
fprintf(stderr, "Cannot open %s: %s\n", argv[1], 
sqlite3_errmsg(src));
goto error;
}

rc = sqlite3_open_v2(argv[2], &dst, SQLITE_OPEN_READWRITE | 
SQLITE_OPEN_CREATE, NULL);
if (rc != SQLITE_OK) {
fprintf(stderr, "Cannot open %s: %s\n", argv[2], 
sqlite3_errmsg(dst));
goto error;
}

sqlite3_exec(src, "PRAGMA busy_timeout=1", NULL, NULL, NULL);
sqlite3_exec(dst, "PRAGMA busy_timeout=1", NULL, NULL, NULL);

backup = sqlite3_backup_init(dst, "main", src, "main");
if (backup == NULL) {
fprintf(stderr, "Cannot initialize backup: %s\n", 
sqlite3_errmsg(dst));
goto error;
}

do {
rc = sqlite3_backup_step(backup, -1);
} while (rc == SQLITE_BUSY || rc == SQLITE_LOCKED);

rc = sqlite3_backup_finish(backup);
if (rc == SQLITE_OK) {
ok = 1;
} else {
fprintf(stderr, "Backup failed: %s\n", sqlite3_errmsg(dst));
}

error:
sqlite3_close(dst);
sqlite3_close(src);

return ok ? 0 : 1;
}


Re: [sqlite] Fastest way to backup a "live" database

2016-10-04 Thread Simon Slavin

On 4 Oct 2016, at 3:58pm, Eric Grange  wrote:

> If all else fails, I could also suspend DB writes during backups

If it's easy to do that, then I'd recommend it.  Used in conjunction with the 
SQLite backup API it should provide the simplest solution least likely to 
present a problem.  It should not be necessary to suspend DB reads during 
backups.

Simon.


Re: [sqlite] Fastest way to backup a "live" database

2016-10-04 Thread Eric Grange
> Have you looked at the ext/session extension ?

Yes, but it is a bit more complicated to integrate, and would impose a
penalty during execution as far as I understand: there are quite a few
intermediate states that would be stored by a changeset, but that do not
really need to be preserved at the end of the day.

Those intermediate states could be moved to a secondary DB, but would then
lose foreign keys and other integrity constraints.

This is however an extension that I have been looking at, should the need
for some form of "live" db replication occur.

> I'm sorry.  I misunderstood your question and thought you were just
duplicating the file using OS calls.  The SQLite backup API takes care of
all necessary locking and consistency problems for you. You should be
fine.

Ok, thanks!

> The problem of backing up a changing database is one of the difficult
problems in database management.  Please don't expect an easy solution.

Right now I am tackling it with brute force: a backup usually takes about
4-5 minutes (to a secondary SSD; that offline copy is then uploaded to a
remote server).
It is mostly the odd occurrences when the backup takes several hours that
are problematic.

If all else fails, I could also suspend DB writes during backups (suspending
DB reads would be more problematic).

> Use the backup API, and copy everything in one step.
> (The restart-on-write feature should not be necessary with WAL.)

That was what I thought initially, but I can only explain the multi-hours
backups with it: usually the backup API takes 4-5 minutes. It is just once
in a while that a very long backup occurs.

> It calls sqlite3_backup_step() with a size of 100 pages.

Ok, so I guess the huge cache is overkill with the default CLI!

Thanks


On Tue, Oct 4, 2016 at 4:05 PM, Simon Slavin  wrote:

>
> On 4 Oct 2016, at 2:53pm, Eric Grange  wrote:
>
> > I am going on the assumption that if something fails during backup, the
> > backup itself will be toast anyway, but is that safe otherwise?
>
> No.  You have no locking.  You might copy the beginning of the file before
> a transaction and the end of the file after it, meaning that pointers at
> one part of the file point to things which no longer exist.
>
> The problem of backing up a changing database is one of the difficult
> problems in database management.  Please don't expect an easy solution.
>
> Simon.


Re: [sqlite] Fastest way to backup a "live" database

2016-10-04 Thread Simon Slavin

On 4 Oct 2016, at 3:05pm, Simon Slavin  wrote:

> No.  You have no locking.  You might copy the beginning of the file before a 
> transaction and the end of the file after it, meaning that pointers at one 
> part of the file point to things which no longer exist.

I'm sorry.  I misunderstood your question and thought you were just duplicating 
the file using OS calls.  The SQLite backup API takes care of all necessary 
locking and consistency problems for you. You should be fine.

Simon.


Re: [sqlite] Mac OS Sierra deprecation in bindings

2016-10-04 Thread Richard Hipp
On 10/4/16, Judson Lester  wrote:
> Thanks for the info. Do you mind if I copy some of this information back to
> the go-sqlite3 issues?

I do not mind.

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Fastest way to backup a "live" database

2016-10-04 Thread Clemens Ladisch
Eric Grange wrote:
> Given a fairly large database (dozens of gigabytes), which uses WAL, and is
> being accessed continuously (mostly read transactions, but regular write
> transactions as well), what are the fastest and least disruptive ways to
> back it up?

Use the backup API, and copy everything in one step.
(The restart-on-write feature should not be necessary with WAL.)

> A basic ".backup" from the CLI can occasionally take hours, as it is
> thrown off whenever large write transactions occur.

It calls sqlite3_backup_step() with a size of 100 pages.
I guess you have to implement your own backup tool (or modify the shell).

> I am going on the assumption that if something fails during backup, the
> backup itself will be toast anyway, but is that safe otherwise?

Yes; the backup API uses a transaction, like any other read access.


Regards,
Clemens


Re: [sqlite] Fastest way to backup a "live" database

2016-10-04 Thread Simon Slavin

On 4 Oct 2016, at 2:53pm, Eric Grange  wrote:

> I am going on the assumption that if something fails during backup, the
> backup itself will be toast anyway, but is that safe otherwise?

No.  You have no locking.  You might copy the beginning of the file before a 
transaction and the end of the file after it, meaning that pointers at one part 
of the file point to things which no longer exist.

The problem of backing up a changing database is one of the difficult problems 
in database management.  Please don't expect an easy solution.

Simon.


Re: [sqlite] Fastest way to backup a "live" database

2016-10-04 Thread Domingo Alvarez Duarte

Hello Eric !

Have you looked at the ext/session extension ?

It seems that it can be the answer to your problem. The basic idea is
you'll create a function that will receive the changes made to the
database, and then you can incrementally apply them to the backup database.


This way the memory usage will be low (as long as you do not have millions
of changes in a short period of time).


Cheers !


On 04/10/16 10:53, Eric Grange wrote:

Hi,

Given a fairly large database (dozens of gigabytes), which uses WAL, and is
being accessed continuously (mostly read transactions, but regular write
transactions as well), what are the fastest and least disruptive ways to
back it up?

A basic ".backup" from the CLI can occasionally take hours, as it is
thrown off whenever large write transactions occur.

I have found the following approaches to seem to work, but would like a
confirmation:
- using temp_store = MEMORY
- using synchronous = 0
- using a cache_size as large as possible (as high as possible without
running out of memory)

I am going on the assumption that if something fails during backup, the
backup itself will be toast anyway, but is that safe otherwise?

Also, the precompiled CLI for Windows (sqlite3.exe) of 3.14.2 fails at
around 2 GB RAM, which in my case is about 500,000 pages; is that already
too high, or could it be worth going with a 64-bit CLI?

Thanks!

Eric




[sqlite] Fastest way to backup a "live" database

2016-10-04 Thread Eric Grange
Hi,

Given a fairly large database (dozens of gigabytes), which uses WAL, and is
being accessed continuously (mostly read transactions, but regular write
transactions as well), what are the fastest and least disruptive ways to
back it up?

A basic ".backup" from the CLI can occasionally take hours, as it is
thrown off whenever large write transactions occur.

I have found the following approaches to seem to work, but would like a
confirmation:
- using temp_store = MEMORY
- using synchronous = 0
- using a cache_size as large as possible (as high as possible without
running out of memory)
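The one-step approach recommended later in this thread can be sketched as follows (illustrative only; Python's stdlib sqlite3 Connection.backup() wraps the SQLite backup API, and the in-memory databases stand in for real file paths):

```python
import sqlite3

src = sqlite3.connect(":memory:")      # stand-in for the live database
src.execute("CREATE TABLE t(x)")
src.execute("INSERT INTO t VALUES (42)")
src.commit()

dst = sqlite3.connect(":memory:")      # stand-in for the backup file

# pages=-1 copies the whole database in a single sqlite3_backup_step()
# call, so a concurrent writer cannot force the backup to restart.
src.backup(dst, pages=-1)

print(dst.execute("SELECT x FROM t").fetchone()[0])  # 42
```

With WAL, the backup read does not block writers; the restart behavior only arises when the copy is done in small page batches.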

I am going on the assumption that if something fails during backup, the
backup itself will be toast anyway, but is that safe otherwise?

Also, the precompiled CLI for Windows (sqlite3.exe) of 3.14.2 fails at
around 2 GB RAM, which in my case is about 500,000 pages; is that already
too high, or could it be worth going with a 64-bit CLI?

Thanks!

Eric


Re: [sqlite] Database malformed after 6000 Inserts?

2016-10-04 Thread Simon Slavin

On 4 Oct 2016, at 1:24pm, Werner Kleiner  wrote:

> Hopefully I'm giving you the correct answer, because a colleague
> wrote the C# program.
> We had no problems with inserts in the past, but now once we have
> inserted about 6000 records we get the errors.
> 
> The OS is Server 2012; there are no pragma settings (but my colleague has
> also tried it with pragma settings).
> It is a single thread with "synchronize full"

I'm not going to claim that there are no bugs in SQLite since some of the 
traffic on this list is about fixing bugs in SQLite.  However, a random bug 
which corrupts the database is extremely unlikely.  Can I ask you to read this 
page



and tell us if you're doing any of those things, or anything close to one of 
those things ?  There are numerous things you could be doing which would 
corrupt the database and it would help us narrow down the cause.  My suspicions 
in this case lead me to section 3.2, for example, but you know your code better 
than we do.

Simon.


Re: [sqlite] Database malformed after 6000 Inserts?

2016-10-04 Thread Rob Willett

We don't use Windows Server or System.Data.SQLite so can't comment.

I'd be astonished if its Sqlite itself thats at fault here.

Rob

On 4 Oct 2016, at 13:24, Werner Kleiner wrote:


Thanks for the help.
Hopefully I'm giving you the correct answer, because a colleague
wrote the C# program.
We had no problems with inserts in the past, but now once we have
inserted about 6000 records we get the errors.

The OS is Server 2012; there are no pragma settings (but my colleague has
also tried it with pragma settings).
It is a single thread with "synchronize full".
We use System.Data.SQLite.dll version 1.0.89.0 from 12.12.2013.


hope this is what you mean

2016-10-04 13:48 GMT+02:00 Jim Borden :
I had a problem similar to this before. What is the threading model 
for access to the database and how is the native library compiled and 
configured?


Jim Borden
(Sent from a mobile device)

On 4 Oct 2016, at 19:12, Werner Kleiner  
wrote:


Hello,
a program written in C# makes inserts from an SQL script into a 
sqlite db.

We have now seen that the database becomes malformed after 6000 records.

Is there a limitation with huge inserts?
What could be the problem?

regards
Werner


Re: [sqlite] Mac OS Sierra deprecation in bindings

2016-10-04 Thread Richard Hipp
On 10/4/16, Judson Lester  wrote:
>
> That said, it's harmless until the deprecation turns into a removal at the
> OS level, at which point ignoring the error becomes a sudden
> incompatibility with SQLite bindings.
>

Engineers at Apple tell me that they are still using
OSAtomicCompareAndSwapPtrBarrier() inside of essential OS-X components
and that the routine will not be going away any time soon.

If you are still concerned, your work-around is to compile with
-DSQLITE_WITHOUT_ZONEMALLOC.  (I have to do that when compiling SQLite
for OS 10.2, since that release predates the
OSAtomicCompareAndSwapPtrBarrier().)

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Database malformed after 6000 Inserts?

2016-10-04 Thread Werner Kleiner
Thanks for the help.
Hopefully I'm giving you the correct answer, because a colleague
wrote the C# program.
We had no problems with inserts in the past, but now once we have
inserted about 6000 records we get the errors.

The OS is Server 2012; there are no pragma settings (but my colleague has
also tried it with pragma settings).
It is a single thread with "synchronize full".
We use System.Data.SQLite.dll version 1.0.89.0 from 12.12.2013.

hope this is what you mean

2016-10-04 13:48 GMT+02:00 Jim Borden :
> I had a problem similar to this before. What is the threading model for 
> access to the database and how is the native library compiled and configured?
>
> Jim Borden
> (Sent from a mobile device)
>
>> On 4 Oct 2016, at 19:12, Werner Kleiner  wrote:
>>
>> Hello,
>> a program written in C# makes inserts from an SQL script into a sqlite db.
>> We have now seen that the database becomes malformed after 6000 records.
>>
>> Is there a limitation with huge inserts?
>> What could be the problem?
>>
>> regards
>> Werner


Re: [sqlite] Mac OS Sierra deprecation in bindings

2016-10-04 Thread Judson Lester
As a consumer of the Go library, I can't prevent it from being output. I've
needed to invest effort into my build chain to filter the warning, so
there's been some amount of pointless labor as a consequence. It might be
possible for the go-sqlite3 author to update the build commands to suppress
it?

All that said, certainly, I can live with the extra output in most of my
builds.

That said, it's harmless until the deprecation turns into a removal at the
OS level, at which point ignoring the error becomes a sudden
incompatibility with SQLite bindings.

On Tue, Oct 4, 2016 at 12:59 PM Richard Hipp  wrote:

> On 10/4/16, Judson Lester  wrote:
> >
> > Would you be so kind as to let me know when/if there's a SQLite ticket
> for
> > this so that I can link it back to the project I encountered this in?
>
> It's a harmless warning.  *Harmless*.  Can you not simply ignore it?
>
> --
> D. Richard Hipp
> d...@sqlite.org
>


Re: [sqlite] Two problems in latest version of lempar.c

2016-10-04 Thread Richard Hipp
On 10/4/16, Benjamin Franksen  wrote:
> Hello
>
> I am using the lemon parser generator in a project
> (http://www-csr.bessy.de/control/SoftDist/sequencer/). I updated the
> lemon parser template tools/lempar.c to the latest available version and
> found two problems, reported by gcc:
>
> ./snl.c: In function ‘yy_destructor’:
> ./snl.c:116:36: warning: unused variable ‘presult’ [-Wunused-variable]
>  #define snlParserARG_FETCH  Node **presult  = yypParser->presult

This can be resolved by using the "presult" variable in one of your destructors.

> ./snl.c: In function ‘snlParser’:
> ./snl.c:2446:33: warning: comparison of distinct pointer types lacks a
> cast [enabled by default]
>  while( yypParser->yytos >= &yypParser->yystack

I think the latest trunk check-in may fix this.
https://www.sqlite.org/src/info/3a9d802fda105856

But that fix is untested, as the error handling logic where this code
is found is #ifdef-ed out in SQLite.  Please give the latest a try and
let me know if that fails to fix the problem.

-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] Two problems in latest version of lempar.c

2016-10-04 Thread Benjamin Franksen
Hello

I am using the lemon parser generator in a project
(http://www-csr.bessy.de/control/SoftDist/sequencer/). I updated the
lemon parser template tools/lempar.c to the latest available version and
found two problems, reported by gcc:

./snl.c: In function ‘yy_destructor’:
./snl.c:116:36: warning: unused variable ‘presult’ [-Wunused-variable]
 #define snlParserARG_FETCH  Node **presult  = yypParser->presult
^
./snl.c:1031:3: note: in expansion of macro ‘snlParserARG_FETCH’
   snlParserARG_FETCH;
   ^
./snl.c: In function ‘snlParser’:
./snl.c:2446:33: warning: comparison of distinct pointer types lacks a
cast [enabled by default]
 while( yypParser->yytos >= &yypParser->yystack

The first is a harmless warning which can be suppressed by using the
snlParserARG_STORE macro in yy_destructor right after
snlParserARG_FETCH, as is done similarly in a number of other functions
in lempar.c.

The second one is *NOT* harmless. The '&' before yypParser->yystack is
indeed wrong and could in corner cases lead to a crash or worse a
security vulnerability.

Cheers
Ben

PS: I am not subscribed, so please add my address to any replies.
-- 
"Make it so they have to reboot after every typo." ― Scott Adams





Re: [sqlite] Mac OS Sierra deprecation in bindings

2016-10-04 Thread Judson Lester
Thanks very much. The warning itself recommends
atomic_compare_exchange_strong(),
but I can't tell you much more than that.

Would you be so kind as to let me know when/if there's a SQLite ticket for
this so that I can link it back to the project I encountered this in? It'll
help in terms of that messaging, and it'll let the author there know when
to update.

Thanks again,
Judson

On Sat, Oct 1, 2016 at 8:30 PM Richard Hipp  wrote:

> On 10/1/16, Judson Lester  wrote:
> > I understand that this project doesn't produce go-sqlite3, but the
> > sqlite-bindings.c that project incorporates (and which is the origin of
> the
> > deprecation warning) do seem to be part of this project. If that's not
> so,
> > can you direct me to the true source so I can raise the issue there?
>
> I stand corrected.  "sqlite-bindings.c" is apparently a renamed copy
> of "sqlite3.c".
>
> I didn't remember that sqlite3.c had a call to
> OSAtomicCompareAndSwapPtrBarrier().  That only comes up on MacOS/iOS
> systems.  I will investigate and figure out what Apple's recommended
> substitute for this is and add that to the next release.
>
> --
> D. Richard Hipp
> d...@sqlite.org
>


Re: [sqlite] Mac OS Sierra deprecation in bindings

2016-10-04 Thread Richard Hipp
On 10/4/16, Judson Lester  wrote:
>
> Would you be so kind as to let me know when/if there's a SQLite ticket for
> this so that I can link it back to the project I encountered this in?

It's a harmless warning.  *Harmless*.  Can you not simply ignore it?

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Database malformed after 6000 Inserts?

2016-10-04 Thread Jim Borden
I had a problem similar to this before. What is the threading model for access 
to the database and how is the native library compiled and configured?

Jim Borden
(Sent from a mobile device)

> On 4 Oct 2016, at 19:12, Werner Kleiner  wrote:
> 
> Hello,
> a program written in C# makes inserts from an SQL script into a sqlite db.
> We have now seen that the database becomes malformed after 6000 records.
> 
> Is there a limitation with huge inserts?
> What could be the problem?
> 
> regards
> Werner


Re: [sqlite] Database malformed after 6000 Inserts?

2016-10-04 Thread Rob Willett
We've done inserts of tens of thousands of rows at a time, and we may well 
have done hundreds of thousands in a single transaction. I've no doubt 
other people do even larger transactions.


I would assume the problem lies elsewhere.

What error message are you getting? What's the OS, the environment, disk, 
pragma settings?


Rob

On 4 Oct 2016, at 11:11, Werner Kleiner wrote:


Hello,
a program written in C# makes inserts from an SQL script into a sqlite 
db.

We have now seen that the database becomes malformed after 6000 records.

Is there a limitation with huge inserts?
What could be the problem?

regards
Werner


Re: [sqlite] smartest way to exchange a sqlite3 database with another empty

2016-10-04 Thread Luca Ferrari
On Mon, Oct 3, 2016 at 11:51 AM, Richard Hipp  wrote:
> Safe way:  In a separate process, use the backup API
> (https://www.sqlite.org/backup.html) to copy the content of the main
> DB over to a separate DB, then "DELETE FROM log;" on the main DB.
> This will work without any cooperation on the part of the application.
> But it does involve a lot of I/O.

This sounds like a good way to build a maintenance program to keep around
just in case; I will go with this solution for now.


>
> Alternative:  Modify the application so that it automatically detects
> when the database is getting to large (perhaps using PRAGMA page_count
> - https://www.sqlite.org/pragma.html#pragma_page_count) and then (1)
> closes the database connection, (2) renames the database file to a
> backup, and (3) reopens the main DB and reinitializes the schema.

When working with PostgreSQL (not meant to flame) I would solve this with
partitioning: create a table that refers to a time period and move all
records of that period into it. In the case of SQLite I would have to move
the period table to another database, but that would be simpler than
dumping the whole database and archiving it (I mean it would keep the
original database online).

By the way, my database is growing by around 1 MB per day, so nothing
scary (at least for now). In other words, I have a chance to study the
problem and work out a suitable solution.

Thanks all,
Luca


[sqlite] Database malformed after 6000 Inserts?

2016-10-04 Thread Werner Kleiner
Hello,
a program written in C# makes inserts from an SQL script into a sqlite db.
We have now seen that the database becomes malformed after 6000 records.

Is there a limitation with huge inserts?
What could be the problem?

regards
Werner