Re: [sqlite] Cross-compiled sqlite3 tool runs into segmentation fault when creating a table

2011-12-15 Thread imin imup
On Mon, Dec 12, 2011 at 10:41 PM, Richard Hipp  wrote:

> On Mon, Dec 12, 2011 at 11:34 PM, imin imup  wrote:
>
> >  Hello,
> >
> > I'm cross-compiling SQLite 3.6.12 for Fedora 14 on a MIPS CPU. The
> > command-line tool sqlite3 runs into a segmentation fault when I try to
> > create a table:
> >
> >
> > # sqlite3_mipsel test.db
> > SQLite version 3.6.12
> > Enter ".help" for instructions
> > Enter SQL statements terminated with a ";"
> > sqlite> create table tbl1(one varchar(10), two smallint);
> > Segmentation fault
> >
> >
> > Any idea how to fix this?
> >
>
> Compile with the following patch applied instead:
> http://www.sqlite.org/src/ci/54cc119811?sbs=0
>
It works! Tons of thanks!
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Index usage when using UNION

2011-12-15 Thread Igor Tandetnik
Simon Slavin  wrote:
> On 15 Dec 2011, at 7:19pm, Alexandr Němec wrote:
> 
>> just a quick question; I did not find the answer in the various technical
>> documents. I have two identical tables with an id INTEGER primary key,
>> which means that SELECTs ORDERed BY id are very fast. Now if I do
>> SELECT * FROM table1 UNION ALL SELECT * FROM table2 ORDER BY id, rows
>> from both tables appear in the result set. But I was surprised that the
>> speed of the sorting is still the same (as for one indexed column). Does
>> this mean that SQLite can (somehow) use the index to speed up the
>> sorting when UNIONing several tables?
> 
> Your 'ORDER BY' clause applies only to the second SELECT.

Not true. ORDER BY applies to the final resultset, after the union. Try adding 
an ORDER BY clause between the first SELECT and UNION ALL - you'll get an error.
-- 
Igor Tandetnik



Re: [sqlite] Index usage when using UNION

2011-12-15 Thread Richard Hipp
2011/12/15 Alexandr Němec 

>
> Dear all,
>
> just a quick question; I did not find the answer in the various technical
> documents. I have two identical tables with an id INTEGER primary key,
> which means that SELECTs ORDERed BY id are very fast. Now if I do SELECT
> * FROM table1 UNION ALL SELECT * FROM table2 ORDER BY id, rows from both
> tables appear in the result set. But I was surprised that the speed of the
> sorting is still the same (as for one indexed column). Does this mean that
> SQLite can (somehow) use the index to speed up the sorting when UNIONing
> several tables?
>

Yes.

The ORDER BY applies to the complete result of the UNION ALL.  So what
SQLite does in this case is run both subqueries in parallel, delivering
the results of each subquery in sorted order (which is easy, since the
sort key is the primary key), and merge the results together.
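This merge behavior can be sketched with Python's built-in sqlite3 module (the table and column names here are illustrative, not from the original post): the ORDER BY sorts the combined result, and attaching it to the first SELECT alone is rejected by the parser.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1(id INTEGER PRIMARY KEY, src TEXT);
    CREATE TABLE table2(id INTEGER PRIMARY KEY, src TEXT);
    INSERT INTO table1 VALUES (1,'t1'), (3,'t1'), (5,'t1');
    INSERT INTO table2 VALUES (2,'t2'), (4,'t2'), (6,'t2');
""")

# ORDER BY applies to the whole compound: rows from both tables
# come back merged in id order.
ids = [r[0] for r in conn.execute(
    "SELECT * FROM table1 UNION ALL SELECT * FROM table2 ORDER BY id")]

# An ORDER BY between the first SELECT and UNION ALL is a syntax error,
# confirming that the clause belongs to the compound, not to one arm.
try:
    conn.execute(
        "SELECT * FROM table1 ORDER BY id UNION ALL SELECT * FROM table2")
    order_by_first_select_ok = True
except sqlite3.OperationalError:
    order_by_first_select_ok = False
```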



>
> Thanks
> Alex



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Load a .NET DLL with additional functions using load_extension

2011-12-15 Thread Joe Mistachkin

H K wrote:
> 
> Sorry, but the file has a table of contents but no content.
> Am I doing something wrong with the CHM?
> Thx
> 

You may need to right-click the file in Windows Explorer, go to "Properties",
and then click "Unblock" in order to make it work properly.

--
Joe Mistachkin



Re: [sqlite] Load a .NET DLL with additional functions using load_extension

2011-12-15 Thread H K
Sorry, but the file has a table of contents but no content.
Am I doing something wrong with the CHM?
Thx

On Wed, Dec 7, 2011 at 5:22 AM, Joe Mistachkin wrote:

>
> H K wrote:
> >
> > Is it possible to load these functions with the load_extension function?
> >
>
> Please take a look at the SQLiteFunction help topic in the
> System.Data.SQLite CHM file,
> here:
>
>
> http://system.data.sqlite.org/index.html/artifact?filename=Doc/SQLite.NET.chm=tip
>
> --
> Joe Mistachkin
>


Re: [sqlite] Index usage when using UNION

2011-12-15 Thread Simon Slavin

On 15 Dec 2011, at 7:19pm, Alexandr Němec wrote:

> just a quick question; I did not find the answer in the various technical
> documents. I have two identical tables with an id INTEGER primary key,
> which means that SELECTs ORDERed BY id are very fast. Now if I do SELECT *
> FROM table1 UNION ALL SELECT * FROM table2 ORDER BY id, rows from both
> tables appear in the result set. But I was surprised that the speed of the
> sorting is still the same (as for one indexed column). Does this mean that
> SQLite can (somehow) use the index to speed up the sorting when UNIONing
> several tables?

Your 'ORDER BY' clause applies only to the second SELECT.  The rows from table1
are not being sorted at all; they're appearing in whatever order SQLite finds
them, which just happens to be 'id' order.

Simon.


Re: [sqlite] SQLite reading old data

2011-12-15 Thread Fabrizio Steiner
The answers to your questions:
1) Yes, I'm still using System.Data.SQLite, but without any key.
2) No, I don't use WAL.
3) Running only one thread didn't bring up the problem.
4) I haven't had any time for that yet.
5) Standard System.Data.SQLite, with the RC4 algorithm implemented there.

I took a look at SQLite's caching during the afternoon because I suspected
some caching issues, and I found a problem with the change counter, exactly
as you thought.

I took a look into the source code of the pager and the pager cache. I found
that the pager decides whether the cache needs to be flushed whenever a shared
lock is requested. To determine whether a flush is needed, the change counter
in the db file is compared to the one stored in the pager structure. The
change counter is read directly from the underlying OS (sqlite3OsRead), so if
the database is encrypted, the encrypted bytes are retrieved here
(System.Data.SQLite encrypts the entire database, including page 1 and the
SQLite header).

So dbFileVers in the pager structure always holds the encrypted value. I
started checking whether there was a situation where it was used as if it
were the plaintext value.

In the function pager_write_changecounter the change counter gets
incremented, but the encrypted value pPg->pPager->dbFileVers is used. After
the increment, the value is put back into the page buffer. When page 1 is
written, the page is encrypted again (see pager_write_pagelist). After the
page has been written, dbFileVers is updated (pager_write_pagelist,
line 4049) with the encrypted value.

So at least for incrementing the change counter the plaintext value should be 
used.

When RC4 with full-database encryption (as implemented in
System.Data.SQLite) is used, the following happens. RC4 basically XORs each
plaintext byte with a pseudo-random keystream byte. As we've seen, the
encrypted change counter is incremented, encrypted again and stored back
into the file. For simplicity, consider only the last 4 bits of the counter,
and let K1, K2, K3, 1 be the corresponding keystream bits (assuming the LSB
of the keystream byte is 1; if the keystream is purely random, the
probability of this is 1/2).

DB initial state:
- Encrypted DB file counter:  X Y Z 0
  (The bits X, Y, Z are unknown, but the last bit is 0.)

First update of the database:
- Increment the encrypted counter:  X Y Z 1
- New encrypted value in the DB:    (X XOR K1) (Y XOR K2) (Z XOR K3) (1 XOR 1)
                                  = (X XOR K1) (Y XOR K2) (Z XOR K3) 0

Second update of the database:
- Increment the encrypted counter:  (X XOR K1) (Y XOR K2) (Z XOR K3) 1
- New encrypted value in the DB:    ((X XOR K1) XOR K1) ((Y XOR K2) XOR K2) ((Z XOR K3) XOR K3) (1 XOR 1)
                                  = X Y Z 0
  (The same keystream bytes are used again for encryption.)

As a result, after the second db update the encrypted change counter is the
same as before the updates occurred. dbFileVers contains 12 more bytes, but
these represent the "db size in pages", the "page number of the first free
page" and the "number of free pages", and they may all be unchanged if no new
pages were needed and there was no free page.

=> Therefore there's a good chance that a db change goes undetected,
depending on the encryption algorithm.
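The effect can be reproduced with a toy model (a sketch, not the actual System.Data.SQLite code: a single fixed keystream byte stands in for the RC4 stream, and the concrete values are arbitrary):

```python
def buggy_update(enc_counter: int, keystream: int) -> int:
    """One update as described above: the still-encrypted counter byte is
    incremented, then XORed with the same keystream byte when written back."""
    return ((enc_counter + 1) & 0xFF) ^ keystream

k = 0xA5         # keystream byte with LSB = 1 (the 1/2-probability case)
e0 = 0x10        # encrypted counter byte on disk, LSB = 0

e1 = buggy_update(e0, k)   # after the first update: the on-disk byte changes
e2 = buggy_update(e1, k)   # after the second update: back to the initial byte
```

After two updates the on-disk counter byte equals its pre-update value, so a pager comparing the stored and on-disk counters sees no change.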

Is it allowed to encrypt the entire database, especially the header with the
page size, change counter and so on?
   - If yes, then SQLite should make sure all data gets decrypted before
these values are used.
   - If not, shouldn't SQLite make sure the crypt API never sees the header?

I've found no documentation on implementing the encryption API.

Kind Regards
Fabrizio

> -----Original Message-----
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Michael Stephenson
> Sent: Thursday, 15 December 2011 20:40
> To: 'General Discussion of SQLite Database'
> Subject: Re: [sqlite] SQLite reading old data
>
> A couple more questions:
>
> 1)  Does "without encryption" mean still using System.Data.SQLite, just
> without using a key?
> 2)  Are you using WAL?
> 3)  Do you also see the problem with a single process with two connections?
> 4)  Any chance you could provide a small demo project that demonstrates
> the problem?
> 5)  Are you using standard System.Data.SQLite, or the SqlCipher version?
> (http://sqlcipher.net)
>
> I'm pretty fuzzy on this, but here are some (possibly way off base)
> thoughts...
>
> The way I understand things, when you do an update inside of a transaction,
> the new data is written to the log until it is committed, and then a rollback
> involves discarding the log page with the changes.  This means you expect to
> see the new data in your "session" but no one else should see the new data
> until you commit.

Re: [sqlite] SQLite reading old data

2011-12-15 Thread Michael Stephenson
A couple more questions:

1)  Does "without encryption" mean still using System.Data.SQLite, just
without using a key?
2)  Are you using WAL?
3)  Do you also see the problem with a single process with two connections?
4)  Any chance you could provide a small demo project that demonstrates the
problem?
5)  Are you using standard System.Data.SQLite, or the SqlCipher version?
(http://sqlcipher.net)

I'm pretty fuzzy on this, but here are some (possibly way off base)
thoughts...

The way I understand things, when you do an update inside of a transaction,
the new data is written to the log until it is committed, and then a
rollback involves discarding the log page with the changes.  This means you
expect to see the new data in your "session" but no one else should see the
new data until you commit.  

The question of which version of the data a session sees is, I think, tied
to some global counter-type metadata that tracks changes.  In Oracle, this
would be perhaps a system change number that would determine which undo
blocks your session would see versus other sessions.  I think SQLite has
something similar, and I think this is stored in the database header on the
first database page.  

I also recall that there are a few places in SQLite where it reads data for
page 1 using system-level I/O rather than the pager (e.g., direct reads on
the database file), and this causes issues for page-level encryptors because
they don't get an opportunity to decrypt the encrypted page before SQLite
reads that piece of data, so they typically have to hard-code some value at
that data location or hope that SQLite ignores what would be an obviously
invalid value.  I think the change counter is one of these items.

Maybe folks with much more understanding could make sense of this, but my
thought is that the equivalent of the "system change number" is being mucked
up by the page-level encryptor, causing SQLite to get the old data because
as far as it knows that's the data at the given (invalid) change count.  

Well, something like that.  Again, probably way off base, but I had a couple
of minutes and thought I'd throw it out there...

~Mike

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Fabrizio Steiner
Sent: Thursday, December 15, 2011 9:01 AM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] SQLite reading old data

I think the following observation, which I made in the meantime, is very
interesting; it also points to a problem with SQLite in combination with the
encryption of System.Data.SQLite.
 
I tried the same test with an unencrypted database, and surprisingly the 3
threads did their work without any exception. I executed several test runs
with the 3 threads and it wasn't reproducible anymore.
With the encryption activated again, it is reproducible and exceptions occur
due to an old stamp being retrieved.

Answers to your questions:
- No connections are shared at all between threads, so every thread creates
2 private connections.
- Updating always occurs on the second connection.
- The after-update checks are performed in the same transaction as the
update itself.
- No thread ever deletes a row.

Regards
Fabrizio

> -----Original Message-----
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Igor Tandetnik
> Sent: Thursday, 15 December 2011 14:26
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] SQLite reading old data
> 
> Fabrizio Steiner  wrote:
> > I'm currently facing a problem with the System.Data.SQLite ADO.NET 
> > Provider. I'm using stamps (simple integer column) on the rows in 
> > order to detect if another user has altered the same datarow during 
> > the
> time the first user started the edit and when he invoked the save. 
> Every update increases this stamp and the update statement is 
> constrained with the old stamp. If the update has
> > no affected rows, there are different cases which are checked:
> > - If the datarow has been deleted, the user is informed about this.
> > - If the datarow still exists the current stamp in the database is
retrieved.
> >   - If the db stamp is greater than the reference stamp, the user 
> > will be informed that another user changed the data in the meantime.
> >   - If the db stamp is smaller or equal to the reference stamp, an 
> > exception
> is thrown because the stamp can never get smaller.
> >
> > I tried to simulate 3 users updating a datarow in the same
> > table, every user changing its own row.  Basically I had 3 threads,
> > each of them first retrieving a row on one connection, then updating the
> data and storing it back into the table with another connection. Every
> thread ran a loop with several edits and a sleep in between.
> 
> Do you have two connections, each shared by three threads; or six 
> separate connections? Do you ever perform any writing 

[sqlite] SQLite with Vis. Studio C#

2011-12-15 Thread Jeff Matthews
Regarding my earlier question about .NET SQLite syntax, I was wondering
whether I have to use DataAdapters and create and fill objects (DataSets),
or not.

Here is some sample code dealing with SQLite in C#:
http://www.codeproject.com/KB/cs/SQLiteCSharp.aspx

It looks to me like he is loading the database into a DataSet and then
doing his work on the DataSet, rather than on the database directly.

See how he builds his query as a string (surrounded by quotes) and then
calls a function to execute it?

Is this necessary -- i.e., is this the only way to query and manipulate
SQLite data from C#?

I was thinking that we could just use SQL statements literally, without
having to send them as a string to a function.

Does my question make sense?

Can it not be done just using the SQL literally, skipping all the
DataAdapter and DataSet stuff?






[sqlite] Index usage when using UNION

2011-12-15 Thread Alexandr Němec

Dear all,
 
just a quick question; I did not find the answer in the various technical
documents. I have two identical tables with an id INTEGER primary key,
which means that SELECTs ORDERed BY id are very fast. Now if I do SELECT *
FROM table1 UNION ALL SELECT * FROM table2 ORDER BY id, rows from both tables
appear in the result set. But I was surprised that the speed of the sorting is
still the same (as for one indexed column). Does this mean that SQLite can
(somehow) use the index to speed up the sorting when UNIONing several tables?
 
Thanks
Alex


Re: [sqlite] Collation with concatenation

2011-12-15 Thread Dan Kennedy

On 12/15/2011 11:59 PM, Jeff Matthews wrote:

This is in the documentation:



CREATE TABLE t1(
 x INTEGER PRIMARY KEY,
 a, /* collating sequence BINARY */
 b COLLATE BINARY,  /* collating sequence BINARY */
 c COLLATE RTRIM,   /* collating sequence RTRIM  */
 d COLLATE NOCASE   /* collating sequence NOCASE */
);


/* x   a b c   d */
INSERT INTO t1 VALUES(1,'abc','abc', 'abc  ','abc');
INSERT INTO t1 VALUES(2,'abc','abc', 'abc',  'ABC');
INSERT INTO t1 VALUES(3,'abc','abc', 'abc ', 'Abc');
INSERT INTO t1 VALUES(4,'abc','abc ','ABC',  'abc');

/* Grouping is performed using the NOCASE collating sequence (Values
** 'abc', 'ABC', and 'Abc' are placed in the same group). */
SELECT count(*) FROM t1 GROUP BY d ORDER BY 1;
--result 4

The above example is intuitive to me.

But what about this one?

/* Grouping is performed using the BINARY collating sequence.  'abc' and
** 'ABC' and 'Abc' form different groups */
SELECT count(*) FROM t1 GROUP BY (d || '') ORDER BY 1;
--result 1 1 2

First, if d is collated with NOCASE, then why does the second operation use
BINARY?   Is it because of this rule as stated in the documentation: "The
collating sequence set by the COLLATE operator overrides the collating
sequence determined by the COLLATE clause in a table column definition."


It works the same way as ORDER BY:

  "If the expression is not a column and has no COLLATE clause, then
   the BINARY collating sequence is used"
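Both behaviors can be checked quickly with Python's built-in sqlite3 module, reusing the documentation's t1 schema (only column d is needed here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1(x INTEGER PRIMARY KEY, d COLLATE NOCASE);
    INSERT INTO t1(x, d) VALUES (1,'abc'), (2,'ABC'), (3,'Abc'), (4,'abc');
""")

# Bare column reference: the column's NOCASE collation applies -> one group.
nocase = conn.execute(
    "SELECT count(*) FROM t1 GROUP BY d ORDER BY 1").fetchall()

# Expression without a COLLATE clause: BINARY applies -> case-distinct groups.
binary = conn.execute(
    "SELECT count(*) FROM t1 GROUP BY (d || '') ORDER BY 1").fetchall()

# An explicit COLLATE operator on the expression restores NOCASE grouping.
restored = conn.execute(
    "SELECT count(*) FROM t1 GROUP BY (d || '') COLLATE NOCASE ORDER BY 1"
).fetchall()
```

The concatenation only matters because it turns the column reference into an expression; an explicit COLLATE on the expression brings the NOCASE behavior back.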



[sqlite] Collation with concatenation

2011-12-15 Thread Jeff Matthews
This is in the documentation:

 

CREATE TABLE t1(
x INTEGER PRIMARY KEY,
a, /* collating sequence BINARY */
b COLLATE BINARY,  /* collating sequence BINARY */
c COLLATE RTRIM,   /* collating sequence RTRIM  */
d COLLATE NOCASE   /* collating sequence NOCASE */
);
   
 
   /* x   a b c   d */
INSERT INTO t1 VALUES(1,'abc','abc', 'abc  ','abc');
INSERT INTO t1 VALUES(2,'abc','abc', 'abc',  'ABC');
INSERT INTO t1 VALUES(3,'abc','abc', 'abc ', 'Abc');
INSERT INTO t1 VALUES(4,'abc','abc ','ABC',  'abc');
 
/* Grouping is performed using the NOCASE collating sequence (Values
** 'abc', 'ABC', and 'Abc' are placed in the same group). */
SELECT count(*) FROM t1 GROUP BY d ORDER BY 1;
--result 4
 
The above example is intuitive to me.
 
But what about this one?
 
/* Grouping is performed using the BINARY collating sequence.  'abc' and
** 'ABC' and 'Abc' form different groups */
SELECT count(*) FROM t1 GROUP BY (d || '') ORDER BY 1;
--result 1 1 2
 
First, if d is collated with NOCASE, then why does the second operation use
BINARY?   Is it because of this rule as stated in the documentation: "The
collating sequence set by the COLLATE operator overrides the collating
sequence determined by the COLLATE clause in a table column definition."
 

Second, was the example of concatenating '' supposed to have any impact on
the result other than showing that the collation is now BINARY?   If I
concatenate anything (e.g., 'bigcheeseburger'), it doesn't make a
difference.   So, I think the answer is that the concatenation example just
demonstrates that BINARY takes precedence over the collation setting for
the column.

 

Thanks for advice.

 

 

 

 



Re: [sqlite] No error when deleting a record already deleted?

2011-12-15 Thread Paxdo Presse

OK, thank you both.

On 15 Dec 2011, at 16:24, Richard Hipp wrote:

> On Thu, Dec 15, 2011 at 10:08 AM, Pavel Ivanov  wrote:
> 
>> On Thu, Dec 15, 2011 at 10:05 AM, Paxdo Presse  wrote:
>>> Is it normal not to get an error message
>>> when trying to remove or edit a record that has already been removed?
>> 
>> Yes, it's normal. That's how SQL (in any DBMS, not only SQLite) works.
>> 
> 
> Your application can call sqlite3_changes() (
> http://www.sqlite.org/c3ref/changes.html) after the DELETE to get the
> number of rows deleted, then raise whatever errors it wants if the number
> deleted is zero.
> 
> 
>> 
>> Pavel
> 
> 
> 
> -- 
> D. Richard Hipp
> d...@sqlite.org



Re: [sqlite] No error when deleting a record already deleted?

2011-12-15 Thread Richard Hipp
On Thu, Dec 15, 2011 at 10:08 AM, Pavel Ivanov  wrote:

> On Thu, Dec 15, 2011 at 10:05 AM, Paxdo Presse  wrote:
> > Is it normal not to get an error message
> > when trying to remove or edit a record that has already been removed?
>
> Yes, it's normal. That's how SQL (in any DBMS, not only SQLite) works.
>

Your application can call sqlite3_changes() (
http://www.sqlite.org/c3ref/changes.html) after the DELETE to get the
number of rows deleted, then raise whatever errors it wants if the number
deleted is zero.
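A sketch of that pattern using Python's built-in sqlite3 module, whose Cursor.rowcount reflects the sqlite3_changes() count for DML statements (the table name here is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO t(id) VALUES (1)")

first = conn.execute("DELETE FROM t WHERE id = 1").rowcount   # one row deleted
second = conn.execute("DELETE FROM t WHERE id = 1").rowcount  # no rows, no error

# The application decides whether a no-op delete should be treated as an error.
if second == 0:
    status = "row was already gone"
```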


>
> Pavel



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] No error when deleting a record already deleted?

2011-12-15 Thread Pavel Ivanov
On Thu, Dec 15, 2011 at 10:05 AM, Paxdo Presse  wrote:
> Is it normal not to get an error message
> when trying to remove or edit a record that has already been removed?

Yes, it's normal. That's how SQL (in any DBMS, not only SQLite) works.

Pavel


[sqlite] No error when deleting a record already deleted?

2011-12-15 Thread Paxdo Presse

Hi all,

Is it normal not to get an error message
when trying to remove or edit a record that has already been removed?

Thank you,
olivier


Re: [sqlite] SQLite reading old data

2011-12-15 Thread Fabrizio Steiner
I think the following observation, which I made in the meantime, is very
interesting; it also points to a problem with SQLite in combination with the
encryption of System.Data.SQLite.
 
I tried the same test with an unencrypted database, and surprisingly the 3
threads did their work without any exception. I executed several test runs
with the 3 threads and it wasn't reproducible anymore.
With the encryption activated again, it is reproducible and exceptions occur
due to an old stamp being retrieved.

Answers to your questions:
- No connections are shared at all between threads, so every thread creates
2 private connections.
- Updating always occurs on the second connection.
- The after-update checks are performed in the same transaction as the
update itself.
- No thread ever deletes a row.

Regards
Fabrizio

> -----Original Message-----
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Igor Tandetnik
> Sent: Thursday, 15 December 2011 14:26
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] SQLite reading old data
> 
> Fabrizio Steiner  wrote:
> > I'm currently facing a problem with the System.Data.SQLite ADO.NET
> > Provider. I'm using stamps (simple integer column) on the rows in
> > order to detect if another user has altered the same datarow during the
> time the first user started the edit and when he invoked the save. Every
> update increases this stamp and the update statement is constrained with
> the old stamp. If the update has
> > no affected rows, there are different cases which are checked:
> > - If the datarow has been deleted, the user is informed about this.
> > - If the datarow still exists the current stamp in the database is 
> > retrieved.
> >   - If the db stamp is greater than the reference stamp, the user will
> > be informed that another user changed the data in the meantime.
> >   - If the db stamp is smaller or equal to the reference stamp, an exception
> is thrown because the stamp can never get smaller.
> >
> > I tried to simulate 3 users updating a datarow in the same
> > table, every user changing its own row.  Basically I had 3 threads,
> > each of them first retrieving a row on one connection, then updating the
> data and storing it back into the table with another connection. Every
> thread ran a loop with several edits and a sleep in between.
> 
> Do you have two connections, each shared by three threads; or six separate
> connections? Do you ever perform any writing on the connection that reads
> the original version of the row? Do you perform after-update checks on the
> same connection as the update itself? Do you wrap the update and the
> checks together in a transaction?
> 
> How is a particular "datarow" identified? Is it possible for one thread to
> delete a row, and then for another to insert it back later as new (presumably
> with the version stamp starting back from zero)?
> --
> Igor Tandetnik
> 


Re: [sqlite] SQLite reading old data

2011-12-15 Thread Igor Tandetnik
Fabrizio Steiner  wrote:
> I'm currently facing a problem with the System.Data.SQLite ADO.NET Provider.
> I'm using stamps (a simple integer column) on the rows in order to detect
> whether another user has altered the same datarow between the time the first
> user started the edit and the time he invoked the save. Every update
> increases this stamp, and the update statement is constrained with the old
> stamp. If the update affected no rows, different cases are checked:
> - If the datarow has been deleted, the user is informed about this.
> - If the datarow still exists, the current stamp in the database is retrieved.
>   - If the db stamp is greater than the reference stamp, the user is
>     informed that another user changed the data in the meantime.
>   - If the db stamp is smaller than or equal to the reference stamp, an
>     exception is thrown because the stamp can never get smaller.
>
> I tried to simulate 3 users updating a datarow in the same table, every
> user changing its own row.  Basically I had 3 threads, each of them first
> retrieving a row on one connection, then updating the data and storing it
> back into the table with another connection. Every thread ran a loop with
> several edits and a sleep in between.

Do you have two connections, each shared by three threads; or six separate 
connections? Do you ever perform any writing on the connection that reads the 
original version of the row? Do you perform after-update checks on the same 
connection as the update itself? Do you wrap the update and the checks together 
in a transaction?

How is a particular "datarow" identified? Is it possible for one thread to 
delete a row, and then for another to insert it back later as new (presumably 
with the version stamp starting back from zero)?
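The stamp scheme described in the quoted message can be sketched as follows (a minimal illustration with an assumed schema and helper name, not the poster's actual code), using Python's built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE data(id INTEGER PRIMARY KEY, payload TEXT, "
    "stamp INTEGER NOT NULL)")
conn.execute("INSERT INTO data VALUES (1, 'v1', 7)")

def save(conn, row_id, payload, old_stamp):
    # The UPDATE is constrained by the stamp the editor originally read.
    cur = conn.execute(
        "UPDATE data SET payload = ?, stamp = stamp + 1 "
        "WHERE id = ? AND stamp = ?",
        (payload, row_id, old_stamp))
    if cur.rowcount == 1:
        return "saved"
    row = conn.execute(
        "SELECT stamp FROM data WHERE id = ?", (row_id,)).fetchone()
    if row is None:
        return "deleted"                # datarow no longer exists
    if row[0] > old_stamp:
        return "conflict"               # someone else updated it meanwhile
    raise RuntimeError("stamp went backwards")  # should be impossible

r1 = save(conn, 1, "v2", 7)   # matches the current stamp
r2 = save(conn, 1, "v3", 7)   # stale stamp -> no rows affected
```

The reported bug is that, with the encrypted change counter, the "stamp went backwards" branch fires because a stale page is read.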
-- 
Igor Tandetnik



[sqlite] error while building for sparc-Solaris 2.10 with gcc 4.6.2

2011-12-15 Thread Michael Buchholz
Hi,

Version 3.7.9
gcc 4.6.2
Sparc-Solaris 2.10

I am trying to compile with --enable-shared.
Attached is the output of configure and gmake.

In the .libs directory, I see that sqlite3.o is larger than 1 MB...
Might that be the problem? Is it a problem to build a usable shared
library that includes an object file larger than 1 MB?

When I build with --enable-static, it all works fine. sqlite3.o is
then slightly smaller than 1 MB.

I will use the static version for now...


-- 
mit freundlichen Gruessen / with friendly regards
Michael Buchholz  michael.buchh...@de.verizonbusiness.com   MSAS Support
Verizon Business  Sebrathweg 20  44149 Dortmund  Germany   Vnet: 3171192
Tel: +492319721192 Fax: +492319722508 Home-Office: +492319479858



Verizon Deutschland GmbH - Sebrathweg 20, 44149 Dortmund, Germany - Dortmund
District Court (Amtsgericht), HRB 14952 - Managing Director: Detlef Eppig -
Chairman of the Supervisory Board: Dominique Gaillard

checking for a BSD-compatible install... /usr/local/bin/ginstall -c
checking whether build environment is sane... yes
checking for gawk... no
checking for mawk... no
checking for nawk... nawk
checking whether make sets $(MAKE)... yes
checking for style of include used by make... GNU
checking for gcc... /prod/build/mbuchhol/gcc-4.6.2/bin/gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables... 
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether /prod/build/mbuchhol/gcc-4.6.2/bin/gcc accepts -g... yes
checking for /prod/build/mbuchhol/gcc-4.6.2/bin/gcc option to accept ISO C89... 
none needed
checking dependency style of /prod/build/mbuchhol/gcc-4.6.2/bin/gcc... gcc3
checking for special C compiler options needed for large files... no
checking for _FILE_OFFSET_BITS value needed for large files... 64
checking for gcc... (cached) /prod/build/mbuchhol/gcc-4.6.2/bin/gcc
checking whether we are using the GNU C compiler... (cached) yes
checking whether /prod/build/mbuchhol/gcc-4.6.2/bin/gcc accepts -g... (cached) 
yes
checking for /prod/build/mbuchhol/gcc-4.6.2/bin/gcc option to accept ISO C89... 
(cached) none needed
checking dependency style of /prod/build/mbuchhol/gcc-4.6.2/bin/gcc... (cached) 
gcc3
checking for ranlib... ranlib
checking build system type... sparc-sun-solaris2.10
checking host system type... sparc-sun-solaris2.10
checking for a sed that does not truncate output... /usr/5bin/sed
checking for grep that handles long lines and -e... /usr/sfw/bin/ggrep
checking for egrep... /usr/sfw/bin/ggrep -E
checking for ld used by /prod/build/mbuchhol/gcc-4.6.2/bin/gcc... 
/prod/build/mbuchhol/gcc-4.6.2/sparc-sun-solaris2.10/bin/ld
checking if the linker 
(/prod/build/mbuchhol/gcc-4.6.2/sparc-sun-solaris2.10/bin/ld) is GNU ld... yes
checking for /prod/build/mbuchhol/gcc-4.6.2/sparc-sun-solaris2.10/bin/ld option 
to reload object files... -r
checking for BSD-compatible nm... /prod/build/mbuchhol/gcc-4.6.2/bin/nm -B
checking whether ln -s works... yes
checking how to recognise dependent libraries... pass_all
checking how to run the C preprocessor... 
/prod/build/mbuchhol/gcc-4.6.2/bin/gcc -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking dlfcn.h usability... yes
checking dlfcn.h presence... yes
checking for dlfcn.h... yes
checking for g++... g++
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking dependency style of g++... gcc3
checking how to run the C++ preprocessor... g++ -E
checking for g77... no
checking for xlf... no
checking for f77... f77
checking whether we are using the GNU Fortran 77 compiler... no
checking whether f77 accepts -g... yes
checking the maximum length of command line arguments... 262144
checking command to parse /prod/build/mbuchhol/gcc-4.6.2/bin/nm -B output from 
/prod/build/mbuchhol/gcc-4.6.2/bin/gcc object... ok
checking for objdir... .libs
checking for ar... ar
checking for ranlib... (cached) ranlib
checking for strip... strip
checking if /prod/build/mbuchhol/gcc-4.6.2/bin/gcc supports -fno-rtti 
-fno-exceptions... no
checking for /prod/build/mbuchhol/gcc-4.6.2/bin/gcc option to produce PIC... 
-fPIC
checking if /prod/build/mbuchhol/gcc-4.6.2/bin/gcc PIC flag -fPIC works... yes
checking if /prod/build/mbuchhol/gcc-4.6.2/bin/gcc static flag -static works... 
no
checking if /prod/build/mbuchhol/gcc-4.6.2/bin/gcc supports -c -o file.o... yes
checking whether the /prod/build/mbuchhol/gcc-4.6.2/bin/gcc linker 
(/prod/build/mbuchhol/gcc-4.6.2/sparc-sun-solaris2.10/bin/ld) supports shared 
libraries... yes
checking whether -lc should be 

[sqlite] SQLite reading old data

2011-12-15 Thread Fabrizio Steiner
Hello

I'm currently facing a problem with the System.Data.SQLite ADO.NET provider. 
I'm using stamps (a simple integer column) on the rows in order to detect 
whether another user has altered the same data row between the time the first 
user started the edit and the moment he invoked the save. Every update 
increments this stamp, and the UPDATE statement is constrained with the old 
stamp. If the update affects no rows, several cases are checked:
 - If the data row has been deleted, the user is informed about this.
 - If the data row still exists, the current stamp in the database is retrieved.
   - If the db stamp is greater than the reference stamp, the user is informed 
that another user changed the data in the meantime.
   - If the db stamp is smaller than or equal to the reference stamp, an 
exception is thrown, because the stamp can never get smaller.
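For concreteness, the stamp scheme described above can be sketched like this (a 
minimal illustration using Python's sqlite3 module; the table, column, and 
function names here are my own invention, not from the original application):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE doc (id INTEGER PRIMARY KEY, body TEXT, stamp INTEGER)")
con.execute("INSERT INTO doc VALUES (1, 'v1', 1)")
con.commit()

def save(con, row_id, new_body, old_stamp):
    # Constrain the UPDATE with the stamp we read earlier; it only
    # succeeds if nobody bumped the stamp in the meantime.
    cur = con.execute(
        "UPDATE doc SET body = ?, stamp = stamp + 1 "
        "WHERE id = ? AND stamp = ?",
        (new_body, row_id, old_stamp))
    con.commit()
    if cur.rowcount == 1:
        return "saved"
    row = con.execute("SELECT stamp FROM doc WHERE id = ?", (row_id,)).fetchone()
    if row is None:
        return "deleted"       # another user removed the row
    if row[0] > old_stamp:
        return "conflict"      # another user saved first
    raise RuntimeError("stamp went backwards - should be impossible")

print(save(con, 1, "v2", 1))   # saved
print(save(con, 1, "v3", 1))   # conflict (stamp is now 2)
```

The exception branch at the end corresponds to the "db stamp smaller or equal" 
case that the original poster is seeing, which should indeed be unreachable 
under serialized updates.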

I tried to simulate 3 users updating rows in the same table, each user 
changing its own row. Basically I had 3 threads, each first retrieving a row 
over one connection, then updating the data and storing it back into the table 
over another connection. Every thread ran a loop of several edits with a sleep 
in between.

Surprisingly, I got exceptions for some edits because the database stamp was 
smaller than the current reference stamp, which should never happen. So it 
seems SQLite returned an old version of that row. Having stopped the 
application when the exception occurred, I retrieved the same row in a 
management tool and saw the correct stamp and data for that particular row.

I'm using identical code for other database systems such as SQL Server, and 
there it works fine; the database layer is completely abstracted.

The System.Data.SQLite version is 1.0.77.0, and the database is encrypted 
with the included RC4 encryption.

Has anybody encountered anything similar?

Thanks for any help.

Kind Regards
Fabrizio


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] lost primary key

2011-12-15 Thread Igor Tandetnik
YJM YAN  wrote:
> source table:
> DDL:
> "create table xxx_table(id integer primary key autoincrement, field1,
> field2)"
> 
> int ret = sqlite3_exec(m_db,"attach './backup.db' as filedb",0,0,NULL);
> ret = sqlite3_exec(m_db,"begin transaction",0,0,NULL);
> ret = sqlite3_exec(m_db,"create table filedb.xxx_table as select * from
> xxx_table",0,0,NULL);
> ret = sqlite3_exec(m_db,"commit transaction",0,0,NULL);
> ret = sqlite3_exec(m_db,"detach filedb",0,0,NULL);
> 
> destination table:
> DDL:
> "create table xxx_table(id integer, field1, field2)"
> 
> use this way to back up a table, lost the primary key property.
> 
> How to configure its?

Instead of "create table as select", run a separate CREATE TABLE statement to 
create the new table the way you want it. Then run INSERT ... SELECT statement 
to copy the data over.
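As a rough sketch of that recipe (shown here with Python's sqlite3 for 
brevity, using a second table in the same database instead of the attached 
backup file from the original post):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE xxx_table (id INTEGER PRIMARY KEY AUTOINCREMENT,"
            " field1, field2)")
con.execute("INSERT INTO xxx_table (field1, field2) VALUES ('a', 'b')")

# CREATE TABLE ... AS SELECT would drop the PRIMARY KEY, so declare
# the destination schema explicitly, then copy the rows over.
con.execute("CREATE TABLE backup_table (id INTEGER PRIMARY KEY AUTOINCREMENT,"
            " field1, field2)")
con.execute("INSERT INTO backup_table SELECT * FROM xxx_table")
con.commit()

ddl = con.execute("SELECT sql FROM sqlite_master WHERE name = 'backup_table'"
                  ).fetchone()[0]
print("PRIMARY KEY" in ddl)  # True
```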
-- 
Igor Tandetnik



[sqlite] how to store a object into database?

2011-12-15 Thread jiffies
hello,
  I'm a new programmer to SQLite and SQL. I have a struct like this:
typedef struct program
{
    int tp_id;

    UM_U32 provider_lock : 1;
    UM_U32 subtitle_pid  : 13;
    UM_U32 teletext_pid  : 13;
    UM_U32 cur_audio     : 5;

    UM_U8  oes_type[MAX_OES_CNT];
    UM_U16 oes_pid[MAX_OES_CNT];
} /*__attribute__((packed))*/ P_NODE;

What SQL should I use?
How do I create a table?
How do I insert rows?
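One possible mapping, purely as an illustrative sketch (not from the thread): 
the bit-fields can be stored as plain INTEGER columns, since SQLite does not 
care about field width, and the two fixed-size C arrays fit naturally into 
BLOBs of raw bytes.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Each bit-field becomes an ordinary integer column; the arrays
# oes_type[] and oes_pid[] are stored as raw byte blobs.
con.execute("""
    CREATE TABLE program (
        tp_id         INTEGER PRIMARY KEY,
        provider_lock INTEGER,   -- was a 1-bit field
        subtitle_pid  INTEGER,   -- was a 13-bit field
        teletext_pid  INTEGER,   -- was a 13-bit field
        cur_audio     INTEGER,   -- was a 5-bit field
        oes_type      BLOB,      -- raw bytes of UM_U8 oes_type[MAX_OES_CNT]
        oes_pid       BLOB       -- raw bytes of UM_U16 oes_pid[MAX_OES_CNT]
    )""")
con.execute(
    "INSERT INTO program VALUES (?, ?, ?, ?, ?, ?, ?)",
    (1, 0, 0x1FFF, 0, 4, bytes([1, 2, 3]), bytes(6)))
con.commit()

row = con.execute("SELECT subtitle_pid, oes_type FROM program").fetchone()
print(row)  # (8191, b'\x01\x02\x03')
```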


Re: [sqlite] Aggregating Forex data

2011-12-15 Thread Rafael Garcia Leiva


I have added a new column with the date in seconds, created an index on this 
column, and modified the query a little. Now the query takes less than 20 
seconds. Since this is the worst case, I'm happy with the result.


Just for reference, the final query is:

SELECT STRFTIME('%Y-%m-%d %H %M', MIN(date)) AS Date,
   (SELECT open FROM eurusd e2
WHERE e2.unix >= e1.unix / 300 * 300
ORDER BY e2.unix ASC LIMIT 1) AS Open,
   MAX(high) as High,
   MIN(low) as Low,
   (SELECT close FROM eurusd e3
WHERE e3.unix < (e1.unix / 300 + 1) * 300
ORDER BY e3.unix DESC LIMIT 1) AS Close
FROM eurusd e1
GROUP BY e1.unix / 300

The query plan (that I cannot interpret) is:

0|0|0|SCAN TABLE eurusd AS e1 (~100 rows)
0|0|0|USE TEMP B-TREE FOR GROUP BY
0|0|0|EXECUTE CORRELATED SCALAR SUBQUERY 1
1|0|0|SEARCH TABLE eurusd AS e2 USING INDEX unix_index (unix>?) (~25 rows)
0|0|0|EXECUTE CORRELATED SCALAR SUBQUERY 2
2|0|0|SEARCH TABLE eurusd AS e3 USING INDEX unix_index (unix<?) (~25 rows)


Many thanks for your support.

Best regards

Rafael

On 13/12/2011 at 18:40, Simon Slavin wrote:

On 13 Dec 2011, at 5:38pm, jr wrote:


since much of that time will be spent in the strftime() calls and calculations, 
how about adding a lookup table with pre-computed date/times?

Actually, with this much use of the date/time information I'd be storing it all 
as numbers.  And if you're wedded to the 5 minute period you might even want to 
create a column specifically for which 5 minute period the time/date falls in.

Simon.
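Simon's suggestion of a dedicated 5-minute-period column can be sketched as 
follows (Python's sqlite3; the `period` column name is my own choice, not 
from the thread):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE eurusd (unix INTEGER, close REAL)")
con.executemany("INSERT INTO eurusd VALUES (?, ?)",
                [(10, 1.1), (200, 1.2), (400, 1.3)])

# Store the 5-minute bucket once, index it, and GROUP BY the column
# instead of recomputing unix / 300 for every row of every query.
con.execute("ALTER TABLE eurusd ADD COLUMN period INTEGER")
con.execute("UPDATE eurusd SET period = unix / 300")
con.execute("CREATE INDEX period_index ON eurusd(period)")
con.commit()

buckets = con.execute(
    "SELECT period, COUNT(*) FROM eurusd GROUP BY period ORDER BY period"
    ).fetchall()
print(buckets)  # [(0, 2), (1, 1)]
```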


