On 26 Nov 2009, at 5:54am, Edward Diener wrote:
> I have a table with an integer primary key as the first type. My
> understanding is that this is an alias for the rowid. When I insert a
> row in this table using sqlite3_prepare and then sqlite3_step I need to
> retrieve the rowid for the
I have a table with an integer primary key as the first type. My
understanding is that this is an alias for the rowid. When I insert a
row in this table using sqlite3_prepare and then sqlite3_step I need to
retrieve the rowid for the row I have just inserted. Is there an SQL
statement I can
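For what it's worth, the usual answer is sqlite3_last_insert_rowid() in the C API, or the equivalent SELECT last_insert_rowid() in SQL. A minimal sketch of the SQL side, using Python's stdlib sqlite3 module rather than the poster's C code (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

conn.execute("INSERT INTO items (name) VALUES ('widget')")
# The rowid of the most recent INSERT on this connection:
rowid = conn.execute("SELECT last_insert_rowid()").fetchone()[0]
print(rowid)   # 1 for the first insert

# Because "id INTEGER PRIMARY KEY" aliases the rowid, the id column
# holds the same value:
assert conn.execute(
    "SELECT id FROM items WHERE name = 'widget'").fetchone()[0] == rowid
```

Note that last_insert_rowid() is per-connection, so it is safe even when other connections are inserting concurrently.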
My application includes a main process and some other processes. I open the
database in another process, but at the end I close the database in the main
process.
The problem happens while I close the database. The main process is blocked,
and I can see the journal file is still there, so I guess
Schrum, Allan wrote:
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
>> boun...@sqlite.org] On Behalf Of Dmitri Priimak
>> Sent: Wednesday, November 25, 2009 11:39 AM
>> To: General Discussion of SQLite Database
>> Subject: Re: [sqlite] Error: file is
> -Original Message-
> From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> boun...@sqlite.org] On Behalf Of Dmitri Priimak
> Sent: Wednesday, November 25, 2009 11:39 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] Error: file is encrypted or is not a
Nicolas Rivera wrote:
> In trying to understand shared cache mode, I would like to know why one
> would Not use it.
It is not useful unless you open the same database multiple times
concurrently within the same process. Then it only saves you time
Simon Slavin wrote:
> On 25 Nov 2009, at 6:19pm, Dmitri Priimak wrote:
>
>
>> Simon Slavin wrote:
>>
>>> On 25 Nov 2009, at 6:09pm, Dmitri Priimak wrote:
>>>
>>>
000 6166 6c69 6465 7420 206f 706f 6e65 6420
010 7461 6261 7361 2065 7274 6e61 6173 7463
020
On 25 Nov 2009, at 6:19pm, Dmitri Priimak wrote:
> Simon Slavin wrote:
>> On 25 Nov 2009, at 6:09pm, Dmitri Priimak wrote:
>>
>>> 000 6166 6c69 6465 7420 206f 706f 6e65 6420
>>> 010 7461 6261 7361 2065 7274 6e61 6173 7463
>>> 020 6f69 206e 3632 3a20 6620 6c69 2065 7369
>>> 030
Simon Slavin wrote:
> On 25 Nov 2009, at 6:09pm, Dmitri Priimak wrote:
>
>
>> 000 6166 6c69 6465 7420 206f 706f 6e65 6420
>> 010 7461 6261 7361 2065 7274 6e61 6173 7463
>> 020 6f69 206e 3632 3a20 6620 6c69 2065 7369
>> 030 6520 636e 7972 7470 6465 6f20 2072 7369
>> 040 6e20
On 25 Nov 2009, at 6:09pm, Dmitri Priimak wrote:
> 000 6166 6c69 6465 7420 206f 706f 6e65 6420
> 010 7461 6261 7361 2065 7274 6e61 6173 7463
> 020 6f69 206e 3632 3a20 6620 6c69 2065 7369
> 030 6520 636e 7972 7470 6465 6f20 2072 7369
> 040 6e20 746f 6120 6420 7461 6261 7361
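For anyone else squinting at the dump: it looks like od -x output from a little-endian machine (an assumption, the thread doesn't say how it was produced), so each 16-bit word shows its two bytes swapped relative to file order. A quick sketch that undoes the swap recovers the error text, which matches the thread's subject line:

```python
# The words as posted above; od -x on little-endian hardware prints each
# 16-bit word with its two bytes swapped relative to file order.
words = """
6166 6c69 6465 7420 206f 706f 6e65 6420
7461 6261 7361 2065 7274 6e61 6173 7463
6f69 206e 3632 3a20 6620 6c69 2065 7369
6520 636e 7972 7470 6465 6f20 2072 7369
6e20 746f 6120 6420 7461 6261 7361
""".split()

# Swap each word's byte pair back and decode as ASCII.
text = "".join(chr(int(w[2:], 16)) + chr(int(w[:2], 16)) for w in words)
print(text)
```

The decoded text begins "failed to open d..." and ends "...encrypted or is not a databas", i.e. the familiar "file is encrypted or is not a database" error.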
Hi.
I noticed a strange problem with some of my sqlite databases. It does not
affect all of them.
I have a db file, which I modify from the crontab using the sqlite3 CLI.
Every once in a while, the file goes bad, in the sense that when I connect
to it using the sqlite3 CLI and do any select I get an error
Dear list,
I'm using SQLite 3.6.20 on an ARM Linux device which uses the UBIFS
filesystem (on OneNAND flash).
When I perform a database update, and cut the power a few seconds later, the
changes are rolled back
when the device restarts. This is because after the restart the journal file
has
What you are saying is that you are holding information about items which have
different characteristics. To represent these as relations you would have a
product entity, and then an attribute entity that would look like
(product_id, attribute_id, attribute_name, attribute_value), e.g.:
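A sketch of that product/attribute shape (table and column names are illustrative, run here through Python's stdlib sqlite3 module):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product (
    product_id INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE attribute (
    product_id INTEGER NOT NULL,
    attribute_id INTEGER NOT NULL,
    attribute_name TEXT NOT NULL,
    attribute_value TEXT,
    PRIMARY KEY (product_id, attribute_id)
);
""")

conn.execute("INSERT INTO product VALUES (1, 'shirt')")
conn.executemany(
    "INSERT INTO attribute VALUES (?, ?, ?, ?)",
    [(1, 1, 'colour', 'blue'), (1, 2, 'size', 'XL')],
)

# Each product carries only the attributes that apply to it.
rows = conn.execute(
    "SELECT attribute_name, attribute_value FROM attribute "
    "WHERE product_id = 1 ORDER BY attribute_id").fetchall()
for row in rows:
    print(row)
```

The design choice here is the usual entity-attribute-value trade-off: items with wildly different characteristics fit one schema, at the cost of pushing type checking of the values into the application.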
On Wed, Nov 25, 2009 at 4:43 AM, Pavel Ivanov wrote:
> Try to look at things not from the point of view of your application
> but from the point of view of the SQLite itself.
>
> > 1. backward compatibility. It worked before, up to 3.6.16, so probably
> > it should work
Hi,
In trying to understand shared cache mode, I would like to know why one
would Not use it. According to
http://www.hwaci.com/sw/sqlite/sharedcache.html, shared cache mode "can
significantly reduce the quantity of memory and IO required by the system."
In
So only one write transaction at a time is allowed per database, which
means there is no advantage, in terms of concurrency, to using shared
cache mode. Right?
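The one-writer-at-a-time rule is easy to see with two ordinary connections to the same file (a sketch with Python's stdlib sqlite3 module; this is not shared-cache mode specifically, just SQLite's normal file locking):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path, isolation_level=None)  # autocommit mode
writer.execute("CREATE TABLE t (a)")
writer.execute("BEGIN IMMEDIATE")   # take the write lock and hold it
writer.execute("INSERT INTO t VALUES (1)")

# A second connection cannot start its own write transaction while
# the first holds the lock; it gets SQLITE_BUSY instead.
other = sqlite3.connect(path, timeout=0.1)
err = None
try:
    other.execute("INSERT INTO t VALUES (2)")
except sqlite3.OperationalError as e:
    err = e
print(err)   # database is locked

writer.execute("COMMIT")
```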
> On 11/24/2009 4:17 PM, Pavel Ivanov wrote:
>
> Indeed, it's weird. And I've just realized that if we have two
> simultaneous
Sorry, I was a bit confused.
You are right :-) Of course FOREIGN KEY makes no sense in a column
constraint. ...
Pavel Ivanov schrieb:
> According to http://www.sqlite.org/lang_createtable.html you can
> mention a foreign-key-clause (starting with REFERENCES) as a
> column-constraint. Why doesn't it work
According to http://www.sqlite.org/lang_createtable.html you can
mention a foreign-key-clause (starting with REFERENCES) as a
column-constraint. Why doesn't it work for you?
Pavel
On Wed, Nov 25, 2009 at 10:33 AM, Jan wrote:
> Hi,
>
> I am testing the new fk support in my db.
Hi,
I am testing the new fk support in my db. Currently I have *column
constraints* for fk that were parsed by genfkey to create triggers.
Simply adding FOREIGN KEY (column) to the column constraint seems not to
work. But moving everything to the end of the table definition as a
table constraint
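As Pavel says, a REFERENCES clause directly on the column does work as a column constraint, without the FOREIGN KEY keyword. A sketch via Python's stdlib sqlite3 module (schema names are illustrative; note that enforcement is off unless you enable it on each connection):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # enforcement is per-connection

conn.executescript("""
CREATE TABLE artist (id INTEGER PRIMARY KEY, name TEXT);
-- REFERENCES as a column constraint, no separate FOREIGN KEY clause:
CREATE TABLE track (
    id INTEGER PRIMARY KEY,
    artist_id INTEGER REFERENCES artist(id)
);
""")

conn.execute("INSERT INTO artist VALUES (1, 'Someone')")
conn.execute("INSERT INTO track VALUES (1, 1)")        # ok: parent exists
err = None
try:
    conn.execute("INSERT INTO track VALUES (2, 99)")   # no artist 99
except sqlite3.IntegrityError as e:
    err = e
print(err)   # FOREIGN KEY constraint failed
```

The standalone FOREIGN KEY (column) ... form only belongs at the end of the table definition, as a table constraint.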
> The same is true of FOREIGN KEY, by the way (I checked), but that's a bit
> more obvious since breaking FOREIGN KEY will always result in a database the
> programmer would consider corrupt.
You're not quite right. You're talking about immediate foreign keys.
There are also deferred foreign keys
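To illustrate the difference (a sketch using Python's stdlib sqlite3 module; schema names are illustrative): a DEFERRABLE INITIALLY DEFERRED foreign key is only checked at COMMIT, so a temporarily dangling child row is allowed inside the transaction:

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # explicit BEGIN/COMMIT
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE parent (id INTEGER PRIMARY KEY);
CREATE TABLE child (
    pid INTEGER REFERENCES parent(id) DEFERRABLE INITIALLY DEFERRED
);
""")

# A dangling child row is tolerated inside the transaction ...
conn.execute("BEGIN")
conn.execute("INSERT INTO child VALUES (1)")
conn.execute("INSERT INTO parent VALUES (1)")   # repaired before commit
conn.execute("COMMIT")                          # fine: constraint now holds

# ... but the check still happens, at COMMIT time.
err = None
conn.execute("BEGIN")
conn.execute("INSERT INTO child VALUES (2)")    # dangling, never repaired
try:
    conn.execute("COMMIT")
except sqlite3.IntegrityError as e:
    err = e
    conn.execute("ROLLBACK")
print(err)   # FOREIGN KEY constraint failed
```

So "the programmer would consider the database corrupt" only holds for immediate constraints; with deferred ones, mid-transaction inconsistency is by design.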
On 25 Nov 2009, at 2:06pm, Pavel Ivanov wrote:
>> I couldn't find the answer documented anywhere, so I will have to assume
>> that it may change in future versions. Unless the requirement for depth
>> first is somewhere in the SQL specification.
>
> I believe it should be. Triggers should be
> I couldn't find the answer documented anywhere, so I will have to assume that
> it may change in future versions. Unless the requirement for depth first is
> somewhere in the SQL specification.
I believe it should be. Triggers should be executed before the
statement causing them to fire is
On 25 Nov 2009, at 1:38pm, Pavel Ivanov wrote:
> Does this answer the question?
I think it does for the current version: depth first. Thanks.
I couldn't find the answer documented anywhere, so I will have to assume that
it may change in future versions. Unless the requirement for depth first
Does this answer the question?
sqlite> create table log (t);
sqlite> create table t1 (a);
sqlite> create table t2 (a);
sqlite> create trigger tt1 after update on t1 begin
...> insert into t2 values (new.a);
...> insert into log values ("update of t1, a="||new.a);
...> end;
sqlite> create
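A self-contained version of Pavel's demonstration (run through Python's stdlib sqlite3 module; the second trigger tt2 is an addition of mine to make the ordering visible). The log shows the nested trigger's entry first, i.e. depth first:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE log (t);
CREATE TABLE t1 (a);
CREATE TABLE t2 (a);
CREATE TRIGGER tt1 AFTER UPDATE ON t1 BEGIN
    INSERT INTO t2 VALUES (new.a);
    INSERT INTO log VALUES ('update of t1, a=' || new.a);
END;
CREATE TRIGGER tt2 AFTER INSERT ON t2 BEGIN
    INSERT INTO log VALUES ('insert into t2, a=' || new.a);
END;
INSERT INTO t1 VALUES (1);
UPDATE t1 SET a = 2;
""")

# tt2 fires as soon as tt1's first statement inserts into t2, before the
# rest of tt1's body runs, so its log entry comes first: depth first.
entries = [t for (t,) in conn.execute("SELECT t FROM log ORDER BY rowid")]
for t in entries:
    print(t)
```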
On 25 Nov 2009, at 12:26pm, Antti Nietosvaara wrote:
> Simon Slavin wrote:
>> I assume your database file is on your boot volume. What operating system
>> are you using ?
>>
>
> Actually the database is alone in its own partition.
Ah. That's better in some ways. But I think you're still
Try to look at things not from the point of view of your application
but from the point of view of the SQLite itself.
> 1. backward compatibility. It worked before, up to 3.6.16, so probably it
> should work the same now.
It was undefined behavior up to 3.6.16, and it is undefined behavior now.
Simon Slavin wrote:
> I assume your database file is on your boot volume. What operating system
> are you using ?
>
Actually the database is alone in its own partition. I'm currently
trying to avoid the problem by assigning "big enough" partition for the
db calculated from the estimated
On 11/25/09 10:50, "Simon Slavin" wrote:
> The message is that if you are short of
> space it is already too late for any software to cope with the problem.
>
I disagree. It all depends on where you set the threshold for "short of
space". To give you a trivial example,
On 25 Nov 2009, at 9:40am, Antti Nietosvaara wrote:
> I have an application which keeps an index of data in an SQLite
> database. I'm trying to figure out the best way to handle the possible
> scenario of the database filling up the entire hard disk. I could just
> delete some of the oldest
Deleting data may not free enough space in the database file to allow
new records to be added [the new records may contain more data]. You
could continually delete old records until an INSERT succeeds
(indicating there is enough space now). Otherwise, I'd say you'll just
have to monitor the hard disk
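One knob not mentioned in the thread, so treat this as an assumption about one reasonable approach rather than the poster's plan: PRAGMA max_page_count caps the database size, turning "disk full" into a catchable SQLITE_FULL error before the partition fills, at which point the delete-oldest-and-retry loop Simon describes becomes possible. A sketch with Python's stdlib sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA max_page_count = 10")   # cap the database at 10 pages
conn.execute("CREATE TABLE log (payload BLOB)")

inserted, err = 0, None
try:
    while True:
        conn.execute("INSERT INTO log VALUES (zeroblob(4000))")
        inserted += 1
except sqlite3.OperationalError as e:
    err = e          # "database or disk is full" (SQLITE_FULL)

# At this point the application could delete its oldest rows and retry:
conn.execute("DELETE FROM log WHERE rowid IN "
             "(SELECT rowid FROM log ORDER BY rowid LIMIT 2)")
print(inserted, err)
```

Sizing the cap to something safely below the partition size leaves room for the journal file, which max_page_count does not limit.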
Hello,
I have an application which keeps an index of data in an SQLite
database. I'm trying to figure out the best way to handle the possible
scenario of the database filling up the entire hard disk. I could just
delete some of the oldest rows, but I wonder if it's possible that even
delete