Yuri wrote:
> I have to call sqlite3_blob_write to partially modify blobs.
>
> Wouldn't it have been better if there was a function, for example,
> writeblob(tbl.blob_fld, offset, portion_data)? SQLite could do the
> same thing that sqlite3_blob_write does when the writeblob result is
> assigned to the s
Not directly, but you can query the locks on the file as per documentation:
struct flock v_pending;  // = { F_WRLCK, SEEK_SET, PENDING_BYTE, 1, 0 };
struct flock v_reserved; // = { F_WRLCK, SEEK_SET, RESERVED_BYTE, 1, 0 };
struct flock v_shared;   //
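For reference, a minimal sketch of probing those lock bytes with fcntl(F_GETLK).
The PENDING_BYTE/RESERVED_BYTE offsets below are the defaults from SQLite's
os.h and everything else (the fd, the wrapper) is assumed, so treat this as an
illustration rather than documented API:

#include <fcntl.h>
#include <stdio.h>

#define PENDING_BYTE   0x40000000            /* default from SQLite's os.h (assumption) */
#define RESERVED_BYTE  (PENDING_BYTE + 1)
#define SHARED_FIRST   (PENDING_BYTE + 2)

/* Ask the kernel whether any process holds a lock covering one byte of the db file. */
static void probe_lock(int fd, long where, const char *name) {
    struct flock lk = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
                        .l_start = where, .l_len = 1 };
    if (fcntl(fd, F_GETLK, &lk) == 0) {
        if (lk.l_type == F_UNLCK)
            printf("%s byte: not locked\n", name);
        else
            printf("%s byte: held by pid %ld\n", name, (long)lk.l_pid);
    }
}

/* usage: probe_lock(fd, PENDING_BYTE, "PENDING");
 *        probe_lock(fd, RESERVED_BYTE, "RESERVED");
 *        probe_lock(fd, SHARED_FIRST, "SHARED"); */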
On Wednesday, 1 March, 2017 14:12, Deon Brewis wrote:
> "But look how much space and processing time it would take up"
> Can you clarify what you mean by "space" ?
> The processing time argument I understand.
I doubt there is any "space" requirement at all. In fact, since you are not
carr
On Wednesday, 1 March, 2017 12:21, James K. Lowden
wrote:
> Probably I'm overlooking something, and probably this is not the right
> forum. SQLite adheres to the SQL standard; that's established. Why
> does the standard say what it does? I'm only saying it's not clear to
> me that there's the
The extra space would just be the few bytes to store the text, which is nothing
compared to the "extra" index size. And since it's an "infrequent update" case,
the extra time wouldn't be too bad either. The normal way would have an extra
index to update; the trigger way would be less efficient. On a
Right, you need to use the birthday-problem (birthday attack) math to determine
collision risk.
I wouldn't mind hashing - but there is an additional complication - the query I
want the covered field in the index for is this:
SELECT ExtraCol FROM Foo WHERE UniqueCol > range_start AND UniqueCol < range_end
Hmm - a bit of Google-fu and refreshing my memory on the chances of collisions
means my 1:9,223,372,036,854,775,808 is way off.
That is the chance of any two particular hashes colliding - you'll have lots of hashes.
The basic idea might be sound though; even if you stick with a full MD5 it
should save a chunk of storage.
Pa
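To put numbers on that correction: with a 64-bit hash the usual birthday
approximation is P(collision) ~ 1 - exp(-n*(n-1)/(2*2^64)) for n hashes. A
throwaway sketch (the row counts are made up):

#include <math.h>
#include <stdio.h>

int main(void) {
    double space = ldexp(1.0, 64);              /* 2^64 possible hash values */
    double counts[] = { 1e6, 1e8, 1e9, 1e10 };  /* hypothetical numbers of rows */
    for (int i = 0; i < 4; i++) {
        double n = counts[i];
        double p = 1.0 - exp(-n * (n - 1.0) / (2.0 * space));
        printf("n = %.0e hashes -> P(any collision) ~= %.3g\n", n, p);
    }
    return 0;
}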
As a bit of an off the wall suggestion you could try an MD5 (or even a
partial MD5 - half of the bytes)
CREATE TABLE hashes (
  hash INTEGER PRIMARY KEY  -- just the first 64 bits of the hash of UniqueCol and ExtraCol
);
as an integer primary key the hash would be an alias of the rowid and so
st
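To make the "first 64 bits" part concrete, a hedged sketch of folding a digest
into that INTEGER PRIMARY KEY; the digest itself (MD5 or anything else) is
assumed to be computed elsewhere, and the table name is the hypothetical
"hashes" from above:

#include <sqlite3.h>

static int insert_hash(sqlite3 *db, const unsigned char digest[16]) {
    sqlite3_uint64 u = 0;
    for (int i = 0; i < 8; i++)
        u = (u << 8) | digest[i];               /* first 64 bits, big-endian */
    sqlite3_int64 key = (sqlite3_int64)u;       /* may be negative; still a valid key */

    sqlite3_stmt *stmt = 0;
    int rc = sqlite3_prepare_v2(db, "INSERT INTO hashes(hash) VALUES (?1)",
                                -1, &stmt, 0);
    if (rc == SQLITE_OK) {
        sqlite3_bind_int64(stmt, 1, key);
        rc = (sqlite3_step(stmt) == SQLITE_DONE) ? SQLITE_OK : sqlite3_errcode(db);
    }
    sqlite3_finalize(stmt);
    return rc;
}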
I have to call sqlite3_blob_write to partially modify blobs.
Wouldn't it have been better if there was a function, for example,
writeblob(tbl.blob_fld, offset, portion_data)? SQLite could do the same
thing that sqlite3_blob_write does when the writeblob result is assigned to
the same row/column tha
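For anyone following along, this is what the request boils down to at the C
level with the existing incremental-blob API. Table and column names are taken
from the example above, everything else is assumed, and the blob must already
exist at its full size before it can be written piecewise:

#include <sqlite3.h>

/* Overwrite part of tbl.blob_fld in one row without rewriting the whole blob. */
static int write_portion(sqlite3 *db, sqlite3_int64 rowid,
                         const void *portion_data, int portion_len, int offset) {
    sqlite3_blob *blob = 0;
    int rc = sqlite3_blob_open(db, "main", "tbl", "blob_fld",
                               rowid, 1 /* read-write */, &blob);
    if (rc == SQLITE_OK)
        rc = sqlite3_blob_write(blob, portion_data, portion_len, offset);
    sqlite3_blob_close(blob);   /* documented as a harmless no-op on a NULL handle */
    return rc;
}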
On 1 Mar 2017, at 9:41pm, Deon Brewis wrote:
> Yeah ok, but that is paltry compared with the GBs of disk space that the
> actual second index takes up. But thanks for clarifying.
Ah. If it’s really GBs of disk space then I can see why you’d look for
alternative solutions.
But I have a 43 GB
On 1 Mar 2017, at 9:17pm, Bob Friesenhahn wrote:
> I am not sure of all of the conditions which might result in "Database is
> locked". For example, if the results of a query are being iterated through
> and the reader stops iterating part way through the result set, might this
> cause "Data
Yeah ok, but that is paltry compared with the GBs of disk space that the actual
second index takes up. But thanks for clarifying.
-Deon
On 1 Mar 2017, at 9:11pm, Deon Brewis wrote:
> "But look how much space and processing time it would take up"
>
> Can you clarify what you mean by "space" ?
Your triggers have to be stored. Every time they’re activated (each time you
insert a row) they have to be run. That requires SQLite t
On Wed, 1 Mar 2017, Simon Slavin wrote:
On 1 Mar 2017, at 7:27pm, Bob Friesenhahn wrote:
What is the recommended approach to diagnosing "Database is locked" errors and
determining the guilty party?
Are you checking the result codes returned by all SQLite commands you give to
make sure th
"But look how much space and processing time it would take up"
Can you clarify what you mean by "space" ?
The processing time argument I understand.
I think this is one of those things that, if the database engine doesn't
support it internally, can't really be emulated.
Sure would be nice
On 1 Mar 2017, at 7:27pm, Bob Friesenhahn wrote:
> What is the recommended approach to diagnosing "Database is locked" errors
> and determining the guilty party?
Are you checking the result codes returned by all SQLite commands you give to
make sure they are SQLITE_OK ?
I’m not talking about
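A minimal sketch of the kind of checking being described here - every result
code gets inspected and anything unexpected is logged before concluding that
the database is stuck (the wrapper name is made up):

#include <sqlite3.h>
#include <stdio.h>

static int exec_checked(sqlite3 *db, const char *sql) {
    char *errmsg = 0;
    int rc = sqlite3_exec(db, sql, 0, 0, &errmsg);
    if (rc != SQLITE_OK) {
        fprintf(stderr, "'%s' failed: rc=%d (%s): %s\n",
                sql, rc, sqlite3_errstr(rc),
                errmsg ? errmsg : sqlite3_errmsg(db));
        sqlite3_free(errmsg);
    }
    return rc;
}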
On 1 Mar 2017, at 8:21pm, David Raymond wrote:
> The trigger version you asked about would look something like the below I
> believe. More risky than having the two indexes, but should work. (Famous
> last words)
I have no doubt it would work. But look how much space and processing time it
On Wed, 1 Mar 2017, Stephen Chrzanowski wrote:
Where is the database being stored? What OS is the software running
under? If Windows, I'd suggest looking up SysInternals and downloading the
package to see what has its grubby hands on the database. If Linux, as
root, run an LSOF (Lower case) a
The trigger version you asked about would look something like the below I
believe. More risky than having the two indexes, but should work. (Famous last
words)
create table Blah
(
  Id integer primary key,
  UniqueCol blob,
  ExtraCol blob
);
create index blahIndex on Blah (UniqueCol, ExtraCol);
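The trigger body itself was cut off in the archive; purely as a hedged guess
at the sort of thing meant, a BEFORE INSERT trigger along these lines would
abort inserts that duplicate UniqueCol (the wrapper and the trigger name are
invented, and an UPDATE OF UniqueCol twin would be needed for full coverage):

#include <sqlite3.h>

static int add_unique_trigger(sqlite3 *db) {
    return sqlite3_exec(db,
        "CREATE TRIGGER blahUniqueIns BEFORE INSERT ON Blah "
        "WHEN EXISTS (SELECT 1 FROM Blah WHERE UniqueCol = NEW.UniqueCol) "
        "BEGIN SELECT RAISE(ABORT, 'UniqueCol is not unique'); END;",
        0, 0, 0);
}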
Where is the database being stored? What OS is the software running
under? If Windows, I'd suggest looking up SysInternals and downloading the
package to see what has its grubby hands on the database. If Linux, as
root, run an LSOF (Lower case) and grep for the database or process
accessing the
We are sometimes encountering permanent "Database is locked" errors
from sqlite 3.10.2. After the database gets in this state it is not
possible to write to it or read from it. It is not possible to use
sqlite's built in backup mechanism since it produces a backup file of
zero bytes.
What i
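Not a fix for a lock that is genuinely stuck, but the usual first step is to
make sure transient contention retries instead of surfacing immediately; a
small sketch (the 5-second value is arbitrary):

#include <sqlite3.h>

static void set_busy_retry(sqlite3 *db) {
    /* Retry busy operations for up to 5000 ms before returning SQLITE_BUSY. */
    sqlite3_busy_timeout(db, 5000);
}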
On Tue, 28 Feb 2017 18:01:35 -0800
J Decker wrote:
> Then how would you properly find the children? Or find any
> information about their parent? The child would have to in turn be
> deleted.
>
> foreign keys are as good as pointers.
That's an interesting way to put it. One of Codd's first
On Tue, 28 Feb 2017 19:44:56 -0700
"Keith Medcalf" wrote:
> It has always been a requirement that FK relationships are 1:N
> mappings, otherwise update anomalies will occur. If you have a
> relational database that does not "naturally" have all FK
> relationships as 1:N, then you have not normal
On Wed, 1 Mar 2017 07:26:31 +
Hick Gunter wrote:
> Without a UNIQUE constraint there can be more than one row in the
> parent table that is the "parent row" of at least one row of the
> child table. To delete rows from the parent while still satisfying
> the EXISTS relation, you can delete al
On Wed, 1 Mar 2017 17:00:16 +
Deon Brewis wrote:
> If not, is there a way to efficiently implement a UNIQUE constraint
> in a different way? (Trigger maybe?)
I'll defer to others about how SQLite will respond, but I'd be tempted
to do this:
create table Foo(UniqueCol primary key, Ex
Thanks, thought about it. But I have a ROWID requirement as well due to foreign
keys referring back to this table via the ROWID.
I could in theory put just the RowId in an additional index, but then the
RowId index will just carry the entire UniqueCol again - back to the same
problem, just
Deon Brewis wrote:
> Is there a way to add non-unique columns in a unique index?
>
> I would like to use the same index to enforce unique constraints, as well as
> give a covered result for other queries.
>
> CREATE UNIQUE INDEX indx ON Foo(UniqueCol) INCLUDE (ExtraCol)
CREATE TABLE Foo (
Uniq
On 1 Mar 2017, at 5:00pm, Deon Brewis wrote:
> Is there a way to add non-unique columns in a unique index?
I don’t know of anything that operates in that way.
> If not, is there a way to efficiently implement a UNIQUE constraint in a
> different way? (Trigger maybe?)
Not efficiently. You coul
Is there a way to add non-unique columns in a unique index?
I would like to use the same index to enforce unique constraints, as well as
give a covered result for other queries.
Something like an 'INCLUDE' would also work (actually even better). E.g.
CREATE UNIQUE INDEX indx ON Foo(UniqueCol) INCLUDE (ExtraCol)
Logical expression always false
---
\ext\misc\ieee754.c line 89:
else if( m==0 && e>1000 && e<1000 )
Bye Martin Strunz
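The report is self-explanatory: no value of e can satisfy both comparisons at
once, so the branch is dead. Presumably (this is only a guess at the intent) a
symmetric range check was meant, e.g.:

else if( m==0 && e>-1000 && e<1000 )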
On 03/01/2017 09:53 PM, Simone Mosciatti wrote:
Hi all,
tl;dr:
What is the difference between running SQLite inside Tcl and running
it embedded in a shared object module? Why am I seeing such a big
difference in performance?
https://github.com/RedBeardLab/rediSQL/issues/13#issuecomment-2833096
@Richard,
I didn't know SQLite has a "REINDEX" statement - I used a "drop index" /
"create index" pair instead. Very helpful, thanks!
@Simon,
No, you didn't waste my time. As I said above, I wonder whether Richard would
have mentioned the "REINDEX" command to me if you hadn't commented :)
Thank you
Ryan,
I just want to thank you for your kindness and display of goodwill to
mankind. This is a great response. I even learned something from this
post. Thanks so much for your responses. There are others like us in
this group who love to learn, and your posts are always well received.
Thanks
Hi all,
tl;dr:
What is the difference between running SQLite inside Tcl and running it
embedded in a shared object module? Why am I seeing such a big difference in
performance?
https://github.com/RedBeardLab/rediSQL/issues/13#issuecomment-283309641
I finally got some time to work again on my
On 1 Mar 2017, at 1:52pm, Richard Hipp wrote:
> On 3/1/17, Simon Slavin wrote:
>
>> If you have a case where VACUUM does not fix a faulty index, that would be
>> very interesting.
>
> Not necessarily. VACUUM does not recreate the indexes, it just copies
> them, row by row. So if the index i
On 3/1/17, Simon Slavin wrote:
>
> If you have a case where VACUUM does not fix a faulty index, that would be
> very interesting.
Not necessarily. VACUUM does not recreate the indexes, it just copies
them, row by row. So if the index is self-consistent but it does not
match its table (it has ex
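The practical upshot, sketched here rather than quoted from anyone in the
thread: when an index is self-consistent but disagrees with its table, REINDEX
rebuilds it from the table contents, which VACUUM's row-by-row copy will not
do. The index name below is just the one from Edwin's integrity_check report
elsewhere in this thread:

#include <sqlite3.h>

/* Rebuild one named index from its table's contents. */
static int rebuild_index(sqlite3 *db, const char *index_name) {
    char *sql = sqlite3_mprintf("REINDEX \"%w\";", index_name);
    int rc = sqlite3_exec(db, sql, 0, 0, 0);
    sqlite3_free(sql);
    return rc;
}

/* e.g. rebuild_index(db, "IndexRetailItemLastModTime");
 * or sqlite3_exec(db, "REINDEX;", 0, 0, 0) to rebuild every index. */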
On 1 Mar 2017, at 12:28pm, Edwin Yip wrote:
> B - The "VACUUM" only fixes the issue for one table, but the very same issue
> remains for another table.
If you have a case where VACUUM does not fix a faulty index, that would be very
interesting. However, we would need a copy of the unfixed version
Hello Domingo,
Thanks for your reply.
I'm not sure, but I guess it was crashes of the program (that uses the DB) that
might have caused this issue.
So far, recreating the indexes has fixed the issue for two tables. I'll
report back here if I have further findings.
On Wed, Mar 1, 2017 at 8:41 PM,
Hello Edwin!
One thing that bugs me is how the indexes could become corrupt?
- SQLite bug?
- Hard drive fault?
Cheers!
On 01/03/17 09:28, Edwin Yip wrote:
Update 1 after more hours of checking:
A - It seems that the issue is related to indexes - "pragma integrity_check"
would resul
Update 1 after more hours of checking:
A - It seems that the issue is related to indexes - "pragma integrity_check"
would result in errors like:
row 23465 missing from index IndexRetailItemLastModTime
row 24187 missing from index IndexRetailItemLastModTime
B - The "VACUUM" only fixes the issue f
Hello,
I need some insights into an issue I spent hours tracking down - was it DB
file corruption or something else? Details below:
The table is called ProductType; "select count(*) from ProductType" returns
47, which is correct.
There is a column called "LastModTime" and the COLLATE is ISO860
Hi sqlite3 maintainers,
I found a use-after-free bug in sqlite3.
Here is the stack trace:
==12241== ERROR: AddressSanitizer: heap-use-after-free on address
0xb5a00590 at pc 0x8777099 bp 0xbfd44df8 sp 0xbfd44dec
READ of size 1 at 0xb5a00590 thread T0
#0 0x8777098 in sqlite3Strlen30 /home/b/as
On 2017/03/01 3:40 AM, do...@mail.com wrote:
# SELECT * FROM processors UNION SELECT * FROM storage;
Error: SELECTs to the left and right do not have the same number of
result columns.
All tables that I created in my database have differing column names,
values, and numbers of columns, with the
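A hedged illustration of why that error appears and the usual workaround, with
made-up column names since the real schemas aren't shown: each SELECT in a
UNION must produce the same number of result columns, so either list matching
columns explicitly or pad the shorter side with NULLs:

#include <sqlite3.h>

/* Both arms now produce three columns, so the UNION is legal. */
static const char *zUnionFixed =
    "SELECT name, price, NULL AS capacity FROM processors "
    "UNION ALL "
    "SELECT name, price, capacity FROM storage;";

/* usage: sqlite3_exec(db, zUnionFixed, callback, 0, &errmsg); */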