[sqlite] indexing speed

2010-08-02 Thread Paul Sanderson
I have a table with just a few columns; one is a SHA1 hash and the
second an MD5. There are about 17 million rows in the table.

If I create an index on the SHA1 column using "create index if not
exists sha1index on hashtable(sha1)", the process takes about 3
minutes. If I follow this immediately by creating a second index on
the md5 column with "create index if not exists md5index on
hashtable(md5)", this second process takes over 30 minutes.

Any ideas as to why this might be?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Inserting a large amount of data into a large indexed table

2010-08-03 Thread Paul Sanderson
Hi

I have a table that contains 6 columns of integers, 2 of which are
indexed. There are about 10 million rows of data in the table. Every
now and again I need to add more rows, between about a thousand and a
million at a time. I want the process to be as quick as possible
(although I know it will take minutes). The process at the moment is
to drop the indexes, add the new rows and then reindex.

Is this the best/fastest way of achieving this - is there a faster way?

P


Re: [sqlite] Inserting a large amount of data into a large indexed table

2010-08-03 Thread Paul Sanderson
Thanks Oliver, that is what I am doing at the moment. I was wondering
whether there might be a quicker way, maybe using temporary tables or
similar - just exploring ideas.
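As a sketch of the approach under discussion (illustrative table and index names, not from the thread), the drop/insert/reindex batch wrapped in a single transaction looks like this with Python's sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(a INTEGER, b INTEGER)")
con.execute("CREATE INDEX t_a ON t(a)")
con.execute("CREATE INDEX t_b ON t(b)")

new_rows = [(i, i * 2) for i in range(1000)]  # stand-in for the new batch
with con:  # one transaction around the whole batch
    con.execute("DROP INDEX t_a")
    con.execute("DROP INDEX t_b")
    con.executemany("INSERT INTO t VALUES (?, ?)", new_rows)
    con.execute("CREATE INDEX t_a ON t(a)")
    con.execute("CREATE INDEX t_b ON t(b)")

count = con.execute("SELECT count(*) FROM t").fetchone()[0]
index_names = sorted(r[1] for r in con.execute("PRAGMA index_list(t)"))
```

Whether dropping and rebuilding beats inserting with indexes in place depends on the batch size relative to the table; the single enclosing transaction matters either way, since per-row transactions are what usually dominate insert time.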

On Tuesday, August 3, 2010, Oliver Peters  wrote:
> Am Dienstag, den 03.08.2010, 23:24 +0100 schrieb Paul Sanderson:
>
> [...]
>
>> I want the process to be as quick as
>> possible (although I know it will take minutes). The process at the
>> moment is to drop the indexes, add the new rows and then reindex.
>>
>> Is this the best/fastest  way of achieving this - is there a faster way?
>
> [...]
>
> BEGIN TRANSACTION;
>
> INSERT INTO table(a,b,c) VALUES(1,2,3);
>
> --more INSERTS
>
> COMMIT;
>
>
> Oliver
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>

-- 
Paul Sanderson
Sanderson Forensics
+44 (0)1869 325667
www.sandersonforensics.com
http://www.twitter.com/sandersonforens


[sqlite] copying only new rows from one table to another

2010-08-06 Thread Paul Sanderson
Newbie question

I have two tables and I want to copy (occasionally) a few thousand
rows that have been added to table1 into table2; both tables have the
same unique key but otherwise the columns are different.

table2 will be initially populated from table1 using

INSERT INTO table2 SELECT refno FROM table1 WHERE flag > 0

I could drop table2 and then redo the above, but I would then
obviously lose any data that has been modified in table2. How can I
append just the new rows from table1 into table2?

Thanks


Re: [sqlite] copying only new rows from one table to another

2010-08-06 Thread Paul Sanderson
Thanks Igor - I'll give that a go.

On 6 August 2010 16:36, Igor Tandetnik  wrote:
> Paul Sanderson  wrote:
>> I have two tables and I want to copy (occasionally) a few thousand
>> columns that have been added to table1 into table2, both tables have
>> the same unique key but otherwise the columns are different.
>>
>> table2 will be initially populated from table1 using
>>
>> INSERT into table table2 SELECT refno from table1 WHERE flag > 0
>>
>> I could drop table2 and then do the above but will then obviously lose
>> any data that has been modified in table2. How can I just append the
>> new rows from table1 into table2
>
> If refno is a primary key in table2, or has a unique constraint on it, you 
> can do
>
> insert or ignore into table2(refno)
> select refno from table1;
>
> The "or ignore" part instructs SQLite to skip over all rows that would cause a
> uniqueness conflict if inserted.
> --
> Igor Tandetnik
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
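Igor's "insert or ignore" suggestion can be checked end to end; the schema below is a made-up stand-in for the real tables (refno as the shared unique key):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1(refno INTEGER PRIMARY KEY, flag INTEGER);
    CREATE TABLE table2(refno INTEGER PRIMARY KEY, note TEXT);
    INSERT INTO table1 VALUES (1, 1), (2, 1), (3, 0);
    -- table2 already holds refno 1 with locally modified data
    INSERT INTO table2 VALUES (1, 'edited');
""")

# Append only the rows whose key is not already in table2; existing
# rows (and their edits) are left untouched.
con.execute("""
    INSERT OR IGNORE INTO table2(refno)
    SELECT refno FROM table1 WHERE flag > 0""")

rows = con.execute("SELECT refno, note FROM table2 ORDER BY refno").fetchall()
# row 1 keeps its local edit; row 2 is appended with note = NULL
```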



-- 
Paul Sanderson
Sanderson Forensics
+44 (0)1869 325667
www.sandersonforensics.com
http://www.twitter.com/sandersonforens


[sqlite] Database is locked

2010-08-06 Thread Paul Sanderson
I expect that this has been asked before, but I can't find anything
useful via Google.

I am updating about 20 rows in a table, setting a particular column
to a particular value. The first time I do this all is OK; if I try
again a few seconds later I get a "database is locked" error.

The code is (using UniDAC), x is passed as a value to the function

try
{
  UniConnection1->StartTransaction();
  for(int i = 0; i < count; i++)
  {
    Tab->RecNo = i;
    Tab->Edit();
    Tab->FieldByName("cat")->AsInteger = x;
    ThumbTab->Post();
  }
}
__finally
{
  UniConnection1->Commit();
  Beep(1000, 200);
}

I have added the Beep to make sure that Commit is returning quickly,
which it is.

I am updating the same set of records. If I move on within the dataset
by count records and then update a new set of rows, all works OK; if I
then move back to the previous set and update, all is OK. The error
only happens when I try and update the same set of records on two
consecutive occasions.

What can cause the database to be locked in this way? Are there any
SQLite-isms I should be looking for?
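For illustration only (this does not diagnose the UniDAC code above): SQLite reports "database is locked" (SQLITE_BUSY) when a second writer tries to start while another connection, or an unfinished statement, still holds a lock. A minimal reproduction with two connections and a zero busy timeout:

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "demo.db")  # throwaway database file
con1 = sqlite3.connect(path, timeout=0, isolation_level=None)
con2 = sqlite3.connect(path, timeout=0, isolation_level=None)
con1.execute("CREATE TABLE t(cat INTEGER)")

con1.execute("BEGIN IMMEDIATE")           # con1 now holds the write lock
try:
    con2.execute("UPDATE t SET cat = 1")  # con2 cannot start its write
    result = "ok"
except sqlite3.OperationalError as e:
    result = str(e)                       # "database is locked"
con1.execute("COMMIT")
```

A non-zero busy timeout makes the second writer wait and retry instead of failing immediately, but the usual root cause is a statement or transaction left open somewhere (e.g. a dataset component keeping a SELECT statement un-reset), so finding and finishing that is the real fix.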


[sqlite] implementing a row cache

2010-08-28 Thread Paul Sanderson
I need very fast access to various subsets of data from a table, so I
am trying to write a cache for a subset of the rows. The cache must be
"sort"-aware. The table is read-only, so I don't need to worry about
keeping the cache up to date.

The table itself has about 30 columns and I need to cache all of them.
However, for various reasons there are implementation issues that mean
that my starting point is a query that includes the primary index (id)
and additional columns that are the columns on which the main table is
currently sorted.

What I want to do is a second query on the dataset selecting 100 rows
before and after the current row and place them into a second
(temporary) table.

For an unsorted table (or rather sorted by ID) I could do the following

select * from table where ID >= (refID-100) limit 200

This would return the 200 row window as required

However, the remaining fields are not all numeric, so this (minus 100)
approach will not work.

The dataset will already be sorted and is large (a few million rows),
so I need to avoid a subsequent sort - the user interacts in real time
and even a second's delay would be too long and make it too sluggish.

I have read the sqlite scrolling cursor page and think that I might
have a problem here but thought I would ask for help - Anyone have any
ideas how this might be achieved?

Cheers


Re: [sqlite] implementing a row cache

2010-08-28 Thread Paul Sanderson
Thanks for the input

SELECT * FROM table WHERE id BETWEEN (currentid-100) AND
(currentid+100) is rather obvious and I should have thought of it - an
SQL newbie working late at night and then getting up early with the
problem running around my head

However, although that works for the primary column, it won't help me
when I am sorted on (say) a names column. In this case my primary
query would be sorted by names (SELECT id, names FROM table ORDER BY
names) and I would then like to retrieve the 100 records either side
of the current record. The names column may contain duplicates but the
id column is unique. Could something like rowid be used, i.e. is there
some way of identifying the number of the current record with respect
to the current query rather than as an absolute reference?

If so that brings me to a related question. Does sqlite guarantee that
the rows returned by a sorted query on a column that does not contain
unique values will always be the same, i.e. will subsequent queries
return the rows in the same order?
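On recent SQLite (3.15+, for the row-value syntax), the usual way to get a deterministic order over a non-unique column is to add the unique id as a tie-breaker and use keyset comparisons rather than offsets. A sketch with made-up data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO t(id, name) VALUES (?, ?)",
                [(1, 'bob'), (2, 'ann'), (3, 'bob'), (4, 'cat'), (5, 'ann')])

# Current row is ('bob', 1); fetch it and the rows after it in
# (name, id) order, without an OFFSET and without ambiguity on ties.
rows = con.execute("""
    SELECT id, name FROM t
    WHERE (name, id) >= (?, ?)
    ORDER BY name, id
    LIMIT 3""", ('bob', 1)).fetchall()
# rows -> [(1, 'bob'), (3, 'bob'), (4, 'cat')]
```

The window before the current row is the mirror image: `(name, id) <= (?, ?) ORDER BY name DESC, id DESC`. On the second question: without such a tie-breaker SQLite makes no guarantee that rows with equal sort-key values come back in the same order on subsequent queries.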


Re: [sqlite] implementing a row cache

2010-08-28 Thread Paul Sanderson
Not a huge issue - if the record I want is not in the cache then I can
go and get it.

For the record, I have checked my code just doing the SQL queries and
it is too slow - hence the additional work here. The initial
implementation is faster and looks workable (although it does
introduce additional problems), but probably still not as fast as I
would like.


[sqlite] query help

2010-10-26 Thread Paul Sanderson
I have two tables; table b is a subset of table a. Both tables have
the same primary key.

I want to update the rows in table a with a single column from table
b. What SQL command would be most efficient for this?

Thanks


Re: [sqlite] query help

2010-10-26 Thread Paul Sanderson
Thank You.

On 26 October 2010 13:14, Igor Tandetnik  wrote:
> Paul Sanderson  wrote:
>> I have two tables, table b is a subset of table a. both tables have
>> the same primary key
>>
>> I want to update the rows from table a with a single column from table
>> b, what sql command would be most efficient for this?
>
> update a set ColumnToUpdate = coalesce(
>    (select ColumnToUpdate from b where b.KeyColumn = a.KeyColumn),
>    ColumnToUpdate);
>
> -- or
>
> insert or replace into a(ColumnToUpdate, AllOtherColumns)
> select b.ColumnToUpdate, a.AllOtherColumns
> from b join a on b.KeyColumn = a.KeyColumn;
>
> The second statement could be faster if b is much smaller than a, but is more 
> verbose and has to be updated whenever schema changes. Time both on real 
> data, see which one works better for you.
> --
> Igor Tandetnik
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
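Igor's first (coalesce) form can be run end to end; the two-table schema below is a made-up stand-in, with k as the shared key:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a(k INTEGER PRIMARY KEY, v TEXT);
    CREATE TABLE b(k INTEGER PRIMARY KEY, v TEXT);
    INSERT INTO a VALUES (1, 'old'), (2, 'old'), (3, 'old');
    INSERT INTO b VALUES (2, 'new');    -- b is a subset of a
""")

# Correlated subquery pulls b's value where a key match exists;
# coalesce keeps a's own value for rows with no match in b.
con.execute("""
    UPDATE a SET v = coalesce(
        (SELECT v FROM b WHERE b.k = a.k), v)""")

rows = con.execute("SELECT k, v FROM a ORDER BY k").fetchall()
# rows -> [(1, 'old'), (2, 'new'), (3, 'old')]
```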



-- 
Paul Sanderson
Sanderson Forensics
+44 (0)1325 572786
www.sandersonforensics.com
http://www.twitter.com/sandersonforens


[sqlite] Reducing time to create indexes

2010-11-23 Thread Paul Sanderson
I have a table with over 1 million rows and 20+ columns, all of which
are indexed. I reasonably regularly recreate the table with new data
and find that the indexing process takes about 30 minutes. Are there
any settings/tweaks that I can use to reduce the time required to
create the indexes?

The columns are mainly integers, but with a few text columns normally
containing between 30 and 1,000 characters.

Thanks
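A few settings commonly tried before bulk index builds, shown as a hedged sketch on a toy table (the right values depend on available RAM on current SQLite versions, and synchronous = OFF trades crash safety for speed, so it suits rebuild-from-scratch jobs only):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Illustrative tuning, not a prescription:
con.execute("PRAGMA cache_size = -200000")  # ~200 MB page cache
con.execute("PRAGMA temp_store = MEMORY")   # index-build sort spills to RAM
con.execute("PRAGMA synchronous = OFF")     # unsafe on power loss

con.execute("CREATE TABLE t(a INTEGER, b TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                ((i, str(i)) for i in range(1000)))
con.execute("CREATE INDEX t_a ON t(a)")
con.execute("CREATE INDEX t_b ON t(b)")

created = sorted(r[1] for r in con.execute("PRAGMA index_list(t)"))
```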


Re: [sqlite] Reducing time to create indexes

2010-11-24 Thread Paul Sanderson
Unfortunately all of the columns are presented in a grid that users
can sort on any column. Although they are less likely to sort on
certain columns, if they do choose to do so then an excessive delay is
not really acceptable.

Currently I create ascending and descending indexes for each column
individually i.e.

create index if not exists name_a on table(name asc)
create index if not exists name_d on table(name desc)
etc.

The data is entered in order for the primary index

File size varies from case to case, but typically it is about 300MB
prior to indexing and 600MB with indexes.

I'll have a look at FTS, as there are other benefits to using that.
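One thing worth checking before doubling every index: SQLite can scan an ascending index backwards to satisfy an ORDER BY ... DESC, so the separate _d copies may be unnecessary. A quick check with EXPLAIN QUERY PLAN on an illustrative schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(name TEXT)")
con.execute("CREATE INDEX name_a ON t(name ASC)")

# If the ascending index satisfies the descending sort, the plan uses
# the index and has no 'USE TEMP B-TREE FOR ORDER BY' step.
plan = " ".join(row[-1] for row in con.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM t ORDER BY name DESC"))
```

Dropping the descending duplicates would roughly halve both index-build time and the index portion of the file size.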


Re: [sqlite] Reducing time to create indexes

2010-11-24 Thread Paul Sanderson
Thanks All

Simon, I'll try that and see what difference it makes.


Re: [sqlite] Reducing time to create indexes

2010-11-24 Thread Paul Sanderson
Stuck with a 32-bit Windows set-up as the lowest common denominator,
so in-memory is not feasible - I have seen some of my DBs in excess of
2GB, and users might object even if I could nick that much memory.


Re: [sqlite] Reducing time to create indexes

2010-11-25 Thread Paul Sanderson
Thats interesting Max, I'll look at that.

There are two main string fields: one has an average of about 15
characters per field with a max of about 100; the other has an average
length of about 150 characters with a max of about 250 (although
occasionally up to 500 or so).

It may be possible to split the table into two with the main fields in
one and all of the remainder (including the text fields) in the other.

I'll have a play and see what I come up with



On 25 November 2010 10:20, Max Vlasov  wrote:
> On Wed, Nov 24, 2010 at 1:13 AM, Paul Sanderson <
> sandersonforens...@gmail.com> wrote:
>
>> I have a table with over 1 million rows and 20+ columns all of which
>> are indexed, I reasonably regularly recreate the table with new data
>> and find that the indexing process takes about 30 minutes. Are there
>> any settings/tweaks that I can use to reduce the time required to
>> create the index?
>>
>>
> Paul, thought about it recently. I think there's something you could do if
> you're ready to change the scheme a little.
>
> The index creation involves reading and writing. The size of the index in
> your case for most of the fields is smaller than the data of the main table
> so we'll forget about writing, possible it works well using the sqlite and
> system cache.
>
> But your reading for every index should read the whole table and the full
> record (if it fits on a single page) even if the index needs only one field.
> This process of full table reading can be comparatively fast (if your base
> is not fragmented), slow (if it's heavily fragmented) and amazingly fast if
> your entire table luckily fitted in the os system cache when your first
> field was indexed. And the latter is possible if your text fields contribute
> much to the record size and you're ready to move it to a different table
> accessing later with a join. I don't know your exact numbers, but 1m records
> with 20 modest integers (using much few bits than 64) should take about
> 50MB-100MB and many modern software/hardware configurations will possibly
> read it just once accessing this data from the cache when your second and
> later indexes are created.
>
> Max Vlasov
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] Question

2010-11-30 Thread Paul Sanderson
As said, off topic - but I have never heard of a cell ID being
recorded with an SMS message. You can find more info on the SMS
message format by searching for "sms pdu" (protocol data unit); you
can see from this somewhat complex structure that the cell ID is not
part of the transmitted message format. This information is available
at the handset, though, and it is *possible* that it is recorded on
specific handsets, although I have never seen it.


[sqlite] forcing flush of journal

2011-01-26 Thread Paul Sanderson
I have an SQLite database and an associated .journal file that I need
to look at. I have established that there is data in the journal that
seems relevant and want to flush the journal (if that is the correct
term) and then view the tables with the updated data. Is there a way
to do this?

Thanks


Re: [sqlite] forcing flush of journal

2011-01-27 Thread Paul Sanderson
That's what I thought, thanks.


Re: [sqlite] Return a column's contents to the free list

2018-06-09 Thread Paul Sanderson
 On 9 June 2018 at 16:17, Jay Kreibich  wrote:

>
> Without digging through some detailed docs, I’m pretty sure empty string
> and NULL require the same amount of storage space.  If not, the difference
> is maybe one byte.
>
>
You are correct Jay

The serial types for NULL, 0 and 1 are 0, 8 and 9 respectively, and
there is no data stored for the column - the content of the column is
inherent in the serial type array.

Blobs and strings with no content are serial types 12 and 13, and
there is also zero bytes of data associated with each.



Paul
www.sandersonforensics.com
SQLite Forensics Book 


[sqlite] unique values from a subset of data based on two fields

2018-06-29 Thread Paul Sanderson
I have a table

Create table names (id int, status int, name text)



1, 1, 'paul'

2, 1,'helen'

3, 0, 'steve'

4, 0, 'steve'

5, 0, 'pete'

6, 0, 'paul'



I want a query that returns all of the records with status = 1, plus
unique records (based on name) where status = 0 and the name is not in
the status = 1 list.



So from the above I would want to see



1, 1, paul

2, 1, helen

3, 0, steve (or 4, 0, steve)

5, 0, pete



I could do something like



Select * from names where status = 1 or name not in (select name from names
where status = 1)



But this gets both rows for steve, e.g.



1, 1, paul

2, 1, helen

3, 0, steve

4, 0, steve

5, 0, pete

while I am not bothered about which of the two steves I get back, I
must have all occurrences of names with status = 1

I am probably missing something obvious

Paul
www.sandersonforensics.com
SQLite Forensics Book 


Re: [sqlite] unique values from a subset of data based on two fields

2018-06-29 Thread Paul Sanderson
Thank You David - I was just starting to play with CTEs



Paul
www.sandersonforensics.com
SQLite Forensics Book <https://www.amazon.co.uk/dp/ASIN/1980293074>

On 29 June 2018 at 17:45, David Raymond  wrote:

> with status_one as (
>   select *
>   from names
>   where status = 1
> ),
> one_names as (
>   select distinct name
>   from status_one
> )
> select min(id), status, name
> from names
> where status = 0
>   and name not in one_names
> group by status, name
>
> union all
>
> select * from status_one;
>
>
>


Re: [sqlite] unique values from a subset of data based on two fields

2018-06-29 Thread Paul Sanderson
Thanks Ryan

As is often the case, the actual problem is more complex than my
example - sometimes we over-simplify to, well, simplify - but you have
both given me some ideas, and I'll go away and play.

Paul

Paul
www.sandersonforensics.com
SQLite Forensics Book <https://www.amazon.co.uk/dp/ASIN/1980293074>

On 29 June 2018 at 20:24, R Smith  wrote:

> The solution from David works perfectly, just want to point out some CTE
> things since you mention getting into it.
> 1st - A nice thing about CTE is that, in the case of a non-recursive CTE
> (like this one), it can well be replaced by a simple sub-query, however,
> the CTE can be referenced more than once, unlike a sub-query.
> 2nd - One can even use the CTE in other sub-queries.
>
> With this in mind, here is another option for the query which is
> functionally equivalent (i.e. it's not better, simply showing alternate CTE
> use):
>
> with status_one as (
>   select *
> from names
>where status = 1
> )
> select min(id), status, name
>   from names
>  where status = 0 and name not in (select name from status_one)
>  group by status, name
> union all
> select * from status_one
> ;
>
>
> CTE capability is one of my favourite additions ever to SQLite (I may have
> mentioned this before), so I hope you too find them useful and joyful.
>
> Cheers,
> Ryan
>
>
> On 2018/06/29 6:45 PM, David Raymond wrote:
>
>> with status_one as (
>>select *
>>from names
>>where status = 1
>> ),
>> one_names as (
>>select distinct name
>>from status_one
>> )
>> select min(id), status, name
>> from names
>> where status = 0
>>and name not in one_names
>> group by status, name
>>
>> union all
>>
>> select * from status_one;
>>
>>
>>
>
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] unique values from a subset of data based on two fields

2018-06-30 Thread Paul Sanderson
Easier and pretty obvious :) Thanks Keith
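Keith's UNION query can be run end to end against the sample rows from the original post. Note that with GROUP BY name and a bare id in the select list, the id reported for a duplicated name is whichever row SQLite picks (3 or 4 for steve), so the check below compares (status, name) pairs:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE names (id INT, status INT, name TEXT);
    INSERT INTO names VALUES (1,1,'paul'), (2,1,'helen'), (3,0,'steve'),
                             (4,0,'steve'), (5,0,'pete'), (6,0,'paul');
""")
rows = con.execute("""
    SELECT id, status, name FROM names WHERE status == 1
    UNION
    SELECT id, status, name FROM names
     WHERE status == 0
       AND name NOT IN (SELECT name FROM names WHERE status == 1)
     GROUP BY name
""").fetchall()
found = {(status, name) for _, status, name in rows}
```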



Paul
www.sandersonforensics.com
SQLite Forensics Book <https://www.amazon.co.uk/dp/ASIN/1980293074>

On 29 June 2018 at 23:20, Keith Medcalf  wrote:

> >I want a query that returns all of the records with status = 1 and
> >unique records, based on name, where the status =0 and the name is
> >not in the list status=1
>
> Translation into SQL using English to SQL Translator, using the most
> direct translation on the "problem statement" above directly into SQL:
>
> create table names (id int, status int, name text);
> insert into names values (1, 1, 'paul');
> insert into names values (2, 1, 'helen');
> insert into names values (3, 0, 'steve');
> insert into names values (4, 0, 'steve');
> insert into names values (5, 0, 'pete');
> insert into names values (6, 0, 'paul');
>
> -- I want a query that returns all of the records with status = 1
>
> SELECT id,
>status,
>name
>   FROM names
>  WHERE status == 1
>
> -- and
>
> UNION
>
> -- unique records, based on name, where the status = 0 and the name is not
> in the list [of names where] status=1
>
> SELECT id,
>status,
>name
>   FROM names
>  WHERE status == 0
>AND name NOT IN (SELECT name
>   FROM names
>  WHERE status == 1)
> GROUP BY name;
>
> Returns the rows:
>
> 1|1|paul
> 2|1|helen
> 3|0|steve
> 5|0|pete
>
> If the table is bigger than trivial (ie, contains more than the number of
> rows you can count with your fingers) then you will need the appropriate
> indexes to achieve performant results.
>
> ---
> The fact that there's a Highway to Hell but only a Stairway to Heaven says
> a lot about anticipated traffic volume.
>
>
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] Minimum Delta Time

2018-07-11 Thread Paul Sanderson
How about just using a trigger to check whether EndTime < StartTime +
10 minutes, and updating EndTime when it fires?

Paul
www.sandersonforensics.com
SQLite Forensics Book 

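A sketch of that trigger idea, with the trigger name assumed and the foreign key reference omitted; with SQLite's default recursive_triggers = OFF, the trigger's own UPDATE does not re-fire it:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE TimeEvents(
      EventNumber INTEGER PRIMARY KEY AUTOINCREMENT,
      EventID INTEGER NOT NULL,
      StartTime DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
      EndTime DATETIME);

    -- Round any too-early EndTime up to StartTime + 10 minutes.
    CREATE TRIGGER min_delta AFTER UPDATE OF EndTime ON TimeEvents
    WHEN NEW.EndTime IS NOT NULL
     AND NEW.EndTime < datetime(NEW.StartTime, '+10 minutes')
    BEGIN
      UPDATE TimeEvents
         SET EndTime = datetime(NEW.StartTime, '+10 minutes')
       WHERE EventNumber = NEW.EventNumber;
    END;
""")
con.execute("INSERT INTO TimeEvents(EventID, StartTime) "
            "VALUES (1, '2018-07-11 11:00:00')")
con.execute("UPDATE TimeEvents SET EndTime = '2018-07-11 11:01:00' "
            "WHERE EventID = 1")
end = con.execute("SELECT EndTime FROM TimeEvents").fetchone()[0]
# end -> '2018-07-11 11:10:00'
```

This keeps the business rule in the database, so the Delphi side can simply write current_timestamp and let the trigger enforce the minimum delta.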
On 11 July 2018 at 17:09, David Raymond  wrote:

> For a minimum of 10 minutes it'd be something like
>
> update TimeEvents
> set EndTime = max(
> current_timestamp,
> datetime(StartTime, '+10 minutes')
> )
> where
> EventID = ?
> and (EndTime is null or EndTime = '');
>
>
> -Original Message-
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
> On Behalf Of Stephen Chrzanowski
> Sent: Wednesday, July 11, 2018 10:25 AM
> To: General Discussion of SQLite Database
> Subject: [sqlite] Minimum Delta Time
>
> I've got an application that I've written that keeps track of time spent on
> particular tasks.  I do many things throughout the day, going between
> different tasks, and tasks I've already worked on for today, so this tool
> is incredibly helpful to help justify my warming a chair.
>
> I'd prefer the SQL code to handle this particular procedure I'm about to
> describe.  Obviously, in the application, I can make an additional trip to
> the database to yank out the start time, add the 10 minutes, then do the
> update, but I'd rather the database deal with this particular business rule
> since the handling of the actual values is done at the database level.
>
> At the bottom of this email is the Delphi code and the table schema I'm
> interested in.
>
> What I have is a button on the UI that toggles start/stop work times on a
> particular task.  The Delphi Code below shows the toggling methodology.  My
> interest is modifying the Stop part so that at a minimum, there is a 10
> minute delta between the start and end time.  So if I start a timer at
> 11:00, then stop at 11:01, I want the database to update the end time to
> 11:10.
>
> I suspect a SQLite CASE statement may be of help, but I'm not sure how to
> check for the 10 minute delta then update the row with the altered time or
> the real time.
>
>
>
> *Delphi Code:*
> tbl:=db.GetTable('select EndTime from TimeEvents where EventID=? order by
> StartTime desc',[EventID]);
> // If this task doesn't have a previous timer, or, this task has no
> currently running timers, make a new timer
> // otherwise, stop the currently running timer
> if (tbl.RowCount=0)or(tbl.FieldByName['EndTime']<>'') then begin
>   db.ExecSQL('insert into TimeEvents (EventID) values (?)',[EventID]);
> end else begin
>   db.ExecSQL('update TimeEvents set EndTime=current_timestamp where
> EventID=? and (EndTime is null or EndTime="")',[EventID]);
> end;
>
> *Table Schema*
> CREATE TABLE [TimeEvents](
>   [EventNumber] INTEGER PRIMARY KEY AUTOINCREMENT,
>   [EventID] integer NOT NULL REFERENCES [tEvents]([EventID]) ON DELETE
> CASCADE,
>   [StartTime] DATETIME(10) NOT NULL DEFAULT CURRENT_TIMESTAMP,
>   [EndTime] DATETIME);
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] Reducing index size

2018-07-30 Thread Paul Sanderson
If I understand correctly then changing from a base64 index to a blob
containing the raw bytes would save 25%

Paul
www.sandersonforensics.com
SQLite Forensics Book 

On 30 July 2018 at 09:32, Eric Grange  wrote:

> Hi,
>
> Is there a way to reduce the size of an index on strings/blobs ?
>
> I have tables which are a key + value, the key is an "integer primary key
> autoincrement", which is used for references in all other tables of the
> schema.
> The values are cryptographic GUIDs (so 256 to 512 bits in size) with a
> "compact" encoding (either base64 or blob rather than hexadecimal strings),
> but they still represent gigabytes of data.
>
> Those tables have an index on the value, and my problem is that the size of
> this index (as reported by dbstat or sql3_analyzer) is about the same
> as the table.
>
> As these are cryptographic GUIDs, the first few bytes of a values are in
> practice unique, so in theory I can index just the first few bytes (using
> substr()),
> this indeed results in a much smaller index, but this also requires
> adapting all queries that search by value.
>
> Before starting down that route, is there another way?
>
> My searches on those indexes are by either exact value or by value start
> (human search & auto-completion)
>
> Eric
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
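Eric's substr() idea can be sketched end-to-end; the 8-byte prefix and the toy SHA-256 keys below are illustrative choices, not from the thread (Python's sqlite3 for a runnable form):

```python
import sqlite3, hashlib

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vals(id INTEGER PRIMARY KEY AUTOINCREMENT, v BLOB)")
# Index only the first 8 bytes of each value; for cryptographic hashes
# the prefix is unique in practice, so the index stays small but selective.
conn.execute("CREATE INDEX vals_prefix ON vals(substr(v, 1, 8))")

keys = [hashlib.sha256(bytes([i])).digest() for i in range(100)]
conn.executemany("INSERT INTO vals(v) VALUES (?)", [(k,) for k in keys])

# Exact-match lookup: the substr() term matches the indexed expression,
# and the full comparison weeds out any (unlikely) prefix collisions.
target = keys[42]
row = conn.execute(
    "SELECT id FROM vals WHERE substr(v, 1, 8) = substr(?1, 1, 8) AND v = ?1",
    (target,)).fetchone()
print(row[0])  # 43 (rowids start at 1)
```

Prefix (auto-completion) searches can narrow on the same substr() expression; only the exact-match case is shown here.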


Re: [sqlite] Common index for multiple databases

2018-08-03 Thread Paul Sanderson
On 2 August 2018 at 20:08, Keith Medcalf  wrote:

Further to this, you can "emulate" the current structure by creating
multiple databases each containing only the tables needed for that "bit" of
your application.  For example, you can create a customers.db containing
the customers table and all the indexes associated with the customers
table.  You can also create an accounting.db containing all the accounting
tables (but not the customers table) and all the indexes that belong to
those tables.  Lather, Rinse, Repeat.  Create multiple databases and do not
duplicate table names (that is, each table goes in one and only one
database).

You then "open" a :memory: database and "ATTACH" your other database hunks
as required for the application.  You refer to the tables only by table
name and provided that you have not duplicated tables in multiple database
files, then SQLite3 will operate on the appropriate attached database.
Since the indexes associated with a table must be stored in the same
database file as the data table itself, all the indexes will be kept
up-to-date.  You will not be able to have the database enforce referential
integrity across multiple "attached" databases, nor use cross "attachment"
triggers (but you cannot do that now anyway).

---

If you do, for some reason, choose this approach, note that by default SQLite
limits the number of attached databases to 10; you can adjust this up to a
max of 125. More info below:

https://www.sqlite.org/limits.html

Paul
www.sandersonforensics.com
SQLite Forensics Book 
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
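Keith's scheme can be demonstrated end-to-end. A sketch using Python's sqlite3 module; the file names, tables, and data are illustrative only:

```python
import sqlite3, tempfile, os

tmp = tempfile.mkdtemp()
cust_db = os.path.join(tmp, "customers.db")
acct_db = os.path.join(tmp, "accounting.db")

# Each database "hunk" holds its own tables (and therefore its own indexes).
c1 = sqlite3.connect(cust_db)
c1.executescript(
    "CREATE TABLE customers(id INTEGER PRIMARY KEY, name TEXT);"
    "INSERT INTO customers VALUES (1, 'Acme');")
c1.close()
c2 = sqlite3.connect(acct_db)
c2.executescript(
    "CREATE TABLE invoices(id INTEGER PRIMARY KEY, customer_id INT, amount REAL);"
    "INSERT INTO invoices VALUES (1, 1, 99.5);")
c2.close()

# Open a :memory: hub and ATTACH the hunks; because table names are unique,
# bare table references resolve to the correct attached database.
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ? AS cust", (cust_db,))
conn.execute("ATTACH DATABASE ? AS acct", (acct_db,))
name, amount = conn.execute("""
    SELECT name, amount FROM customers
    JOIN invoices ON customers.id = invoices.customer_id""").fetchone()
print(name, amount)  # Acme 99.5
```

As noted in the thread, the join works, but the engine cannot enforce the customers/invoices foreign key across the two attached files.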


Re: [sqlite] Packing integer primary key with field bits

2017-08-10 Thread Paul Sanderson
Space savings will depend very much on what other data is in the table.

If you have a 4096-byte page size and an average record size of 1000
bytes, then saving 7 bytes for each of the 4 records won't free up enough
space to fit a new record into that page. So savings in this scenario will
effectively be nil.

If, on the other hand, the average record is 100 bytes, you may well fit more
records into the page; conversely, changing the page size to 64K would also
reduce the number of reads.

I suspect that biggest time savings may be gained by reducing disk I/O.

Better advice could possibly be given if we knew the full table schema,
including typical sizes for the data in any fields.

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 10 August 2017 at 14:13, R Smith  wrote:

> On 2017/08/10 1:19 PM, x wrote:
>
>> Thanks for the replies. I’m not sure I agree with Gunter and Ryan though.
>> I’m thinking about this more from the gain in speed rather than saving
>> space.
>>
>> To clarify, I’m suggesting replacing a compound key (made up of several
>> integer cols) with an integer primary key (which sqlite will use rather
>> than the row id). I have done my homework on this so I’m familiar with
>> Gunter’s points regarding ‘between’ and ‘high end bits’ but will the
>> between on a single integer key not be faster than matching on the first m
>> fields of an n compound key? If an index is needed on any non-high bit col
>> an expression index would work just as fast for lookups (I suppose inserts
>> would be slower). The savings on space would contribute to the speed as
>> each disk read would contain more records.
>>
>
> Ok, if you require ALL the packed records all the time, and will always
> access it by the primary value (the first of the packed values) and are very,
> very sure you won't ever need to expand the value range, then you might
> actually get a speed gain from it.
>
> Problem is, the gain will be minuscule, and the price is high. Lots of
> development time, loss of useful SQL aggregates and other functionality,
> possible future reworks... All of that for a very small speed gain?  If you
> are wanting that, why not simply use a custom structure and avoid SQLite
> completely? The speed gain will actually be significant then, and you're
> going to lose the SQL-ness of it anyway, so that shouldn't matter.
>
> A structured array mapped to a physical byte-stream will be several times
> faster than SQLite (or any other RDBMS for that matter).  SQL as supported
> by the average RDBMS is only really helpful when you are looking for
> SET-type relational data handling or very large data (and your use case is
> specifically not for large data). Most RDBMSes have great optimizations for
> speeding up resolving of relational-type questions and their every-day-use
> advantages are legion, they are however without exception NOT faster than -
> NOR intended to be faster than - simple byte/structured array handling.
>
> You might even find a synergy between using your own structured array
> together with an SQLite DB which only get accessed once you need more
> information than persists in the array itself - it's easy to make a pilot
> and test the speed gains. And please do that before investing the time to
> develop a fully fledged dual system.
>
>
>
>> Even forgetting about keys, if you packed say 8 columns into one int64
>> column would you not be saving a minimum of 7 bits?
>>
>
> No you won't, SQLite stores Integer much more efficiently. Unless you mean
> use ONLY the 64-bit index and not storing the values in separate fields in
> the DB at all, in which case yes, you will save a few bytes, possibly less
> than 7 though (I need to affirm the exact number, don't know off the
> top...).
>
> Cheers,
> Ryan
>
>
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
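The packing x describes can be sketched with plain bit shifts. The field widths below (32/16/16 bits) are arbitrary for illustration; keeping the packed value below 2^63 keeps it a valid positive INTEGER PRIMARY KEY, and "match on the first m fields" becomes a rowid range scan, which is the speed argument made above:

```python
import sqlite3

# Pack three fields into one 64-bit key: a (32 bits), b (16 bits), c (16 bits).
# Keeping the top bit clear (a < 2**31) keeps the key a valid positive rowid.
def pack(a, b, c):
    return (a << 32) | (b << 16) | c

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(key INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO t VALUES (?, ?)",
    [(pack(7, b, c), f"r{b}:{c}") for b in range(3) for c in range(3)])

# Matching on the first field becomes a BETWEEN over the rowid:
# all keys whose high 32 bits equal 7.
lo, hi = pack(7, 0, 0), pack(7, 0xFFFF, 0xFFFF)
rows = conn.execute("SELECT COUNT(*) FROM t WHERE key BETWEEN ? AND ?",
                    (lo, hi)).fetchone()[0]
print(rows)  # 9
```

This illustrates the mechanics only; Ryan's caveats about lost SQL expressiveness and fixed value ranges still apply.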


Re: [sqlite] Data types for date and time functions

2017-09-25 Thread Paul Sanderson
Internally SQLite's date functions process dates as Julian day numbers, the
number of days since noon in Greenwich on November 24, 4714 B.C.

I have not examined the code in any depth but would assume that if you
store the data in the same format it would save on any processing overhead
for calculations but would just require a conversion for display.





Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 25 September 2017 at 09:12, David Wellman 
wrote:

> Hi,
>
>
>
> We're designing a new feature which will involve a lot of date/time
> calculations which we intend to do within the SQLite engine. As far as we
> can tell it has the functions that we need. Basically we'll be loading data
> into SQLite and performing analysis and calculations using SQL.
>
>
>
> The question has come up as to how we should store date/time values in our
> tables? Basically how should we define our date/time columns?
>
>
>
> For us an obvious choice is to store dates/times as REAL. We're working in
> a
> Windows environment and so in the application code we're dealing with MS
> serial date values.
>
>
>
> Looking at the 'date and time functions' page
> (http://www.sqlite.org/lang_datefunc.html ) it says "All five date and
> time
> functions take a time string as an argument". So my initial reaction is to
> store dates and times as TEXT. I think this means that when passing such
> date/time values into the functions there is one less conversion to do.
>
>
>
> But then looking at some of the examples on that page I came across the
> following:
>
> Compute the date and time given a unix timestamp 1092941466.
>
> SELECT datetime(1092941466, 'unixepoch');
>
>
>
> In the sql syntax that I'm used to (which I thought was the same for
> SQLite)
> a series of digits like that shown above is treated as a numeric data value
> (and type), not text (i.e. a  "time string"). If that was meant to be
> string
> data (i.e. text) then I'd have expected:
>
>
>
> SELECT datetime('1092941466', 'unixepoch');
>
>
>
> So to clarify: Is our idea of storing date/time values as TEXT data a
> sensible one?
>
>
>
> We're essentially thinking of performance and there are almost certainly
> pros and cons to doing this.
>
> Pro: (I think) more efficient processing as the data is supplied to these
> functions as text which is what they're expecting/require - and therefore
> there is no additional conversion required.
>
> Con: Our full timestamps will be 19 bytes compared to a REAL which is only
> 8
> bytes. So they will require more storage/disk space which ultimately means
> more I/O to read the same number of rows and columns.
>
>
>
> I accept that from a performance perspective there may not be much in it,
> but I'd be interested in people's thoughts.
>
>
>
> In anticipation, many thanks.
>
>
>
> Cheers,
>
> Dave
>
>
>
>
>
>
>
> Ward Analytics Ltd - information in motion
>
> Tel: +44 (0) 118 9740191
>
> Fax: +44 (0) 118 9740192
>
> www:   http://www.ward-analytics.com
>
>
>
> Registered office address: The Oriel, Sydenham Road, Guildford, Surrey,
> United Kingdom, GU1 3SR
>
> Registered company number: 3917021 Registered in England and Wales.
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
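For reference, the three storage classes the date functions accept (TEXT ISO-8601, REAL Julian day, INTEGER Unix epoch) round-trip directly; a sketch of the conversions under discussion, without measuring their relative cost:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The same instant in the three storage classes SQLite's date functions accept.
text_form = '2004-08-05 00:00:00'
jd, = conn.execute("SELECT julianday(?)", (text_form,)).fetchone()     # REAL
unix, = conn.execute("SELECT strftime('%s', ?)", (text_form,)).fetchone()

# Round-trip each numeric form back to text for display:
# a bare number is read as a Julian day, 'unixepoch' switches to epoch seconds.
from_jd, = conn.execute("SELECT datetime(?)", (jd,)).fetchone()
from_unix, = conn.execute("SELECT datetime(?, 'unixepoch')",
                          (int(unix),)).fetchone()
print(from_jd, from_unix)
```

So either numeric representation converts losslessly (to the second) to the 19-byte TEXT form; the storage/I-O trade-off Dave raises is the remaining consideration.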


Re: [sqlite] Grouping and grabbing one item

2017-10-18 Thread Paul Sanderson
How about

select (SELECT ed from Tasks where task = 'QUOTE' and Pid = 1),
max(ed),target, sum(amt) from Tasks where Pid=1 group by target HAVING amt
> 0;

2017-09-27|2017-10-01|es-ES|170.0
2017-09-27|2017-10-01|fr-FR|185.0
2017-09-27|2017-10-01|it-IT|200.0



Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 18 October 2017 at 18:23, jose isaias cabrera 
wrote:

>
> CREATE TABLE Tasks (
>  id INTEGER PRIMARY KEY,
>  Pid INTEGER,
>  bd TEXT,
>  ed TEXT,
>  task TEXT,
>  target TEXT,
>  amt REAL
> );
>
> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
> (1,'2017-09-27','2017-09-27','QUOTE','es-ES fr-FR it-IT',0);
> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
> (1,'2017-09-28','2017-10-01','DOC','es-ES',100);
> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
> (1,'2017-09-28','2017-10-01','DOC','it-IT',120);
> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
> (1,'2017-09-28','2017-10-01','DOC','fr-FR',110);
> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
> (1,'2017-09-28','2017-10-01','VAL','es-ES',70);
> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
> (1,'2017-09-28','2017-10-01','VAL','fr-FR',75);
> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
> (1,'2017-09-28','2017-10-01','VAL','it-IT',80);
> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
> (2,'2017-09-27','2017-09-27','QUOTE','es-ES fr-FR it-IT',0);
> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
> (2,'2017-09-28','2017-10-01','DOC','es-ES',100);
> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
> (2,'2017-09-28','2017-10-01','DOC','it-IT',120);
> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
> (2,'2017-09-28','2017-10-01','DOC','fr-FR',110);
> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
> (2,'2017-09-28','2017-10-01','VAL','es-ES',70);
> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
> (2,'2017-09-28','2017-10-01','VAL','fr-FR',75);
> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
> (2,'2017-09-28','2017-10-01','VAL','it-IT',80);
>
> I know I can do,
>
> select max(ed),target, sum(amt) from Tasks where Pid=1 group by target
> HAVING amt > 0;
>
> and get,
>
> 2017-10-01|es-ES|100.0
> 2017-10-01|fr-FR|185.0
> 2017-10-01|it-IT|200.0
>
> but, I would like to add the ed of the task='QUOTE' to the beginning of
> the list.  So, the result would look like this,
>
> 2017-09-27|2017-10-01|es-ES|100.0
> 2017-09-27|2017-10-01|fr-FR|185.0
> 2017-09-27|2017-10-01|it-IT|200.0
>
> I know how to select it by itself,
>
> SELECT ed from Tasks where task = 'QUOTE' and Pid = 1;
>
> but I need to add it to the beginning of the list with a JOIN or
> something. Any thoughts?  Thanks.
>
> josé
>
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
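The scalar-subquery pattern from Paul's answer, runnable against a trimmed version of the schema (Python's sqlite3; only Pid 1 and two targets kept for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Tasks(id INTEGER PRIMARY KEY, Pid INT, bd TEXT, ed TEXT,
                   task TEXT, target TEXT, amt REAL);
INSERT INTO Tasks(Pid, bd, ed, task, target, amt) VALUES
  (1, '2017-09-27', '2017-09-27', 'QUOTE', 'es-ES fr-FR', 0),
  (1, '2017-09-28', '2017-10-01', 'DOC', 'es-ES', 100),
  (1, '2017-09-28', '2017-10-01', 'VAL', 'es-ES', 70),
  (1, '2017-09-28', '2017-10-01', 'DOC', 'fr-FR', 110),
  (1, '2017-09-28', '2017-10-01', 'VAL', 'fr-FR', 75);
""")

# The scalar subquery evaluates once and its value is prepended to every
# grouped row -- no join needed.
rows = conn.execute("""
SELECT (SELECT ed FROM Tasks WHERE task = 'QUOTE' AND Pid = 1),
       max(ed), target, sum(amt)
  FROM Tasks WHERE Pid = 1 GROUP BY target HAVING amt > 0
""").fetchall()
for r in rows:
    print(r)
```

The QUOTE row's own group is dropped by `HAVING amt > 0`, and each remaining target row carries the quote date in column 1, matching the output shown in the thread.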


Re: [sqlite] Grouping and grabbing one item

2017-10-18 Thread Paul Sanderson
ahh bugger - google didn't show the new answers had popped up. Pleased I
came up with a working solution though :)


Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 18 October 2017 at 19:03, jose isaias cabrera 
wrote:

>
> So simple!  Thanks, Darko.
>
> -Original Message- From: Darko Volaric
> Sent: Wednesday, October 18, 2017 1:57 PM
> To: SQLite mailing list
> Subject: Re: [sqlite] Grouping and grabbing one item
>
>
> select (SELECT ed from Tasks where task = 'QUOTE' and Pid = 1),
> max(ed),target, sum(amt) from Tasks where Pid=1 group by target HAVING amt
> > 0;
>
>
> On Oct 18, 2017, at 7:23 PM, jose isaias cabrera 
>> wrote:
>>
>>
>> CREATE TABLE Tasks (
>> id INTEGER PRIMARY KEY,
>> Pid INTEGER,
>> bd TEXT,
>> ed TEXT,
>> task TEXT,
>> target TEXT,
>> amt REAL
>> );
>>
>> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
>> (1,'2017-09-27','2017-09-27','QUOTE','es-ES fr-FR it-IT',0);
>> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
>> (1,'2017-09-28','2017-10-01','DOC','es-ES',100);
>> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
>> (1,'2017-09-28','2017-10-01','DOC','it-IT',120);
>> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
>> (1,'2017-09-28','2017-10-01','DOC','fr-FR',110);
>> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
>> (1,'2017-09-28','2017-10-01','VAL','es-ES',70);
>> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
>> (1,'2017-09-28','2017-10-01','VAL','fr-FR',75);
>> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
>> (1,'2017-09-28','2017-10-01','VAL','it-IT',80);
>> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
>> (2,'2017-09-27','2017-09-27','QUOTE','es-ES fr-FR it-IT',0);
>> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
>> (2,'2017-09-28','2017-10-01','DOC','es-ES',100);
>> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
>> (2,'2017-09-28','2017-10-01','DOC','it-IT',120);
>> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
>> (2,'2017-09-28','2017-10-01','DOC','fr-FR',110);
>> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
>> (2,'2017-09-28','2017-10-01','VAL','es-ES',70);
>> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
>> (2,'2017-09-28','2017-10-01','VAL','fr-FR',75);
>> INSERT INTO Tasks (Pid,bd,ed,task,target,amt) VALUES
>> (2,'2017-09-28','2017-10-01','VAL','it-IT',80);
>>
>> I know I can do,
>>
>> select max(ed),target, sum(amt) from Tasks where Pid=1 group by target
>> HAVING amt > 0;
>>
>> and get,
>>
>> 2017-10-01|es-ES|100.0
>> 2017-10-01|fr-FR|185.0
>> 2017-10-01|it-IT|200.0
>>
>> but, I would like to add the ed of the task='QUOTE' to the beginning of
>> the list.  So, the result would look like this,
>>
>> 2017-09-27|2017-10-01|es-ES|100.0
>> 2017-09-27|2017-10-01|fr-FR|185.0
>> 2017-09-27|2017-10-01|it-IT|200.0
>>
>> I know how to select it by itself,
>>
>> SELECT ed from Tasks where task = 'QUOTE' and Pid = 1;
>>
>> but I need to add it to the beginning of the list with a JOIN or
>> something. Any thoughts?  Thanks.
>>
>> josé
>>
>
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Many ML emails going to GMail's SPAM

2017-11-21 Thread Paul Sanderson
Coincidence!  I have just been in my gmail folder marking a load of SQLite
email as 'not spam'

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 21 November 2017 at 10:35, Dominique Devienne 
wrote:

> Just FYI. Not sure if something changed on the mailer's settings.
> Possibly/likely linked to GMail changing it's SPAM heuristics I guess. --DD
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Energy consumption of SQLite queries

2017-11-21 Thread Paul Sanderson
A pretty much impossible task I would think.

The power usage of SQLite compared to the power usage of different hardware
components would be minuscule. And there are so many other tasks running
on a system, many in the background, that isolating SQLite from the rest
would be next to impossible. Just look at the processes on a Windows system via
the Task Manager, or on a Linux system using top, to get a very simplistic idea
of the different tasks that are using processor time - sort by processor
usage and the list is always changing even when you are doing nothing. Add
in variable speed fans and processor throttling to manage temperature/power
consumption etc. and you have a mammoth task.

Good luck :)

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 21 November 2017 at 00:36, Simon Slavin  wrote:

> On 20 Nov 2017, at 10:54pm, Ali Dorri  wrote:
>
> > I am doing a research on the energy consumed by a query in SQLite. I
> have a
> > program which fills a database with blocks of data. Then, it attempts to
> > remove some data from the database. I don't know how to measure the
> energy
> > consumed from my host, i.e., my laptop which has both the SQLite and the
> > program, from the time I generated the query till the query is finished
> and
> > control returns back to my program.
>
> This is a hardware question, not anything to do with a particular piece of
> software.
>
> If you have a desktop computer, get one of those gadgets that you plug
> into the power socket and monitors how much power is passed to things that
> plug into them:
>
> 
>
> On a laptop, since the power is taken from an internal battery, and mains
> power is used to recharge it inconsistently, monitoring power usage from
> the mains is pointless.  See if the firmware provides a display or an API
> function which shows how much is going out.
>
> Then set up side-by-side comparisons, one with your computer doing those
> things in SQLite and one without.  The differences between the two power
> consumptions is how much power SQLite is using.  Unless you have really
> detailed power measurement, the results will be small and probably
> meaningless.
>
> Since you mention doing side-by-side comparisons with other databases,
> your setup should probably be comparing the same computer doing things in
> different DBMSs.  Maybe set up some procedure for doing something 10,000
> times and see how much power is used in total.
>
> Worth noting that power consumption from SQLite will be terribly
> inconsistent, based on what data is cached, how many database pages need to
> be accessed, and the state of the journal files.  This pales into
> insignificance, however, with the inconsistency of most other DBMSs, which
> perform far more internal caching and indexing.  You will get very
> different results from the same setup depending on how long the DBMS server
> has been running, not just on how long the computer has been turned on.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Many ML emails going to GMail's SPAM

2017-11-21 Thread Paul Sanderson
What about some sort of poll?

Mail lists might work, but the additional functionality offered by a forum (I
am a member of many) makes them my choice.

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 21 November 2017 at 16:43, Martin Raiber  wrote:

> On 21.11.2017 17:30 John McKown wrote:
> > On Tue, Nov 21, 2017 at 10:27 AM, Drago, William @ CSG - NARDA-MITEQ <
> > william.dr...@l3t.com> wrote:
> >
> >>> I really need to come up with an alternative to the mailing list.
> >>> Perhaps some kind of forum system.  Suggestions are welcomed.
> >>> --
> >>> D. Richard Hipp
> >>> d...@sqlite.org
> >> Please, not a forum. The email list is instant, dynamic, and
> convenient. I
> >> don't think checking into a forum to stay current with the brisk
> activity
> >> here is very practical or appealing.
> > ​I completely agree. The problem with a forum is mainly that it is not
> _a_
> > forum. It is a forum per list. Which means I spend way too much time
> > "polling" 8 to 10 web "forums" during the day just to see if anybody has
> > said anything of interest.
>
> I am using Discourse as community forum and I cannot really see any
> downside to that except for the increased server requirements.
> Individuals who want to use it like a mailing list still can do that
> (enable mailing list mode). They have a FAQ wrt. to cos/prons mailing
> list: https://meta.discourse.org/t/discourse-vs-email-mailing-lists/54298
>
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] DateTime to bigint

2017-12-08 Thread Paul Sanderson
Hi Tibor

Your date format is Windows ticks, i.e. 100-nanosecond intervals since
01/01/0001

You can convert it as follows

SELECT (StrfTime('%s', '2004-08-05') + 62135596800) * 1000 AS Ticks

where StrfTime('%s', '2004-08-05') is the number of seconds between the
provided date and 1/1/1970,
62135596800 is the difference in seconds between 1/1/1970 and 1/1/0001,
and multiplying by 1000 converts the seconds to milliseconds

HTH

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 8 December 2017 at 16:53, Tibor Balog  wrote:

> Hi there,
>
> I am very new to SQLite but like it sofar.
> Run into something couldn’t find an answer may someone can put me in the
> right direction..
>
> Have an application uses SQLite table with a column “Datum” defined as
> bigint.
> I have an identical table with column “Datum” as DateTime “-mm-dd”
>
> How can I convert this column -preferable in place- to bigint?
>
> Some more info:
>
> “2004-08-05” should be converted to “6322726080” .
>
> Thank You,
> in advence.
> (Tibor)
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
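One note on the arithmetic above: multiplying by 1000 yields milliseconds since 0001-01-01, whereas a true Windows/.NET tick is 100 nanoseconds, so the factor for real ticks is 10,000,000. A runnable sketch of the conversion (the example date is the one from the thread):

```python
import sqlite3

TICKS_PER_SECOND = 10_000_000   # a Windows/.NET tick is 100 nanoseconds
EPOCH_OFFSET = 62_135_596_800   # seconds from 0001-01-01 to 1970-01-01

conn = sqlite3.connect(":memory:")
unix_seconds, = conn.execute(
    "SELECT CAST(strftime('%s', '2004-08-05') AS INTEGER)").fetchone()

ticks = (unix_seconds + EPOCH_OFFSET) * TICKS_PER_SECOND
print(ticks)

# The same computation entirely in SQL, e.g. for an in-place UPDATE:
sql_ticks, = conn.execute(
    "SELECT (strftime('%s', '2004-08-05') + 62135596800) * 10000000").fetchone()
assert sql_ticks == ticks
```

Whether the target column actually wants ticks, milliseconds, or plain seconds since year 1 is worth confirming against a known value from the application before converting in place.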


Re: [sqlite] What happens if an in memory database runs out of memory

2017-12-15 Thread Paul Sanderson
Try it

create a table and use the zeroblob(n) function to insert lots of blobs of
size n

ie
create table blobs (b blob);

insert into blobs values(zeroblob(1));
insert into blobs values(zeroblob(1));

etc.

interestingly, the max blob size is specified as 2147483647, but on my
current test client (3.18.0)

insert into blobs values(zeroblob(2147483647));

fails with "string or blob too big". Not had time to investigate :(



Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 15 December 2017 at 16:30, x  wrote:

> Suppose I execute “attach :memory: as mem” and then create a table in mem
> that requires more space than the available RAM can hold what will happen?
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
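A runnable version of the experiment. The failure at 2147483647 is expected on a stock build: 2147483647 is the hard upper bound for the string/blob limit, but the default SQLITE_MAX_LENGTH is 1,000,000,000 bytes, so anything larger is rejected unless the limit is raised at compile time or via sqlite3_limit():

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE blobs(b BLOB)")
conn.execute("INSERT INTO blobs VALUES (zeroblob(1000000))")
size, = conn.execute("SELECT length(b) FROM blobs").fetchone()

# zeroblob above the default SQLITE_MAX_LENGTH (1,000,000,000 bytes)
# is rejected before any allocation happens.
err = None
try:
    conn.execute("INSERT INTO blobs VALUES (zeroblob(2147483647))")
except sqlite3.Error as e:
    err = e
print(size, err)
```

Repeating the small insert in a loop until the process runs out of memory reproduces the original question's scenario.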


Re: [sqlite] Atomic DELETE index optimisation?

2017-12-18 Thread Paul Sanderson
Not sure how relevant it might be, but what page size is the DB set to and
what is the average size of a record?


Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 18 December 2017 at 10:03, Dinu  wrote:

> Rowan Worth-2 wrote
> > I'm not sure what you're worried about? Dropping and recreating identical
> > indices within a transaction won't cause a visible structure change to
> > concurrent readers -- that's the point of a transaction.
>
> I honestly don't see how in any DB system the client process would not
> crash if the index it's running a cursor on were to be removed. Even if SQLite
> were to pull this magic out of the hat, starving client processes for the
> lack of an index (a full scan query would probably take in excess of 30s)
> would quickly pile up the clients to the point where one would have to kill
> them anyway.
> So with this in mind, I'm really not looking for a barbaric fix to this,
> I'm
> more of tryng to understand the problem and find a viable, semantically
> stable solution (and maybe trigger some improvements in SQLite, if there's
> a
> system bug).
>
>
>
>
> --
> Sent from: http://sqlite.1065341.n5.nabble.com/
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] same code produces a different database file on different computers

2017-12-19 Thread Paul Sanderson
SQLite stores the version number of the library in the database header.
Different SQLite libraries on different computers would cause this difference.

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 19 December 2017 at 12:32, Karl Forner  wrote:

> Hello,
>
> I encountered a weird behavior recently.
> The exact same code (executed from a docker container using the same image)
> produced a different database file on two computers, as verified by a MD5
> or sha256 hash of the two files.
> But using the .sha3sum command of sqlite3.exe I could check that the contents
> of the two databases are indeed identical.
>
> To sum up, same content, but the files are somewhat different.
>
> Is this something expected or known ?
> If so, is there a way to prevent this ?
>
> Our use case is that we track the sha256 hashes of the files, which could
> have been produced on different computers, to know if the outputs are up to
> date. Until now, for ~ 4 years, it has always seemed to work.
>
> Thanks.
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
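The version stamp Paul mentions is easy to inspect: bytes 96-99 of the 100-byte database header hold SQLITE_VERSION_NUMBER, big-endian, for the library that last wrote the file. A sketch that reads it back and checks it against the running library:

```python
import sqlite3, struct, tempfile, os

# Create a small database file to probe.
path = os.path.join(tempfile.mkdtemp(), "probe.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t(x)")
conn.commit()
conn.close()

with open(path, "rb") as f:
    header = f.read(100)

# Offset 96: SQLITE_VERSION_NUMBER of the library that last wrote the file,
# encoded as X*1000000 + Y*1000 + Z for version X.Y.Z.
stored = struct.unpack(">I", header[96:100])[0]
parts = [int(p) for p in sqlite3.sqlite_version.split(".")[:3]]
expected = parts[0] * 1000000 + parts[1] * 1000 + parts[2]
print(stored, expected)
```

So two machines with different SQLite builds can produce byte-different files with identical logical content, which is exactly the behaviour reported here.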


Re: [sqlite] Atomic DELETE index optimisation?

2017-12-19 Thread Paul Sanderson
Dinu

Option 2, dropping and recreating the index within the transaction, seems to
be the way forward - I would suggest that if the author of SQLite (Dr Hipp)
has put this forward as a solution, as he did earlier in this thread, then
it is probably a safe option and will not lead to an implosion of anything.

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 19 December 2017 at 16:15, Dinu  wrote:

> Hi sub sk79,
> I have so far from this thread the following suggestions:
> 1) Copy table -> TRUNCATE -> copy back; this doesn't work: what was supposed
> to be TRUNCATE semantics (DELETE FROM without WHERE) has the same
> performance as with WHERE.
> 2) Structure alterations; either drop table, or drop indexes : I am
> reluctant to do this; my evangelical instinct tells me hacking the
> semantics
> of life might lead to implosion of Earth :)
> 3) "Deleted" bit field - presumably the "soft delete" as you call it; I am
> analyzing this, but here the question is whether we include the bit in the
> indexes. If so, performing a heap of UPDATEs should be even more
> inefficient; if we don't include it in the index, the problem of the cost
> of
> filtering the row needs some analysis which I will probably do; the problem
> with this solution is that is has residual effects: we run some pretty
> complex queries against this table, with complicated joins and we already
> got some surprising execution plans that needed query rewriting. So with
> this "deleted" bit out of the index pool, we need to check various other
> queries to make sure they are still optimized to what we need.
>
> All this said and done, 3 hours to delete 15G of data seems atrocious even
> if you do it by standards resulted from generations of DOD and NSA
> inbreeding... so I'm still hopeful for some DB-related solution.
>
>
>
> --
> Sent from: http://sqlite.1065341.n5.nabble.com/
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] Can an SQL script be built from within sqlite?

2018-01-16 Thread Paul Sanderson
That terminal app is still sandboxed. AFAIAA you essentially get access to
the application's data folder and you can add, create, delete, etc. files
within it.

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 16 January 2018 at 09:39, John G  wrote:

> I've not tried it, but this article from OSXdaily says you can get the
> command line (Terminal) in iOS.
>
> http://osxdaily.com/2018/01/08/get-terminal-app-ios-command-line/
>
> That probably does not solve the fork requirement, and I'm sure it is
> sandboxed.
>
> John G
>
>
>
> On 15 January 2018 at 15:00, Richard Hipp  wrote:
>
> > On 1/15/18, Shane Dev  wrote:
> > >
> > > Did the Apple engineers tell you why it is not possible to compile and
> > run
> > > the SQLite shell on iOS?
> > >
> >
> > You cannot get a command-line prompt on iOS, and the SQLite shell
> > requires a command-line prompt (like bash).  If i understand
> > correctly, iOS does not allow fork() as a security measure.
> >
> > The previous paragraph is not authoritative.  It is merely my
> > recollection.  I have not researched the issue.
> >
> > --
> > D. Richard Hipp
> > d...@sqlite.org
> > ___
> > sqlite-users mailing list
> > sqlite-users@mailinglists.sqlite.org
> > http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> >
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] How many AUTOINCREMENT tables are in your schema?

2018-03-20 Thread Paul Sanderson
Autoincrement can ONLY be used with an integer primary key

https://sqlite.org/autoinc.html

On Tue, 20 Mar 2018 at 06:50, Peter Halasz  wrote:

> When needed I use a declared INTEGER PRIMARY KEY.
> >
> >
> MAYBE THAT WOULD HAVE BEEN IN THE SURVEY TOO BUT I GUESS THERE WAS NO WAY
> TO INCLUDE A SMALL PIECE OF SQL TO RELIABLY CHECK FOR INTEGER PRIMARY KEY
>
> YES I AM SHOUTING
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
-- 
Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


Re: [sqlite] How many AUTOINCREMENT tables are in your schema?

2018-03-20 Thread Paul Sanderson
I read that - but my point was more that some people seem to think that an
int primary key can be auto-incrementing; it can't.


SQLite version 3.18.0 2017-03-28 18:48:43
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> create table test (id integer primary key autoincrement);
sqlite> create table test2 (id int primary key autoincrement);
Error: AUTOINCREMENT is only allowed on an INTEGER PRIMARY KEY
sqlite>

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 20 March 2018 at 08:48, R Smith  wrote:

>
> On 2018/03/20 10:24 AM, Paul Sanderson wrote:
>
>> Autoincrement can ONLY be used with an integer primary key
>>
>
> I think Peter's shouting is more about the inability to distinguish via
> SQL or Pragma between an INTEGER PRIMARY KEY and an INT PRIMARY KEY, both
> of which are of course integer and can be auto-incrementing, but only one
> of which is an alias for rowid.
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] How many AUTOINCREMENT tables are in your schema?

2018-03-21 Thread Paul Sanderson
Actually it is totally different Chris

>  I read that - but my point was more that some people seem to think that
an
> int primary key can be auto incrementing, it can't.

an INT primary key cannot be autoincrementing

An INTEGER primary key and an INTEGER primary key autoincrement work in
essentially the same way. i.e. if you insert a row and do not specifically
assign a value to the pk (i.e. you assign NULL) the value assigned will
usually be one more than last pk used.

if you have an INT primary key and add a new row with no value assigned to
the PK then null will be stored (all null values are treated as unique in
SQLite and so as far as the PK is concerned all rows are different).

SQLite version 3.18.0 2017-03-28 18:48:43
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> create table test (id int primary key, data text);
sqlite> insert into test (data) values('row 1');
sqlite> insert into test (data) values('row 2');
sqlite> select id, data from test;
  |row 1
  |row 2

of course the rowid is still there hidden behind the scenes and you can
access it with

sqlite> select rowid, id, data from test;
1|  |row 1
2|  |row 2

but if you want to use the rowid as the PK then you should probably use an
INTEGER pk so it becomes an alias for the rowid in the first place.
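The INT-vs-INTEGER distinction above can be checked end to end in Python's sqlite3 module (table names are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# "int" PK: an ordinary indexed column; NULLs are permitted (a legacy quirk)
con.execute("CREATE TABLE t_int (id int PRIMARY KEY, data TEXT)")
# "INTEGER" PK: an alias for the rowid, auto-assigned when omitted
con.execute("CREATE TABLE t_integer (id INTEGER PRIMARY KEY, data TEXT)")

con.execute("INSERT INTO t_int (data) VALUES ('row 1')")
con.execute("INSERT INTO t_integer (data) VALUES ('row 1')")

int_id = con.execute("SELECT id FROM t_int").fetchone()[0]
integer_id = con.execute("SELECT id FROM t_integer").fetchone()[0]
print(int_id, integer_id)  # None 1
```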

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 20 March 2018 at 16:44, Chris Locke  wrote:

> >  some people seem to think that an int primary key can be auto
> incrementing, it can't
>
> But it works in the same way... sort of. It's auto-incrementing, with the
> caveat that if the last row is deleted, the previous number will be used
> again.  Depending on the database schema, this may or may not cause issues.
>
>
> Thanks,
> Chris
>
>
> On Tue, Mar 20, 2018 at 9:45 AM, Paul Sanderson <
> sandersonforens...@gmail.com> wrote:
>
> >  I read that - but my point was more that some people seem to think that
> an
> > int primary key can be auto incrementing, it can't.
> >
> >
> > SQLite version 3.18.0 2017-03-28 18:48:43
> > Enter ".help" for usage hints.
> > Connected to a transient in-memory database.
> > Use ".open FILENAME" to reopen on a persistent database.
> > sqlite> create table test (id integer primary key autoincrement);
> > sqlite> create table test2 (id int primary key autoincrement);
> > Error: AUTOINCREMENT is only allowed on an INTEGER PRIMARY KEY
> > sqlite>
> >
> > Paul
> > www.sandersonforensics.com
> > skype: r3scue193
> > twitter: @sandersonforens
> > Tel +44 (0)1326 572786
> > http://sandersonforensics.com/forum/content.php?195-SQLite-
> > Forensic-Toolkit
> > -Forensic Toolkit for SQLite
> > email from a work address for a fully functional demo licence
> >
> > On 20 March 2018 at 08:48, R Smith  wrote:
> >
> > >
> > > On 2018/03/20 10:24 AM, Paul Sanderson wrote:
> > >
> > >> Autoincrement can ONLY be used with an integer primary key
> > >>
> > >
> > > I think Peter's shouting is more about the inability to distinguish via
> > > SQL or Pragma between an INTEGER PRIMARY KEY and an INT PRIMARY KEY,
> both
> > > of which are of course integer and can be auto-incrementing, but only
> one
> > > of which is an alias for rowid.
> > >
> > >
> > >
> > > ___
> > > sqlite-users mailing list
> > > sqlite-users@mailinglists.sqlite.org
> > > http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> > >
> > ___
> > sqlite-users mailing list
> > sqlite-users@mailinglists.sqlite.org
> > http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> >
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] How to convert a datetime column to a date?

2018-03-22 Thread Paul Sanderson
how about

select date(dttm) dt,max(i) from foo group by date(dttm) order by 1;
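A quick check of the suggested query in Python's sqlite3 module (sample rows invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE foo (dttm datetime NOT NULL, i int NOT NULL)")
rows = [("2018-03-20 09:00:00", 3), ("2018-03-20 17:30:00", 7),
        ("2018-03-21 08:15:00", 2), ("2018-03-21 12:00:00", 9)]
con.executemany("INSERT INTO foo VALUES (?, ?)", rows)

# group by the date() expression itself, one output row per calendar day
result = con.execute(
    "SELECT date(dttm) dt, max(i) FROM foo GROUP BY date(dttm) ORDER BY 1"
).fetchall()
print(result)  # [('2018-03-20', 7), ('2018-03-21', 9)]
```

Note that date() only parses the formats listed in SQLite's date-function documentation; if dttm is stored in some other format, date(dttm) returns NULL and every row collapses into one group.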


Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 21 March 2018 at 15:30, Ron Watkins  wrote:

> I have a table which contains a datetime column:
>
>
>
> table|foo|foo|2|CREATE TABLE foo (
>
> dttm datetime not null,
>
> i int not null
>
> )
>
>
>
> I want to select out the max(i) value for each day where there are multiple
> records per day.
>
>
>
> select date(dttm) dt,max(i) from foo group by dt order by 1;
>
>
>
> However, it’s returning only 1 row, with no date column shown. How can I
> get an actual “date” listed in the first column, and also get 1 row per
> “date” value.
>
>
>
> |5283598256
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] How to Handle BigInt

2018-05-01 Thread Paul Sanderson
SQLite integers are all 64 bit - I don't know about Postgres, but unless
Postgres allows integers bigger than 64 bits, and you use them, you should
be OK with your table definitions above.
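A quick sanity check that SQLite round-trips the full PostgreSQL bigint range, sketched in Python's sqlite3 module using the table definition from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sqlitetypes ("
            "data_type_id INTEGER PRIMARY KEY AUTOINCREMENT, int_type INTEGER)")

big = 2**63 - 1  # max PostgreSQL bigint, also SQLite's integer maximum
con.execute("INSERT INTO sqlitetypes (int_type) VALUES (?)", (big,))
stored = con.execute("SELECT int_type FROM sqlitetypes").fetchone()[0]
print(stored)  # 9223372036854775807
```

Values outside that range would have to be stored as TEXT or BLOB, at the cost of losing numeric ordering and arithmetic.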

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 1 May 2018 at 16:29, dmp  wrote:

> Hello,
>
> Given a conversion from a database table that contains BigInt, long,
> field from PostgreSQL to a SQLite similar table.
>
> CREATE TABLE postgresqltypes (
>   data_type_id serial,
>   bigInt_type bigint)
>
> CREATE TABLE sqlitetypes (
>   data_type_id INTEGER PRIMARY KEY AUTOINCREMENT,
>   int_type INTEGER)
>
> How to store, for values outside range for Integer,
> String or Real?
>
> danap.
>
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] How to Handle BigInt

2018-05-01 Thread Paul Sanderson
Perhaps, but that is only part of the story, and all of that is hidden from
the user and is only relevant in terms of how the number is stored on disk.
You can define a column as int, smallint, largeint, bigint, etc and,
irrespective of which you use, SQLite will save the data to disk
transparently using the smallest on disk format possible for the value you
have saved.

For the record there is another possibility for the values 0 and 1 where
SQLite uses no storage at all, other than the type byte in the serial types
array.

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 1 May 2018 at 17:05, Gerry Snyder  wrote:

> From the docs:
>
> *INTEGER*. The value is a signed integer, stored in 1, 2, 3, 4, 6, or 8
> bytes depending on the magnitude of the value.
>
> So perhaps you should have said " SQLite integers are all up to 64 bit."
>
> Gerry
>
> On Tue, May 1, 2018 at 8:56 AM, Paul Sanderson <
> sandersonforens...@gmail.com
> > wrote:
>
> > SQLite integers are all 64 bit - I don't know about Postgres, so unless
> > Postgres allows integers bigger than 64 bits, and you use them, you
> should
> > be OK with your table definitions above.
> >
> > Paul
> > www.sandersonforensics.com
> > skype: r3scue193
> > twitter: @sandersonforens
> > Tel +44 (0)1326 572786
> > http://sandersonforensics.com/forum/content.php?195-SQLite-
> > Forensic-Toolkit
> > -Forensic Toolkit for SQLite
> > email from a work address for a fully functional demo licence
> >
> > On 1 May 2018 at 16:29, dmp  wrote:
> >
> > > Hello,
> > >
> > > Given a conversion from a database table that contains BigInt, long,
> > > field from PostgreSQL to a SQLite similar table.
> > >
> > > CREATE TABLE postgresqltypes (
> > >   data_type_id serial,
> > >   bigInt_type bigint)
> > >
> > > CREATE TABLE sqlitetypes (
> > >   data_type_id INTEGER PRIMARY KEY AUTOINCREMENT,
> > >   int_type INTEGER)
> > >
> > > How to store, for values outside range for Integer,
> > > String or Real?
> > >
> > > danap.
> > >
> > > ___
> > > sqlite-users mailing list
> > > sqlite-users@mailinglists.sqlite.org
> > > http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> > >
> > ___
> > sqlite-users mailing list
> > sqlite-users@mailinglists.sqlite.org
> > http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> >
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] Only see unused when there are unused records

2018-05-09 Thread Paul Sanderson
Or

SELECT count(*) AS Total,
  CASE
WHEN Sum(used = 'unused') > 0 THEN Sum(used = 'unused')
  END AS NotUsed
FROM quotes

There might be a more succinct way


Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 9 May 2018 at 21:31, Igor Tandetnik  wrote:

> On 5/9/2018 4:19 PM, Cecil Westerhof wrote:
>
>> I have a table where I use 'unused' to signify that a record is not yet
>> used. I want to know the number of unused records (but only if there are
>> unused records) and the total number of records.
>>
>
> Something like this perhaps:
>
> select count(*) Total, sum(totalUsed = 'unused') NotUsed from quotes;
>
> --
> Igor Tandetnik
>
>
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] This list is getting spammed again

2018-05-11 Thread Paul Sanderson
Would it be possible for an admin to run a script that sent an individual
email (e.g. different number in subject) to each user on the list and see
who is sending the spam based on the replies?

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 11 May 2018 at 10:12, Cecil Westerhof  wrote:

> 2018-05-10 1:09 GMT+02:00 Simon Slavin :
>
> >
> >
> > On 9 May 2018, at 9:37pm, Cecil Westerhof 
> wrote:
> >
> > > ​I am bitten by it also now. I posted a question and within two
> minutes I
> > > got a spam message​
> >
> > I got three or four of these, each one soon after I'd posted a message.
> > Then I got no more.  I didn't do anything to stop them and I have checked
> > my spam system to see if it stopped them, but the spam system didn't
> > receive any more.
> >
>
> ​I marked the sender as spammer. (It was from one sender.) And the messages
> are send to spam now. I got a few more. I'll watch if this triggers again,
> or that it is a 'smart' spammer that stops when you do not reply.
>
> --
> Cecil Westerhof
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] This list is getting spammed again

2018-05-11 Thread Paul Sanderson
Probably not Nabble, as my message does not seem to have made it there yet;
I didn't check Mailman. I suspect that they are subscribed, as the spam is
sent within seconds.

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 11 May 2018 at 10:33, Simon Slavin  wrote:

> On 11 May 2018, at 10:26am, Paul Sanderson 
> wrote:
>
> > Would it be possible for an admin to run a script that sent an individual
> > email (e.g. different number in subject) to each user on the list and see
> > who is sending the spam based on the replies?
>
> My guess is that the spammer harvests our addresses from nabble, or from
> the Mailman archive.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] This list is getting spammed again

2018-05-11 Thread Paul Sanderson
ahh disregard - it was there

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence

On 11 May 2018 at 10:33, Simon Slavin  wrote:

> On 11 May 2018, at 10:26am, Paul Sanderson 
> wrote:
>
> > Would it be possible for an admin to run a script that sent an individual
> > email (e.g. different number in subject) to each user on the list and see
> > who is sending the spam based on the replies?
>
> My guess is that the spammer harvests our addresses from nabble, or from
> the Mailman archive.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] question about DB

2018-05-14 Thread Paul Sanderson
Try it in a command line shell with the timer on

.timer on



Paul
www.sandersonforensics.com
SQLite Forensics Book 

On 14 May 2018 at 17:27, Simon Slavin  wrote:

> On 14 May 2018, at 3:03pm, sebastian bermudez 
> wrote:
>
> > the question is, is there some penalty in attaching (2+)
> databases vs one big DB ?
>
> Three databases of 2.4Gb size ?  SQLite is designed to expect this.  Slow
> down for attaching compared with one big database is very very small.  No
> problem.
>
> Make sure you have indexes to help the WHERE and ON clauses.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] Is this really the best way to do this?

2018-05-16 Thread Paul Sanderson
How about something like

SELECT CASE count(*) WHEN 0 THEN 'Ok' ELSE 'Discard' END FROM blocked WHERE
email = 'rwillett.dr...@example.com';


Paul
www.sandersonforensics.com
SQLite Forensics Book 

On 16 May 2018 at 10:35, Abroży Nieprzełoży <
abrozynieprzelozy314...@gmail.com> wrote:

> I would make something like this:
>
> CREATE TABLE blocked(
> mail TEXT PRIMARY KEY,
> action TEXT NOT NULL DEFAULT 'DISCARD'
> ) WITHOUT ROWID;
>
> INSERT INTO blocked(mail) VALUES('badm...@example.com');
>
> SELECT coalesce(action, 'OK') AS action
> FROM (SELECT 'goodm...@example.com' AS mail)
> LEFT JOIN blocked USING(mail);
>
> SELECT coalesce(action, 'OK') AS action
> FROM (SELECT 'badm...@example.com' AS mail)
> LEFT JOIN blocked USING(mail);
>
>
> 2018-05-16 10:22 GMT+02:00, Rob Willett :
> > Hi,
> >
> > I'm experimenting with an email server, Mail In a Box. It's a free
> > product and gives you a mail server in a box with SSL certificates,
> > multiple domains and seems to work well.
> >
> > One thing it doesn't do is allow people to specify emails to block.
> >
> > It uses SQLite to store the underlying data necessary to power Postfix.
> > Dr Richard Hipp, add another new application to your list :)
> >
> > I've worked out how Postfix calls SQLite (pretty easy), worked out what
> > I want to do, and am trying to write a single line of SQL that does it.
> >
> > The need is that the SQL must generate a single string answer depending
> > on the email address that is fed in through the query. There are no
> > options here, it must return a value even if there is nothing in the
> > table. It is not practical to add a table with every email address that
> > returns OK.
> >
> > For this example the only strings it can return are 'OK' and 'DISCARD',
> > though the RFC allows other strings. If a query is done on an email and
> > it is blocked then it must return DISCARD (or an action in the action
> > column. If no email is in the table matching then it must return 'OK'.
> >
> > As a piece of pseudo code this would be
> >
> > function GetMailStatus(emailAddress)
> > {
> >   IF emailAddress is present in blocked THEN
> >   return action associated with emailAddress -- Action is
> normally
> > DISCARD
> >
> >   return 'OK'
> > }
> >
> > I've created the table
> >
> > CREATE TABLE blocked (id INTEGER PRIMARY KEY AUTOINCREMENT,
> >  email TEXT NOT NULL UNIQUE ,
> >action TEXT NOT NULL DEFAULT 'DISCARD')
> >
> > I can insert values
> >
> > insert into blocked (email) values('rwillett.dr...@example.com')
> > insert into blocked (email) values('rwillett+dr...@example.com')
> >
> > And this is the code that I have written that works but it looks poor to
> > my untutored eyes
> >
> > select CASE
> >  WHEN EXISTS (select 1 from blocked where email =
> > 'rwillett.dr...@example.com')
> >  THEN (select action from blocked where email =
> > 'rwillett.dr...@example.com')
> >  ELSE 'OK'
> >  END
> >
> >
> > In the Postfix query I'll replace
> >
> > email = 'rwillett.dr...@example.com'
> >
> > with email = '%s'
> >
> > so that the email address is passed in.
> >
> > My worry is that the query has two selects and 'feels' bad. It works but
> > feels dirty...
> >
> > Is there a better way under SQLite?
> >
> > Any suggestions welcomed.
> >
> > Rob
> > ___
> > sqlite-users mailing list
> > sqlite-users@mailinglists.sqlite.org
> > http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> >
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] Is this really the best way to do this?

2018-05-16 Thread Paul Sanderson
or more in tune with the default value in the schema



SELECT CASE count(*) WHEN 0 THEN 'Ok' ELSE action END FROM blocked WHERE
email = 'rwillett.dr...@example.com';
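The same lookup, sketched end to end in Python's sqlite3 module (the addresses are made up; Postfix would substitute %s for the parameter instead):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE blocked (id INTEGER PRIMARY KEY AUTOINCREMENT,"
            " email TEXT NOT NULL UNIQUE, action TEXT NOT NULL DEFAULT 'DISCARD')")
con.execute("INSERT INTO blocked (email) VALUES ('spammer@example.com')")

def mail_status(addr):
    # count(*) is 0 or 1 thanks to the UNIQUE constraint; when a row matches,
    # the bare "action" column is taken from that row
    return con.execute(
        "SELECT CASE count(*) WHEN 0 THEN 'Ok' ELSE action END"
        " FROM blocked WHERE email = ?", (addr,)).fetchone()[0]

print(mail_status("spammer@example.com"))  # DISCARD
print(mail_status("friend@example.com"))   # Ok
```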


Paul
www.sandersonforensics.com
SQLite Forensics Book 

On 16 May 2018 at 09:22, Rob Willett  wrote:

> Hi,
>
> I'm experimenting with an email server, Mail In a Box. It's a free product
> and gives you a mail server in a box with SSL certificates, multiple
> domains and seems to work well.
>
> One thing it doesn't do is allow people to specify emails to block.
>
> It uses SQLite to store the underlying data necessary to power Postfix. Dr
> Richard Hipp, add another new application to your list :)
>
> I've worked out how Postfix calls SQLite (pretty easy), worked out what I
> want to do, and am trying to write a single line of SQL that does it.
>
> The need is that the SQL must generate a single string answer depending on
> the email address that is fed in through the query. There are no options
> here, it must return a value even if there is nothing in the table. It is
> not practical to add a table with every email address that returns OK.
>
> For this example the only strings it can return are 'OK' and 'DISCARD',
> though the RFC allows other strings. If a query is done on an email and it
> is blocked then it must return DISCARD (or an action in the action column.
> If no email is in the table matching then it must return 'OK'.
>
> As a piece of pseudo code this would be
>
> function GetMailStatus(emailAddress)
> {
> IF emailAddress is present in blocked THEN
> return action associated with emailAddress -- Action is
> normally DISCARD
>
>  return 'OK'
> }
>
> I've created the table
>
> CREATE TABLE blocked (id INTEGER PRIMARY KEY AUTOINCREMENT,
>email TEXT NOT NULL UNIQUE ,
>   action TEXT NOT NULL DEFAULT 'DISCARD')
>
> I can insert values
>
> insert into blocked (email) values('rwillett.dr...@example.com')
> insert into blocked (email) values('rwillett+dr...@example.com')
>
> And this is the code that I have written that works but it looks poor to
> my untutored eyes
>
> select CASE
> WHEN EXISTS (select 1 from blocked where email = '
> rwillett.dr...@example.com')
> THEN (select action from blocked where email = '
> rwillett.dr...@example.com')
> ELSE 'OK'
> END
>
>
> In the Postfix query I'll replace
>
> email = 'rwillett.dr...@example.com'
>
> with email = '%s'
>
> so that the email address is passed in.
>
> My worry is that the query has two selects and 'feels' bad. It works but
> feels dirty...
>
> Is there a better way under SQLite?
>
> Any suggestions welcomed.
>
> Rob
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] ALTER TABLE

2018-05-22 Thread Paul Sanderson
To modify column names if you want to live dangerously you could try
something like this

PS C:\sqlite> sqlite3 writ.db
SQLite version 3.23.1 2018-04-10 17:39:29
Enter ".help" for usage hints.
sqlite> create table test (c1, c2, c3);
sqlite> insert into test values(1, 2, 3);
sqlite> pragma writable_schema = yes;
sqlite> update sqlite_master set sql = 'create table test (d1, d2, d3)'
where name = 'test';
sqlite> .quit

PS C:\sqlite> sqlite3 writ.db
SQLite version 3.23.1 2018-04-10 17:39:29
Enter ".help" for usage hints.
sqlite> .headers on
sqlite> select * from test;
d1|d2|d3
1|2|3

You could potentially do the same to modify the type of a column and
SQLite's column affinity rules could/should help display the data correctly.
I have not tested this for different column types.

sqlite> create table test2 (c1 int, c2 int, c3 int);
sqlite> insert into test2 values(1, 2, 3);
sqlite> pragma writable_schema = yes;
sqlite> update sqlite_master set sql = 'create table test2 (d1 text, d2
text, d3 text)' where name = 'test2';
sqlite> .quit

PS C:\sqlite> sqlite3 writ.db
SQLite version 3.23.1 2018-04-10 17:39:29
Enter ".help" for usage hints.
sqlite> .headers on
sqlite> select * from test2;
d1|d2|d3
1|2|3

The same idea should also work to add/remove a foreign key -  but you would
of course need to understand any implications and ensure that the existing
data does not cause a constraint conflict.

The only suggestion I have a problem with is dropping a column. Every
record in the b-tree would need to be modified to remove the now redundant
data. The only exception I can see to this is if you are dropping the last
column from a table.

sqlite> pragma writable_schema = yes;
sqlite> update sqlite_master set sql = 'create table test2 (e1 text, e2
text)' where name = 'test2';
sqlite> .quit

PS C:\sqlite> sqlite3 writ.db
SQLite version 3.23.1 2018-04-10 17:39:29
Enter ".help" for usage hints.
sqlite> .headers on
sqlite> select * from test2;
e1|e2
1|2
sqlite> pragma integrity_check;
integrity_check
ok

I have not done any thorough testing as this sort of thing is outside my
main area of interest, but it might give you some ideas. It goes without
saying that messing with sqlite_master is dangerous territory.





Paul
www.sandersonforensics.com
SQLite Forensics Book 

On 22 May 2018 at 20:34, Igor Korot  wrote:

> Hi, Charles,
>
> On Tue, May 22, 2018 at 2:30 PM, Charles Leifer 
> wrote:
> > As a workaround, you can always rename the existing table, create the new
> > table with desired attributes, and do a INSERT INTO ... SELECT FROM
> > old_table. Then you can safely drop the old table.
>
> But the table_name will be different.
> Also the data in the old table might be referencing some other table.
> So this process is not really very
> straightforward...
>
> Thank you.
>
> >
> > On Tue, May 22, 2018 at 1:14 PM, Thomas Kurz 
> wrote:
> >
> >> > ALTER TABLE ADD COLUMN has existed for a long time.
> >>
> >> Yes, sorry, I mixed things up.
> >>
> >> The order of importance is imho:
> >> 1. RENAME COLUMN (shouldn't be too hard)
> >> 2. DROP COLUMN (should be a bit more comlicated but feasible)
> >> 3. MODIFY COLUMN
> >>
> >> > What kind of MODIFY COLUMN changes do you have in mind?
> >>
> >> I understand this can be difficult as there are many possible operations
> >> that might be incompatible with the data already stored in that column.
> >> Last time I needed MODIFY, I wanted to change the ON UPDATE/DELETE
> CASCADE
> >> action of a foreign key column. Also adding/removing a foreign key
> would be
> >> useful.
> >>
> >> Kind regards,
> >> Thomas
> >>
> >> ___
> >> sqlite-users mailing list
> >> sqlite-users@mailinglists.sqlite.org
> >> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
> >>
>
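The rename-and-copy workaround discussed above can be sketched end to end; note that the replacement table can take the original name, which addresses part of the naming concern (Python's sqlite3 here purely for illustration, table names invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, a TEXT, b TEXT)")
con.execute("INSERT INTO t VALUES (1, 'x', 'drop-me')")

with con:  # do the whole swap in one transaction
    con.execute("ALTER TABLE t RENAME TO t_old")
    # new definition without column b - i.e. a manual DROP COLUMN
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, a TEXT)")
    con.execute("INSERT INTO t (id, a) SELECT id, a FROM t_old")
    con.execute("DROP TABLE t_old")

row = con.execute("SELECT * FROM t").fetchone()
```

Foreign keys referencing the table still need care (the point Igor raises); wrapping the swap in PRAGMA foreign_keys=OFF is one common approach.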


Re: [sqlite] [EXTERNAL] Re: database locked on select

2018-05-30 Thread Paul Sanderson
If you are doing each update in a separate transaction it will be much
slower than wrapping them in a single transaction.

See the FAQ here; it refers to inserts, but updates will be the same.

http://sqlite.org/faq.html#q19
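To illustrate the FAQ point, one explicit transaction around all the updates commits once instead of once per statement (a sketch with Python's sqlite3; table and column names are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tab (id INTEGER PRIMARY KEY, c1 TEXT)")
con.executemany("INSERT INTO tab VALUES (?, ?)", [(i, "old") for i in range(1000)])
con.commit()

# One explicit transaction around all the UPDATEs: a single commit/fsync
# instead of one per statement.
with con:  # BEGIN ... COMMIT
    con.executemany("UPDATE tab SET c1 = ? WHERE id = ?",
                    [("new", i) for i in range(1000)])

count = con.execute("SELECT COUNT(*) FROM tab WHERE c1 = 'new'").fetchone()[0]
```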


Cheers
Paul


On Wed, 30 May 2018 at 09:34, Torsten Curdt  wrote:

> > Do the select and updates run inside a explicit transaction or they
> > run in individual implicit transactions?
> >
>
> implicit - does that make a big difference in this case?
>
>
> If you really want a single query you could write something like:
> >
> > WITH data(id, c1, c2 /*, ... */) AS (VALUES
> > (123, 'abc', 'xyz' /*, ... */),
> > (456, 'xxx', 'yyy' /*, ... */),
> > (789, 'xyz', 'xyz' /*, ... */)
> > /*  ...  */
> > ) UPDATE tab
> > SET (c1, c2 /*, ... */) = (SELECT c1, c2 /*, ... */ WHERE data.id =
> > tab.id)
> > WHERE id IN (SELECT id FROM data);
> >
> >
> But for that again means all the data (or the single query) needs to be
> built up in memory.
>
> cheers,
> Torsten
>
-- 
Paul
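For reference, the single-query pattern quoted above does work once the SET subquery is given a FROM clause; a sketch (Python's sqlite3 for illustration; the column-list SET syntax needs SQLite 3.15+):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tab (id INTEGER PRIMARY KEY, c1 TEXT, c2 TEXT)")
con.executemany("INSERT INTO tab VALUES (?, ?, ?)",
                [(123, "a", "b"), (456, "c", "d"), (789, "e", "f")])

# Multi-row UPDATE driven by a VALUES CTE; note the "FROM data" that the
# quoted sketch omitted.
con.execute("""
    WITH data(id, c1, c2) AS (VALUES (123, 'abc', 'xyz'), (456, 'xxx', 'yyy'))
    UPDATE tab
       SET (c1, c2) = (SELECT c1, c2 FROM data WHERE data.id = tab.id)
     WHERE id IN (SELECT id FROM data)
""")
rows = con.execute("SELECT * FROM tab ORDER BY id").fetchall()
```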


Re: [sqlite] Sqlite delete too slow in 4 GB database

2018-06-04 Thread Paul Sanderson
Have you made sure auto_vacuum is disabled?

pragma auto_vacuum = 0;

Have you got a suitably large page size if your records are that big?
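A quick way to check and set both on a fresh database (a sketch with Python's sqlite3; both pragmas must be applied before the database has any content, and changing auto_vacuum on an existing file requires a VACUUM afterwards):

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "fresh.db")
con = sqlite3.connect(path)

# Apply before the first table is created, otherwise page_size is ignored.
con.execute("PRAGMA auto_vacuum = 0")
con.execute("PRAGMA page_size = 8192")
con.execute("CREATE TABLE big (id INTEGER PRIMARY KEY, payload BLOB)")
con.commit()

auto_vacuum = con.execute("PRAGMA auto_vacuum").fetchone()[0]
page_size = con.execute("PRAGMA page_size").fetchone()[0]
```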



Paul

On 4 June 2018 at 13:01, Olivier Mascia  wrote:

> Hello,
>
> > Sqlite delete too slow in 4 GB database
>
> What does:
>
> 'pragma secure_delete;'
>
> and
>
> 'pragma auto_vacuum;'
>
> say, on that db?
>
> --
> Best Regards, Meilleures salutations, Met vriendelijke groeten,
> Olivier Mascia
>
>
>


Re: [sqlite] [EXTERNAL] Re: Sqlite delete slow in 4GB

2018-06-06 Thread Paul Sanderson
The structure of a record is shown in the graphic at this link which is
from my book SQLite Forensics:



www.sqliteforensics.co.uk/pics/table_leaf_format.png



As long as ALL of the serial types (i.e. the whole record header) are held on the main B-tree page (i.e. not an overflow page), which unless you have a huge number of columns in a table will pretty much always be the case, then finding the offset to a given field in the payload is just a matter of reading and summing the size of each serial type until you get to the entry you want. If the payload data you want is on the same page, before or after a blob, then accessing it is just a read from the current page buffer. It is only when a blob is so large that the data you want lands on an overflow page that things slow down; it shouldn't matter if some of the record overflows, as long as the data you want is on the B-tree leaf page.



If the blob is very big then you may need to read multiple overflow pages
to get to the data you want.



It is also worth bearing in mind that this does not just apply to blobs: if you have a large string field, or multiple string fields, that cause a record to overflow, then you have exactly the same problem with any fields that come after those string fields.
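A sketch of the schema advice this implies - small, frequently accessed fields first and the big blob last, so queries that never touch the blob never have to walk its overflow chain (names are illustrative, not the poster's actual schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Small fields first; the ~1 MB blob last so it spills to overflow pages
# without dragging the other columns with it.
con.execute("""CREATE TABLE prc (
    id INTEGER PRIMARY KEY,
    segment_no INTEGER NOT NULL,
    prc_id INTEGER,      -- small field, cheap to reach on the leaf page
    value BLOB           -- large payload, lives on overflow pages
)""")
con.execute("INSERT INTO prc (segment_no, prc_id, value) VALUES (?, ?, ?)",
            (1, 42, b"\x00" * 1_000_000))

# Reading prc_id stops at the leaf page; the blob is never expanded.
row = con.execute("SELECT prc_id FROM prc WHERE id = 1").fetchone()
```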


Paul

On 6 June 2018 at 10:15, Hick Gunter  wrote:

> Your schema has a major flaw that is addressed in SQLite documentation:
>
> Do not put any fields after blob fields, especially if the content is
> typically large. SQLite accesses fields in the order of defintion, so to
> access your ANH_PRC_id field, it has to retrieve the 1MB ANP_VALUE blob.
>
> Putting small and frequently acessed fields at the beginning of the
> definition allows SQLite to retrieve these fields without expanding the
> whole row.
>
>
> -Original Message-
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
> On Behalf Of Aydin Ozgur Yagmur
> Sent: Wednesday, 6 June 2018 11:06
> To: SQLite mailing list 
> Subject: [EXTERNAL] Re: [sqlite] Sqlite delete slow in 4GB
>
> Thank you very quick response.
>
> We have already planned to change file system. But this problem seem not
> related with the file size, because we did not hit the limit and when i try
> to delete same data in ,for example, 2 GB-sized db, i encounter with the
> same problem. This is my table and index definitions.
>
> Thanks for your help.
>
> CREATE TABLE "ANHXT" (
>   "id" integer primary key autoincrement,
>   "ANH_AD" text,
>   "ANH_DBGMHWID" text,
>   "ANH_TYPE" integer,
>   "ANH_INDEXNO" int64_t)
> CREATE TABLE "PRCXT" (
>   "id" integer primary key autoincrement,
>   "ANP_SEGMENTNO" integer not null,
>   "ANP_VALUE" blob,
> ==>  "ANH_PRC_id" bigint,
>   constraint "fk_ANHPRC_ANH_PRC" foreign key ("ANH_PRC_id") references
> "ANHXT" ("id") on update cascade on delete cascade deferrable initially
> deferred) CREATE UNIQUE INDEX UQC_ANH_TYPE on ANHXT( ANH_TYPE)CREATE UNIQUE
> INDEX UQC_ANP_SEGMENTNO_ANAHTARID on PRCXT( ANP_SEGMENTNO,ANH_PRC_id)CREATE
> INDEX findex on PRCXT( ANH_PRC_id)
>
>
>
> On Wed, Jun 6, 2018 at 11:42 AM, Karl Billeter 
> wrote:
>
> > On Wed, Jun 06, 2018 at 09:27:57AM +0300, Aydin Ozgur Yagmur wrote:
> >
> > > I have been using fat32 file system. I have a database which has 4
> > > GB
> > size.
> >
> > Er... I'm surprised there aren't more problems due to 4GB being the
> > max file size supported by fat32.  Any chance to change it to exFAT?
> >
> >
> > K
> >
>
>
> ___
>  Gunter Hick | Software Engineer | Scientific Games International GmbH |
> Klitschgasse 2-4, A-1130 Vienna | FN 157284 a, HG Wien, DVR: 0430013 | (O)
> +43 1 80100 - 0
>


Re: [sqlite] ROWID....

2018-06-09 Thread Paul Sanderson
As ROWID is often an alias for an INTEGER PRIMARY KEY, it needs to be able to represent both negative and positive integers; otherwise you would restrict the range of an integer PK.
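For example, the full signed 64-bit range is usable, including negative keys (a sketch with Python's sqlite3):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")  # id aliases ROWID

# Both extremes of the signed 64-bit range are valid rowids.
con.execute("INSERT INTO t (id, v) VALUES (?, ?), (?, ?)",
            (-9223372036854775808, "min", 9223372036854775807, "max"))
ids = [r[0] for r in con.execute("SELECT id FROM t ORDER BY id")]
```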



Paul

On 9 June 2018 at 13:03, Kevin Benson  wrote:

> On Sat, Jun 9, 2018 at 7:03 AM Luuk  wrote:
>
> >
> > In the docs (https://www.sqlite.org/autoinc.html) it says:
> > In SQLite, table rows normally have a 64-bit signed integer ROWID
> >  
> >
> > Question:
> > Why it this a signed integer, and not an unsigned integer?
> >
> > Simply by choice? of is there something more to say about this?
> >
>
> http://sqlite.1065341.n5.nabble.com/Use-of-AUTOINCREMENT-td74775.html#
> a74786
> --
>--
>   --
>  --Ö¿Ö--
> K e V i N
>


[sqlite] saving :memory:database to disk

2015-05-07 Thread Paul Sanderson
I am contemplating a change to a program whereby a database is
initially created in memory and then later, if my users choose, they can
save it to disk and then switch to using the disk based DB.

I can obviously create a new disk based db, iterate through
sqlite_master and then populate each table. But I just wondered
whether anyone had any suggestions of a better way of doing this?

Thanks.
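One option that avoids walking sqlite_master by hand is the SQLite Online Backup API (sqlite3_backup_init and friends in the C API; Python's sqlite3 has exposed it as Connection.backup since Python 3.7). A sketch of copying a :memory: database to disk:

```python
import sqlite3, os, tempfile

mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE t (x)")
mem.execute("INSERT INTO t VALUES (1)")
mem.commit()

# Copy the live in-memory database, page by page, into a disk file.
path = os.path.join(tempfile.mkdtemp(), "saved.db")
disk = sqlite3.connect(path)
mem.backup(disk)
disk.close()

# The on-disk copy is a complete, independent database.
check = sqlite3.connect(path)
value = check.execute("SELECT x FROM t").fetchone()[0]
```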


[sqlite] saving :memory:database to disk

2015-05-07 Thread Paul Sanderson
Thanks Marc - it's the latter, very dynamic.
Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 7 May 2015 at 18:55, Marc L. Allen  wrote:
> Don't you already have to have a DB initialization script to create the 
> in-memory database?  Couldn't you use that to create the on-disk database, 
> and then have a separate script to copy all the data over?
>
> Or is this for more of a dynamic database whereby the users can add/remove 
> tables and indexes?
>
> -Original Message-
> From: sqlite-users-bounces at mailinglists.sqlite.org 
> [mailto:sqlite-users-bounces at mailinglists.sqlite.org] On Behalf Of Paul 
> Sanderson
> Sent: Thursday, May 07, 2015 1:53 PM
> To: General Discussion of SQLite Database
> Subject: [sqlite] saving :memory:database to disk
>
> I am contemplating a change to a program whereby a database is initailly 
> created in memory and then later if my users choose they can save it to disk 
> and then switch to using the disk based DB.
>
> I can obviously create a new disk based db, iterate through sqlite_master and 
> then populate each table. But I just wondered whether anyone had any 
> suggestions of a better way of doing this?
>
> Thanks.


[sqlite] Understanding WALs

2015-11-10 Thread Paul Sanderson
I am trying to get my head around the operation of WAL files and an
oddity that I can't explain.

As I understand it when a transaction is committed when a DB is in WAL
mode each changed page is appended to the WAL. Multiple transactions
work in the same way until a checkpoint occurs.

The WAL has a header which contains two salts. For each page (preceded by a frame header, the two together being known as a frame) there is a copy of the two salts from the WAL file header and two checksums which use the two salts as seeds. For a page/frame in the WAL to be valid, the salts in the frame must match the salts in the file header, and the checksums must still compute when recalculated over the content of the frame/page.

When a checkpoint occurs, all the valid pages (using the rule above) are written to the DB, salt1 in the WAL file header is incremented and salt2 is randomised. Writes to the WAL then recommence from the beginning. As the salts have been updated, any pages left in the WAL (which is not truncated) will be invalid, because neither the salts nor, therefore, the checksums will match.

OK, that seems fairly straightforward. So I created a new DB using the command prompt and issued the following instructions:

pragma journal_mode = wal;
create table test (x int);
pragma wal_checkpoint;
insert into test values(1);
pragma wal_checkpoint;

I then had a look at the WAL

The screenshot below shows the WAL file header and you can see the two
salts displayed

http://www.sandersonforensics.com/pics/wal_header.jpg

The screenshot at this link shows the first page in the WAL and you
can see that the two salts in this frame match the salts in the file
header (and that this is a commit frame - DB size in pages > 0).
Please take my word for it that the checksums also compute.

http://www.sandersonforensics.com/pics/wal_page2.jpg

As the last thing I did was a WAL checkpoint, then according to the documentation the page should have been written to the DB (it was - see the pic below of page two of the DB) AND the salts incremented and randomised (section 4.3 of the file format page).

http://www.sandersonforensics.com/pics/db_page2.jpg

For completeness, there is another page in the WAL, i.e. page 2 again, which in this case is blank; the salts do not match those in the header and the checksums don't compute - this is what I expected above.

http://www.sandersonforensics.com/pics/db_invalid.jpg

As I said, this is an academic question, as the integrity of the database has been maintained; the only (non-)issue I see is that instead of reading the latest page from the DB, SQLite will read it from the WAL - but as they are the same there is no problem.

I am trying to explain how WALs work so that I can speak with a bit more authority, and what I am seeing conflicts with the explanation of the working of WALs on sqlite.org. From a usability angle I can't see an issue with this, but as it came up while I was working through WAL usage with a colleague I would like to be able to explain to him what is going on.
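For anyone who wants to look at the salts without a hex editor, a sketch that parses the 32-byte WAL file header (eight big-endian 32-bit words, per the file format document):

```python
import sqlite3, struct, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "wal_demo.db")
con = sqlite3.connect(path)
con.execute("PRAGMA journal_mode = WAL")
con.execute("CREATE TABLE test (x INT)")
con.execute("INSERT INTO test VALUES (1)")
con.commit()

# WAL header layout: magic, format version, page size, checkpoint sequence,
# salt1, salt2, checksum1, checksum2 - all big-endian 32-bit.
with open(path + "-wal", "rb") as f:
    (magic, fmt_version, page_size, ckpt_seq,
     salt1, salt2, cksum1, cksum2) = struct.unpack(">8I", f.read(32))
```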

Cheers
Paul




[sqlite] Understanding WALs

2015-11-11 Thread Paul Sanderson
Thanks Richard a nice simple answer to pass on.


Paul


On 11 November 2015 at 00:24, Richard Hipp  wrote:
> On 11/10/15, Paul Sanderson  wrote:
>>
>> As the last thing I did was a wal-checkpoint then according to the
>> documentation the page should have been written to the DB (it was see
>> pic below of page two of the DB) AND the salts incremented and
>> randomised (section 4.3 of the file format page).
>>
>
> The increment and randomization of the salts occurs at the start of
> the next write transaction.  (It is deferred until then to avoid
> having to take unnecessary write locks.)
>
> The file format documentation is not technically wrong, but it is
> misleading.  I have updated that document now (see the second
> paragraph of section 4.3 at https://www.sqlite.org/fileformat.html) to
> try to make it less ambiguous.
>
> --
> D. Richard Hipp
> drh at sqlite.org


[sqlite] Speed of count distinct rows if unique index on all fields

2015-10-02 Thread Paul Sanderson
Counting all rows and counting distinct rows are very different tasks. In simple terms:

For all rows the process is
read every leaf page in the index
query the cell count field (number of records)
add cell count to the to the total count
repeat for the remaining leaf pages


For distinct records
read every leaf page in the index
read each cell on page and identify whether it has been seen before
add count of distinct cells to total count
repeat for the remaining leaf pages

This is a bit oversimplified, as records that duplicate one another may sit on different pages.

The thrust of this is that to count distinct records every page and every record in the tree must be read and examined; to count all records, only every page header in the tree needs to be read.
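A small illustration: both counts give the same answer when a unique index covers every column, but the distinct count still has to examine every cell while the plain count only needs page headers (Python's sqlite3 for convenience):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a, b)")
con.execute("CREATE UNIQUE INDEX t_ab ON t (a, b)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(i, i * 2) for i in range(100)])

# Same result; the second query does much more work per page.
total = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
distinct = con.execute(
    "SELECT COUNT(*) FROM (SELECT DISTINCT a, b FROM t)").fetchone()[0]
```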

Paul


On 2 October 2015 at 14:15, Simon Slavin  wrote:
>
> On 2 Oct 2015, at 9:05am, Bart Smissaert  wrote:
>
>> Noticed that if I have table with a unique index on all fields, counting
>> all rows is still a lot faster
>> (about a factor 10 on my particular test table) than counting distinct rows.
>> Could maybe an optimization be added to SQLite to speed this up, taking
>> advantage of the fact that there is a unique index on all fields?
>
> The fact that your index is so wide is actually slowing it down.  Because it 
> means that the index takes up more space on disk.  The fastest way to count 
> all the rows in a table would be to have an index on just a single numeric 
> field.  This means that reading the whole index in would involve reading the 
> least number of pages.
>
> Simon.


[sqlite] Encrypted or Corrupt File

2015-10-15 Thread Paul Sanderson
What chat application are they from?

What was the last thing that you did before they stopped working?

Is there anything else on your computer that could be opening these files?

Paul


On 15 October 2015 at 19:48, Simon Slavin  wrote:
>
> On 15 Oct 2015, at 7:37pm, Michael Horn  wrote:
>
>> I am new to this and an amateur to say the least. I'm trying to recover
>> online some chat files. I'm using SQLite manager to read files and the
>> files I was just using yesterday are not opening. file error: either the
>> file is encrypted or corrupt component returned failure code 0x8052000b.
>> Can anyone give me some pointers? Thanks.
>
> First, ignore whatever tool you're currently using to view the files and use 
> the SQLite command-line tool instead.  Does it get the same kind of error ?  
> If so ...
>
> Do you have hexdump or any other tool which can show the beginning of the 
> file as hexadecimal digits ?  The beginning of a normal SQLite database is a 
> fixed header containing easy-to-read ASCII characters.  By looking at the 
> file you're trying to open we might be able to figure out what it is.
>
> Simon.
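Simon's header check above can be scripted: a healthy database file always begins with the 16-byte string "SQLite format 3" followed by a NUL byte. A sketch:

```python
import sqlite3, os, tempfile

# Build a known-good database to compare against.
path = os.path.join(tempfile.mkdtemp(), "probe.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (x)")
con.commit()
con.close()

# The fixed 16-byte magic header at offset 0.
with open(path, "rb") as f:
    header = f.read(16)
is_sqlite = header == b"SQLite format 3\x00"
```

If the first 16 bytes of a suspect file don't match this, the file is either not a SQLite database or has been encrypted/overwritten at the start.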


[sqlite] Database corrupt after hard reboot. Can it be repaired?

2015-10-19 Thread Paul Sanderson
Gunnar

If the data is very important then don't do anything that will change
the computer.

If you have some tools that can take an image copy of the disk (dd to another drive), or maybe the free AccessData imaging software, then I would recommend doing this first - it will leave all options open should you wish to try something complex later.

I have tools that can recover tables and rows from a damaged database,
all they need is the schema of the original DB (the more complicated
the bettter). The data can be recovered from the corrupt DB, from any
journal files and also potentially from the swap file (or equivalent)
or unused space on the hard disk. Records can also potentially be
recovered from RAM - although in your case as the power has been off
this is not going to work.

The success of the operation depends on many factors and there is a fair chance that recovery won't be 100% complete - depending on the nature of the data (and any table constraints) this may still be acceptable. It may also require some manual "filtering" of the recovered data, as simpler tables are prone to include what can best be described as spurious results.

Any old backups of the database can potentially be used to create a
working solution.

More information - the DB schema particularly - and what you need
recovered for it to be a success (even a partial success) would be
good.
Paul


On 19 October 2015 at 18:51, gunnar  wrote:
> Thanks a lot Simon!
> We'll try it.
>
>
> (BTW I also got an email from alexa, but looking in the sqlite mailing list
> I can see that I'm not the only one)
>
>
>
>
> On 10/19/2015 06:20 PM, Simon Slavin wrote:
>>
>> On 19 Oct 2015, at 5:13pm, gunnar  wrote:
>>
>>> We run sqlite with PRAGMA synchronous=OFF.
>>>
>>> Now we suffered from a 'hard reboot' because a sysadmin unplugged the
>>> power cable :(
>>>
>>> Is it possible to repair
>>
>> First take a copy of the database file and any other file from the same
>> directory with a similar name.  This is just in case you decide to use
>> heroic (i.e. expensive) means to rescue the database later.
>>
>> Open the proper (not the copy) database using the SQLite Shell Tool
>> downloadable from the SQLite site.  Use the '.dump' command to dump the
>> database as a text file of SQL commands.
>>
>> Read through that file and see if it looks like it has captured all your
>> data in it.  If it has ...
>>
>> Create a new blank database using the SQLite Shell Tool.
>> Use the '.read' command to execute the commands in the text file.
>> Use various commands in the SQLite shell tool to explore the resulting
>> database and see if it looks like it's worth using.
>>
>> Simon.
>>
>


[sqlite] How would sqlite read this from disk?

2015-10-29 Thread Paul Sanderson
It reads a complete page at a time, so the only seeking - in the sense of a disk seek - is to the page itself; locating the start of each row within the page happens in memory.

Note that there may be multiple required rows on the same page if the row length is much less than the page length. Conversely, if a row is longer than a size determined by some arcane math on the page size (see the file format documentation), it may overflow onto one or more overflow pages.
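The record-header walk discussed in this thread can be sketched roughly as follows (a simplified decoder written from the file format document - varints and serial-type sizes only, not production code):

```python
def read_varint(buf, i=0):
    """Decode a SQLite big-endian varint starting at buf[i].
    Returns (value, next_offset)."""
    value = 0
    for n in range(8):
        byte = buf[i + n]
        value = (value << 7) | (byte & 0x7F)
        if not byte & 0x80:          # high bit clear: last byte
            return value, i + n + 1
    # A 9th byte, if reached, contributes all 8 of its bits.
    return (value << 8) | buf[i + 8], i + 9

def serial_type_size(t):
    """Payload size in bytes for a record-header serial type."""
    fixed = {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 6, 6: 8, 7: 8, 8: 0, 9: 0}
    if t in fixed:
        return fixed[t]
    if t >= 12:
        # even >= 12: blob of (t-12)/2 bytes; odd >= 13: text of (t-13)/2 bytes
        return (t - 12) // 2 if t % 2 == 0 else (t - 13) // 2
    raise ValueError("serial types 10 and 11 are reserved")
```

Summing serial_type_size() over the header entries before the wanted column gives its offset into the payload, which is the "skip without reading the data" step being discussed.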
Paul


On 29 October 2015 at 17:59, Jason H  wrote:
>
>
>> Sent: Thursday, October 29, 2015 at 1:34 PM
>> From: "Scott Hess" 
>> To: "SQLite mailing list" 
>> Subject: Re: [sqlite] How would sqlite read this from disk?
>>
>> On Thu, Oct 29, 2015 at 10:20 AM, Jason H  wrote:
>> >
>> > If I could ask a followup question. You made the statement "SQLite reads
>> > that row of the table from storage, from the first column to the last
>> > column needed by the SELECT, but perhaps not all the way to the end of the
>> > columns in the row.", Given an example select that requires columns 2,3,5,
>> > does this mean that column 4 is also read, or is column 4 be skipped? I
>> > guess the question is I am assuming that a row is serialization of one or
>> > more strings that are a serialization of a string preceded by their lengths
>> > (i.e. [L|data] ), would SQLite do something akin to (using my 2,3,5
>> > example):
>> > begin row read, read L1, seek L1, read L2, read L2*byte, read L3, read
>> > L3*byte, read L4, seek L4, read L5, read L5*byte
>> > or would the 'read L4, seek L4' be changed to 'read L4, read L4*byte' ?
>>
>>
>> You should consider reading https://www.sqlite.org/fileformat2.html ,
>> especially sections 2.3 "Representation Of SQL Tables" and 1.5 "B-tree
>> Pages".  If your project _really_ needs to know this level of detail, then
>> you really should read up on the underlying system.  Also maybe throw in
>> https://www.sqlite.org/arch.html to get a broad feel of how things break
>> down.
>
>
> Thanks Scott. I had actually already linked to the fileformat2 url, but I did 
> not see a discussion to the level of detail for row reading, for which I am 
> after. I did however re-read in the context of your previous statement and 
> gained some insight. I should a have stated it more like this:
> begin row read header_varint, serial_types*N, seek, read, seek, read or
> begin row read header_varint, serial_types*N, seek, read, read, read
>
> The documentation does not go into the detail of the engine is able to skip 
> the reading of unneeded interior rows. In theory, it can because the length 
> is contained in the header. So instead of read() on every column in the row, 
> it can call seek() if it knows it doesn't need that column. My question is 
> now simply: does it seek past unneeded columns, or does everything get send 
> through read once data has started being read? It's a minor detail with big 
> performance implications.
>
> Many thanks again.
>


[sqlite] Get fields type

2016-04-13 Thread Paul Sanderson
You could use SQL, i.e.

select typeof(col1) from table1

which would return text, integer, etc. for each row in the table for the specified column.

Problems could arise if rows have NULL for a given column, or because SQLite's lack of strict column affinity means that someone has dropped a string into an integer column.

select typeof(col1) from table1 where col1 not null limit 1

might be a better option.
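A quick illustration of both queries (Python's sqlite3 for convenience; table and column names invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table1 (col1)")  # no declared affinity
con.executemany("INSERT INTO table1 VALUES (?)", [(1,), ("abc",), (None,)])

# typeof() reports the storage class of each individual value.
types = [r[0] for r in
         con.execute("SELECT typeof(col1) FROM table1 ORDER BY rowid")]

# Skipping NULLs gives the type of the first real value.
first = con.execute(
    "SELECT typeof(col1) FROM table1 WHERE col1 NOT NULL LIMIT 1").fetchone()[0]
```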
Paul


On 13 April 2016 at 17:43, Simon Slavin  wrote:
>
> On 13 Apr 2016, at 10:57am, hfiandor  wrote:
>
>> I have already obtained from a SQLQuery the field?s list. Now I need to
>> obtain the field?s type, from the SQLQuery obtained from a SQLite table.
>
> In the SQLite API you would use this
>
> 
>
> int sqlite3_column_type(sqlite3_stmt*, int iCol);
>
> but you're not using the SQLite API.
>
> What programming language are you using.
> What API are you using to access SQLite ?
>
> Simon.


[sqlite] Get fields type

2016-04-13 Thread Paul Sanderson
Or of course you could query the sqlite_master table
Paul


On 13 April 2016 at 18:08, Paul Sanderson  
wrote:
> you could use SQL i.e.
> select typeof(col1) from table1
>
> would return text, integer etc. for each row in the table for the
> specified column.
>
> problems could arise if rows have NULL for a given column or SQLite's
> lack of strict column affinity means that someone has dropped a
> string into an integer column.
>
> select typeof(col1) from table1 where col1 not null limit 1
>
> might be a better option
> Paul
>
>
> On 13 April 2016 at 17:43, Simon Slavin  wrote:
>>
>> On 13 Apr 2016, at 10:57am, hfiandor  wrote:
>>
>>> I have already obtained from a SQLQuery the field?s list. Now I need to
>>> obtain the field?s type, from the SQLQuery obtained from a SQLite table.
>>
>> In the SQLite API you would use this
>>
>> <https://www.sqlite.org/c3ref/column_blob.html>
>>
>> int sqlite3_column_type(sqlite3_stmt*, int iCol);
>>
>> but you're not using the SQLite API.
>>
>> What programming language are you using.
>> What API are you using to access SQLite ?
>>
>> Simon.


[sqlite] obtaining bracketing rows where rows are missing

2016-04-15 Thread Paul Sanderson
I confused myself with that title.

I have a DB with 300 odd rows with an integer primary key, there are a
few breaks in the key numbering such that I have something like

1
2
3
5
8
9
10
11
14

I can generate a list of missing rows easily enough, e.g.

4
6
7
12
13

but I would like a table of the rows that bracket the missing rows e.g.

3, 5
5, 8
11, 14

any ideas how I might go about this?

Cheers

Paul


[sqlite] obtaining bracketing rows where rows are missing

2016-04-15 Thread Paul Sanderson
Thanks
Paul


On 15 April 2016 at 11:14, Rowan Worth  wrote:
> sqlite> create table a(a integer primary key);
>
> sqlite> SELECT a1.a, a2.a FROM a AS a1, a AS a2
> WHERE a2.a > a1.a
> GROUP BY a1.a
> HAVING a2.a = min(a2.a)
> AND a2.a <> a1.a + 1;
>
> 3|5
> 5|8
> 11|14
>
> IIRC "HAVING x = min(x)" is not portable SQL but it seems to work in sqlite.
>
> -Rowan
>
>
>
> On 15 April 2016 at 18:04, Paul Sanderson 
> wrote:
>
>> I confused myself with that title.
>>
>> I have a DB with 300 odd rows with an integer primary key, there are a
>> few breaks in the key numbering such that I have something like
>>
>> 1
>> 2
>> 3
>> 5
>> 8
>> 9
>> 10
>> 11
>> 14
>>
>> I can generate a list of missing rows easily enough, e.g.
>>
>> 4
>> 6
>> 7
>> 12
>> 13
>>
>> but I would like a table of the rows that bracket the missing rows e.g.
>>
>> 3, 5
>> 5, 8
>> 11, 14
>>
>> any ideas how I might go about this?
>>
>> Cheers
>>
>> Paul
>>
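Rowan's gap-bracketing query above can be checked with a quick sketch (Python's sqlite3 for convenience, using the sample keys from the original post):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (a INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO a VALUES (?)",
                [(k,) for k in (1, 2, 3, 5, 8, 9, 10, 11, 14)])

# For each key, find the next-highest key; keep pairs that are not adjacent.
pairs = con.execute("""
    SELECT a1.a, a2.a FROM a AS a1, a AS a2
    WHERE a2.a > a1.a
    GROUP BY a1.a
    HAVING a2.a = min(a2.a) AND a2.a <> a1.a + 1
""").fetchall()
```

The bare a2.a in the HAVING clause relies on SQLite's documented min()/max() behaviour (non-aggregate columns take their values from the row holding the minimum), so this is SQLite-specific rather than portable SQL, as Rowan notes.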


[sqlite] Pascal (almost) style BLOBs

2016-04-18 Thread Paul Sanderson
You could use a binary plist or some other form of structured data - the bplist blob could then contain ints, floats, strings, byte arrays or even embedded binary plists. As it is a structured format it is, to an extent, self-documenting (in terms of the stored data types).
Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 17 April 2016 at 16:27, William Drago  wrote:
> On 4/17/2016 9:45 AM, Simon Slavin wrote:
>>
>> On 17 Apr 2016, at 1:02pm, William Drago  wrote:
>>
>>> I am forced to consider this approach because my function will be
>>> receiving blobs but will have no idea what those blobs are.
>>
>> Why do you care ?  A BLOB is a BLOB.  It doesn't matter whether it's an
>> integer, a string, a car, a peace treaty or the smell of newly cut grass.
>> You store it when told to and retrieve it when told to.
>
>
> My job is to find all the blob columns and unblob them into arrays of
> whatever type they originally were before they were blob'd.
>
>>> if this "feature" is not clearly documented someone in the future will
>>> have a very hard time figuring out why the blobs don't make sense.
>>
>> BLOBs don't make sense.  They're opaque.  You are not meant to understand
>> anything about what's in a BLOB.  That's for the layer of your software
>> which packs the information into a blob or unpacks the blob back into
>> information.
>
>
> Understood. I'm writing the code that does the packing and unpacking. The
> users are designing their own databases. When they store for example, an
> array of 16 bit integers, or an array of 64 bit floats, I want to give them
> the same thing back when they query the database not just an array of bytes
> that they have to decode themselves.
>
> Since I have control over how the blobs are generated, I can easily add a
> type identifier to each blob so that I know how to unblob it.
>
> -Bill
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
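Bill's type-identifier scheme could be sketched as follows. This is illustrative only — the one-byte tags, the little-endian layout, and the two supported types are assumptions, not the poster's actual format:

```python
import struct

# Hypothetical one-byte type tags prepended to each blob.
TAG_I16, TAG_F64 = 1, 2

def pack(values):
    """Prefix the packed array with a tag identifying its element type."""
    if all(isinstance(v, int) for v in values):
        return bytes([TAG_I16]) + struct.pack("<%dh" % len(values), *values)
    return bytes([TAG_F64]) + struct.pack("<%dd" % len(values), *values)

def unpack(blob):
    """Read the tag byte, then decode the payload back to a typed list."""
    tag, payload = blob[0], blob[1:]
    fmt = {TAG_I16: "h", TAG_F64: "d"}[tag]
    n = len(payload) // struct.calcsize(fmt)
    return list(struct.unpack("<%d%s" % (n, fmt), payload))

print(unpack(pack([1, 2, 3])))  # [1, 2, 3]
```

Because the tag travels inside the blob itself, the database schema stays a plain BLOB column and the round-trip logic lives entirely in the packing layer.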


[sqlite] Controlling of check-pointing of WAL Dbs

2016-04-27 Thread Paul Sanderson
Can you kill the process rather than shut down your application? If
the last connection doesn't get shut then the WAL will remain and will
be opened (but not played) when the DB is re-opened.
Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 27 April 2016 at 12:38, Albert Banaszkiewicz
 wrote:
> Hello.
>
> I am new here, thus I would like to apologize in advance if the question I am
> going to raise was already asked in the past.
>
> I am using SQLite 3.11.1 (actually it is a NDS consortium customized version 
> based on it).
>
> According to documentation, in case of WAL mode enabled databases, final 
> check-pointing is being done when the last connection to DB is about to be 
> closed.
>
> We are running into the scenario, where we would like to have a complete 
> control over the moment when check-pointing happens. This is easy in case of 
> auto-check-pointing, where it can be completely disabled or customized (via 
> hooks).
> However, it is possible that during the component life-time there is going to 
> be no 'appropriate' moment to schedule it and if the user powers device down, 
> we still want to avoid it since modern OSes typically expect running 
> applications to terminate within the specified (several seconds usually) time 
> intervals. Of course, we don't want to lose any data contained in the WAL
> journal as well and ideally, we would like to be still able to schedule 
> check-pointing in the 'appropriate' time after reboot.
>
> So finally, the questions:
>
> 1. Is there a way to control (i.e. disable) check-pointing happening at DB 
> closure ? (personally, I could not find anything like this in the code but 
> perhaps I missed something)
> 2. If not, are there any plans to introduce it ? Or is it not feasible at all 
> ?
>
> Thank you in advance,
> Albert
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Storing Images in Blobs - Best Practices

2016-02-18 Thread Paul Sanderson
How do you know that it is SQLite that is corrupting the images - could
your conversion to NSData (or elsewhere prior to writing to the DB) be
the culprit?

Given your schema, if a record has become corrupt then the following
two columns (the data for which would be stored contiguously
after the blob) would also be corrupt.


Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 17 February 2016 at 22:35, Simon Slavin  wrote:
>
> On 17 Feb 2016, at 10:23pm, Rooney, Joe  
> wrote:
>
>>  1.  The database table has four fields: Employee GUID (varchar), 
>> EmployeePic (Blob), isPicAvail (Bool) and picDate (int).
>
> Store the BLOB field at the end of the row.  This will make accessing the two 
> fields currently after it faster.
>
>> I use the same UPDATE call, as above in 4. I don't delete the previous Blob, 
>> just assuming that SQLite will take care of those details.
>
> It should do.
>
>> The process works well except that occasionally two images end up getting 
>> corrupted. Not just one, it always happens in pairs. I get the upper part of 
>> both images, with just the lower part showing garbage.
>
> Run PRAGMA integrity_check() on the database and tell us if it finds errors.
>
> If possible copy the database to a computer and download and use the analysis 
> utility from the SQLite download page.
>
>> The questions:
>>
>>  *   Is this general approach wise?
>>  *   Should I clear the old Blob before rewriting the new image, or should I 
>> simply delete the old rec and write a new one?
>>  *   Should I store the mime string coming back in the JSON return rather 
>> than converting the string to an image, and then converting it every time we 
>> need to display the image.
>>  *   Should I simply store a string link to the image and load that whenever 
>> it is needed.
>
> Yes.
> No, you should be able to just rewrite it.
> You wrote you were storing the NSData.  That's not just the bytes of the 
> image itself.  It doesn't matter which you do but be clear which you're 
> doing.  Storing MIME is probably a waste of space.
> I think you should store the content, not a link.
>
> Questions from me:
>
> What language are you writing in ?
> What API are you using to access SQLite ?
> Check the first few bytes of a BLOB and be certain that they are the first 
> few bytes of what would be in an image file stored on disk.
> How are you writing your BLOB ?  Are you writing the whole field at once or 
> are you using the specialist BLOB calls ?
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
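Simon's integrity check and column-ordering advice can be tried with a minimal sketch. The table and column layout below is hypothetical, following the schema described in the thread with the BLOB moved to the end of the row:

```python
import sqlite3

# Hypothetical schema with the BLOB column last, per the advice above.
con = sqlite3.connect(":memory:")  # use the real database path in practice
con.execute("""CREATE TABLE employees(
    EmployeeGUID TEXT,
    isPicAvail   INTEGER,
    picDate      INTEGER,
    EmployeePic  BLOB)""")

# PRAGMA integrity_check returns a single row reading "ok" when no
# corruption is found, otherwise a list of error messages.
result = con.execute("PRAGMA integrity_check").fetchone()[0]
print(result)  # ok
```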


[sqlite] Why is a separate journal file needed ?

2016-02-24 Thread Paul Sanderson
You would need some sort of allocation table for the journal (wal or
rollback) and on a rollback commit or a wal checkpoint the allocation
would need to be cleared (or maybe some "in use" bit cleared) to show
that the pages were now free to be re-used.

The additional data stored with rollback journals (page number before
and checksum after) and WALs (WAL frame before) would also need to be
implemented elsewhere and therefore add extra logic.

It sounds doable but more complex.
Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 24 February 2016 at 15:46, Igor Tandetnik  wrote:
> On 2/24/2016 10:08 AM, Stephen Chrzanowski wrote:
>>
>> IMO, all that, plus the fact that you have an easy roll back mechanism.
>> Anything that needs to be put in the database is external to the pristine
>> database.  Lock the database with a transaction, fill up the journal, the
>> power goes out, your pristine database isn't touched.
>
>
> Rollback journal works the other way round. New data is written to database
> file; previous content of overwritten pages is copied to journal file.
> Committing a transaction is fast - simply delete the journal. Rolling back
> (e.g. after power failure) means copying original pages from the journal
> back to the database file.
>
> In the world you describe - how would you implement committing a
> transaction? At some point, something must write to the database file; at
> that point, the file is no longer "pristine". What happens if power goes out
> in the middle of this?
>
> Note also that a typical application commits much more often than it rolls
> back, so the system should be designed to make the former fast; the latter
> can be slow.
> --
> Igor Tandetnik
>
>
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Why is a separate journal file needed ?

2016-02-26 Thread Paul Sanderson
WAL files can be many times bigger than a database - by default a WAL
checkpoint occurs when the WAL grows to > 1000 pages. You can get a
DB (for example) with 100 pages and a WAL of 1000 pages (with multiple
different copies of the same page).
Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 26 February 2016 at 08:46, Rowan Worth  wrote:
> On 24 February 2016 at 21:49, Richard Hipp  wrote:
>
>> On 2/24/16, Simon Slavin  wrote:
>> > Why can't the information which SQLite
>> > stores in a journal file be put in the database file ?
>>
>> Doing so would double the size of the database file.  Every database
>> file would contain extra space (normally unused) set aside for the
>> journal.
>>
>
> Surely this extra space would be proportional to the size of the largest
> [recent] transaction, rather than the size of the database itself? To be
> specific I'm thinking of rollback journals, I don't have experience with
> WAL mode.
>
> -Rowan
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Why is a separate journal file needed ?

2016-02-26 Thread Paul Sanderson
I have just checked a Twitter database from a library of test DBs -
the DB is 88KB and the associated WAL is 4012KB; similarly, I have a
Safari history.DB that is 294KB and an associated WAL that is 3974KB.

These are the bigger ones in my test library, but they are real-world databases.
Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 26 February 2016 at 09:32, Paul Sanderson
 wrote:
> WAL files can be many times bigger than a database - the default WAL
> checkpoint size is when the WAL grows to > 1000 pages. You can get a
> DB (for example) with 100 pages and a WAL of 1000 (with multiple
> different copies of the same page).
> Paul
> www.sandersonforensics.com
> skype: r3scue193
> twitter: @sandersonforens
> Tel +44 (0)1326 572786
> http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
> -Forensic Toolkit for SQLite
> email from a work address for a fully functional demo licence
>
>
> On 26 February 2016 at 08:46, Rowan Worth  wrote:
>> On 24 February 2016 at 21:49, Richard Hipp  wrote:
>>
>>> On 2/24/16, Simon Slavin  wrote:
>>> > Why can't the information which SQLite
>>> > stores in a journal file be put in the database file ?
>>>
>>> Doing so would double the size of the database file.  Every database
>>> file would contain extra space (normally unused) set aside for the
>>> journal.
>>>
>>
>> Surely this extra space would be proportional to the size of the largest
>> [recent] transaction, rather than the size of the database itself? To be
>> specific I'm thinking of rollback journals, I don't have experience with
>> WAL mode.
>>
>> -Rowan
>> ___
>> sqlite-users mailing list
>> sqlite-users at mailinglists.sqlite.org
>> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Are BLOB stored inline with the rest of the record fields?

2016-03-03 Thread Paul Sanderson
What is the likely max length of a row in your table? If your row
length is less than the page length minus 35 then it will fit into one page.

The amount of wasted page space depends on what fraction of a page
size your typical record is.

If you have a record that is 600 bytes long in a DB with a 1024-byte
page size then you would "waste" about 1024 - 600 = 424 bytes per
record.

If your page size is 4096 then for the same record size you fit 6
records into the page, so would waste about 4096 - (600*6) = 496 bytes
per page, i.e. roughly 496 / 6 ≈ 83 bytes per record.

Pick your page size appropriately although there is usually more to
think about with page size than just wasted space.
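The waste arithmetic above generalises to a one-liner (this ignores the per-page header and per-cell overhead mentioned earlier, so treat it as a rough estimate):

```python
def waste_per_record(page_size, record_len):
    # Whole records per page, and the leftover space shared among them.
    per_page = page_size // record_len
    return (page_size - per_page * record_len) / per_page

print(waste_per_record(1024, 600))  # 424.0 (one 600-byte record per page)
print(waste_per_record(4096, 600))  # ~82.7 (six records per page)
```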
Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 3 March 2016 at 08:58, Simon Slavin  wrote:
>
>> On 3 Mar 2016, at 8:16am, Eric Grange  wrote:
>>
>>> All BLOBs are stored inline.  The only complication comes when the total
>>> row length is longer than a page.
>>
>> Ok thanks!
>>
>> That is besides my original question but what is the complication in that
>> case?
>> Is the blob partially stored in the row, or is some indirection to other
>> pages used, with only some pointer stored in the row?
>
> SQLite database files are split up into pages.  Every page in a database is 
> the same number of bytes long.  Every page belongs to the header, or to free 
> space, or to a specific TABLE or INDEX.
>
> SQLite stores all the values for a row together.  Changing the value of any 
> column for a row requires rewriting the entire row, and then the row entry 
> for every index on that table which was changed. [1]
>
> Ideally a row of data would fit in a page.  This simplifies the job of 
> reading a row from disk or changing the data in a row.  SQLite does this 
> where possible.
>
> But it's possible for a row to be longer than a page.  This happens 
> especially where the row contains a long string or a long BLOB.  In that case 
> SQLite has to fetch more than one page from disk just to access the values 
> for that row.  And it's possible for a BLOB to be so long that it requires 
> one or more pages entirely to itself.  So the word 'inline' is a little 
> tricky here because there is more than one 'line' involved.
>
> Simon.
>
> [1] Handwave a few exceptions.
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] obtaining the number of rows returned by a query (with a limit)

2016-03-05 Thread Paul Sanderson
I am trying to determine, before a query is executed, how many rows will
be returned. The following query works as expected

select count(*) from table

but

select count(*) from table limit 100

still returns the number of rows in the table, not the number of rows
that would be returned by the query. Is this operation correct?

I can of course parse the query to determine any limit on what will be
returned, but wondered if this is a bug or a design feature.




Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


[sqlite] obtaining the number of rows returned by a query (with a limit)

2016-03-05 Thread Paul Sanderson
That doesn't work for me:

I am trying to avoid (select col1, col2, ... colx ) from ...

and want just a row count which is much less resource intensive.

I have users who need to execute queries that generate a large number
of rows, I have other users that create queries that generate a large
number of rows by accident (i.e. cross joins). I have no control over
what they enter but I want to do something to warn them that their
query might result in an over large number of rows and a limit clause
might be a solution.

If, however, they have already applied a limit clause there is no need
for a warning.

Before I am asked - for some of my users huge data sets might be OK as
they can use my application to massage the data before creating a
report or exporting the results set.

Parsing the query and identifying the limit clause is an option, but I
hope there might be a simpler way.
Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 5 March 2016 at 13:05, Clemens Ladisch  wrote:
> Paul Sanderson wrote:
>> I am trying to determine before a query is executed how many rows will
>> be returned. the following query works as expected
>>
>> select count(*) from table
>>
>> but
>>
>> select count(*) from table limit 100
>>
>> still returns the number of rows in the table not the number of rows
>> that would be returned by the query.
>
> "The query" is the one that has the "limit" clause.  What else should
> the database execute?
>
> As documented <http://www.sqlite.org/lang_select.html>, the result rows
> are generated before the LIMIT clause is applied.
>
> To determine how many rows would be returned by an arbitrary query, use:
>
>   SELECT COUNT(*) FROM (SELECT ...);
>
>
> But why do you want to determine the number of rows in the first place?
>
>
> Regards,
> Clemens
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
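The difference Clemens describes is easy to demonstrate — a sketch with Python's sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(x)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(500)])

# LIMIT is applied after the result rows are generated, so it does not
# change what COUNT(*) aggregates over...
total = con.execute("SELECT count(*) FROM t LIMIT 100").fetchone()[0]
# ...but wrapping the limited query counts its actual output rows.
limited = con.execute(
    "SELECT count(*) FROM (SELECT x FROM t LIMIT 100)").fetchone()[0]
print(total, limited)  # 500 100
```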


[sqlite] Fastest way to find whether at least one row has a certain column value

2016-03-05 Thread Paul Sanderson
Clemens,

Tim has the same issue as me, while

  SELECT EXISTS (select status from mytable where status=1);

works the portion in brackets is still executed in full and this is
what we are trying to avoid.

I am not working directly with the sqlite library but rather though a
DAC so I am also limited (no pun intended) there.




Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 5 March 2016 at 15:12, Clemens Ladisch  wrote:
> Tim Streater wrote:
>> On 05 Mar 2016 at 13:05, Clemens Ladisch  wrote:
>>> But why do you want to determine the number of rows in the first place?
>>
>> In my case I want to know whether at least one row exists that has a certain 
>> column which has a given value. At the minute I do this:
>>
>>   select count(*) from mytable where status=1 limit 1;
>>
>> Would this:
>>
>>   select count(*) from (select status from mytable where status=1 limit 1);
>>
>> or some other query be faster. Really, I'd like SQLite to stop after finding 
>> one row.
>
> So you want to know whether such a row exists? Then ask for that:
>
>   SELECT EXISTS (select status from mytable where status=1);
>
> This returns a boolean value.
>
>
> Regards,
> Clemens
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Fastest way to find whether at least one row has a certain column value

2016-03-05 Thread Paul Sanderson
Thanks Richard

so suppose I have two tables table1 and table2 each with 1000 rows and
say 100 columns some containing large blobs.

My user chooses a query "select * from table1, table2"

I can modify the query and do a "select count(*) from table1, table2"
to determine that 1M rows will be returned and ask him if this is what
he really wants to do.

But what if he has "select * from table1, table2 limit 1000"

In this instance there is no need for me to warn him as he has already
applied a limit to the amount of data returned.

Any suggestions as to how I can detect this, other than parsing the
query for "limit x"?

Thanks

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 5 March 2016 at 16:12, Richard Hipp  wrote:
> On 3/5/16, Paul Sanderson  wrote:
>> Clemens,
>>
>> Tim has the same issue as me, while
>>
>>   SELECT EXISTS (select status from mytable where status=1);
>>
>> works the portion in brackets is still executed in full and this is
>> what we are trying to avoid.
>
> The query planner in SQLite, while not brilliant, is smart enough to
> know that it can stop and return true as soon as it sees the first row
> from the query inside of EXISTS().
>
> So if there is an index on mytable.status, the above will be very
> quick.  Or if there are instances of status=1 early in mytable, the
> above will be quick.  But if mytable.status is never 1 and if there is
> no index on mytable.status, then SQLite will scan the entire table to
> verify that status is never 1 before returning 0.  I don't see any way
> around the latter case, however.
>
> --
> D. Richard Hipp
> drh at sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
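Richard's point can be sketched like this (the table and index names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable(status INTEGER)")
con.executemany("INSERT INTO mytable VALUES (?)", [(0,)] * 100000)
con.execute("INSERT INTO mytable VALUES (1)")

# EXISTS stops at the first matching row, but proving that NO row
# matches still costs a full table scan when there is no index.
found = con.execute(
    "SELECT EXISTS (SELECT status FROM mytable WHERE status=1)"
).fetchone()[0]
print(found)  # 1

# An index on status turns the probe into a single B-tree seek,
# even in the no-match case.
con.execute("CREATE INDEX idx_status ON mytable(status)")
```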


[sqlite] obtaining the number of rows returned by a query (with a limit)

2016-03-05 Thread Paul Sanderson
> Just allow them to cancel the query; use sqlite3_progress_handler().

I am using a third party data access component so this is out of my control.

If it computes many things and doesn't return many rows then I don't
really care. I only want to know how many rows a query will return
before I execute it in full. If I can tell that it is going to be a
huge return data set then I can prompt them to change it.


[sqlite] Changing the default page_size in 3.12.0

2016-03-05 Thread Paul Sanderson
Records can span multiple pages and the record itself is unrelated to
the block size.

For smaller tables and indexes an increase in page size can increase
the DB size - consider a DB with 1024-byte pages and one table that
occupies 6 pages, i.e. 6K; in a new DB with a 4096-byte page size it
would take up two pages, i.e. 8K.

Without further investigation I suspect that savings would come in
when you have larger tables with relatively large record payloads. I
would think that the free space in a page would be about half the
average record length, so multiplying the page size by 4 would
decrease the free (wasted) space (due to not enough room for another
complete record) by roughly a factor of four. This is grossly
oversimplified though, and takes no account of there presumably being
fewer overflow pages due to the larger page size, of pointer maps, or
of free space due to deleted records.

In short (no testing done) I am not sure that for most implementations
increasing the page size would make any significant space savings; as
mentioned above, it may well increase the DB size.

The savings, as mentioned earlier, are IO-related: matching the page
size to the underlying hardware's block size and, for larger payloads,
ensuring less IO due to no or fewer overflow pages.




Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 5 March 2016 at 18:43, Jim Callahan  
wrote:
>
>
>> From: Domingo Alvarez Duarte
>> I have a question, is it enough to vacuum a database to update to the new
>> page size ?
>>
>
> Apparently all you need to do is "pragma page_size=4096; vacuum;" using the
>> appropriate page size.
>> This makes it very easy to convert any (or all) database(s) with a single
>> command from the command-line, like so (Win7 example):
>> for %i in (*.db) do sqlite3 %i "pragma page_size=4096; vacuum;"
>
>
> I am confused.
> The "page" is an in-memory structure; it is how large a chunk the program
> reads from the file (analogous to how large a scoop or shovel), So, unless
> one is using an index, how would the on disk structure be impacted? How
> does SQLite handle the last block (does it expect the file to be an even
> multiple of the block size, or does it accept that the last read might be
> less than a full block?).
>
> For example, if one encountered an old file, would it be enough to rebuild
> the indices?
>
> Or is it simply a matter of closing the file? (close the file with the old
> version and open the file with the new?).
>
> I haven't read the source code so I don't know what assertions, checks or
> assumptions SQLite
> uses.
>
> Jim
>
>
>
> On Sat, Mar 5, 2016 at 11:04 AM,  wrote:
>
>> From: Domingo Alvarez Duarte
>>> I have a question, is it enough to vacuum a database to update to the new
>>> page size ?
>>>
>>
>> Apparently all you need to do is "pragma page_size=4096; vacuum;" using the
>> appropriate page size.
>> This makes it very easy to convert any (or all) database(s) with a single command
>> from the command-line, like so (Win7 example):
>>
>> for %i in (*.db) do sqlite3 %i "pragma page_size=4096; vacuum;"
>>
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users at mailinglists.sqlite.org
>> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>>
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] obtaining the number of rows returned by a query (with a limit)

2016-03-06 Thread Paul Sanderson
So

select count (*) from ...

Doesn't require a crystal ball and returns the number of rows but

Select count(*) from ... Limit x

Does need one, interesting

On Saturday, 5 March 2016, Igor Tandetnik  wrote:

> On 3/5/2016 12:20 PM, Paul Sanderson wrote:
>
>> If it computes many things and doesn't return many rows then I don't
>> really care. I only want to know how many rows a query will return
>> before I execute it in full.
>>
>
> That would require a crystal ball or a time machine. Absent those, it's
> impossible to know how many rows a query will produce until it actually
> runs and produces them.
>
> Again, you can wrap an arbitrary query like this:
>
> select count(*) from (
>   select whatever ...
> );
>
> This query always returns exactly one row and one column, with the value
> being the number of rows that the inner query would have returned. Is this
> not what you want? Of course, it would take approximately as much time to
> run this new query as it would the original query.
> --
> Igor Tandetnik
>
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


-- 
Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


[sqlite] Fastest way to find whether at least one row has a certain column value

2016-03-06 Thread Paul Sanderson
I understand this - but, there is always a but, I still would like to
do something. Applying the limit anyway and then telling them the query
has been limited might be a solution.

Time is usually not an issue but as the results are loaded into a grid
for display memory can be the issue.

I'll give it some more thought, but thank you for your answers it has helped.
Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 6 March 2016 at 08:09, R Smith  wrote:
>
>
> On 2016/03/05 6:21 PM, Paul Sanderson wrote:
>>
>> Thanks Richard
>>
>> so suppose I have two tables table1 and table2 each with 1000 rows and
>> say 100 columns some containing large blobs.
>>
>> My user choose a query "select * from table1, table2"
>>
>> I can modify the query and do a "select count(*)  from table1, table2"
>> to determine that 1M rows will be returned and ask him if this is what
>> he really wants to do.
>>
>> But what if he has "select * from table1, table2 limit 1000"
>>
>> In this instance there is no need for me to warn him as he has already
>> applied a limit to the amount of data returned.
>>
>> Any suggestions as to how I can detect this , other than parsing the
>> query for "limit x"
>
>
> I see your point, but I think you are missing something - The amount of rows
> returned by a query is a very bad indicator of how long such a query will
> run - AND, if it is a long query, you will waste twice the time for the end
> user.  Further to this, any amount of queries can be made that takes hours
> to complete, but produces perhaps less than 5 rows, or even just 1.
>
> Similarly, but on a very different tack, you can have a query like "SELECT
> xxx FROM A,B,C,D WHERE " without the limit clause, which might itself
> really only produce 10 or less rows (and perhaps take really long) due to
> the WHERE clause. So even if you did parse the LIMIT clause, you are no
> closer to a true tale.
>
> My strategy has always been to run queries as the user enters them and limit
> them to 1000 or such when a LIMIT is not already present. If I do actually
> produce 1000 rows (which means there may be more) then I prompt the user
> with "This query has more (perhaps significantly more) rows, would you like
> to: Load all (May take a long time), Load Next 1000, Stop here." or some
> other mechanism that achieves the same, though the "load next 1000" option
> is harder to implement when you connect via 3rd party tools.
>
> A last thought, LIMIT is not a Query word or in any way affects the Query
> idea that the Query planner sees, it has no algebraic influence on the
> query. It is an output limitation only. If there is room with 100 people in
> it, and I were to ask the curator: Please give me the names of the people
> limited to 10 values, like so:
> SELECT Name FROM People LIMIT 10
> he would take the list of 100 people and read off the 1st 10 of them. This
> limitation doesn't shorten his list or affects the number of people in the
> room, he just stops reading at some point to please me. So if I ask, how
> many people are in the room, but stop reading at the 10th result. he is just
> going to say: 100 people and stop there. Your limiting clause did not change
> that fact.
>
> Were I to ask: How many people would be in the list if I asked you for a
> list of people limited to 10 output values? He would answer 10 - of course,
> that is why this query works:
> SELECT COUNT(*) FROM (SELECT * FROM People LIMIT 10);
>
> Cheers,
> Ryan
>
>
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] How to read data from WAL?

2016-03-07 Thread Paul Sanderson
I have done some manual WAL decoding for my forensic software that can
identify a previous DB state - it's fun :)...

(From memory)

To determine which pages belong to the last transaction, you need to :

Read the WAL header to obtain the current salt, then read each wal
frame to determine which frames belong to the current transaction and
then you can get the page number from the frame.

To determine which table/index the page belongs to you will need to
check each b-tree (making sure that you take account of pages that are
in the current WAL checkpoint - i.e. exist in the WAL prior to the
page you are looking for).

You will then need to manually decode each page you have identified in
the WAL to get the rows within the page and then you need to determine
what rows have been added/deleted or altered.

The issues here are that a small change to a table (one row
added/edited) can result in many changes to the pages for that table,
e.g. if a string is appended to then the row may no longer fit in the
page that it currently occupies so the row will be moved. Depending on
the key structure for the table this may also result in other rows
being moved to maintain the structure of the B-Tree. So if you read a
page from the WAL and then find the* previous version of that page
(which may be earlier in the WAL or in the DB) and a row is not
present you will need to parse all the changed pages in that tree to
determine whether it has been deleted or just moved.

(* I say THE previous page because you can and very often do have
multiple copies of a page in the WAL.)

All of the information you need is in the DB file format at the link
provided earlier. It is, however, not as simple as just reading the
format spec; it took me a lot of experimentation (including
writing some DB/WAL page level visualisation tools) to work out what
is actually going on.
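The salt-matching step described above can be sketched in Python (frame layout per the SQLite file-format document; checksum verification, which also depends on the byte order implied by the magic value, is omitted):

```python
import struct

def wal_frames(wal_path):
    """Yield (page_number, db_size_after_commit, page_data) for the frames
    whose salts match the WAL header, i.e. the current write generation."""
    with open(wal_path, "rb") as f:
        hdr = f.read(32)
        magic, version, page_size, ckpt_seq, salt1, salt2, ck1, ck2 = \
            struct.unpack(">8I", hdr)
        assert magic in (0x377f0682, 0x377f0683), "not a WAL file"
        while True:
            fh = f.read(24)
            if len(fh) < 24:
                break
            pgno, db_size, fsalt1, fsalt2, fck1, fck2 = struct.unpack(">6I", fh)
            data = f.read(page_size)
            if len(data) < page_size:
                break
            # Frames with stale salts belong to an earlier, already
            # checkpointed generation of the WAL and are skipped.
            if (fsalt1, fsalt2) == (salt1, salt2):
                yield pgno, db_size, data
```

A nonzero db_size marks a commit frame, which is how transaction boundaries are found.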





Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 7 March 2016 at 09:49, Sairam Gaddam  wrote:
>>
>> If you're really sure that the database files are identical, the easiest
>> way to do the replication would be to copy the WAL file.
>>
>> What processing do you want to do?
>>
>
> I want to take note of changes and, depending on the data, I might replicate in
> some other DB. The database files are not necessarily identical, so mere
> copying won't help me. I should get the data from the WAL and, based on which
> table (can I get this info from the WAL?) the data was changed in, act
> accordingly. So is the only way to read each page and extract information from
> it, or is there another way?
>
>> Is there any other better way other than WAL?
>>
>> Record the SQL statements that are executed.
>
>
> This is another case where some optimization could happen. For
> example, if I run an update query and nothing gets updated in a particular
> table which is the same in 2 different databases, then the WAL doesn't have any
> changes for that table and there is no need to execute the same query
> again in the other DB!
> And I have some other use cases like those.
> So I would like to see the changes from the WAL and, based on those changes
> and the query, I will proceed.
> So I need a better way to read the contents of the WAL.
> One option is analyzing the pages from the WAL - any others? In PostgreSQL,
> there is a WAL module which gives a summary of which row in a
> particular table was changed by reading the WAL contents. Is there any module
> like that? My work would be easier if I could get the change info that
> way.
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] error messages from SQLite

2016-03-19 Thread Paul Sanderson
When executing a sql query I often find I have made a typo or been a
bit dull and I'll get an error message back along the lines of

Error: near "text": syntax error

Is there a way of expanding on this, for instance adding more of the
following text or a character offset. The last time this happened I
had a create table... command with about 30 text columns and
identifying which "text" string the error referred to took a little
while.

Thanks


[sqlite] Reserved column names

2016-03-19 Thread Paul Sanderson
Is there a list of reserved column names?

I have seen the list of keywords at the link below

http://www.sqlite.org/lang_keywords.html

but, for instance, create table t (abort int) will work fine.

I know that keywords can be quoted but I am interested in just those
that can be used unquoted (even if not advisable)

Without going through all of them can someone point me at a list?
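In the absence of a published list, one can be generated empirically by attempting each keyword as an unquoted column name (a sketch; the keyword list shown is only a subset, and results can vary with the SQLite build):

```python
import sqlite3

# Subset of the keywords from http://www.sqlite.org/lang_keywords.html;
# extend with the full list as needed.
KEYWORDS = ["ABORT", "ACTION", "ADD", "AFTER", "ALL", "SELECT", "TABLE", "WHERE"]

conn = sqlite3.connect(":memory:")
usable = []
for kw in KEYWORDS:
    try:
        conn.execute(f"CREATE TABLE t ({kw} INT)")
        conn.execute("DROP TABLE t")
        usable.append(kw)
    except sqlite3.OperationalError:
        pass  # this keyword cannot be used as an unquoted column name

print(usable)
```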

Thanks


[sqlite] error messages from SQLite

2016-03-19 Thread Paul Sanderson
Thanks Tim

I am not concerned with errors from function calls - just errors with
SQL queries at the command line etc.
Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 19 March 2016 at 11:20, Tim Streater  wrote:
> On 19 Mar 2016 at 10:26, Paul Sanderson  
> wrote:
>
>> When executing a sql query I often find I have made a typo or been a
>> bit dull and I'll get an error message back along the lines of
>>
>> Error: near "text": syntax error
>>
>> Is there a way of expanding on this, for instance adding more of the
>> following text or a character offset. The last time this happened I
>> had a create table... command with about 30 text columns and
>> identifying which "text" string the error referred to took a little
>> while.
>
> In my PHP usage of SQLite, I have a shim around the provided functions which 
> logs the SQL in the event of error. The application then displays it too. The 
> shim also expects a small string to identify which actual SQLite call was 
> involved, so I don't have to look through 140 source modules trying to guess 
> what was going on at the time.
>
> --
> Cheers  --  Tim
>
> ___
> sqlite-users mailing list
> sqlite-users at mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] WAL: no schema after close

2016-06-03 Thread Paul Sanderson
As above, the WAL won't checkpoint unless:

1. You tell it to
2. You commit a transaction which takes the WAL above the (default) 1000 pages
3. You exit the application gracefully

The WAL algorithm is doing exactly what it should do and is taking (or
rather keeping) the DB in its last known good state. In your case this
is a new blank DB.

It is not enough to commit a transaction, as the WAL can and often does
contain the data for multiple transactions.
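The effect of conditions 1 and 2 can be observed from Python's sqlite3 module (a sketch; isolation_level=None is used so the pragmas are issued outside any implicit transaction):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path, isolation_level=None)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA wal_autocheckpoint=0")   # rule out condition 2
conn.execute("CREATE TABLE t (x INT)")
conn.execute("INSERT INTO t VALUES (1)")

wal = path + "-wal"
size_before = os.path.getsize(wal)            # committed data sits in the WAL

# Condition 1: tell it to.  TRUNCATE also resets the WAL file to zero bytes.
conn.execute("PRAGMA wal_checkpoint(TRUNCATE)")
size_after = os.path.getsize(wal)
conn.close()
print(size_before > 0, size_after)            # -> True 0
```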

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 3 June 2016 at 11:32, Yongil Jang  wrote:
> My guess: if you call the following command after each operation, the schema
> should exist as expected... Your test may then safely remove the *-wal file.
>
> conn.execute("PRAGMA wal_checkpoint;")
>
> I hope this is the correct answer.
>
> Regards.
>
> On Fri, 3 Jun 2016 at 19:19, Максим Дементьев wrote:
>
>> Hello,
>>
>> I’ve got a set of python 3 unittests which use sqlite (versions 3.12.0 and
>> 3.13.0 under Linux 4.4.6-gentoo x86_64).
>>
>> There is one test which closes the database and opens it again during the
>> test.
>> If I use the WAL (by using conn.execute("PRAGMA journal_mode=WAL;")) and
>> run all unittests, this unittest produces a database file with the empty
>> schema after first close, so it fails.
>> When I run it alone (or without WAL), it passes.
>>
>> I’ve checked “hexdump -C” of the normal and the bad cases after first
>> close, in both cases the size of database file is the same, they contain
>> common data for inserted rows, but there is no “CREATE TABLE ...”
>> statements in the bad database file.
>>
>> There are conn.commit() calls after each modification SQL statement. I
>> even tried to put explicit "BEGIN; ... COMMIT;" around "CREATE TABLE IF NOT
>> EXISTS..." statements, it doesn't help.
>>
>> Is it a well-known problem ?
>>
>> Regards,
>> Max
>>
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@mailinglists.sqlite.org
>> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>>
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bad db feature request

2016-06-29 Thread Paul Sanderson
As mentioned above there is (or can be) reserved space at the end of
each page (documented in the DB header) that can be used for checksums
- you just need to write your own extension :)

https://www.sqlite.org/fileformat2.html
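For reference, the reserved-space value lives at byte 20 of the database header; locating the reserved region of each page (where such a checksum would be stored) can be sketched as:

```python
import struct

def reserved_regions(db_path):
    """Yield (page_number, file_offset, length) for the reserved area at
    the end of each page, per https://www.sqlite.org/fileformat2.html."""
    with open(db_path, "rb") as f:
        hdr = f.read(100)
    page_size = struct.unpack(">H", hdr[16:18])[0]
    if page_size == 1:      # the value 1 means a 65536-byte page
        page_size = 65536
    reserved = hdr[20]      # bytes reserved at the end of every page
    page_count = struct.unpack(">I", hdr[28:32])[0]
    if reserved == 0:
        return              # an ordinary database reserves no space
    for pgno in range(1, page_count + 1):
        yield pgno, pgno * page_size - reserved, reserved
```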

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 29 June 2016 at 18:52, Scott Hess  wrote:
> On Wed, Jun 29, 2016 at 10:36 AM, Simon Slavin  wrote:
>> On 29 Jun 2016, at 5:45pm, Drago, William @ CSG - NARDA-MITEQ 
>>  wrote:
>>> Aren't there things like that already built in to the hard disk controllers 
>>> (CRC, Reed Solomon, etc.)?
>>
>> Yes.  But they operate at the level they understand.  For instance ...
>>
>> A change is made in a field which involves changing just one page of data.  
>> In terms of the SQLite file format this would mean that a table page is 
>> overwritten -- a one disk sector change.  If SQLite checksums existed then 
>> this would mean that the checksum, stored in the table pointer page, would 
>> also have to be updated.  Which would mean that another disk sector has to 
>> be changed too.
>>
>> Now suppose there's a bug in the storage medium driver which means it
>> occasionally writes the correct data to the wrong sector on disk.  Without 
>> checksums this fault would not be noticed: since the wrong sector on disk 
>> was updated the wrong checksum on disk would be updated too.  The errors 
>> would match.
>
> I think the bigger problem is that delegating this means that you
> assume the entire underlying stack is working correctly.  For
> instance, the disk may have elaborate error-correction protocols that
> are working correctly per sector, but SQLite's pages may span sectors.
> Or the underlying disk may be perfect and the filesystem doesn't
> provide the same guarantees.  Or someone is running things over NFS.
> Having the page checksum embedded in the page at the SQLite level
> would provide end-to-end confidence.
>
> Chaining the checksums is a whole different level of assurance.  To
> the best of my knowledge _all_ legitimately (1) corrupted databases
> I've seen had pages which were individually valid, but not valid when
> taken together.  Like an index page referred to a row which wasn't
> present in the table page.  This implies that the atomicity guarantees
> SQLite relies on were broken at the filesystem or disk level.
>
> -scott
>
> (1) I consider a system where the filesystem is simply broken to not
> be legitimate corruption.  For instance, if you get a page of random
> data which doesn't appear to have ever been SQLite data in the first
> place.  There's not much SQLite can do about that kind of thing.
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bad db feature request

2016-06-30 Thread Paul Sanderson
The OP wanted something to check his systems for corruption - if the
SQLite dev team don't want to add checksums then the OP could possibly
solve his own problem by adding them to his own internal version.

Extension may have been a bad choice of word - he can modify/compile
his own version of SQLite to add checksums - and also add a corruption
engine if he wants.
Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 29 June 2016 at 22:10, Dominique Devienne  wrote:
> On Wed, Jun 29, 2016 at 9:54 PM, Paul Sanderson <
> sandersonforens...@gmail.com> wrote:
>
>> As mentioned above there is (or can be) reserved space at the end of
>> each page (documented in the DB header) that can be used for checksums
>> - you just need to write your own extension :)
>>
>> https://www.sqlite.org/fileformat2.html
>
>
> Can you really? What kind of extension?
>
> The pager is not an extension point.
>
> The VFS is one, but https://www.sqlite.org/c3ref/io_methods.html is byte
> oriented, not page oriented.
> Plus there are different types of pages, with different formats. Does the
> checksum go in the same place for all of them?
> Official SQLite would ignore the extra info, and probably either not
> rewrite it, or keep it as-is, when modifying a page,
> making the page appear invalid. This is precisely the kind of things that
> if not built in is kinda useless I'm afraid.
>
> Plus w/o a format change, even if built-in to the latest SQLite, older
> SQLite apps wouldn't keep the checksums up-to-date too.
> This and index-based-row-counts and probably other things do require a
> backward-incompatible change IMHO. --DD
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Document omission - bind-parameter description

2016-07-13 Thread Paul Sanderson
There is a minor omission on the BNF page of the documentation

http://www.sqlite.org/docsrc/doc/trunk/art/syntax/all-bnf.html#bind-parameter

In the expr section, bind-parameter is a dangling hyperlink



Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Document omission - bind-parameter description

2016-07-13 Thread Paul Sanderson
Ok Thank you
Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 13 July 2016 at 12:20, Richard Hipp  wrote:
> On 7/13/16, Paul Sanderson  wrote:
>> There is a minor omission on the BNF page of the documentation
>>
>> http://www.sqlite.org/docsrc/doc/trunk/art/syntax/all-bnf.html#bind-parameter
>>
>> In the expr section, bind-parameter is a dangling hyperlink
>
> That BNF version of the grammar is unmaintained and is obsolete in
> multiple respects.
>
> --
> D. Richard Hipp
> d...@sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Using Bitwise Logic In Query

2016-09-08 Thread Paul Sanderson
Does this work for you to get all the even rows?

SELECT ROWID
FROM table
WHERE ROWID & 0x01 = 0x00
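The same operator answers the original lower-16-bits question (a sketch; the table, column, and values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (flags INT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [(0x00010005,), (0x00020005,), (0x00010006,)])

# SQLite supports &, |, <<, >> on integers; mask off the low 16 bits.
rows = conn.execute(
    "SELECT flags FROM t WHERE (flags & 0xFFFF) = 0x0005").fetchall()
print([hex(r[0]) for r in rows])  # -> ['0x10005', '0x20005']
```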


Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 8 September 2016 at 11:07, Dave Blake  wrote:
> Looking for the best way to query a table with an integer column by value
> of the lower 16 bits of the data in that column. Does SQLite support
> bitwise logic?
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [WAL] Querying/Finding out the WAL size (in pages) without trying to checkpoint

2016-09-20 Thread Paul Sanderson
From within SQLite - I don't know if it's possible.

But if you can query the WAL file size you should be able to determine
the number of pages easily enough.

pages = (walfilesize-32)/(DBpagesize+24)

the only caveats I can think of are:

The WAL file is not truncated after a checkpoint so you would need to
make sure your checkpoint was done in truncate mode:

PRAGMA wal_checkpoint(truncate)

WAL auto checkpoints are always passive so you would need to disable
autocheckpointing and do all checkpoints yourself manually as above to
ensure WAL is truncated.

PRAGMA wal_autocheckpoint = 0
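The arithmetic above as a small helper (a sketch; 32 bytes is the WAL header size and 24 bytes the per-frame header, per the file-format document):

```python
import os

WAL_HEADER = 32    # bytes, fixed
FRAME_HEADER = 24  # bytes per frame, followed by one page of data

def wal_page_count(wal_path, db_page_size):
    size = os.path.getsize(wal_path)
    if size < WAL_HEADER:
        return 0   # an empty or truncated WAL holds no frames
    return (size - WAL_HEADER) // (db_page_size + FRAME_HEADER)
```

For example, a WAL of 32 + 3 * (4096 + 24) bytes for a 4096-byte-page database reports 3 frames.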

Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 20 September 2016 at 08:59, Jose Arroyo  wrote:
> From looking at the source code, it seems to me that the values returned
> on wal_hooks come from some of SQLite's internal variables, so it doesn't seem
> to be possible to query the DB for it. I suspect that the WAL size can be
> calculated from the WAL index file somehow, but I don't really grok how it
> is actually used from the docs online (
> https://www.sqlite.org/fileformat.html#wal_index_format).
>
> My current issue is that I'm encountering WAL checkpoint starvation and I'm
> trying to figure out a way to create checkpoint gaps only when needed. The
> scenario is pretty standard, one writer process and several readers
> processes. The writer notifies the readers in some specific DB updates
> scenarios, the readers query, do some stuff, update some fields by
> messaging the writer process who will queue up those updates. After a
> certain DB size, checkpoints start failing more and more often to the point
> that the WAL size starts growing unbounded.
>
> The caveat is that all this happens under a SPARQL layer and that I don't
> have much room to modify how the writer process works. So I was wondering
> if I could access the current number of pages in the WAL from outside the
> writer process in a cheap way.
>
> Any advice or guidance is greatly appreciated
>
> Cheers,
> José M. Arroyo
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [WAL] Querying/Finding out the WAL size (in pages) without trying to checkpoint

2016-09-20 Thread Paul Sanderson
Please let me know if this works for you - I've never tried it so at
the moment it's just an idea :)
Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 20 September 2016 at 13:05, Jose Arroyo  wrote:
> The writer process doesn't ever do "truncating" checkpoints, even if it
> does handle checkpoints manually. I'm starting to realize that truncating
> checkpoints are a good idea, especially after going through "checkpoint
> starvation" moments...
>
> In that case, I think I could make your suggestion work.
>
> Thank you !
>
> On 20 September 2016 at 11:57, Paul Sanderson 
> wrote:
>
>> From within SQLite - I don't know if it's possible.
>>
>> But if you can query the WAL file size you should be able to determine
>> the number of pages easily enough.
>>
>> pages = (walfilesize-32)/(DBpagesize+24)
>>
>> the only caveats I can think of are:
>>
>> The WAL file is not truncated after a checkpoint so you would need to
>> make sure your checkpoint was done in truncate mode:
>>
>> PRAGMA wal_checkpoint(truncate)
>>
>> WAL auto checkpoints are always passive so you would need to disable
>> autocheckpointing and do all checkpoints yourself manually as above to
>> ensure WAL is truncated.
>>
>> PRAGMA wal_autocheckpoint = 0
>>
>> Paul
>> www.sandersonforensics.com
>> skype: r3scue193
>> twitter: @sandersonforens
>> Tel +44 (0)1326 572786
>> http://sandersonforensics.com/forum/content.php?195-SQLite-
>> Forensic-Toolkit
>> -Forensic Toolkit for SQLite
>> email from a work address for a fully functional demo licence
>>
>>
>> On 20 September 2016 at 08:59, Jose Arroyo 
>> wrote:
>> > From looking at the source code, it seems to me that the values returned
>> > on wal_hooks come from some sqlite's internal variables so it doesn't
>> seem
>> > to be possible to query the DB for it. I suspect that the WAL size can be
>> > calculated from the WAL index file somehow, but I don't really grok how
>> it
>> > is actually used from the docs online (
>> > https://www.sqlite.org/fileformat.html#wal_index_format).
>> >
>> > My current issue is that I'm encountering WAL checkpoint starvation and
>> I'm
>> > trying to figure out a way to create checkpoint gaps only when needed.
>> The
>> > scenario is pretty standard, one writer process and several readers
>> > processes. The writer notifies the readers in some specific DB updates
>> > scenarios, the readers query, do some stuff, update some fields by
>> > messaging the writer process who will queue up those updates. After a
>> > certain DB size, checkpoints start failing more and more often to the
>> point
>> > that the WAL size starts growing unbounded.
>> >
>> > The caveat is that all this happens under a SPARQL layer and that I don't
>> > have much room to modify how the writer process works. So I was wondering
>> > if I could access the current number of pages in the WAL from outside the
>> > writer process in a cheap way.
>> >
>> > Any advice or guidance is greatly appreciated
>> >
>> > Cheers,
>> > José M. Arroyo
>> > ___
>> > sqlite-users mailing list
>> > sqlite-users@mailinglists.sqlite.org
>> > http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>> ___
>> sqlite-users mailing list
>> sqlite-users@mailinglists.sqlite.org
>> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>>
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] converting unix10 and unix13 dates in the same column

2016-09-29 Thread Paul Sanderson
I have a table with dates in different formats, either 10 digit or 13
digit unix dates

1234345087123
1234567890
1432101234
1456754323012

I want a SQL query that will convert both date formats; I tried this:

SELECT CASE dt
  WHEN (unix10and13.dt < 100)
 THEN DateTime(unix10and13.dt, 'unixepoch')
  WHEN (unix10and13.dt > 100)
THEN DateTime(unix10and13.dt / 1000, 'unixepoch')
  ELSE dt
  END AS converted
FROM unix10and13

But this returns the original values - i.e. the else portion is being
evaluated but one of the previous expressions should evaluate to true
surely? Any ideas why this is failing?



I am also interested (because I tried and failed) in coding a second
query that would return all four rows but in two columns each with
either a unix10 or 13 date in the correct column - something like
this:

unix10,  unix13
   ,1234345087123
1234567890,
1432101234,
   ,1456754323012

any suggestions on how to achieve this?



Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] converting unix10 and unix13 dates in the same column

2016-09-29 Thread Paul Sanderson
Ah OK - I was being dull, thank you.
Paul
www.sandersonforensics.com
skype: r3scue193
twitter: @sandersonforens
Tel +44 (0)1326 572786
http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-Toolkit
-Forensic Toolkit for SQLite
email from a work address for a fully functional demo licence


On 29 September 2016 at 14:29, Keith Medcalf  wrote:
> Your query is incorrect.  It should be:
>
> SELECT CASE
>   WHEN (unix10and13.dt < 100)
>  THEN DateTime(unix10and13.dt, 'unixepoch')
>   WHEN (unix10and13.dt > 100)
> THEN DateTime(unix10and13.dt / 1000, 'unixepoch')
>   ELSE dt
>   END AS converted
> FROM unix10and13;
>
> With your CASE, you are using the CASE <expr> WHEN <value> ... form.
>
> So the THEN clauses are comparing the value of DT to the result of (dt <
> 100) or (dt > 100).  The result of each expression is always
> 1 or 0, which never equals DT, so the THEN clause is never executed and the
> ELSE is always taken.
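Run against the sample values from the original post, the corrected form behaves as expected (a sketch using Python's sqlite3; the 100000000000 threshold is my own choice for separating 10-digit from 13-digit values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE unix10and13 (dt INT)")
conn.executemany("INSERT INTO unix10and13 VALUES (?)",
                 [(1234345087123,), (1234567890,)])

# 13-digit values are milliseconds; divide by 1000 before converting.
rows = conn.execute("""
    SELECT CASE
             WHEN dt < 100000000000 THEN DateTime(dt, 'unixepoch')
             ELSE DateTime(dt / 1000, 'unixepoch')
           END
    FROM unix10and13""").fetchall()
print(rows)  # -> [('2009-02-11 09:38:07',), ('2009-02-13 23:31:30',)]
```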
>
>
>> -Original Message-----
>> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
>> On Behalf Of Paul Sanderson
>> Sent: Thursday, 29 September, 2016 07:14
>> To: General Discussion of SQLite Database
>> Subject: [sqlite] converting unix10 and unix13 dates in the same column
>>
>> I have a table with dates in different formats, either 10 digit or 13
>> digit unix dates
>>
>> 1234345087123
>> 1234567890
>> 1432101234
>> 1456754323012
>>
>> I want a sql query that will convert both dates, I tried this
>>
>> SELECT CASE dt
>>   WHEN (unix10and13.dt < 100)
>>  THEN DateTime(unix10and13.dt, 'unixepoch')
>>   WHEN (unix10and13.dt > 100)
>> THEN DateTime(unix10and13.dt / 1000, 'unixepoch')
>>   ELSE dt
>>   END AS converted
>> FROM unix10and13
>>
>> But this returns the original values - i.e. the else portion is being
>> evaluated but one of the previous expressions should evaluate to true
>> surely? Any ideas why this is failing?
>>
>>
>>
>> I am also interested (because I tried and failed) in coding a second
>> query that would return all four rows but in two columns each with
>> either a unix10 or 13 date in the correct column - something like
>> this:
>>
>> unix10,  unix13
>>,1234345087123
>> 1234567890,
>> 1432101234,
>>,1456754323012
>>
>> any suggestions to achieve this approach?
>>
>>
>>
>> Paul
>> www.sandersonforensics.com
>> skype: r3scue193
>> twitter: @sandersonforens
>> Tel +44 (0)1326 572786
>> http://sandersonforensics.com/forum/content.php?195-SQLite-Forensic-
>> Toolkit
>> -Forensic Toolkit for SQLite
>> email from a work address for a fully functional demo licence
>> ___
>> sqlite-users mailing list
>> sqlite-users@mailinglists.sqlite.org
>> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@mailinglists.sqlite.org
> http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] converting unix10 and unix13 dates in the same column

2016-09-29 Thread Paul Sanderson
All sorted now, thank you.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users

