Re: [sqlite] Solving the Sudoku with SQlite 3.8.3 trunk

2014-02-08 Thread E.Pasma

Op 6 feb 2014, om 16:46 heeft Simon Slavin het volgende geschreven:



On 6 Feb 2014, at 7:15am, big stone  wrote:

If we wish to have the SQLite + Python combination become the "reference
choice" in education, I would think that the priority list should be:


Just to remind you that you're posting to the SQLite list.  Most of  
those are things that would be done by the Python maintainers, not  
the SQLite maintainers.


Simon.


I find the subject of Sudoku solving still interesting and have another
query here. This derives from Bigstone's first solution, defining a
neighbourhood relation between sudoku cells. New is that I tried bitmaps
instead of a character string to represent the sudoku. Below is the
result. Conclusions:

- bitmaps are hard to debug as they cannot be easily viewed
- but the solution is much faster (four times)
- the recursion tends to go breadth-first by default, which is not
optimal for speed.
- using a separate (temporary) table for a non-trivial sub-query,
instead of a CTE, is worthwhile when it is used in several places.
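The two-word layout is easy to prototype outside SQL. Here is a minimal Python sketch (the helper names are mine, not from the post) of how cells 1..81 split across the word0/word1 columns used in the schema below:

```python
def cell_bits(ind: int) -> tuple:
    """Return (word0, word1) with the single bit for sudoku cell ind (1..81) set."""
    if not 1 <= ind <= 81:
        raise ValueError("cell index must be 1..81")
    if ind <= 54:
        return (1 << (ind - 1), 0)      # bits 1..54 live in word0
    return (0, 1 << (ind - 55))         # bits 55..81 live in word1

def union(a, b):
    """OR two (word0, word1) bitmaps, e.g. to build a neighbourhood set."""
    return (a[0] | b[0], a[1] | b[1])

# cells 1 and 81 occupy opposite words; their union sets one bit in each
lo, hi = cell_bits(1), cell_bits(81)
print(union(lo, hi))  # (1, 67108864), i.e. (1, 1 << 26)
```

This mirrors the (word0, word1) and (neighbours0, neighbours1) column pairs: a set of cells is two integers, and set union is a pair of bitwise ORs.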


create temporary table ind (
ind integer primary key, -- sudoku cell (1..81)
word0, -- bitmap of ind, bits 1..54
word1, -- bitmap of ind, bits 55..81
neighbours0, neighbours1) -- bitmap of neighbour cells
;
/*
initializing the neighbour bitmaps was the
most tricky part: one must probably turn the
sudoku upside down and view it through a mirror
to see the x,y coordinates as used here.
*/
insert into ind
select  ind,
w0 | case when z>0 then word0 else 0 end,
w1 | case when z>0 then word1 else 0 end,
w10 | case when z=1 then word0 else 0 end,
w11 | case when z=1 then word1 else 0 end,
w20 | case when z=2 then word0 else 0 end,
w21 | case when z=2 then word1 else 0 end,
w30 | case when z=3 then word0 else 0 end,
w31 | case when z=3 then word1 else 0 end,
w40 | case when z=4 then word0 else 0 end,
w41 | case when z=4 then word1 else 0 end,
w50 | case when z=5 then word0 else 0 end,
w51 | case when z=5 then word1 else 0 end,
w60 | case when z=6 then word0 else 0 end,
w61 | case when z=6 then word1 else 0 end,
w70 | case when z=7 then word0 else 0 end,
w71 | case when z=7 then word1 else 0 end,
w80 | case when z=8 then word0 else 0 end,
w81 | case when z=8 then word1 else 0 end,
w90 | case when z=9 then word0 else 0 end,
w91 | case when z=9 then word1 else 0 end
from    input
join    ind
on      ind.ind = input.ind
)
,
sudoku as (
select  1 as ind,
w0, w1,
w10, w11,
w20, w21,
w30, w31,
w40, w41,
w50, w51,
w60, w61,
w70, w71,
w80, w81,
w90, w91
from    input
where   ind>81
union all
select  sudoku.ind+1,
w0 | word0,
w1 | word1,
w10 | case when z=1 then word0 else 0 end,
w11 | case when z=1 then word1 else 0 end,
w20 | case when z=2 then word0 else 0 end,
w21 | case when z=2 then word1 else 0 end,
w30 | case when z=3 then word0 else 0 end,
w31 | case when z=3 then word1 else 0 end,
w40 | case when z=4 then word0 else 0 end,
w41 | case when z=4 then word1 else 0 end,
w50 | case when z=5 then word0 else 0 end,
w51 | case when z=5 then word1 else 0 

[sqlite] FW: Need Help with Golf Handicap Calculation

2014-02-08 Thread Ed Tenholder


From: Ed Tenholder 
Sent: Saturday, February 08, 2014 1:44 PM
To: 'sqlite-users@sqlite.org'
Subject: Need Help with Golf Handicap Calculation

I’m just trying to learn SQL, and after lots of Google searches and reading
posts on this email list, I’ve gotten pretty close.

Table:  CREATE TABLE Scores (ScoreID Integer Primary Key,ScoreDate 
Text,Player Text,CourseName Text,TeeName Text,Score Integer,Rating Real,Slope 
Integer);


Query:  SELECT MAX(ScoreDate),AVG((Score-Rating)*(113.0/Slope))*.96 FROM 
(SELECT * FROM  (SELECT * FROM (SELECT ScoreDate,Score,Rating,Slope FROM Scores 
WHERE Player="Joe Smith" ORDER BY ScoreDate ASC LIMIT 3) ORDER BY ScoreDate 
DESC LIMIT 20) ORDER BY (Score-Rating)*(113.0/Slope) ASC LIMIT 10)

Result:  MAX(ScoreDate) = 2000-05-16
         AVG((Score-Rating)*(113.0/Slope))*.96 = 29.2436825396825



Logic:

• Select the oldest N scores (3 in the example above)
• From that, select the 20 newest scores
• From that, select the 10 lowest handicap-indexes:  
(Score-Rating)*(113/Slope)
• Return the lowest ScoreDate and the average of the handicap-indexes 
multiplied by .96   



The first SELECT is there because I am going to execute this query iteratively,
substituting N for the “3”, from 1 to the count of total records (so I can
create a chart of the change in handicap over time).


The flaw is that the ScoreDate that is returned is the oldest date in the 
lowest 10 records, and what I need is the oldest date in the most recent 20 
records (from the sub-query).

I cannot figure out how to do this without breaking up the query using temp
tables (which I can do), but I am interested in learning more about SQL and I’m
sure there must be a way to do this (if you can solve Sudoku puzzles!).
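One way to fix this without temp tables (assuming SQLite 3.8.3+, which has CTEs, per the other thread in this digest): name the 20-newest window once in a WITH clause, take MIN(ScoreDate) from it in a scalar subquery, and average over the 10-lowest subset of the same CTE. A runnable Python sketch with made-up sample data:

```python
import sqlite3

# Table layout follows the post; the sample scores are invented.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Scores (
    ScoreID INTEGER PRIMARY KEY, ScoreDate TEXT, Player TEXT,
    CourseName TEXT, TeeName TEXT, Score INTEGER, Rating REAL, Slope INTEGER)""")
rows = [("2000-01-01", 80), ("2000-01-02", 82), ("2000-01-03", 84),
        ("2000-01-04", 90), ("2000-01-05", 95)]
conn.executemany(
    "INSERT INTO Scores (ScoreDate, Player, Score, Rating, Slope) "
    "VALUES (?, 'Joe Smith', ?, 70.0, 113)", rows)

query = """
WITH recent20 AS (
    SELECT ScoreDate, Score, Rating, Slope
    FROM (SELECT ScoreDate, Score, Rating, Slope FROM Scores
          WHERE Player = :player ORDER BY ScoreDate ASC LIMIT :n)
    ORDER BY ScoreDate DESC LIMIT 20)
SELECT (SELECT MIN(ScoreDate) FROM recent20),   -- oldest date of the 20-newest
       AVG(HandicapIndex) * 0.96
FROM (SELECT (Score - Rating) * (113.0 / Slope) AS HandicapIndex
      FROM recent20
      ORDER BY HandicapIndex ASC LIMIT 10)
"""
oldest, handicap = conn.execute(query, {"player": "Joe Smith", "n": 3}).fetchone()
print(oldest, round(handicap, 2))  # 2000-01-01 11.52
```

With n = 3 the window holds the three oldest scores, so the query returns their earliest date alongside the 0.96-scaled average of the lowest handicap indexes.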

Thanks for any help,

 Ed t.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Free Page Data usage

2014-02-08 Thread RSmith


On 2014/02/08 19:30, Raheel Gupta wrote:

@Simon, Sir I don't want to rearrange the data.

I will try to explain more.
All my rows have the exact same size. They will not differ in size.
My problem is due to the fact that I use 64kB page size.
My rows are exactly 8 Bytes + 4096 Bytes.

Now for the purpose of ease of calculation, let's assume each row is exactly
4 KB.
So one page stores 16 rows.
Let's say 10 pages are in use and I have a total of 160 rows.


Sir,

We do understand exactly what you mean; no amount of re-explaining will improve on 100% comprehension. And because we do understand, 
we know SQLite isn't doing it, and we are trying to offer other ways of achieving what you want to achieve, but this is not the road 
you seem to want to go down...  Which is OK.


The basic thing you need to understand is this: SQLite does not work the way you hope, it is not made to do the sort of work within 
the sort of restrictions you prescribe. Please consider using an alternative, or, accept the space vs. usage parameters. Even if you 
could adjust the code of SQLite to allow re-using pages with 1/3 free space (as opposed to 2/3 free space), then you are doomed 
because the code will be untested (unless you can download and run the entire test suite without errors) and even then, you will 
have to manually rebuild and repair and re-test your own version of the DB every time a new release happens and forever in future. 
Is this really feasible?  And even then... there is no guarantee SQLite will re-use the exact rowids that fit inside a specific 
page, not to mention it will only even consider reusing a key if you did not specify "AUTOINCREMENT" in the schema (which, at least, 
is unlikely and fixable).


If you absolutely have to use SQLite, then maybe you can keep track of deleted rows, and instead of deleting them, just mark them 
as "not used" while keeping the rowid or whatever primary key is used - add this key to a list of available keys maybe (to be 
faster), and when inserting new rows, first see if you have any items in your list of unused rows, then write them to that primary 
key using REPLACE etc.


A typical Schema could be like this:
CREATE TABLE datablocks (ID INTEGER PRIMARY KEY, Used INT DEFAULT 1, Data BLOB);
CREATE TABLE availrows (ID INTEGER PRIMARY KEY);

Some Pseudo code...
when deleting a row/rows:
UPDATE datablocks SET Used=0 WHERE ID=somerowid;
REPLACE INTO availrows VALUES (somerowid);

when adding a row
availRowID = (SELECT ID FROM availrows LIMIT 1);
if (availRowID!=NULL) then if (DELETE FROM availrows WHERE ID=availRowID) != SQL_OK then availRowID = NULL;  // Needed to ensure you 
can never overwrite a datablock

if (availRowID != NULL) then {
  REPLACE INTO datablocks (ID, Used, Data) VALUES (availRowID, 1, blobValue);
} else {
  INSERT INTO datablocks (Data) VALUES (blobValue);
}

Of course, adding BLOBs involves some more processing, but you get the idea.

This way, no row will ever go unused, and inserts won't ever use up any space other than that which already exists (unless no space 
exists), so the DB size will only grow if you have more actual data rows than before. Also, btw, this will bring significant 
performance improvements if row-deletion is common.
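The pseudocode above can be turned into a runnable sketch; this Python version (an illustration, not the poster's code) keeps the same two-table schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE datablocks (ID INTEGER PRIMARY KEY, Used INT DEFAULT 1, Data BLOB);
CREATE TABLE availrows (ID INTEGER PRIMARY KEY);
""")

def delete_row(rowid):
    # Mark the block unused and remember its key for later reuse.
    conn.execute("UPDATE datablocks SET Used=0 WHERE ID=?", (rowid,))
    conn.execute("INSERT OR REPLACE INTO availrows VALUES (?)", (rowid,))

def add_row(blob):
    # Claim a recycled key first; only append a new row when none is available.
    row = conn.execute("SELECT ID FROM availrows LIMIT 1").fetchone()
    if row is not None:
        conn.execute("DELETE FROM availrows WHERE ID=?", (row[0],))
        conn.execute("REPLACE INTO datablocks (ID, Used, Data) VALUES (?, 1, ?)",
                     (row[0], blob))
        return row[0]
    return conn.execute("INSERT INTO datablocks (Data) VALUES (?)", (blob,)).lastrowid

for _ in range(4):
    add_row(b"x" * 16)
delete_row(2)
reused = add_row(b"y" * 16)
print(reused)  # 2 -- the freed key is reused instead of allocating key 5
```

Note the delete-from-availrows-first ordering mirrors the guard in the pseudocode: a key is only written over once it has been successfully claimed.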


Queries that need to check through the lists of data can simply reference the "Used" column in the where clause to ensure they list 
only rows that do contain valid data-blocks. (SELECT ... WHERE Used>0, etc.)


Of course, the caveat here is that this other index-type table will consume a significant amount of disk space on a DB the size you 
describe. Maybe keep it in another DB file with different page-size parameters too.


If it were me though, I would save the blob streams in a separate byte-stream file, and only save the other data about them, with 
indexes, in the SQLite table, since you cannot really use a BLOB in a WHERE clause or for any other SQL-related function. Do the 
queries, get the index, read the stream from the other file... the SQLite file size will be negligibly small and the data file will 
only ever be as big as is needed...  easy!




Re: [sqlite] Avoiding holding a lock for too long

2014-02-08 Thread Tim Streater
On 08 Feb 2014 at 17:48, Simon Slavin  wrote: 

> On 8 Feb 2014, at 4:58pm, Tim Streater  wrote:

>> I had a look at the PHP sqlite3 interface code, and it looks like ->query
>> does prepare, step, and reset (if there are no errors).
>
> If ->query() is doing 'step()' then the PHP code does not work the same way
> SQLite3 does internally.  SQLite would do 'step()' as part of ->fetchArray().

->query is only doing the one step(). If it gets SQLITE_ROW or SQLITE_DONE from 
that, then it does the reset, otherwise it returns an error. ->fetchArray() 
certainly does one step() each time it is called.

>>  Which of these obtains the lock on the db?
>
> The first 'step()'.  Before then all PHP needs to know is the structure of the
> database, not about the data in it.  The database needs to be locked from the
> first 'step()' to the last 'step()', though it can predict that it is finished
> if 'step()' returns 'no more rows'.

OK.

> So putting the above together you are still expected to use ->finalize() on
> the result set: it is the ->close() for that class and is the official way to
> release the handle.  Don't set it to null manually, use ->finalize() on it. 
> And, of course, eventually use ->close() on the database handle.
>
> Your solution may work for your test case, and it may work for this version of
> PHP using this version of SQLite, but I would recommend you use the API as
> documented.

OK - thanks, that's clearer now.



--
Cheers  --  Tim


Re: [sqlite] Send Mail from sqlite

2014-02-08 Thread Klaas V
From: "Keith Medcalf" 
Date: 7 Feb 2014 18:03:12 GMT+01:00
To: "General Discussion of SQLite Database" 
Reply-To: General Discussion of SQLite Database 



Have your application that is performing the update send an email when it does 
an update/insert/delete.

> I like to know if there is any possible to send a mail from sqlite.
> 
> I wanted to know if there is option to configure smtp in sqlite, which
> will help me to send a mail.
> 
> My requirement
> 
> I need to send a notification mail once the table is getting
> updated/inserted/deleted.

It's very simple: let your application enter 'mailto:@.' in 
the browser's URL-field

Kind regards|Cordiali saluti|Vriendelijke groeten|Freundliche Grüsse 
Klaas `Z4us` V < Ar(Tos)It>
 



Re: [sqlite] Free Page Data usage

2014-02-08 Thread Richard Hipp
On Fri, Feb 7, 2014 at 7:39 AM, Raheel Gupta  wrote:

> Hi,
>
> My Page size is 64KB and I store around 4KB of row data in one row.
> I store around 1 rows in one table and the database size reaches 42MB.
>
> Now, I am facing a peculiar problem. When I delete just 2-3 rows, that page
> is not reused for the new data which will be inserted in the future.
>

That space will be reused if your new data has the same (or similar) key as
the rows that were deleted.

In order to achieve fast lookup, content must be logically ordered by key.
That means that all of the rows on a single page must have keys that are
close to one another.  If you have space on a page, and you insert a new
row with a nearby key, that space will be (re)used.  But if you insert a
new row with a very different key, that new row must be placed on a page
close to other rows with similar keys, and cannot appear on the same page
with rows of very dissimilar keys.
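That reuse behaviour can be observed directly. The sketch below (numbers borrowed from the thread: 64 KiB pages, ~4 KiB blobs; the table name is made up) deletes four rows and re-inserts rows with the same keys, and the page count does not grow:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA page_size = 65536")
conn.execute("CREATE TABLE t (i INTEGER PRIMARY KEY, b BLOB)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(i, b"\0" * 4096) for i in range(1, 161)])
before = conn.execute("PRAGMA page_count").fetchone()[0]

conn.execute("DELETE FROM t WHERE i <= 4")        # frees space inside a leaf page
conn.executemany("INSERT INTO t VALUES (?, ?)",   # same keys -> same page
                 [(i, b"\0" * 4096) for i in range(1, 5)])
after = conn.execute("PRAGMA page_count").fetchone()[0]
print(before, after)  # equal: the freed in-page space was reused
```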



>
> The pragma freelist_count shows 0 if I delete the 1st 10 rows (approx 40KB)
> Only if I delete more than 20 rows does the freelist_count reflect 1 page
> as free.
>
> How should I get SQLIte to use the free space within a partially used page
> when rows from that page have been deleted.
>
> This causes a lot of space wastage when I store more rows.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Free Page Data usage

2014-02-08 Thread Richard Hipp
On Sat, Feb 8, 2014 at 11:51 AM, Simon Slavin  wrote:

>
> While a database is in use it might use perhaps 100 pages for a particular
> table.  Almost every one of those pages will have a little space free:
> anything from 1 byte to most of the page, depending on how much space each
> row takes up.  When writing a new row to a table, SQLite intelligently
> figures out which existing page it can write the row to (or does it ?
>  someone who has read the source code can tell me I'm wrong and if it
> searches for the 'best' page).
>

No.

SQLite (like most other database engines) uses B-Trees.  Every row has a "key"
(which is often, but not always, the ROWID in SQLite.)  Rows need to be
stored in key-order.  Otherwise, you would not be able to find a row given
its key, except by doing a slow and wasteful scan of the entire table.

If four or five rows have adjacent keys, those rows can be placed
arbitrarily on one page.  There is a small index at the beginning of each
page that tells where to find each row on that page.  But you cannot spread
those keys out arbitrarily on different pages.  If they are adjacent, then
they need to be logically adjacent in the file.



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Free Page Data usage

2014-02-08 Thread Simon Slavin

On 8 Feb 2014, at 5:30pm, Raheel Gupta  wrote:

> I will try to explain more.
> All my rows have the exact same size. They will not differ in size.
> My problem is due to the fact that I use 64kB page size.
> My rows are exactly 8 Bytes + 4096 Bytes.

Your very specific use of SQLite is not every situation that SQLite has to be 
able to handle.

Simon.


Re: [sqlite] Avoiding holding a lock for too long

2014-02-08 Thread Simon Slavin

On 8 Feb 2014, at 4:58pm, Tim Streater  wrote:

> On 07 Feb 2014 at 23:04, Simon Slavin  wrote: 
> 
>> You should not be manually setting any handle to null.  Try calling
>> ->finalize() on the statement and eventually ->close() on the database, which
>> do more than just setting their value.  If you are setting anything to null
>> manually, this may be causing the problem you report.
> 
> I made myself a simple testbed in PHP based on the code I posted last time. I 
> used a sleep call to make each turn round the while loop take a couple of 
> seconds, so the code took 20 secs to complete. In another Terminal window, I 
> ran a second script that tried to update the same database (with a 2000msec 
> timeout). Having established that this latter script got "database is 
> locked", I then rejigged the first script to fetch all the result rows before 
> entering the slow loop. This sufficed to allow the second script to run 
> without getting "database is locked" (i.e I didn't use ->finalize or ->close 
> in the first script).
> 
> I had a look at the PHP sqlite3 interface code, and it looks like ->query 
> does prepare, step, and reset (if there are no errors).

If ->query() is doing 'step()' then the PHP code does not work the same way 
SQLite3 does internally.  SQLite would do 'step()' as part of ->fetchArray().

>  Which of these obtains the lock on the db?

The first 'step()'.  Before then all PHP needs to know is the structure of the 
database, not about the data in it.  The database needs to be locked from the 
first 'step()' to the last 'step()', though it can predict that it is finished 
if 'step()' returns 'no more rows'.

So putting the above together you are still expected to use ->finalize() on the 
result set: it is the ->close() for that class and is the official way to 
release the handle.  Don't set it to null manually, use ->finalize() on it.  
And, of course, eventually use ->close() on the database handle.

Your solution may work for your test case, and it may work for this version of 
PHP using this version of SQLite, but I would recommend you use the API as 
documented.

Simon.


Re: [sqlite] Free Page Data usage

2014-02-08 Thread Raheel Gupta
@Simon, Sir I don't want to rearrange the data.

I will try to explain more.
All my rows have the exact same size. They will not differ in size.
My problem is due to the fact that I use 64kB page size.
My rows are exactly 8 Bytes + 4096 Bytes.

Now for the purpose of ease of calculation, let's assume each row is exactly
4 KB.
So one page stores 16 rows.
Let's say 10 pages are in use and I have a total of 160 rows.

Now I delete Rows 1-4 (total 4 rows) and I insert another 4 rows.
What I wanted is that the space freed by the first 4 rows being deleted be
used for the 4 new rows.

This should be done without any re-arrangement of data (so no vacuum and no
internal data rearrangement !).

As far as I am aware, if a page is marked as free, SQLite will first use the
page to store new data. But since my page size is 64 KB, this will not be
possible.

Hence I am evaluating all options on this to optimize my storage space
utilization.



On Sat, Feb 8, 2014 at 10:21 PM, Simon Slavin  wrote:

>
> On 8 Feb 2014, at 11:24am, Raheel Gupta  wrote:
>
> > I don't want to repack the DB, sir.
> > When a page becomes free I want to make sure that page is used up first
> and
> > then new pages are created.
>
> Just to explain that this would be extremely inefficient because a new row
> that you write to a database will not take up the same space as a row you
> have deleted.
>
> While a database is in use it might use perhaps 100 pages for a particular
> table.  Almost every one of those pages will have a little space free:
> anything from 1 byte to most of the page, depending on how much space each
> row takes up.  When writing a new row to a table, SQLite intelligently
> figures out which existing page it can write the row to (or does it ?
>  someone who has read the source code can tell me I'm wrong and if it
> searches for the 'best' page).
>
> What it won't do is rearrange existing pages so that they are used as much
> as possible.  That could be done whenever a row is deleted (including when
> a row is replaced using UPDATE).  But it would require a lot of checking,
> processing, reading and writing, and this would slow SQLite down a great
> deal for every DELETE and UPDATE operation.  As an off-the-top-of-my-head
> guess, individual operations could take unpredictable amounts of time since
> most efficient packing could require any number of pages to be rewritten.
>  I don't know of any database system that works like this.
>
> So that's one thing that might make you want to use VACUUM.  Even VACUUM
> does not reclaim the maximum amount of space possible.  Instead it prefers
> to keep the data for a row together and rows in primary index order
> together, to increase speed.
>
> There are also the auto_vacuum and incremental-vacuum PRAGMAs.  However
> they operate only on the level of pages: they will reap entire unused
> pages, but not interfere with the packing of data within a page.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


Re: [sqlite] Avoiding holding a lock for too long

2014-02-08 Thread Tim Streater
On 07 Feb 2014 at 23:04, Simon Slavin  wrote: 

> You should not be manually setting any handle to null.  Try calling
> ->finalize() on the statement and eventually ->close() on the database, which
> do more than just setting their value.  If you are setting anything to null
> manually, this may be causing the problem you report.

I made myself a simple testbed in PHP based on the code I posted last time. I 
used a sleep call to make each turn round the while loop take a couple of 
seconds, so the code took 20 secs to complete. In another Terminal window, I 
ran a second script that tried to update the same database (with a 2000msec 
timeout). Having established that this latter script got "database is locked", 
I then rejigged the first script to fetch all the result rows before entering 
the slow loop. This sufficed to allow the second script to run without getting 
"database is locked" (i.e I didn't use ->finalize or ->close in the first 
script).

I had a look at the PHP sqlite3 interface code, and it looks like ->query does 
prepare, step, and reset (if there are no errors). Which of these obtains the 
lock on the db? I also looked at PHP's ->finalize but in the case where there 
are no more rows to return, it didn't seem to do much of anything except return 
FALSE. Where will the lock have been dropped in the first script?
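For what it's worth, the same experiment is easy to reproduce with Python's sqlite3 module instead of PHP (the script shape here is mine, not Tim's): fetching the whole result set up front fully steps and resets the statement, so a second connection can write while the slow per-row work runs:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
reader = sqlite3.connect(path)
reader.execute("CREATE TABLE msgs (id INTEGER PRIMARY KEY, body TEXT)")
reader.executemany("INSERT INTO msgs (body) VALUES (?)",
                   [("m%d" % i,) for i in range(5)])
reader.commit()

writer = sqlite3.connect(path, timeout=0.2)  # short busy timeout, like the 2000 ms one

rows = reader.execute("SELECT id, body FROM msgs").fetchall()  # fully stepped + reset
for row in rows:
    pass  # slow per-row work would go here, with no read lock held

writer.execute("UPDATE msgs SET body = 'seen' WHERE id = 1")  # no "database is locked"
writer.commit()
print(writer.execute("SELECT body FROM msgs WHERE id = 1").fetchone()[0])  # seen
```

If the reader instead stepped one row at a time inside the slow loop, the writer's UPDATE would have to wait out its busy timeout and could fail, which matches the behaviour described above.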

--
Cheers  --  Tim


Re: [sqlite] Free Page Data usage

2014-02-08 Thread Simon Slavin

On 8 Feb 2014, at 11:24am, Raheel Gupta  wrote:

> I don't want to repack the DB, sir.
> When a page becomes free I want to make sure that page is used up first and
> then new pages are created.

Just to explain that this would be extremely inefficient because a new row that 
you write to a database will not take up the same space as a row you have 
deleted.

While a database is in use it might use perhaps 100 pages for a particular 
table.  Almost every one of those pages will have a little space free: anything 
from 1 byte to most of the page, depending on how much space each row takes up. 
 When writing a new row to a table, SQLite intelligently figures out which 
existing page it can write the row to (or does it ?  someone who has read the 
source code can tell me I'm wrong and if it searches for the 'best' page).

What it won't do is rearrange existing pages so that they are used as much as 
possible.  That could be done whenever a row is deleted (including when a row 
is replaced using UPDATE).  But it would require a lot of checking, processing, 
reading and writing, and this would slow SQLite down a great deal for every 
DELETE and UPDATE operation.  As an off-the-top-of-my-head guess, individual 
operations could take unpredictable amounts of time since most efficient 
packing could require any number of pages to be rewritten.  I don't know of any 
database system that works like this.

So that's one thing that might make you want to use VACUUM.  Even VACUUM does 
not reclaim the maximum amount of space possible.  Instead it prefers to keep 
the data for a row together and rows in primary index order together, to 
increase speed.

There are also the auto_vacuum and incremental-vacuum PRAGMAs.  However they 
operate only on the level of pages: they will reap entire unused pages, but not 
interfere with the packing of data within a page.

Simon.


Re: [sqlite] struggling with a query

2014-02-08 Thread RSmith

Just to be clear, it isn't really "mine", just an adaptation of the many 
excellent contributions, from which I too have learned.
A huge pleasure and a fun exercise, no less!


On 2014/02/08 14:35, Stephan Beal wrote:

On Sat, Feb 8, 2014 at 12:39 PM, RSmith  wrote:


SELECT name, max(vid=2)-max(vid=1) FROM v GROUP BY name;

So much more succinct than my original, like I predicted :)


Indeed!!! This one wins if I am able to refactor it for use with the much
more complex structure I'm actually working with (the fossil SCM's vfile
table - my example is a simplified form to help me get my head around the
SQL).

FWIW, sqlite3's ".stats" say yours is overall more efficient:

Virtual Machine Steps:   242
vs the WITH variant i posted:
Virtual Machine Steps:   308


Thanks again!
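For readers following along, the winning one-liner is easy to try (the table contents here are made up; the trick works because in SQLite a comparison like vid=2 evaluates to 0 or 1, so max() over it tells you whether any row matched):

```python
import sqlite3

# v(name, vid) is the simplified example table from the thread.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE v (name TEXT, vid INT)")
conn.executemany("INSERT INTO v VALUES (?, ?)",
                 [("both", 1), ("both", 2), ("only1", 1), ("only2", 2)])

# Per name: 1 if only vid=2 exists, -1 if only vid=1, 0 if both.
result = dict(conn.execute(
    "SELECT name, max(vid=2)-max(vid=1) FROM v GROUP BY name"))
print(result)
```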





Re: [sqlite] Free Page Data usage

2014-02-08 Thread Gerry Snyder
Instead of delete and then insert, can you somehow just keep track of which
rows are to be deleted, and when new rows come in replace if you can and
otherwise insert?

A little more bookkeeping, but it might save the space you need.

Gerry
On Feb 7, 2014 10:57 PM, "Raheel Gupta"  wrote:

> Hi,
>
> Sir, the 32 TB size is not always going to be reached.
> The Database is going to be used to store blocks of a Block Device like
> /dev/sda1
> The size can reach 3-4 TB easily and would start from at least 20-100 GB. 32
> TB of data, though impractical as of today, will be possible in 2-3 years.
> The issue happens when I delete rows and new rows are inserted at the
> end of the database: the size of the database exceeds that of the actual
> block device even though many pages have free space.
> Hence I am simply trying to optimize the utilization of the free space
> available.
>
> I would have loved to use the page size of 2KB which would give me a
> practical size of 4TB. But that would have this hard limit of 4TB.
> So I have two possible options which I am trying to help me solve this
> issue :
> 1) Either make the page size to 2KB and increase the maximum page count to
> 2^64 which will be more than sufficient.
> 2) Improve the free space utilization of each page when the page size is
> 64KB.
>
> I hope this makes sense.
>
>
>
> On Sat, Feb 8, 2014 at 12:54 AM, RSmith  wrote:
>
> > A database that is geared for 32TB size and you are concerned about
> rather
> > insignificant space wasted by the page size that is needed to reach the
> > 32TB max size... does not make any sense unless you are simply paranoid
> > about space.  Removing the gaps in the table space when deleting a row
> (or
> > rows) will render a delete query several orders of magnitude slower.
> >
> > If it IS that big of a concern, then maybe use standard files rather than
> > SQLite to save data in?  If the SQL functionality is a must, you can use
> > vacuum as often as is needed to clear unused space - but beware, 1 -
> Vacuum
> > takes some processing to re-pack a DB, especially a near 32TB one... in
> the
> > order of minutes on a computer I would guess, and much much more on
> > anything else.  2 - a 32TB DB will need up to 64TB total free disk space
> to
> > be sure to vacuum correctly - so having issues with it taking up maybe
> 40TB
> > for 32TB of data is in itself an irrelevant concern. Even large queries,
> > temporary tables etc will all need additional interim space for the sorts
> > of queries that might be requested of a 32TB data-set.
> >
> > The real point being: if you do not have at least 64TB free on whatever
> > that 32TB DB will sit, you are doing it wrong, and if you do have that
> much
> > free, you can ignore the 25% wasted deletion space problem.
> >
> > If the problem is simply your own pedantry (at least I can sympathise
> > with that!) then it's simply a case of "Welcome to efficient databasing",
> > but if it is a real space deficit, then I'm afraid you will have to
> re-plan
> > or reconsider either the max allowable DB, or the physical layer's space
> > availability - sorry.
> >
> >
> >
> > On 2014/02/07 20:35, Raheel Gupta wrote:
> >
> >> Hi,
> >>
> >> I use a page size of 64 KB. But my row consists of 2 columns that is :
> >> i - Auto Increment Integer,
> >> b - 4096 Bytes of BLOB data
> >>
> >> Now for the sake of calculation, let's say 16 rows fit in a page and my
> >> table has 1 rows when I start.
> >>
> >> Now, let's say I delete some data which is not in sequence, i.e. it can be
> >> deleted as per data which is not in use. To create such a hypothetical
> >> situation for explaining this to you, here is a simple query :
> >> DELETE from TABLE where i%4 = 0;
> >>
> >> As you may see that there is now 25% data deleted in each page.
> >>
> >> Now even if I do insert another 2500 rows (25% of original size) my
> >> database size reaches 125% of the original size when I inserted the
> 1
> >> rows initially.
> >>
> >> Hence there is significant space wastage. Anyway i can improve that ?
> >> It would be nice if the database size would be close to the original
> size
> >> after deleting 25% and adding some new 25% data.
> >>
> >> I know you would recommend using smaller page sizes. Ideally a 2KB page
> >> size
> >> is good, but then the number of pages is restricted to a max of 2^32,
> which
> >> will restrict the total database size to 4TB only. I need the max size
> to
> >> be capable of at least 32TB.
> >>
> >>
> >>
> >> On Fri, Feb 7, 2014 at 11:14 PM, Donald Griggs 
> >> wrote:
> >>
> >>  Can you write more about how this is causing you a problem? Most users
> >>> don't experience this as a problem
> >>> On Feb 7, 2014 10:30 AM, "Raheel Gupta"  wrote:
> >>>
> >>>  SQLite's tables are B-trees, sorted by the rowid.  Your new data will
> > probably get an autoincremented rowid, which will be appended at the
> >
> 

Re: [sqlite] Memory-mapped I/O [was: SQLite 3.7.17 preview - 2x faster?]

2014-02-08 Thread Richard Hipp
On Fri, Feb 7, 2014 at 1:01 PM, varro  wrote:

> Regarding the following old post:
>
> Richard Hipp wrote:
> > By making use of memory-mapped I/O, the current trunk of SQLite (which
> will
> > eventually become version 3.7.17 after much more refinement and testing)
> > can be as much as twice as fast, on some platforms and under some
> > workloads.  We would like to encourage people to try out the new code and
> > report both success and failure.  Snapshots of the amalgamation can be
> > found at
> >
> >http://www.sqlite.org/draft/download.html
> >
> > Links to the relevant documentation can be seen at
> >
> >http://www.sqlite.org/draft/releaselog/3_7_17.html
> >
> > The memory-mapped I/O is only enabled for windows, linux, mac OS-X, and
> > solaris.  We have found that it does not work on OpenBSD, for reasons we
> > have not yet been able to uncover; but as a precaution, memory mapped I/O
> is
> > disabled by default on all of the *BSDs until we understand the problem.
>
> Was the problem ever identified?
>

OpenBSD lacks a coherent filesystem cache.  That is to say, changes to a
file made using write() are not necessarily reflected in mmap-ed memory
right away.  And changes to an mmap-ed segment are not necessarily reflected
in subsequent read() operations.


-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Memory-mapped I/O [was: SQLite 3.7.17 preview - 2x faster?]

2014-02-08 Thread varro
Regarding the following old post:

Richard Hipp wrote:
> By making use of memory-mapped I/O, the current trunk of SQLite (which will
> eventually become version 3.7.17 after much more refinement and testing)
> can be as much as twice as fast, on some platforms and under some
> workloads.  We would like to encourage people to try out the new code and
> report both success and failure.  Snapshots of the amalgamation can be
> found at
>
>http://www.sqlite.org/draft/download.html
>
> Links to the relevant documentation can be seen at
>
>http://www.sqlite.org/draft/releaselog/3_7_17.html
>
> The memory-mapped I/O is only enabled for Windows, Linux, Mac OS X, and
> Solaris.  We have found that it does not work on OpenBSD, for reasons we
> have not yet been able to uncover; but as a precaution, memory-mapped I/O is
> disabled by default on all of the *BSDs until we understand the problem.

Was the problem ever identified?  The answer isn't really important to
me, since the version I'm using (3.8.0.2) works perfectly fine for me
under FreeBSD 9.1, but I'm curious about the resolution (if any).

-- 
Will



[sqlite] Compiler warning (treated as error) when using MSVC 2013 to build SQLite 3.8.3

2014-02-08 Thread Bryan Ferguson
Hello,

It seems that VS 2013 has grown stricter by default in treating potentially 
security-related warnings as errors. Some context here:

http://blogs.msdn.com/b/sdl/archive/2012/06/06/warnings-sdl-and-improving-uninitialized-variable-detection.aspx

When building the Release configuration for a project containing the SQLite 
3.8.3 amalgamation, it emits the following warnings as errors by default:
sqlite3.c(41085): error C4703: potentially uninitialized local pointer variable 'p' used
sqlite3.c(45247): error C4703: potentially uninitialized local pointer variable 'p' used
sqlite3.c(45230): error C4703: potentially uninitialized local pointer variable 'p' used
sqlite3.c(46156): error C4703: potentially uninitialized local pointer variable 'p' used

Initializing the local variable p to NULL in pager_lookup avoids these.

In my setup, sqlite3.c was built from a project created from the VS 2013 
"Static Library (Windows Store apps)" template. The errors were emitted from 
code generation for a second project created from the "Unit Test Library 
(Windows Store apps)" template that links the static library. Both projects 
were using the warning configuration defaults from those templates: /W3 /WX- 
/sdl .

A search for 'sqlite C4703' suggests a handful of folks have encountered this
and worked around it in a few ways. However, I couldn't find any indication in 
the list of tickets or history of pager.c that this is tracked / fixed at the 
source.

Please let me know if you'd like me to enter a new ticket to track the minor 
change to avoid these warnings. Thank you for contributing this excellent 
library to the public domain.

Regards,
- Bryan



Re: [sqlite] struggling with a query

2014-02-08 Thread Stephan Beal
On Sat, Feb 8, 2014 at 12:39 PM, RSmith  wrote:

> SELECT name, max(vid=2)-max(vid=1) FROM v GROUP BY name;
>
> So much more succinct than my original, like I predicted :)


Indeed!!! This one wins if i am able to refactor it for use with the much
more complex structure i'm actually working with (the fossil SCM's vfile
table - my example is a simplified form to help me get my head around the
SQL).

FWIW, sqlite3's ".stats" say yours is overall more efficient:

Virtual Machine Steps:   242
vs the WITH variant i posted:
Virtual Machine Steps:   308


Thanks again!

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal
"Freedom is sloppy. But since tyranny's the only guaranteed byproduct of
those who insist on a perfect world, freedom will have to do." -- Bigby Wolf


Re: [sqlite] struggling with a query

2014-02-08 Thread RSmith

Yeah I quite like some of the solutions posted - got to love this list :)

One final optimization: since the values you are looking for essentially map to Booleans (0 and 1), this query is the smallest and
probably the fastest (I think) that will produce the correct results from your table:


SELECT name, max(vid=2)-max(vid=1) FROM v GROUP BY name;

So much more succinct than my original, like I predicted :)

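For readers following along, a minimal sqlite3 session (variable names are mine; ORDER BY added only to make the output deterministic) confirms the boolean-collapse trick on the sample table from this thread:

```python
import sqlite3

# In SQLite, (vid=2) evaluates to 1 or 0, so max(vid=2) - max(vid=1)
# yields 1 (in v2 only), -1 (in v1 only), or 0 (in both) per group.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE v(vid, name)")
con.executemany("INSERT INTO v VALUES(?, ?)",
                [(1, "foo"), (1, "bar"), (2, "bar"), (2, "baz")])
rows = con.execute(
    "SELECT name, max(vid=2) - max(vid=1) FROM v GROUP BY name ORDER BY name"
).fetchall()
print(rows)  # [('bar', 0), ('baz', 1), ('foo', -1)]
```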

On 2014/02/08 13:11, Stephan Beal wrote:

On Sat, Feb 8, 2014 at 11:58 AM, big stone  wrote:


with sqlite 3.8.3 (for the with) :

with v(vid,name) as (values (1,'foo'),(1,'bar'),(2,'bar'),(2,'baz'))

select  name,
  -max(case when vid=1 then 1 else 0 end ) + max(case when vid=2
then 1 else 0 end)
  from v group by name


i like that one. This slight variation (to allow me to strategically place
the inputs) works for me:


BEGIN TRANSACTION;
DROP TABLE IF EXISTS vf;
CREATE TABLE vf(vid,name);
INSERT INTO "vf" VALUES(1,'foo');
INSERT INTO "vf" VALUES(1,'bar');
INSERT INTO "vf" VALUES(1,'barz');
INSERT INTO "vf" VALUES(2,'bar');
INSERT INTO "vf" VALUES(2,'baz');
INSERT INTO "vf" VALUES(2,'barz');
COMMIT;

with
origin (v1,v2) as (select 1 v1, 2 v2),
v(vid,name) as (select vid,name from vf)
select  name,
  -max(case when vid=origin.v1 then 1 else 0 end )
  + max(case when vid=origin.v2 then 1 else 0 end)
  from v, origin group by name
;

sqlite> .read x.sql
bar|0
barz|0
baz|1
foo|-1

Thank you very much :).





Re: [sqlite] Free Page Data usage

2014-02-08 Thread Raheel Gupta
>> No matter what size you make the pages, a delete function is never going
>> to re-pack the db

I don't want to repack the DB, sir.
When a page becomes free I want to make sure that page is used up first,
and only then are new pages created.
VACUUM is not what I want to do.

I have no doubt that free pages are reused for new data in SQLite.
The issue is that when a page is 64 KB and has, let's say, 16 KB free, then
at least that 16 KB should be used before an entirely new page is created.

@Clemens said that a page is reused if it has 2/3 free space.
My question is whether this 2/3 ratio can be changed, for my purposes, to
1/3 or any other ratio.
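As a side note, the page-count arithmetic behind the 4 TB / 32 TB figures in this thread is just page size times maximum page count. A back-of-envelope sketch, using the 2^32-page ceiling quoted earlier in the thread (SQLite's actual compile-time maximum may differ; check the current limits documentation):

```python
# Capacity = page_size * max_page_count, under the thread's assumed
# 2^32-page ceiling.
KiB, TiB = 1024, 1024 ** 4
MAX_PAGES = 2 ** 32

for page_size in (1 * KiB, 2 * KiB, 64 * KiB):
    cap = page_size * MAX_PAGES
    print(f"{page_size // KiB:>2} KiB pages -> {cap // TiB} TiB max")
# 1 KiB -> 4 TiB (the 4 TB concern), 2 KiB -> 8 TiB, 64 KiB -> 256 TiB
```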



On Sat, Feb 8, 2014 at 3:47 PM, RSmith  wrote:

> Hi Raheel,
>
> It does make sense what you would like to do, but your concern does not
> make sense. You say you are "trying to optimize the utilization of the free
> space available" but give no indication why, it certainly does not seem
> that space is a problem.
>
> I do understand the urge to optimize very much, but inside a database
> engine you can optimize either for speed or for size, not for both. SQLite
> as it stands is quite good at not wasting space unnecessarily, BUT, it is
> first and foremost optimized for speed (thank goodness for that), which
> means the space-saving you are looking for is not going to happen.  In my
> previous post I made a passing comment / suggestion re using your own data
> files instead of SQLite, and if it is a case of not needing the SQL
> ability - which I seriously doubt, since you are basically saving blocks of
> information from a block device and doing so as byte streams (or BLOB
> fields in SQL terms) - then I seriously suggest creating your own files and
> a custom index mechanism and saving the byte streams in there.  It will be
> a lot faster, with zero space wastage, and the size limits can be whatever
> you like them to be.
>
> Trying to use SQLite (or any other DB engine) for this purpose is akin to
> using a full-function bakery with ovens, humidifiers, provers, rising
> agents and bake timers when you just want to warm up your pizza (not to
> mention being restricted by the limitations that come with it).
>
> No matter what size you make the pages, a delete function is never going
> to re-pack the db, though you might get better results at re-using the
> space - but this is a compromise and one that does not sit well with you
> (if I read you right).
>
> Best of luck!
> Ryan
>
>
>
> On 2014/02/08 07:57, Raheel Gupta wrote:
>
>> Hi,
>> Sir, the 32 TB size is not always going to be reached.
>> The Database is going to be used to store blocks of a Block Device like
>> /dev/sda1
> >> The size can reach 3-4 TB easily and would start from at least 20-100 GB.
> >> 32 TB of data, though impractical as of today, will be possible in 2-3
> >> years.
> >> The issue is that when I delete rows and new rows are inserted at the
> >> end of the database, the database grows beyond the size of the actual
> >> block device, even though many pages have free space.
> >> Hence I am simply trying to optimize the utilization of the free space
> >> available.
> >> I would have loved to use a page size of 2 KB, which would give me a
> >> practical size of 4 TB. But that would impose a hard limit of 4 TB.
> >> So I have two possible options that I am exploring to solve this issue:
> >> 1) Either make the page size 2 KB and increase the maximum page count
> >> to 2^64, which will be more than sufficient.
> >> 2) Improve the free-space utilization of each page when the page size
> >> is 64 KB.
> >> I hope this makes sense.
>>
>>


Re: [sqlite] struggling with a query

2014-02-08 Thread Stephan Beal
On Sat, Feb 8, 2014 at 11:58 AM, big stone  wrote:

> with sqlite 3.8.3 (for the with) :
>
> with v(vid,name) as (values (1,'foo'),(1,'bar'),(2,'bar'),(2,'baz'))
>
> select  name,
>  -max(case when vid=1 then 1 else 0 end ) + max(case when vid=2
> then 1 else 0 end)
>  from v group by name
>

i like that one. This slight variation (to allow me to strategically place
the inputs) works for me:


BEGIN TRANSACTION;
DROP TABLE IF EXISTS vf;
CREATE TABLE vf(vid,name);
INSERT INTO "vf" VALUES(1,'foo');
INSERT INTO "vf" VALUES(1,'bar');
INSERT INTO "vf" VALUES(1,'barz');
INSERT INTO "vf" VALUES(2,'bar');
INSERT INTO "vf" VALUES(2,'baz');
INSERT INTO "vf" VALUES(2,'barz');
COMMIT;

with
origin (v1,v2) as (select 1 v1, 2 v2),
v(vid,name) as (select vid,name from vf)
select  name,
 -max(case when vid=origin.v1 then 1 else 0 end )
 + max(case when vid=origin.v2 then 1 else 0 end)
 from v, origin group by name
;

sqlite> .read x.sql
bar|0
barz|0
baz|1
foo|-1

Thank you very much :).

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal
"Freedom is sloppy. But since tyranny's the only guaranteed byproduct of
those who insist on a perfect world, freedom will have to do." -- Bigby Wolf


Re: [sqlite] struggling with a query

2014-02-08 Thread Bernd Lehmkuhl

Am 08.02.2014 11:03, schrieb Stephan Beal:


i have a table containing a mapping of logical dataset versions and the
filenames contained in each dataset version:

CREATE TABLE v(vid,name);
INSERT INTO "v" VALUES(1,'foo');
INSERT INTO "v" VALUES(1,'bar');
INSERT INTO "v" VALUES(2,'bar');
INSERT INTO "v" VALUES(2,'baz');

i am trying like mad, but can't seem to formulate a query that takes 2
version number inputs (1 and 2 in this case) and creates a result set with
these columns:

- name. must include all names across both versions
- status: -1 if in version 1 but not v2, 0 if in both, 1 if in v2 but not
v1.

So the above data set should produce:

foo, -1
bar, 0
baz, 1



Should work as well:

SELECT
  name,
  CASE
WHEN minvid = maxvid AND minvid = 1
THEN -1
WHEN minvid = maxvid AND minvid = 2
THEN 1
ELSE 0
  END vid
FROM
  (
SELECT
  name,
  MIN(vid) AS minvid,
  MAX(vid) AS maxvid
FROM
  v
GROUP BY
  name
  )
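A quick sqlite3 check (an illustrative sketch; variable names are mine) of the MIN/MAX formulation above on the sample table, with ORDER BY added only for a stable result:

```python
import sqlite3

# Per name, (minvid, maxvid) is (1,1), (2,2), or (1,2), which the CASE
# maps to -1, 1, or 0 respectively.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE v(vid, name)")
con.executemany("INSERT INTO v VALUES(?, ?)",
                [(1, "foo"), (1, "bar"), (2, "bar"), (2, "baz")])
rows = con.execute("""
    SELECT name,
           CASE WHEN minvid = maxvid AND minvid = 1 THEN -1
                WHEN minvid = maxvid AND minvid = 2 THEN 1
                ELSE 0
           END AS vid
      FROM (SELECT name, MIN(vid) AS minvid, MAX(vid) AS maxvid
              FROM v GROUP BY name)
     ORDER BY name""").fetchall()
print(rows)  # [('bar', 0), ('baz', 1), ('foo', -1)]
```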

Bernd



Re: [sqlite] struggling with a query

2014-02-08 Thread Stephan Beal
On Sat, Feb 8, 2014 at 11:57 AM, RSmith  wrote:

> One way of doing it:
>

Many thanks to you and Kevin both! These examples give me plenty to study
for today :).

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal
"Freedom is sloppy. But since tyranny's the only guaranteed byproduct of
those who insist on a perfect world, freedom will have to do." -- Bigby Wolf


Re: [sqlite] struggling with a query

2014-02-08 Thread Luuk

On 08-02-2014 11:58, big stone wrote:

with sqlite 3.8.3 (for the with) :

with v(vid,name) as (values (1,'foo'),(1,'bar'),(2,'bar'),(2,'baz'))

select  name,
  -max(case when vid=1 then 1 else 0 end ) + max(case when vid=2
then 1 else 0 end)
  from v group by name


almost the same as this:

with v(vid,name) as (values (-1,'foo'),(0,'bar'),(1,'baz'))
select * from v

;-)


Re: [sqlite] struggling with a query

2014-02-08 Thread big stone
with sqlite 3.8.3 (for the with) :

with v(vid,name) as (values (1,'foo'),(1,'bar'),(2,'bar'),(2,'baz'))

select  name,
 -max(case when vid=1 then 1 else 0 end ) + max(case when vid=2
then 1 else 0 end)
 from v group by name


Re: [sqlite] struggling with a query

2014-02-08 Thread RSmith

One way of doing it:

SELECT IFNULL(V1.name, V2.name) AS VName,
       CASE WHEN V1.name = V2.name THEN 0 ELSE -1 END AS VInd
  FROM v AS V1
  LEFT JOIN v AS V2 ON (V1.vid <> V2.vid) AND (V1.name = V2.name)
 WHERE V1.vid = 1
UNION
SELECT IFNULL(V1.name, V2.name) AS VName,
       CASE WHEN V1.name = V2.name THEN 0 ELSE 1 END AS VInd
  FROM v AS V1
  LEFT JOIN v AS V2 ON (V1.vid <> V2.vid) AND (V1.name = V2.name)
 WHERE V1.vid = 2;

Running that on your table yields:
VName  VInd
bar    0
baz    1
foo    -1

I'm sure someone will have a more succinct or optimized version soon :)


On 2014/02/08 12:03, Stephan Beal wrote:

Hi, list,

most of the time i judge my SQL skills as mediocre, but at times like this
i feel like a complete noob...

i have a table containing a mapping of logical dataset versions and the
filenames contained in each dataset version:

CREATE TABLE v(vid,name);
INSERT INTO "v" VALUES(1,'foo');
INSERT INTO "v" VALUES(1,'bar');
INSERT INTO "v" VALUES(2,'bar');
INSERT INTO "v" VALUES(2,'baz');

i am trying like mad, but can't seem to formulate a query that takes 2
version number inputs (1 and 2 in this case) and creates a result set with
these columns:

- name. must include all names across both versions
- status: -1 if in version 1 but not v2, 0 if in both, 1 if in v2 but not
v1.

So the above data set should produce:

foo, -1
bar, 0
baz, 1

My SQL skills fail me miserably, though.

i have no sqlite3 minimum version requirements (am working from the trunk)
and am free to use recursive select if necessary, but my instinct says that
this should be possible with joins and a CASE (for the status).

Any prods in the right direction would be much appreciated,





Re: [sqlite] struggling with a query

2014-02-08 Thread Kevin Martin

On 8 Feb 2014, at 10:03, Stephan Beal  wrote:

> i am trying like mad, but can't seem to formulate a query that takes 2
> version number inputs (1 and 2 in this case) and creates a result set with
> these columns:
> 
> - name. must include all names across both versions
> - status: -1 if in version 1 but not v2, 0 if in both, 1 if in v2 but not
> v1.

Only tried on your example dataset, but try this:

CREATE VIEW answer AS
select
    name,
    count(v2) - count(v1) as result
from
    (select
         a.name as name,
         b.name as v1,
         c.name as v2
     from
         v as a
     left join (select name from v where vid = 1) as b
            on a.name = b.name
     left join (select name from v where vid = 2) as c
            on a.name = c.name)
group by
    name
order by
    result asc;
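This left-join/count formulation can also be checked with a short sqlite3 sketch (variable names are mine). count() ignores NULLs, so count(v2) - count(v1) gives 1, -1, or 0 per name; note that 'bar' appears twice in the derived table, but both counts rise together, so the difference is still 0:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE v(vid, name)")
con.executemany("INSERT INTO v VALUES(?, ?)",
                [(1, "foo"), (1, "bar"), (2, "bar"), (2, "baz")])
con.execute("""
    CREATE VIEW answer AS
    SELECT name, count(v2) - count(v1) AS result
      FROM (SELECT a.name AS name, b.name AS v1, c.name AS v2
              FROM v AS a
              LEFT JOIN (SELECT name FROM v WHERE vid = 1) AS b
                     ON a.name = b.name
              LEFT JOIN (SELECT name FROM v WHERE vid = 2) AS c
                     ON a.name = c.name)
     GROUP BY name""")
rows = con.execute("SELECT * FROM answer ORDER BY result").fetchall()
print(rows)  # [('foo', -1), ('bar', 0), ('baz', 1)]
```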

Thanks,
Kev


Re: [sqlite] Free Page Data usage

2014-02-08 Thread RSmith

Hi Raheel,

It does make sense what you would like to do, but your concern does not make sense. You say you are "trying to optimize the
utilization of the free space available" but give no indication why; it certainly does not seem that space is a problem.


I do understand the urge to optimize very much, but inside a database engine you can optimize either for speed or for size, not for
both. SQLite as it stands is quite good at not wasting space unnecessarily, BUT, it is first and foremost optimized for speed (thank
goodness for that), which means the space-saving you are looking for is not going to happen.  In my previous post I made a passing
comment / suggestion re using your own data files instead of SQLite, and if it is a case of not needing the SQL ability - which I
seriously doubt, since you are basically saving blocks of information from a block device and doing so as byte streams (or BLOB
fields in SQL terms) - then I seriously suggest creating your own files and a custom index mechanism and saving the byte streams in
there.  It will be a lot faster, with zero space wastage, and the size limits can be whatever you like them to be.


Trying to use SQLite (or any other DB engine) for this purpose is akin to using a full-function bakery with ovens, humidifiers, 
provers, rising agents and bake timers when you just want to warm up your pizza (not to mention being restricted by the limitations 
that come with it).


No matter what size you make the pages, a delete function is never going to re-pack the db, though you might get better results at 
re-using the space - but this is a compromise and one that does not sit well with you (if I read you right).


Best of luck!
Ryan


On 2014/02/08 07:57, Raheel Gupta wrote:

Hi,
Sir, the 32 TB size is not always going to be reached.
The Database is going to be used to store blocks of a Block Device like 
/dev/sda1
The size can reach 3-4 TB easily and would start from at least 20-100 GB. 32 TB of data, though impractical as of today, will be
possible in 2-3 years.
The issue is that when I delete rows and new rows are inserted at the end of the database, the database grows beyond the size of
the actual block device, even though many pages have free space.

Hence I am simply trying to optimize the utilization of the free space
available.
I would have loved to use a page size of 2 KB, which would give me a practical size of 4 TB. But that would impose a hard limit of
4 TB.

So I have two possible options that I am exploring to solve this issue:
1) Either make the page size 2 KB and increase the maximum page count to 2^64,
which will be more than sufficient.
2) Improve the free-space utilization of each page when the page size is 64 KB.
I hope this makes sense.





[sqlite] struggling with a query

2014-02-08 Thread Stephan Beal
Hi, list,

most of the time i judge my SQL skills as mediocre, but at times like this
i feel like a complete noob...

i have a table containing a mapping of logical dataset versions and the
filenames contained in each dataset version:

CREATE TABLE v(vid,name);
INSERT INTO "v" VALUES(1,'foo');
INSERT INTO "v" VALUES(1,'bar');
INSERT INTO "v" VALUES(2,'bar');
INSERT INTO "v" VALUES(2,'baz');

i am trying like mad, but can't seem to formulate a query that takes 2
version number inputs (1 and 2 in this case) and creates a result set with
these columns:

- name. must include all names across both versions
- status: -1 if in version 1 but not v2, 0 if in both, 1 if in v2 but not
v1.

So the above data set should produce:

foo, -1
bar, 0
baz, 1

My SQL skills fail me miserably, though.

i have no sqlite3 minimum version requirements (am working from the trunk)
and am free to use recursive select if necessary, but my instinct says that
this should be possible with joins and a CASE (for the status).

Any prods in the right direction would be much appreciated,

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal
"Freedom is sloppy. But since tyranny's the only guaranteed byproduct of
those who insist on a perfect world, freedom will have to do." -- Bigby Wolf


Re: [sqlite] help needed for major SQLite problem

2014-02-08 Thread Kees Nuyt
On Sat, 08 Feb 2014 12:06:01 +0700, Dan Kennedy
 wrote:

>On 02/08/2014 03:00 AM, C M wrote:
>> This is a follow-up to a question I asked on this list on Sep 1st, 2013,
>> about an error that I was randomly getting with disk-based SQLite database
>> in a Python desktop application.  I now have more info to provide about the
>> error...such as what was asked for at that time:
>>
>> On Sun, Sep 1, 2013 at 6:12 PM, Richard Hipp  wrote:
>>
>>> Does Python have an interface to the error and warning log mechanism of
>>> SQLite?  (http://www.sqlite.org/errlog.html)  Can you turn that on?  It
>>> will probably give more details about what it happening.
>>>
>> I wasn't able to do this at first, but thanks to switching from the
>> standard sqlite3 module in Python (otherwise known as pysqlite) to Roger
>> Binns's APSW module, and then also using an experimental module,
>> apswdbapi2, from Edzard Pasma (thank you both), I was able to set things up
>> to return the warning log...I think.  Today, after not seeing the error in
>> a very long time, I hit the error, and this was printed to sys.stdout:
>>
>> SQLITE_LOG: delayed 1375ms for lock/sharing conflict (10) SQLITE_IOERR
>>
>> SQLITE_LOG: os_win.c:35129: (5) winAccess(C:\Documents and Settings\user\My
>> Documents\My Dropbox\myapp\gorp.db-journal) - Access is denied. (3338)
>> SQLITE_IOERR
>>
>> SQLITE_LOG: statement aborts at 16: [SELECT resumes, start FROM Durations
>> WHERE start='2014-02-07 14:24:14.064000' AND value='activity'] disk I/O
>> error (3338) SQLITE_IOERR
>>
>> Does that give anyone a better idea of what could be happening and how I
>> can fix this problem?
>
>Looks like GetFileAttributesEx() might be throwing an ERROR_ACCESS_DENIED
>exception. Maybe a virus scanner or some other background process had
>temporarily locked the database file.
>
>Dan.

I agree, and I think Dropbox is the culprit here.


-- 
Groet, Cordialement, Pozdrawiam, Regards,

Kees Nuyt
