Re: [sqlite] Multi-valued attributes

2008-02-13 Thread Samuel Neff
It's called a many-to-many relationship and you use a cross-reference table
to represent the relationship.  Say you have the tables

Searches
--------
SearchID
FromDate
ToDate
etc...



Users
-----
UserID
FirstName
LastName


Then to define what users are associated with what searches, you create a
table

Xref_Searches_Users
-------------------
SearchID
UserID


Then, if you want to find all searches for a user, for example, you do:

SELECT Searches.*
FROM Searches NATURAL JOIN Xref_Searches_Users
WHERE UserID = @UserID

If you're going to be searching both for searches by users and users by
search, then you will likely want to create two indexes on the xref table,
one on "SearchID, UserID" and another on "UserID, SearchID".

HTH,

Sam


On Feb 13, 2008 8:00 PM, gongchengshi <[EMAIL PROTECTED]> wrote:

>
> I have a table where each row needs to be able to store a list of entries
> from
> another table as one of its attributes.  For instance the table is a
> collection of search filters.  The filters table has attributes: FromDate,
> ToDate, Users, Devices.  The Users attribute is not a single value but a
> list of Users contained in the Users table.  Same with the Devices
> attribute.  The Devices attribute is actually a list of Devices in the
> Devices table.
>
> How do you go about defining this schema in SQL?  The book I am reading
> "Database Systems" by Connolly and Begg say that you can have such
> relationships but they don't say how to create them.  I am using sqlite as
> my DBMS.
> --
> 
>


Re: [sqlite] Multi-valued attributes

2008-02-13 Thread P Kishor
Replying to my question, to add a bit of information specific to my
current situation. Please see below --

On 2/13/08, P Kishor <[EMAIL PROTECTED]> wrote:
> On 2/13/08, gongchengshi <[EMAIL PROTECTED]> wrote:
> >
> > I have a table where each row needs to be able to store a list of entries
> > from
> > another table as one of its attributes.  For instance the table is a
> > collection of search filters.  The filters table has attributes: FromDate,
> > ToDate, Users, Devices.  The Users attribute is not a single value but a
> > list of Users contained in the Users table.  Same with the Devices
> > attribute.  The Devices attribute is actually a list of Devices in the
> > Devices table.
> >
> > How do you go about defining this schema in SQL?  The book I am reading
> > "Database Systems" by Connolly and Begg say that you can have such
> > relationships but they don't say how to create them.  I am using sqlite as
> > my DBMS.
> > --
>
>
> Variations on this question have been discussed before, and it is
> covered adequately both by Darren Duncan's reply (multi-valued columns
> a la PostgreSQL's variable-length multidimensional arrays) [1] and by
> Sam Neff's suggestion of building a many-to-many cross-reference
> table, a common solution in most such situations.
>
> [1] http://www.postgresql.org/docs/8.3/static/arrays.html
>
> I like the idea of a multidimensional array as it seems intuitively a
> lot less complicated than the alternative, especially when many
> many-to-many relationships are involved. Even though SQLite doesn't
> have array datatype support, the same thing can easily be accomplished
> in the application (assuming the programming language and application
> environment make this easy). For me, it seems fast enough. SQLite does
> the really rapid searching and locating of the data, made all the more
> rapid because no multi-table JOINs are required, and once I have the
> data I can happily split it into separate values and do another
> lookup. Yes, that means multiple lookups, so I have to balance the
> pros and cons and benchmark my own situation.
>
> So, while I will benchmark my own situation, I just wanted to find out
> if there are any other gotchas that I need to be aware of if I decide
> to implement a table design which holds a list of lookups rather than
> a cross-reference table.
>
> Many thanks,
>
> Puneet.
>

In my case, I have one entity that has a many-to-many relationship with
two different kinds of entities. I am tracking three things --
people, agencies, and reports. Both people and agencies have a
many-to-many relationship with reports. I have the following tables

persons(person_id, ..)
agencies(agency_id, ..)
reports(report_id, ..)

alternative 1: two xref tables (sketched below) --
persons_x_reports(person_id, report_id)
agencies_x_reports(agency_id, report_id)

alternative 2: one xref table
foo_x_reports (foo_id, report_id, foo_type[person|agency])

alternative 3: list of reports
persons(person_id, list_report_id)
agencies(agency_id, list_report_id)
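
(A rough sketch of alternative 1 -- the column types and the parameter
name are assumptions -- including a query for all reports tied to one
person:)

CREATE TABLE persons_x_reports (
    person_id INTEGER NOT NULL,
    report_id INTEGER NOT NULL,
    PRIMARY KEY (person_id, report_id)
);
CREATE TABLE agencies_x_reports (
    agency_id INTEGER NOT NULL,
    report_id INTEGER NOT NULL,
    PRIMARY KEY (agency_id, report_id)
);

SELECT reports.*
FROM reports JOIN persons_x_reports USING (report_id)
WHERE person_id = :person_id;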


Re: [sqlite] Multi-valued attributes

2008-02-13 Thread P Kishor
On 2/13/08, gongchengshi <[EMAIL PROTECTED]> wrote:
>
> I have a table where each row needs to be able to store a list of entries from
> another table as one of its attributes.  For instance the table is a
> collection of search filters.  The filters table has attributes: FromDate,
> ToDate, Users, Devices.  The Users attribute is not a single value but a
> list of Users contained in the Users table.  Same with the Devices
> attribute.  The Devices attribute is actually a list of Devices in the
> Devices table.
>
> How do you go about defining this schema in SQL?  The book I am reading
> "Database Systems" by Connolly and Begg say that you can have such
> relationships but they don't say how to create them.  I am using sqlite as
> my DBMS.
> --


Variations on this question have been discussed before, and it is
covered adequately both by Darren Duncan's reply (multi-valued columns
a la PostgreSQL's variable-length multidimensional arrays) [1] and by
Sam Neff's suggestion of building a many-to-many cross-reference
table, a common solution in most such situations.

[1] http://www.postgresql.org/docs/8.3/static/arrays.html

I like the idea of a multidimensional array as it seems intuitively a
lot less complicated than the alternative, especially when many
many-to-many relationships are involved. Even though SQLite doesn't
have array datatype support, the same thing can easily be accomplished
in the application (assuming the programming language and application
environment make this easy). For me, it seems fast enough. SQLite does
the really rapid searching and locating of the data, made all the more
rapid because no multi-table JOINs are required, and once I have the
data I can happily split it into separate values and do another
lookup. Yes, that means multiple lookups, so I have to balance the
pros and cons and benchmark my own situation.

So, while I will benchmark my own situation, I just wanted to find out
if there are any other gotchas that I need to be aware of if I decide
to implement a table design which holds a list of lookups rather than
a cross-reference table.

Many thanks,

Puneet.


Re: [sqlite] creating a table with an index

2008-02-13 Thread Igor Tandetnik
"Sam Carleton" <[EMAIL PROTECTED]>
wrote in message
news:[EMAIL PROTECTED]
> I am trying to create a table with two indexes:
>
> CREATE TABLE favorites (
>cust_id CHAR(32) NOT NULL,
>fldoid CHAR(38) NOT NULL,
>imgoid CHAR(64) NOT NULL,
>PRIMARY KEY (cust_id),
>INDEX (fldoid, imgoid));
>
> SQLite keeps complaining that there is an error around INDEX.  What
> might I be doing wrong?

Here's the syntax of the CREATE TABLE statement:

http://sqlite.org/lang_createtable.html

INDEX is not a valid keyword in such a statement. Did you perhaps mean 
UNIQUE?
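
If a plain (non-unique) index on (fldoid, imgoid) is what you're after,
the usual route is a separate CREATE INDEX statement -- a sketch (the
index name here is made up):

CREATE TABLE favorites (
    cust_id CHAR(32) NOT NULL,
    fldoid  CHAR(38) NOT NULL,
    imgoid  CHAR(64) NOT NULL,
    PRIMARY KEY (cust_id)
);
CREATE INDEX favorites_fldoid_imgoid ON favorites (fldoid, imgoid);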

Igor Tandetnik





Re: [sqlite] Multi-valued attributes

2008-02-13 Thread Darren Duncan
At 5:00 PM -0800 2/13/08, gongchengshi wrote:
>I have a table where each row needs to be able to store a list of entries from
>another table as one of its attributes.  For instance the table is a
>collection of search filters.  The filters table has attributes: FromDate,
>ToDate, Users, Devices.  The Users attribute is not a single value but a
>list of Users contained in the Users table.  Same with the Devices
>attribute.  The Devices attribute is actually a list of Devices in the
>Devices table. 
>
>How do you go about defining this schema in SQL?  The book I am reading
>"Database Systems" by Connolly and Begg say that you can have such
>relationships but they don't say how to create them.  I am using sqlite as
>my DBMS.

Some quasi-relational DBMSs (and all truly relational DBMSs) support
actual multi-valued attributes/fields, which in the general case are
relation/rowset valued, or in the less general case are specifically
set- or array-valued, etc.; PostgreSQL supports the latter to some
extent.

AFAIK, SQLite does not support multi-valued fields, so with it your
schema will have to take the form you get when you split the original
relvar/table, separating each multi-valued attribute/field into its
own relvar/table (every new table also gets a copy of the original
table's primary key attribute), and then relationally ungroup those
extra relvars/tables to turn each multi-valued-field tuple/row into
multiple tuples/rows.  You are then creating what are commonly called
intersection tables, AFAIK, which are common when implementing
many-to-many relationships between tables.

-- Darren Duncan


[sqlite] creating a table with an index

2008-02-13 Thread Sam Carleton
I am trying to create a table with two indexes:

CREATE TABLE favorites (
cust_id CHAR(32) NOT NULL,
fldoid CHAR(38) NOT NULL,
imgoid CHAR(64) NOT NULL,
PRIMARY KEY (cust_id),
INDEX (fldoid, imgoid));

SQLite keeps complaining that there is an error around INDEX.  What
might I be doing wrong?

Sam


[sqlite] Multi-valued attributes

2008-02-13 Thread gongchengshi

I have a table where each row needs to be able to store a list of entries from
another table as one of its attributes.  For instance the table is a
collection of search filters.  The filters table has attributes: FromDate,
ToDate, Users, Devices.  The Users attribute is not a single value but a
list of Users contained in the Users table.  Same with the Devices
attribute.  The Devices attribute is actually a list of Devices in the
Devices table.  

How do you go about defining this schema in SQL?  The book I am reading
"Database Systems" by Connolly and Begg say that you can have such
relationships but they don't say how to create them.  I am using sqlite as
my DBMS.  
-- 
View this message in context: 
http://www.nabble.com/Multi-valued-attributes-tp15471820p15471820.html
Sent from the SQLite mailing list archive at Nabble.com.



Re: [sqlite] Blob incremental i/o via Python

2008-02-13 Thread Roger Binns

Norman Young wrote:
> We had to exclude references to sqlite3_enable_load_extension and
> sqlite3_load_extension from within apsw.c, in order to avoid undefined
> symbol references during the build (python setup.py install).

You should have modified the setup.py to include a define of
SQLITE_OMIT_LOAD_EXTENSION.

The general theory is that you should create/compile SQLite with
whatever flags you want (you can exclude many features) and then provide
the same flags when compiling apsw.  Unfortunately I don't know of any
way of automatically finding out what flags/inclusions/exclusions SQLite
was compiled with.

Roger



Re: [sqlite] Blob incremental i/o via Python

2008-02-13 Thread Norman Young
On Feb 12, 2008 4:04 AM, Roger Binns <[EMAIL PROTECTED]> wrote:

>
> Norman Young wrote:
> > Can this same interface be accessed in Python?
>
> That API has to be wrapped.
>
> > Can this be done from Python via the sqlite3 module? Can you point me to
> > examples?
>
> I have implemented it in APSW in the development code (ie not released
> yet).  You can find the source on the python sqlite site
> http://www.initd.org/tracker/pysqlite/wiki/APSW#Developmentversion
> ... 
> Roger


Thanks for your quick response, Roger.

We have compiled the apsw module in the Maemo OS2008 (chinook) SDK. We
successfully imported the Python module, both within the SDK, and on the
Tablet. We have not yet specifically tested the incremental blob i/o access,
but I expect that to go well.

We had to exclude references to sqlite3_enable_load_extension and
sqlite3_load_extension from within apsw.c, in order to avoid undefined
symbol references during the build (python setup.py install).

static PyObject *
Connection_enableloadextension(Connection *self, PyObject *enabled)
{
#if 0
...
#endif
}
...
static PyObject *
Connection_loadextension(Connection *self, PyObject *args)
{
#if 0
...
#endif
}

We presume that the symbol resolution failure arose from shortcomings in the
SDK cross-compilation environment. Since we do not need sqlite3 to load
extensions, this work-around serves our purposes. Perhaps more ambitious
Maemo SDK users will investigate further than we have.

Thanks for the great module. We look forward to using blob incremental i/o
via Python in our continuing development. It enables an important
improvement in our design.

Norm.


Re: [sqlite] indexing

2008-02-13 Thread David Baird
On Feb 13, 2008 1:01 PM, Kasper Daniel Hansen <[EMAIL PROTECTED]> wrote:
> Thank you for a clear and precise answer.
>
> I seem to recall that it is possible to have in-memory databases with
> sqlite. But perhaps that is only possible with a lot of tinkering and
> using the C functions. Is that true? (I am thinking of avoiding the
> I/O bottleneck on our network drive by just moving everything into
> memory since we have a large mem server - but only if it is quick and
> easy.)

Instead of specifying a filename for the database, use ":memory:".  If
you actually need a file with that name, you would type "./:memory:".


Re: [sqlite] SQLite 3.5.5 on OpenBSD running on HPPA (HP9000) make test

2008-02-13 Thread Zbigniew Baniewski
On Fri, Feb 01, 2008 at 11:46:06AM -0500, Rob Sciuk wrote:

> Actually, upon inspection, that may be an artifact of an earlier attempt 
> to run the tests as root, the "chocolate" directory was, in fact owned by 
> root.  I'm re-running the tests, having removed that directory, and I'll 
> repost the results upon completion ...

Hallo,

I sent a question to the OpenBSD list before I noticed that you're a user of
both OpenBSD and SQLite. Perhaps you could help me a little with my SQLite
problem:

I've lately compiled SQLite 3.5.6 and TCL/Tk 8.5.1 - everything seems to be
working OK, with one annoying exception: every time I leave tclsh after the
SQLite module has been loaded ("package require sqlite3"), there is an error
message: "Segmentation fault (core dumped)". Do you know the origin of the
problem - and (even better) the cure?

Both SQLite and TCL/Tk were compiled directly from the sources (_not_ from
OpenBSD's "ports"), on OpenBSD 4.2/i386. TCL/Tk itself seems to be working
just fine (I haven't noticed any problems as of yet).

I had exactly the same issue earlier, trying SQLite 3.4.2 with TCL/Tk 8.5.0.
-- 
pozdrawiam / regards

Zbigniew Baniewski


Re: [sqlite] indexing

2008-02-13 Thread Kasper Daniel Hansen
On Feb 13, 2008, at 11:12 AM, [EMAIL PROTECTED] wrote:

> Kasper Daniel Hansen <[EMAIL PROTECTED]> wrote:
>> I have a table with two variables, say A and B (both integers). The
>> table is rather large - around 2.9 GB on disk. Every combination of
>> (A,B) occurs only once. I am creating a unique index as
>>CREATE UNIQUE INDEX ABidx ON abtable (A,B)
>> It seems that the (A,B) index is created much slower than the (B,A)
>> index. I am wondering about the reason for this. My - very limited -
>> understanding was that sqlite needs to search over the whole database
>> for every (A,B) combination. Could someone give a quick pointer to
>> where the index strategy is described? Would this be quicker if I fit
>> the whole database into memory?
>>
>
> Creating an index on A,B is equivalent to sorting on A,B.
>
> The sorting algorithm currently used by SQLite requires
> O(NlogN) comparisons, which is optimal.  But it also requires
> O(N) disk seeks, which is very suboptimal.  You don't notice
> all these seeks if your database fits in cache.  But when you
> get into databases of about 3GB, the seeking really slows you
> down.
>
> A project on our to-do list is to implement a new sorter
> that uses O(1) seeks.  We know how to do this.  It is just
> finding time to do the implementation.
>
> If creating an index on B,A is much faster than creating an
> index on A,B, that probably means that B,A is initially closer
> to being in sorted order than A,B is.  The initial order of the
> entries does not affect the number of comparisons in a sort,
> but it does reduce the number of seeks if the values are
> initially close to being sorted.

Thank you for a clear and precise answer.

I seem to recall that it is possible to have in-memory databases with
sqlite. But perhaps that is only possible with a lot of tinkering and
using the C functions. Is that true? (I am thinking of avoiding the
I/O bottleneck on our network drive by just moving everything into
memory since we have a large mem server - but only if it is quick and
easy.)

Kasper


Re: [sqlite] Innovative examples / user stories

2008-02-13 Thread David Baird
On Feb 11, 2008 10:51 AM, Lars Aronsson <[EMAIL PROTECTED]> wrote:
> Is there any documentation of how people use SQLite in odd ways in
> their everyday activities?

I used to work with the NS-2 network simulator as an undergrad at NMT
for some experiments we were doing in wireless power control systems.
After frustrations with parsing NS-2 data files, I decided to start
adding my own hooks to their code which would print simulation results
in somewhat normalized ASCII tabular formats.  Then I imported the
tables into an SQLite database (because it is fast and extremely easy
to setup/use).  This was heaven compared to what we were doing before.

Our schema looked something like this (probably there should be more
indexes or primary keys...):

CREATE TABLE ns_phytx (
time REAL,
pktid INTEGER,
nodeid INTEGER,
pkttype STRING,
source INTEGER,
dest INTEGER,
txpwr REAL,
bytes INTEGER,
duration REAL,
x REAL,
y REAL,
PRIMARY KEY (pktid)
);
CREATE TABLE ns_phyrx (
time REAL,
pktid INTEGER,
nodeid INTEGER,
rxpwr REAL,
x REAL,
y REAL,
PRIMARY KEY (pktid, nodeid)
);
CREATE TABLE ns_macrx (
time REAL,
pktid INTEGER,
nodeid INTEGER,
min_SIR REAL,
PRIMARY KEY (pktid, nodeid)
);
CREATE TABLE ns_collision (
time REAL,
nodeid INTEGER,
pktid1 INTEGER,
pktid2 INTEGER,
type INTEGER
);
CREATE TABLE ns_interference (
time REAL,
nodeid INTEGER,
total_power REAL,
accepted_power REAL
);
-- possibly with more tables, depending on what types of simulation
data we were working with
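
(The import itself can be done with the sqlite3 shell's .import command --
a sketch with made-up file names; whether this exact route was used here is
an assumption:)

.separator ' '
.import phytx.txt ns_phytx
.import phyrx.txt ns_phyrx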

SQLite allowed us to easily and quickly perform aggregations and
joins, and then we could take those results and do further analysis
and visualization using Octave/MATLAB, GNUPLOT, and other tools.

For example, you can easily get a link gain by joining the ns_phytx
and ns_phyrx tables on pktid.  Scrounging around, I found some queries
like this (embedded in a bash script):

function q3() {
title="Histogram of collisions at node $nodeid"
echo -n "q3 [query]"
sqlite3 z.db << EOF > $filename-1
.separator ' '
.mode list
SELECT ROUND(c.time/1 - 0.5) * 1 AS histtime, COUNT(*)
FROM
ns_collision AS c
INNER JOIN
ns_phytx AS t1
ON c.pktid1 = t1.pktid
INNER JOIN
ns_phytx AS t2
ON c.pktid2 = t2.pktid
WHERE
c.nodeid = $nodeid AND
c.type = 1
GROUP BY
histtime;
EOF

echo -n " [gnuplot]"
gnuplot << EOF
set term postscript eps
set xrange [$starttime:$endtime]
set yrange [0:$collscale]
set title "$title"
set output '$filename.eps'
plot \
'$filename-1' using (\$1):(\$2) \
t "sum over 1 second intervals" w impulses
EOF

echo " $title"
}
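
(And a hedged sketch of the link-gain join mentioned above -- treating
gain as rxpwr/txpwr is an assumption about the units:)

SELECT t.pktid,
       t.nodeid  AS tx_node,
       r.nodeid  AS rx_node,
       r.rxpwr / t.txpwr AS link_gain
FROM ns_phytx AS t
JOIN ns_phyrx AS r ON r.pktid = t.pktid;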

I don't really hear a lot of people talking about how they used
databases to collect and assist with analysis of simulation results.
Has anyone else been doing something like this?  I think SQL is
excellent for this.

-David


Re: [sqlite] indexing

2008-02-13 Thread drh
Kasper Daniel Hansen <[EMAIL PROTECTED]> wrote:
> I have a table with two variables, say A and B (both integers). The  
> table is rather large - around 2.9 GB on disk. Every combination of  
> (A,B) occurs only once. I am creating a unique index as
>CREATE UNIQUE INDEX ABidx ON abtable (A,B)
> It seems that the (A,B) index is created much slower than the (B,A)  
> index. I am wondering about the reason for this. My - very limited -  
> understanding was that sqlite needs to search over the whole database  
> for every (A,B) combination. Could someone give a quick pointer to  
> where the index strategy is described? Would this be quicker if I fit  
> the whole database into memory?
> 

Creating an index on A,B is equivalent to sorting on A,B.

The sorting algorithm currently used by SQLite requires
O(NlogN) comparisons, which is optimal.  But it also requires
O(N) disk seeks, which is very suboptimal.  You don't notice
all these seeks if your database fits in cache.  But when you
get into databases of about 3GB, the seeking really slows you
down.

A project on our to-do list is to implement a new sorter
that uses O(1) seeks.  We know how to do this.  It is just
finding time to do the implementation.

If creating an index on B,A is much faster than creating an
index on A,B, that probably means that B,A is initially closer
to being in sorted order than A,B is.  The initial order of the
entries does not affect the number of comparisons in a sort,
but it does reduce the number of seeks if the values are 
initially close to being sorted.
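
(One hedged mitigation along the "fits in cache" line above: enlarge the
page cache for the session that builds the index. The figure is only
illustrative -- with the default 1 KB page size, roughly 3,000,000 pages
would be needed to hold a ~3 GB database in cache, and correspondingly
that much RAM:)

PRAGMA cache_size = 3000000;
CREATE UNIQUE INDEX ABidx ON abtable (A, B);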

--
D. Richard Hipp <[EMAIL PROTECTED]>




[sqlite] indexing

2008-02-13 Thread Kasper Daniel Hansen
I have a table with two variables, say A and B (both integers). The  
table is rather large - around 2.9 GB on disk. Every combination of  
(A,B) occurs only once. I am creating a unique index as
   CREATE UNIQUE INDEX ABidx ON abtable (A,B)
It seems that the (A,B) index is created much slower than the (B,A)  
index. I am wondering about the reason for this. My - very limited -  
understanding was that sqlite needs to search over the whole database  
for every (A,B) combination. Could someone give a quick pointer to  
where the index strategy is described? Would this be quicker if I fit  
the whole database into memory?

Kasper


[sqlite] Wrong column type being returned

2008-02-13 Thread Rob Richardson
Greetings!

I have an application that writes a small SQLite database (well, several
small databases), and an ActiveX control that reads them.  One of the
tables has three columns: a foreign key into another table, a value
(which should be a double, but in the file I looked at, they're all
integers for some unknown reason), and a timestamp.  In early versions
of these applications, the timestamp was written as a floating-point
number containing a date in Julian form (number of days since 1/1/1601,
I think it is).  But with the latest SQLite library, the writing
application suddenly started writing the dates in human-readable form
("2008-02-01 11:35:00").  I don't know why.  Of course, this broke the
reading control.  But since I had cleverly written the reading
application using classes that wrap the various field types, all I had
to do was create a JulianDate class to wrap a date field.  In that
class, I'd check the column type I'm reading.  If it's a double, I'd
just return that, and if it's text, I'd convert it into a Julian date
and return that.  This worked just fine when I first wrote it a week or
two ago.  But now it's not working.  

My code uses sqlite3_column_type to determine the column type of the
date column.  If it's SQLITE_FLOAT, then sqlite3_column_double() is used
to retrieve the value.  Otherwise, I assume the column is text and use
sqlite3_column_text16() to retrieve the data.  The problem I'm having is
that my sqlite3_column_type() call is returning 1, which means the
column is an integer.  It isn't.  When I open the file in SQLiteSpy, I
see that all values in the date column are floating point numbers, as
expected.  And as I step through my code, since it tries to call
sqlite3_column_text16(), I see that the data to the right of the decimal
point is getting truncated when I read it.  Can anyone tell me why
SQLite suddenly thinks this column of floating-point data holds only
integers?
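
(For reference, SQLite stores a type with each value rather than with the
column; a quick illustration from the sqlite3 shell, with made-up column
names -- typeof() reports the same storage class that sqlite3_column_type()
returns:)

CREATE TABLE readings (val, stamp);
INSERT INTO readings VALUES (42, 2454500.48264);
INSERT INTO readings VALUES (7.5, '2008-02-01 11:35:00');
SELECT typeof(val), typeof(stamp) FROM readings;
-- integer|real
-- real|text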

Thanks very much!

Rob Richardson


Re: [sqlite] Innovative examples / user stories

2008-02-13 Thread Zack Jarrett
I write GUI automation scripts using the AppleScript language and the  
System Events application agent on Mac OS X 10.4 and 10.5.  These  
scripts are used to QA test Mac software.  My scripts need to write  
application states to disk, read/write preferences, read input data,
and compare output of an application to known good states.  All of  
these values are stored in SQLite databases.  I query the data by  
forming sqlite3 command line application commands as strings in  
AppleScript and passing those strings to the shell.

Prior to using SQLite I was using AppleScript's read/write to file  
operations and found them lacking in flexibility and elegance.  I  
investigated using Apple's application agent "Database Events" (which  
uses SQLite in some measure) but found its query language to be  
severely limiting.  It is ever so much nicer to be able to query my  
data using regular old SQL.  Another benefit is that when debugging  
scripts I can easily look into what the script is writing to disk.

I don't know if it's entirely unusual, but SQLite is incredibly useful  
for my purpose.  At some point in the future I plan on having  
AppleScript build its own scripts in situ (something which it does
not do well natively) by pulling command strings (based on specific,  
dynamic conditions) out of a SQLite db.

Zack Jarrett

On Feb 11, 2008, at 10:51 AM, Lars Aronsson wrote:

> Is there any documentation of how people use SQLite in odd ways in
> their everyday activities?



Re: [sqlite] blob incremental i/o constraints

2008-02-13 Thread drh
"Igor Sereda" <[EMAIL PROTECTED]> wrote:
> The questions around sqlite3_blob_xxx methods that Roger brought up a couple
> of months ago are very interesting for me too, and I haven't seen any reply
> to Roger's message. (Roger - do you have any update?)
> 
> As far as I can gather from the cited description of the problem, we should
> manually acquire SHARED db lock before reading a blob, and RESERVED lock
> before writing a blob. Can someone confirm that?

Transactions and locks are created and closed automatically
by sqlite3_blob_open() and sqlite3_blob_close().  No other
interaction by the application is required.

> 
> I guess I should have run some experiments
> 

That sounds like a good plan...

--
D. Richard Hipp <[EMAIL PROTECTED]>



Re: [sqlite] blob incremental i/o constraints

2008-02-13 Thread Igor Sereda
The questions around sqlite3_blob_xxx methods that Roger brought up a couple
of months ago are very interesting for me too, and I haven't seen any reply
to Roger's message. (Roger - do you have any update?)

As far as I can gather from the cited description of the problem, we should
manually acquire SHARED db lock before reading a blob, and RESERVED lock
before writing a blob. Can someone confirm that?

Also, how blob i/o operations deal with the transactional context is indeed
not very clear. Based on a few words in the description of
sqlite3_blob_close, I assume blob i/o is transactional, but this also brings
up a more subtle point:

What happens to the cache when a really large BLOB (larger than the cache
size) is read/written? I guess I should have run some experiments, and
probably will, but it seems quite probable that everything will be paged out
of the cache by the blob. Considering that the reason for incremental i/o is
(supposedly) to avoid lots of memory allocation, we can expect that every
BLOB going through sqlite3_blob_read/write will be quite large, so the cache
will be thrashing. Please tell me it is not so :)

What I'd suggest is maybe to check whether there is an EXCLUSIVE lock and
write directly to the filesystem, as happens when the cache spills.
Otherwise, I'm thinking of creating a separate connection with a very small
cache specifically to handle blob i/o.
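
(A sketch of that small-cache idea for the dedicated blob connection -- the
figure is arbitrary; the default has historically been 2000 pages:)

PRAGMA cache_size = 50;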

Thanks!
Igor



> -Original Message-
> From: Roger Binns [mailto:[EMAIL PROTECTED]
> Sent: Sunday, December 02, 2007 10:16 AM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] blob incremental i/o constraints
> 
> There isn't a documentation or wiki page about the blob i/o
> functionality beyond the API reference.  I am curious if the behaviour I
> am seeing was intentional.
> 
> Open a blob for reading.  Insert into the same table (not affecting the
> blob or its row).  On calling sqlite3_blob_read, I get SQLITE_ABORT
> returned and the errmsg is "not an error".  This also happens if I do a
> few reads, and then the insert, on the next read.  (This also happens
> with blobs open for writing and doing read or write after the table is
> changed).
> 
> Open a blob for reading.  Call sqlite3_blob_write, and get back
> SQLITE_READONLY.  On next calling sqlite3_blob_close, I also get back
> SQLITE_READONLY.  If sqlite3_blob_close is going to return any errors
> from prior reads/writes then it should also do so for the above
> paragraph (which it doesn't).
> 
> You can delete (via SQL) a blob that is open.  The next
> sqlite3_blob_read on the blob gets SQLITE_ABORT.  (Heck you can even
> change the contents via SQL).
> 
> It isn't stated anywhere what the transactional nature of blob i/o is.
> For example, is data committed after each write, or does it happen on
> the close?
> 
> Roger




Re: [sqlite] [Linux/Python 2.4.2] Forking Python doesn't work

2008-02-13 Thread Gilles Ganault
On Wed, 13 Feb 2008 13:42:20 +0100, Gilles Ganault
<[EMAIL PROTECTED]> wrote:
(snip)

Oops, wrong forum at Gmane :-) Sorry for the wrong posting.



[sqlite] [Linux/Python 2.4.2] Forking Python doesn't work

2008-02-13 Thread Gilles Ganault
Hello

When a call comes in, I'd like to fork a Python script that
broadcasts a message so that users see the CID name + number pop up on
their computer screen, and simultaneously ring their phones.

The following script doesn't work as planned: It waits until the
script ends before moving on to the next step, which is Dial():

===
exten => s,1,AGI(netcid.py|${CALLERID(num)}|${CALLERID(name)})
exten => s,n,Dial(${MYPHONE},5)
===
# cat netcid.py
#!/usr/bin/python

import socket,sys,time,os

def sendstuff(data):
    s.sendto(data,(ipaddr,portnum))
    return

sys.stdout = open(os.devnull, 'w')
if os.fork():
    #BAD? sys.exit(0)
    os._exit(0)
else:
    now = time.localtime(time.time())
    dateandtime = time.strftime("NaVm/%y NaVM", now)

    myarray = []
    myarray.append("STAT Rings: 1")
    myarray.append("RING")
    myarray.append("NAME " + cidname)
    myarray.append("TTSN Call from " + cidname)
    myarray.append("NMBR " + cidnum)
    myarray.append("TYPE K")

    s = socket.socket(socket.AF_INET,socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET,socket.SO_BROADCAST,True)

    portnum = 42685
    ipaddr = "192.168.0.255"

    for i in myarray:
        sendstuff(i)

    # Must pause, and send IDLE for dialog box to close
    time.sleep(5)
    sendstuff("IDLE " + dateandtime)
===

In another forum, people told me that I should fork twice. Is that
really necessary?
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/278731

Thank you.
