Re: [sqlite] Retrieving data

2006-09-13 Thread thomas . l
On 14.09.2006 at 02:11 you wrote:

> I'm using sqlite in VC++ 2005.
> When I started this I knew nothing about sqlite or indeed SQL at all, so it's
> been tough going trying to work out how this all works.

I started that way too ;-)

> I tried:
> sqlite3_exec(AccDataBase,"SELECT Name,Address FROM Accounts WHERE MemberNo =
> 2;",Callback(cError,10,,),test,);

You don't need a callback function in every case. Try it
without one.

I put two little samples on my homepage. Take a look.

> Now I don't fully understand how the callback part works so I just made the

It will come to you soon enough, as you work with it.
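Stepping through the result rows without a callback, as Thomas suggests, looks roughly like this. The sketch uses Python's stdlib sqlite3 module with made-up data matching Richard's description; in the C API the equivalent calls are sqlite3_prepare_v2(), sqlite3_step(), and sqlite3_column_text().

```python
import sqlite3

# In-memory stand-in for the Accounts database from the question.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Accounts (MemberNo INTEGER, Name TEXT, Address TEXT)")
con.execute("INSERT INTO Accounts VALUES (2, 'Rick', '1 Main St')")

# Prepare, bind the MemberNo, and fetch the row; no callback needed.
cur = con.execute("SELECT Name, Address FROM Accounts WHERE MemberNo = ?", (2,))
name, address = cur.fetchone()  # each fetch corresponds to one sqlite3_step()
print(name, address)
```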

Best regards, Thomas

--
www.thlu.de



-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] trigger and schema error

2006-09-13 Thread drh
chetana bhargav <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Wanted to confirm once: I am caching some of the prepared statements in
> memory; now, if someone adds a trigger to another table in the same DB, do we
> still get the schema change error?
>

Yes.
--
D. Richard Hipp   <[EMAIL PROTECTED]>





[sqlite] Retrieving data

2006-09-13 Thread Richard Stern
Hi all.

I'm using sqlite in VC++ 2005.
When I started this I knew nothing about sqlite or indeed SQL at all, so it's
been tough going trying to work out how this all works.

So far I have created a database and a table and added columns and rows
filled with data. But I'm having trouble retrieving that data.

Let's say I have the columns MemberNo, Name and Address.
I want to use a specific MemberNo to retrieve a name and address and store
them in separate variables.

I tried:
sqlite3_exec(AccDataBase,"SELECT Name,Address FROM Accounts WHERE MemberNo =
2;",Callback(cError,10,,),test,);

Now I don't fully understand how the callback part works so I just made the
variables that seemed appropriate and threw them in. I thought the "result"
one was supposed to get filled by the result of the SELECT, but it wasn't.

When I ran this, no error was returned but the callback didn't seem to do
anything.

So is this the correct command to use? Is there a better/easier way?

Rick






[sqlite] trigger and schema error

2006-09-13 Thread chetana bhargav
Hi,
   
  Wanted to confirm once: I am caching some of the prepared statements in
memory; now, if someone adds a trigger to another table in the same DB, do we
still get the schema change error?
   
  -Chetan



Re: [sqlite] Very wide tables and performance

2006-09-13 Thread jose simas

Thanks for your answers! I am very happy with SQLite as it is; I was
just wondering if I could improve it for this case.

I am using tables with this configuration for performance reasons. I
have to support an indeterminate number of columns (user data) and a
"normal" design is not as fast as this solution. I can't remember the
results of the tests right now but the differences in loading data
into the database and reading it to memory were very large.

Thanks,
Jose

On 9/13/06, Dennis Cote <[EMAIL PROTECTED]> wrote:

jose simas wrote:
> My application uses SQLite as its file format and up to two of the
> tables can have several thousand columns (up to 20 or 30 thousand at
> times).
>
> When I open a connection there's a noticeable pause (around one second
> on a file with a table of 7,000 columns, for example). There is also a
> noticeable delay in sorting them by the primary key.
>
> Is there anything I can do to favour this kind of table?
>
Jose,

What can you possibly be doing with tables that have that many columns?
Are you sure you don't mean 20K-30K rows? In SQL a row corresponds to a
record, and a column corresponds to a field in a record.

If you really mean columns, then your best approach is probably to
redesign your tables to move much of the data into other related tables.
Can you give us some idea of your table schema and how it is used?

There will be little or no benefit to compiling sqlite yourself.

Dennis Cote








Re: [sqlite] Very wide tables and performance

2006-09-13 Thread Dennis Cote

jose simas wrote:

My application uses SQLite as its file format and up to two of the
tables can have several thousand columns (up to 20 or 30 thousand at
times).

When I open a connection there's a noticeable pause (around one second
on a file with a table of 7,000 columns, for example). There is also a
noticeable delay in sorting them by the primary key.

Is there anything I can do to favour this kind of table?


Jose,

What can you possibly be doing with tables that have that many columns? 
Are you sure you don't mean 20K-30K rows? In SQL a row corresponds to a 
record, and a column corresponds to a field in a record.


If you really mean columns, then your best approach is probably to 
redesign your tables to move much of the data into other related tables. 
Can you give us some idea of your table schema and how it is used?


There will be little or no benefit to compiling sqlite yourself.
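One way to follow Dennis's advice when the number of user columns is open-ended is a narrow row/column/value table instead of thousands of physical columns. A sketch with Python's stdlib sqlite3; the table and column names here are illustrative, not from Jose's actual schema.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# One narrow row per (record, column) pair instead of thousands of columns.
con.execute("""CREATE TABLE user_data (
    rec_id INTEGER,          -- which logical record
    col_no INTEGER,          -- which logical column
    value  REAL,
    PRIMARY KEY (rec_id, col_no)
)""")

# Store a record with 7000 "columns" as 7000 narrow rows.
rows = [(1, n, float(n)) for n in range(7000)]
con.executemany("INSERT INTO user_data VALUES (?, ?, ?)", rows)

# Read the record back in column order.
values = [v for (v,) in con.execute(
    "SELECT value FROM user_data WHERE rec_id = ? ORDER BY col_no", (1,))]
print(len(values))
```

The parser then only ever sees a three-column table, which avoids the quadratic-in-columns behavior described elsewhere in this thread.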

Dennis Cote




Re: [sqlite] Very wide tables and performance

2006-09-13 Thread drh
"jose simas" <[EMAIL PROTECTED]> wrote:
> 
> When I open a connection there's a noticeable pause (around one second
> on a file with a table of 7,000 columns, for example). There is also a
> noticeable delay in sorting them by the primary key.
> 

While there is no limit to the number of columns in an SQLite
table (other than the ability of a 32-bit integer to count them)
there are a number of places in the parser where performance
is quadratic in the number of columns in the table.  These places
could be rewritten to work in expected linear time, but I made a
deliberate design decision to keep the code simple and fast for 
the very common case of a reasonable number of table columns.

Your solution, of course, is to refactor your design to use
tables with fewer columns.  Not only will this help SQLite to
run much faster, it will also likely make your program easier
to read and maintain.
--
D. Richard Hipp   <[EMAIL PROTECTED]>





[sqlite] Very wide tables and performance

2006-09-13 Thread jose simas

Hi,

My application uses SQLite as its file format and up to two of the
tables can have several thousand columns (up to 20 or 30 thousand at
times).

When I open a connection there's a noticeable pause (around one second
on a file with a table of 7,000 columns, for example). There is also a
noticeable delay in sorting them by the primary key.

Is there anything I can do to favour this kind of table? I am using
the version 3 dll and accessing it through C++. If it improves
performance I can go all the way and compile SQLite myself.

Thanks
Jose

http://www.GenXproTools.com/




Re: [sqlite] Inserting Values in Bulk

2006-09-13 Thread Rich Shepard

On Wed, 13 Sep 2006, Pablo Santacruz wrote:


Try this; it may suit you.


  Thank you, Pablo. I think that it is OK now; I'm in the process of testing
the method by working within pycrust. I've made the input data into an array
of lists rather than tuples, and that will also work.

Rich

--
Richard B. Shepard, Ph.D.   |The Environmental Permitting
Applied Ecosystem Services, Inc.(TM)|Accelerator
 Voice: 503-667-4517  Fax: 503-667-8863




Re: [sqlite] Inserting Values in Bulk

2006-09-13 Thread Pablo Santacruz

Try this; it may suit you.
(I'm using pysqlite from trac)

from pysqlite2 import dbapi2 as sqlite
import csv

con = sqlite.connect("mydb")
cursor = con.cursor()
reader = csv.reader(open("some.csv", "rb"))
qmarks = "?," * 30 + "?"

#1
for row in reader: #row is a tuple
    cursor.execute("insert into pr values (" + qmarks + ")", row)

#Alternative to #1 (instead of iterating)
#cursor.executemany("insert into pr values (" + qmarks + ")", reader)




On 9/12/06, Dennis Cote <[EMAIL PROTECTED]> wrote:


Rich Shepard wrote:
>
>   Yes, 'row' is the string of values.
>
>> Note, you have only got 4 columns in your column list, so you will
>> get an
>> error if you feed in 31 columns of data.
>
>   I've corrected this already.
>
>> You may need to extract the relevant columns using split to separate
>> your
>> string at the commas, collecting the data for the columns you need, and
>> then reassembling only those columns into a new string of row data.
>
>   This is what I thought that I did for the first three fields (a
> sequential
> record ID that relates the digital record to a paper form), and the
> two text
> fields. Those are inserted in individual statements. I wasn't at all
> clear
> about the rest of the values; that's why I asked.
>
>   The statement now reads:
>
> # now get the pair-wise vote values for rows 7-34, bytes 12-67, with
> dictionary lookups
> pw = split_line[7:67:2]
> self.cur.execute("insert into voting(pr1, pr2, pr3, pr4, \
> pr5, pr6, pr7, pr8, pr9, pr10, pr11, pr12, pr13, pr14, pr15, \
> pr16, pr17, pr18, pr19, pr20, pr21, pr22, pr23, pr24, pr25, \
> pr26, pr27, pr28) values (DATA_MAP_7(pw))")
>
>   If I understand your revision, I need to change the end of the above
> statement to values (DATA_MAP_7(" + pw + "). Does this look better?
>
>
Rich,

I'm still not sure about exactly what you are trying to do, but what I
was suggesting was building your SQL insert statement as a string in
python by concatenating the constant part of the statement (the insert
keyword, the table name , and the list of columns) with the variable
part of the statement (the data for each row in your table). The +
operator is used to concatenate strings into larger strings in python.
Then passing your complete SQL statement to the execute method.

This was a suggestion of a quick and dirty way to get your data into a
table since you said you already had the data for the row in a string
variable, with columns separated by commas, and text fields delimited by
quotes.

If that is not the case, there are better ways to put data into a table.

First, it is generally frowned upon to have repeated data in columns of a
relational database. These columns should be normalized into another
table and related to the associated record in the existing table using
its  record id  (hence the name relational database). Instead of a table
like this:

create table votes (
id integer primary key,
cat text,
pos text,
pr1 float,
pr2 float,
pr3 float,
...
pr28 float
)

You should normalize the floats into a separate table. This table will
have two columns if the order of the prn fields does not matter, or
three if it does.

create table pr(
vote_id integer references votes(id),
pr float
)

or

create table pr(
vote_id integer references votes(id),
n integer,
pr float
)

And your votes table will have only the first three columns.

create table votes (
id integer primary key,
cat text,
pos text
)

Now when you insert a record into the votes table, you will also insert
28 records into the pr table. Each of these 28 records will have the
same value for the vote_id, the value of the id just inserted into the
votes table. If the order of the pr fields is important then you need to
track the order in the pr table by setting the column n to match the pr
field number as each pr value is inserted.

The following sample shows how to do this for a single CSV data row in a
string variable row.


def dequote(s):
    if s.startswith('"') and s.endswith('"'):
        s = s[1:-1]
        s = s.replace('""', '"')
    return s

# separate fields and dequote strings from CSV row
fields = row.split(',')
fields = map(dequote, fields)
# get id of this row
id = int(fields[0])
# insert the row into the main table
cur.execute("insert into votes values(?, ?, ?)", (id, fields[1],
fields[2]))
# insert the repeated pr fields into the associated pr table
for n in range(28):
    cur.execute("insert into pr values(?, ?, ?)",
                (id, n+1, float(fields[n+3])))


Even if you don't normalize your table this sample should show how to
use parameters (the question marks in the SQL) to make your SQL clearer
and safer (and due to statement caching in pysqlite it should also
perform faster).

HTH
Dennis Cote







Re: [sqlite] no TRUNCATE

2006-09-13 Thread Christian Smith

P Kishor uttered:


looking at the SQL supported by SQLite, I see no mention of TRUNCATE.
Not in the SQL clauses supported, nor in the clauses omitted. Is
TRUNCATE a weird beast?




I don't think it's standard SQL; at least not SQL92, which SQLite aims to
implement.


TRUNCATE is an optimised version of DELETE for tables, which dispenses 
with deleting row by row. SQLite already optimises this case anyway, by 
DROPping a table and recreating it when a DELETE with no WHERE clause is 
executed.
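The optimised case Christian describes is simply an unqualified DELETE; a quick sketch with Python's stdlib sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

# SQLite's equivalent of TRUNCATE: a DELETE with no WHERE clause.
con.execute("DELETE FROM t")
count = con.execute("SELECT count(*) FROM t").fetchone()[0]
print(count)
```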


Christian


--
/"\
\ /ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
 X   - AGAINST MS ATTACHMENTS
/ \




[sqlite] no TRUNCATE

2006-09-13 Thread P Kishor

looking at the SQL supported by SQLite, I see no mention of TRUNCATE.
Not in the SQL clauses supported, nor in the clauses omitted. Is
TRUNCATE a weird beast?

--
Puneet Kishor http://punkish.eidesis.org/
Nelson Inst. for Env. Studies, UW-Madison http://www.ies.wisc.edu/
Open Source Geospatial Foundation https://edu.osgeo.org/




Re: [sqlite] strange behavior of "<" and ">" operator

2006-09-13 Thread Andrea Federico Grisotto



postgres=# SELECT * from np WHERE (42>=fromId) AND (42<=toId);
 fromid | toid | serviceid
--------+------+-----------
     40 |   43 |         3
(1 row)



Please forget it, this was an error.






Re: [sqlite] strange behavior of "<" and ">" operator

2006-09-13 Thread davep
>> You will be told MySQL doesn't set the standard :-)
> the same query in postgresql has the same result as MySQL.
And nor does Postgresql.

> MySQL and PostgreSQL have the same behavior, and sqlite is different.
>
> What do you think about this?
> Where am I mistaken?

You are making a mistake in thinking that MySQL and Postgresql are 'right'
and sqlite is wrong. There are no strict rules governing this and sqlite
defines quite clearly in its documentation the comparison operator rules
for fields declared as integer, or string type.

If you want numeric/integer comparison in sqlite, you need to define the
field as a numeric/integer type.

MySQL and PostgreSQL both do a numeric comparison when the field content is
detected to be numeric, before falling back to a string comparison.

sqlite has "stricter" typing rules.

Dave.




Re: [sqlite] strange behavior of "<" and ">" operator

2006-09-13 Thread Andrea Federico Grisotto

[EMAIL PROTECTED] wrote:

  fromIdVARCHAR(6),
  toId  VARCHAR(6),
  

Excuse me, this was a stupid question:
*fromId* and *toId* are VARCHAR; if I use *int* the behavior is correct.
Excuse me again, but the difference in behavior between sqlite and
MySQL remains.



You will be told MySQL doesn't set the standard :-)

The same query in PostgreSQL has the same result as MySQL.


postgres=# SELECT * from np WHERE (42>=fromId) AND (42<=toId);
 fromid | toid | serviceid
--------+------+-----------
     40 |   43 |         3
(1 row)



MySQL and PostgreSQL have the same behavior, and sqlite is different.

What do you think about this?
Where am I mistaken?

bye Andrea.



Re: [sqlite] reg: BLOB data

2006-09-13 Thread Gerry Snyder

sandhya wrote:

Hi,
I created a table with fields filename and value, of type TEXT and BLOB
respectively. I stored a file and checked from the command prompt with the
sqlite3 exe whether it exists. It is displayed, but the problem is that I can
see the whole content of the file I stored, whereas I thought it would be in
the form of some OIDs.
  
The OID is there, but is not shown in a query by default. Try something 
like:


select OID,* from tablename;
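For the follow-up question about a better way to store many files, binding the content as a parameter keeps the bytes out of the SQL text. A sketch with Python's stdlib sqlite3; the filename and content below are made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE files (filename TEXT, content BLOB)")

# Normally this would be open(path, 'rb').read(); made-up bytes here.
data = b"\x89PNG...pretend file bytes..."
con.execute("INSERT INTO files VALUES (?, ?)", ("logo.png", data))

# Retrieve by name; the shell shows raw bytes, but the OID/rowid is still there.
blob, = con.execute(
    "SELECT content FROM files WHERE filename = ?", ("logo.png",)).fetchone()
print(len(blob))
```

For thousands of files, the same pattern works with executemany inside a single transaction.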


Hope this helps,

Gerry





Re: [sqlite] strange behavior of "<" and ">" operator

2006-09-13 Thread Mario Frasca

Andrea Federico Grisotto wrote:


*fromId* and *toId* are VARCHAR; if I use *int* the behavior is correct.
Excuse me again, but the difference in behavior between sqlite
and MySQL remains.


just food for thought: when you're asking an ambiguous question (like in 
the case of comparing apples with potatoes) the answer is a question of 
interpretation.


since you have declared fromId and toId to be VARCHAR, it is correct to 
compare them with other VARCHAR values, like here...


SELECT * from np WHERE ('42'>=fromId) AND ('42'<=toId);

in this case you are clearly stating what you want and you get the same 
answer from both engines.


in the expression (42 >= fromId) mysql converts fromId to integer and 
performs an integer comparison, while sqlite3 converts the 42 to string 
and performs a string comparison.  as said, a question of interpretation.


hth,
MF




Re: [sqlite] strange behavior of "<" and ">" operator

2006-09-13 Thread davep
>
>>   fromIdVARCHAR(6),
>>   toId  VARCHAR(6),
>
> excuse me, this was a stupid question:
> *fromId* and *toId* are VARCHAR; if I use *int* the behavior is correct.
> Excuse me again, but the difference in behavior between sqlite and
> MySQL remains.

You will be told MySQL doesn't set the standard :-)

Dave.




Re: [sqlite] strange behavior of "<" and ">" operator

2006-09-13 Thread Andrea Federico Grisotto



  fromIdVARCHAR(6),
  toId  VARCHAR(6),


excuse me, this was a stupid question:
*fromId* and *toId* are VARCHAR; if I use *int* the behavior is correct.
Excuse me again, but the difference in behavior between sqlite and
MySQL remains.


bye Andrea.




[sqlite] Re: strange behavior of "<" and ">" operator

2006-09-13 Thread Igor Tandetnik

Andrea Federico Grisotto
<[EMAIL PROTECTED]> wrote:

CREATE TABLE np (
  fromIdVARCHAR(6),
  toId  VARCHAR(6),
  serviceId INTEGER,
  PRIMARY KEY (fromId, toId)
);

if I run this query in *sqlite version 3.3.6* I obtain (two rows):


sqlite> SELECT * from np WHERE (42>=fromId) AND (42<=toId);
fromId      toId        serviceId
----------  ----------  ----------
0           5           2
40          43          3


I believe this is what happens. You've declared the two columns to have 
TEXT affinity (see http://sqlite.org/datatype3.html), so comparisons are 
done in alphabetical order, not in numeric order. When compared as 
strings, '0' < '42' and '5' > '42'.


If you want fromId and toId to be treated as integers, why don't you 
declare them as INTEGER?
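Igor's explanation is easy to verify. The sketch below, using Python's stdlib sqlite3, reproduces both results and shows a CAST as one way to force a numeric comparison without changing the column declarations:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE np (
    fromId VARCHAR(6), toId VARCHAR(6), serviceId INTEGER,
    PRIMARY KEY (fromId, toId))""")
con.executemany("INSERT INTO np VALUES (?, ?, ?)",
                [('0', '5', 2), ('40', '43', 3),
                 ('440', '499', 3), ('500', '599', 1)])

# TEXT affinity: 42 is compared as '42', so '0' <= '42' <= '5' also matches.
as_text = con.execute(
    "SELECT * FROM np WHERE 42 >= fromId AND 42 <= toId").fetchall()

# Forcing an integer comparison yields the single expected row.
as_int = con.execute(
    """SELECT * FROM np WHERE 42 >= CAST(fromId AS INTEGER)
                           AND 42 <= CAST(toId AS INTEGER)""").fetchall()
print(len(as_text), len(as_int))
```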


Igor Tandetnik 






[sqlite] strange behavior of "<" and ">" operator

2006-09-13 Thread Andrea Federico Grisotto

I have a table like this:

CREATE TABLE np (
  fromIdVARCHAR(6),
  toId  VARCHAR(6),
  serviceId INTEGER,
  PRIMARY KEY (fromId, toId)
);

and I insert these values:

INSERT INTO np VALUES (  0,  5, 2);
INSERT INTO np VALUES ( 40, 43, 3);
INSERT INTO np VALUES (440,499, 3);
INSERT INTO np VALUES (500,599, 1);

I want to list only the rows for which a particular value, for example *42*,
lies between the min bound "fromId" and the max bound "toId".

for example:
if I run this query in *sqlite version 3.3.6* I obtain (two rows):


sqlite> SELECT * from np WHERE (42>=fromId) AND (42<=toId);
fromId      toId        serviceId
----------  ----------  ----------
0           5           2
40          43          3


and if I run the same query in Mysql I obtain (one row):


mysql> SELECT * from np WHERE (42>=fromId) AND (42<=toId);
+--------+------+-----------+
| fromId | toId | serviceId |
+--------+------+-----------+
| 40     | 43   | 3         |
+--------+------+-----------+
1 row in set (0.03 sec)

this is correct because  *40 <= 42 <= 43*

I get the same result using the *BETWEEN* operator.


Can you help explain this (for me) "strange"
behavior?


bye Andrea.



[sqlite] reg: BLOB data

2006-09-13 Thread sandhya
Hi,
I created a table with fields filename and value, of type TEXT and BLOB
respectively. I stored a file and checked from the command prompt with the
sqlite3 exe whether it exists. It is displayed, but the problem is that I can
see the whole content of the file I stored, whereas I thought it would be in
the form of some OIDs.
Is there a format in which files get stored in the SQLite DB, or is storing
them along with their content the only way? Please tell me.
Also, what will happen if I have to store a huge number of files, say 3000,
at a time?
Please suggest a better way of doing it if there is one.

Thank you
Regards
Sandhya R






Re: [sqlite] HexValues

2006-09-13 Thread Vivien Malerba

On 9/12/06, guy12 <[EMAIL PROTECTED]> wrote:


hi,

I need to insert hex values into the database, and after that I have to select
them again...

What's the best way of doing that?

I first used INTEGER for that, but I think that is the wrong way...

P.S. I need to access the db through my C code


You can use the notation described at
http://www.sqlite.org/datatype3.html for the BLOB data type.
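The notation Vivien points to includes the X'...' hexadecimal blob literal; a quick sketch with Python's stdlib sqlite3 (in C code the value would come back via sqlite3_column_blob()):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE hexes (v BLOB)")

# X'...' is SQLite's hexadecimal blob literal.
con.execute("INSERT INTO hexes VALUES (X'DEADBEEF')")
value, = con.execute("SELECT v FROM hexes").fetchone()
print(value.hex())
```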

Cheers,

Vivien
