RE: [sqlite] retrieving data from multiple tables
I have four tables, say Mytable1, Mytable2, Mytable3, Mytable4. Mytable1 and Mytable2 have one field in common, the rest all different; Mytable2 and Mytable3 have one field in common, the rest all different; Mytable3 and Mytable4 have one field in common, the rest all different. I have to select from the four tables by matching on all these shared fields.

-Original Message-
From: T&B [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, March 14, 2007 12:44 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] retrieving data from multiple tables

Hi Kirrthana,

> I have created a database with four tables. I have to search and
> retrieve data from all four tables based on the entry I get from the
> previous table and display all the entries. Could anybody tell me how
> to do it?

I'm not sure what you mean.

If you mean you have four tables with similar fields, and you want to SELECT data from all of them at once, then you can create a dynamic compound table that will contain all the data, and SELECT from that. eg:

CREATE VIEW MyTables AS
SELECT * FROM MyTable1
UNION ALL SELECT * FROM MyTable2
UNION ALL SELECT * FROM MyTable3
UNION ALL SELECT * FROM MyTable4;

SELECT * FROM MyTables WHERE ;

Or, if you mean that the result of SELECTing in one table becomes the basis of the SELECT in the next, that is something like the sequence:

value1 = SELECT ResultField1 FROM MyTable1 WHERE SearchField1 = ;
value2 = SELECT ResultField2 FROM MyTable2 WHERE SearchField2 = ResultField1;
value3 = SELECT ResultField3 FROM MyTable3 WHERE SearchField3 = ResultField2;
value4 = SELECT ResultField4 FROM MyTable4 WHERE SearchField4 = ResultField3;

Then I think you could do it in one action by:

SELECT ResultField4
FROM MyTable1
LEFT JOIN MyTable2 ON SearchField2 = ResultField1
LEFT JOIN MyTable3 ON SearchField3 = ResultField2
LEFT JOIN MyTable4 ON SearchField4 = ResultField3
WHERE SearchField1 = ;

Tom

- To unsubscribe, send email to [EMAIL PROTECTED] -
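Kirrthana's four-table chain (each adjacent pair of tables sharing exactly one field) maps directly onto Tom's join suggestion. A minimal runnable sketch follows, using Python's sqlite3 purely as a harness; the table and column names (MyTable1..MyTable4, a/b/c/d/e) are hypothetical stand-ins, since the real field names weren't given:

```python
import sqlite3

# Hypothetical schema: each adjacent pair of tables shares one column.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE MyTable1(a, b);   -- shares b with MyTable2
CREATE TABLE MyTable2(b, c);   -- shares c with MyTable3
CREATE TABLE MyTable3(c, d);   -- shares d with MyTable4
CREATE TABLE MyTable4(d, e);
INSERT INTO MyTable1 VALUES('start', 1);
INSERT INTO MyTable2 VALUES(1, 2);
INSERT INTO MyTable3 VALUES(2, 3);
INSERT INTO MyTable4 VALUES(3, 'end');
""")

# Walk the chain in one statement by joining each table to the previous
# one on the shared field.
row = con.execute("""
    SELECT t4.e
      FROM MyTable1 AS t1
      JOIN MyTable2 AS t2 ON t2.b = t1.b
      JOIN MyTable3 AS t3 ON t3.c = t2.c
      JOIN MyTable4 AS t4 ON t4.d = t3.d
     WHERE t1.a = 'start'
""").fetchone()
print(row[0])  # end
```

Inner JOINs are used here because every step of the chain is expected to match; Tom's LEFT JOINs would additionally return a row (with NULLs) when a link is missing.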
Re: [sqlite] retrieving data from multiple tables
Hi Kirrthana,

> I have created a database with four tables. I have to search and
> retrieve data from all four tables based on the entry I get from the
> previous table and display all the entries. Could anybody tell me how
> to do it?

I'm not sure what you mean.

If you mean you have four tables with similar fields, and you want to SELECT data from all of them at once, then you can create a dynamic compound table that will contain all the data, and SELECT from that. eg:

CREATE VIEW MyTables AS
SELECT * FROM MyTable1
UNION ALL SELECT * FROM MyTable2
UNION ALL SELECT * FROM MyTable3
UNION ALL SELECT * FROM MyTable4;

SELECT * FROM MyTables WHERE ;

Or, if you mean that the result of SELECTing in one table becomes the basis of the SELECT in the next, that is something like the sequence:

value1 = SELECT ResultField1 FROM MyTable1 WHERE SearchField1 = ;
value2 = SELECT ResultField2 FROM MyTable2 WHERE SearchField2 = ResultField1;
value3 = SELECT ResultField3 FROM MyTable3 WHERE SearchField3 = ResultField2;
value4 = SELECT ResultField4 FROM MyTable4 WHERE SearchField4 = ResultField3;

Then I think you could do it in one action by:

SELECT ResultField4
FROM MyTable1
LEFT JOIN MyTable2 ON SearchField2 = ResultField1
LEFT JOIN MyTable3 ON SearchField3 = ResultField2
LEFT JOIN MyTable4 ON SearchField4 = ResultField3
WHERE SearchField1 = ;

Tom
[sqlite] retrieving data from multiple tables
Hi all,

I have created a database with four tables. I have to search and retrieve data from all four tables based on the entry I get from the previous table and display all the entries. Could anybody tell me how to do it?

Regards
Kirrthana
[sqlite] Does anybody can port sqlite3-3.3.7 to uClinux based-ARModulator?
Hi guys,

My setup: uClinux-dist-2003 with a linux-2.4.x kernel, SQLite 3.3.7, GDB 5.0 as the debugger, running on the GDB ARMulator. The compile process completed without problems, but at runtime I received "illegal write to 0x54fd0 of 0x11affb4".

[EMAIL PROTECTED] uClinux-dist-2003]# arm-elf-gdb linux-2.4.x/linux
GNU gdb 5.0
Copyright 2000 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "--host=i686-pc-linux-gnu --target=arm-elf"...
(gdb) target sim
Loaded ROM boot.rom
Loaded ROM boot.rom
Loaded ROM boot.rom
Loaded ROM boot.rom
Connected to the simulator.
(gdb) load
Loading section .init, size 0xa000 vma 0x100
Loading section .text, size 0xd0180 vma 0x100a000
Loading section .data, size 0x9154 vma 0x10dc000
Start address 0x100
Transfer rate: 7444128 bits in <1 sec.
(gdb) run
Starting program: /mnt/sdb/Parcket/uClinux-dist-2003/linux-2.4.x/linux
Linux version 2.4.20-uc0 ([EMAIL PROTECTED]) (gcc version 2.95.3 20010315 (release)(ColdFire patches - 20010318 from http://fiddes.net/coldfire/)(uClinux XIP and shared lib patches from http://www.snapgear.com/)) #1 Wed Mar 14 16:41:17 CST 2007
Processor: Atmel AT91M40xxx revision 0
Architecture: EB01
On node 0 totalpages: 1024
zone(0): 0 pages.
zone(1): 1024 pages.
zone(2): 0 pages.
Kernel command line: root=/dev/rom0
Calibrating delay loop...
12.97 BogoMIPS Memory: 4MB = 4MB total Memory: 2984KB available (832K code, 182K data, 40K init) Dentry cache hash table entries: 512 (order: 0, 4096 bytes) Inode cache hash table entries: 512 (order: 0, 4096 bytes) Mount-cache hash table entries: 512 (order: 0, 4096 bytes) Buffer-cache hash table entries: 1024 (order: 0, 4096 bytes) Page-cache hash table entries: 1024 (order: 0, 4096 bytes) POSIX conformance testing by UNIFIX Linux NET4.0 for Linux 2.4 Based upon Swansea University Computer Society NET3.039 Initializing RT netlink socket Starting kswapd Atmel USART driver version 0.99 ttyS0 at 0xfffd (irq = 2) is a builtin Atmel APB USART ttyS1 at 0xfffcc000 (irq = 3) is a builtin Atmel APB USART Blkmem copyright 1998,1999 D. Jeff Dionne Blkmem copyright 1998 Kenneth Albanowski Blkmem 1 disk images: 0: 140-155D3FF [VIRTUAL 140-155D3FF] (RO) RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize SkyEye NE2k Ethernet driver version 0.2 (2003-04-27) sene2k dev name: eth0: <6>NET4: Linux TCP/IP 1.0 for NET4.0 IP Protocols: ICMP, UDP, TCP IP: routing cache hash table of 512 buckets, 4Kbytes TCP: Hash tables configured (established 512 bind 512) NET4: Unix domain sockets 1.0/SMP for Linux NET4.0. VFS: Mounted root (romfs filesystem) readonly. 
Shell invoked to run file: /etc/rc
Command: #hostname GDB-ARMulator
Command: #/bin/expand /etc/ramfs.img /dev/ram0
Command: #mount -t proc proc /proc
Command: #mount -t ext2 /dev/ram0 /var
Command: #mkdir /var/tmp
Command: #mkdir /var/log
Command: #mkdir /var/run
Command: #mkdir /var/lock
Command: #cat /etc/motd
Command: #/bin/ifconfig eth0 up 10.0.0.2
Command:
Command: hostname GDB-ARMulator
Command: /bin/expand /etc/ramfs.img /dev/ram1
Command: mount -t proc proc /proc
mount: /etc/mtab: Read-only file system
Command: mount -t ext2 /dev/ram1 /var
mount: /etc/mtab: Read-only file system
Command: mkdir /var/tmp
Command: mkdir /var/log
Command: mkdir /var/run
Command: mkdir /var/lock
Command: mkdir /var/empty
Command: cat /etc/motd
Welcome to
[uClinux ASCII-art logo]
GDB/ARMulator support by <[EMAIL PROTECTED]>
For further information check:
http://www.uclinux.org/
Command:
Execution Finished, Exiting

Sash command shell (version 1.1.1)
/> cd bin
/bin> sqlite3
illegal write to 0x54fd0 of 0x11affb4

Program received signal SIGINT, Interrupt.
illegal read from 0x214e04d0
illegal read from 0x214e04d1
illegal read from 0x214e04d2
illegal read from 0x214e04d3
0x14e0524 in ?? ()
(gdb)
Re: [sqlite] Pragma table_info(), why no fields like UNIQUE, AUTOINCREMENT
Hi,

- Original Message -
From: "Stef Mientki" <[EMAIL PROTECTED]>
To:
Sent: Tuesday, March 13, 2007 3:47 PM
Subject: Re: [sqlite] Pragma table_info(), why no fields like UNIQUE, AUTOINCREMENT

> > You should also consider how your change might affect backwards
> > compatibility. The last time that table_info() was modified, the
> > Ruby-On-Rails community got really upset. I'm rather of a mind to
> > leave table_info() alone.
>
> Forgive my ignorance, I'm just a beginner in databases, but what about
> TABLE_INFO2()? (With the explicit restriction that it can be extended
> in the future when needed, so Ruby users leave it alone ;-) I really
> can't imagine that extending a function can cause serious
> compatibility issues.)
>
> The alternative now is that I have to:
> - build a table from table_info()
> - query sqlite_master to get the SQL string with which the table was generated
> - parse the SQL string from sqlite_master
> - and add it to my edit grid from table_info
> Or are there simpler ways?
>
> -- cheers, Stef Mientki http://pic.flappie.nl

A small opinion on that matter: what I would really like to see is something like system tables. Today sqlite uses only sqlite_master to keep information about its objects, and parsing is required to get better info on each object (if one needs to). Using other system tables to keep information about each object seems very appropriate, since most RDBMSs already implement that and it is compatible with ANSI SQL. I think it shouldn't be much of a problem, since the sqlite engine already parses each object when it opens a database. This would remove the need for PRAGMA commands and would make life much simpler. Just a thought.

Best Regards,
Crercio O. Silva / DBTools Software
http://www.dbtools.com.br
Re: [sqlite] MingW32 help wanted
Scott Hess wrote:
> I took Makefile.linux-gcc, and made the obvious changes (there are
> mingw lines all over in there). EXE = .exe, SO = dll, SHPREFIX = ,
> [that was nothing for that setting], and TCC, AR, and RANLIB set to
> the path to the appropriate commands from mingw.
>
> I should warn that I haven't actually built using mingw for maybe 9
> months! This is just going off of the Makefile that I still have
> lying around.

Thanks Scott! With a little bit of creativity I managed to get it all going! :)

Stefan
Re: [sqlite] MingW32 help wanted
I took Makefile.linux-gcc, and made the obvious changes (there are mingw lines all over in there). EXE = .exe, SO = dll, SHPREFIX = , [that was nothing for that setting], and TCC, AR, and RANLIB set to the path to the appropriate commands from mingw.

I should warn that I haven't actually built using mingw for maybe 9 months! This is just going off of the Makefile that I still have lying around.

-scott

On 3/13/07, Stefan de Konink <[EMAIL PROTECTED]> wrote:
> Mike Frysinger wrote:
> > On Tuesday 13 March 2007, Stefan de Konink wrote:
> >> My target is to get sqlite compiled on my host platform i686 and my
> >> target platform mingw32. Could someone give a step-by-step manual to
> >> get the 'precious' dll/.a?
> >
> > well, this could be autotoolized and force people to do
> > ./configure --host=mingw32
> > i do that now with a library of my own ... but i cross-compile
> > everything from my Linux host ;)
> > -mike
>
> Ok, that doesn't work. Because:
> - lemon needs to run on the host system.
> - os_unix doesn't compile because it has undefined 'locks'.
> - removing os_unix leads to unresolved symbols in a lot of files when
>   making the dll.
>
> So a step-by-step tutorial would be nice :)
>
> What I did:
> ./configure --prefix=/opt/gtk
> Adapted the Makefile, replaced TCC, and added a --tag to libtool
> (because it otherwise complains).
> make: everything is ok until os_unix...
>
> Stefan
Re: [sqlite] MingW32 help wanted
Mike Frysinger wrote:
> On Tuesday 13 March 2007, Stefan de Konink wrote:
> > My target is to get sqlite compiled on my host platform i686 and my
> > target platform mingw32. Could someone give a step-by-step manual to
> > get the 'precious' dll/.a?
>
> well, this could be autotoolized and force people to do
> ./configure --host=mingw32
> i do that now with a library of my own ... but i cross-compile
> everything from my Linux host ;)
> -mike

Ok, that doesn't work. Because:
- lemon needs to run on the host system.
- os_unix doesn't compile because it has undefined 'locks'.
- removing os_unix leads to unresolved symbols in a lot of files when making the dll.

So a step-by-step tutorial would be nice :)

What I did:
./configure --prefix=/opt/gtk
Adapted the Makefile, replaced TCC, and added a --tag to libtool (because it otherwise complains).
make: everything is ok until os_unix...

Stefan
Re: [sqlite] MingW32 help wanted
On Tuesday 13 March 2007, Stefan de Konink wrote:
> My target is to get sqlite compiled on my host platform i686 and my
> target platform mingw32. Could someone get a step by step manual to get
> the 'precious' dll/.a?

well, this could be autotoolized and force people to do
./configure --host=mingw32
i do that now with a library of my own ... but i cross-compile everything from my Linux host ;)
-mike
[sqlite] MingW32 help wanted
Hello,

I have found this post:
http://marc.10east.com/?l=sqlite-users&m=115689163614925&w=2

My target is to get sqlite compiled on my host platform i686 and my target platform mingw32. Could someone give a step-by-step manual to get the 'precious' dll/.a?

When removing os_unix.c from the Makefile I have everything compiled. But running mkdll.sh doesn't work for me :(

Stefan
Re: [sqlite] Degrouping, desummarizing or integrating headings
Thanks Puneet for your response.

> this is a programming problem and not necessarily a SQLite problem.

Well, I can program it, but I hope that instead there's a way to do it within SQL.

> Nevertheless, you don't specify how you are getting this summary
> data... are they in a database? are they just a text file? Are they XML?

It comes as a CSV or tabbed text file, then goes into a table in my database. So, it's in a table in my database, eg:

CREATE TABLE Shopping_Grouped(Aisle TEXT, Product TEXT, Cost REAL);

But the Aisle entries are in records by themselves, and apply to the subsequent records containing Product and Cost, for example:

INSERT INTO Shopping_Grouped(Aisle) VALUES('Dairy');
INSERT INTO Shopping_Grouped(Product, Cost) VALUES('Milk', 2);
INSERT INTO Shopping_Grouped(Product, Cost) VALUES('Cream', 1);
INSERT INTO Shopping_Grouped(Product, Cost) VALUES('Cheese', 3);
INSERT INTO Shopping_Grouped(Aisle) VALUES('Bakery');
INSERT INTO Shopping_Grouped(Product, Cost) VALUES('Sliced', 4);
INSERT INTO Shopping_Grouped(Product, Cost) VALUES('Rolls', 3);
INSERT INTO Shopping_Grouped(Product, Cost) VALUES('Cake', 2);

But I want to get it into this schema:

INSERT INTO Shopping(Aisle, Product, Cost) VALUES('Dairy', 'Milk', 2);
INSERT INTO Shopping(Aisle, Product, Cost) VALUES('Dairy', 'Cream', 1);
INSERT INTO Shopping(Aisle, Product, Cost) VALUES('Dairy', 'Cheese', 3);
INSERT INTO Shopping(Aisle, Product, Cost) VALUES('Bakery', 'Sliced', 4);
INSERT INTO Shopping(Aisle, Product, Cost) VALUES('Bakery', 'Rolls', 3);
INSERT INTO Shopping(Aisle, Product, Cost) VALUES('Bakery', 'Cake', 2);

> You could create a schema out of this normalizing aisles and products
>
> CREATE TABLE aisles (aisle_id, aisle_name);
> CREATE TABLE products (product_id, product_name, product_cost, aisle_id);

It's the "normalizing" that I'm asking how to do, via SQL (ie clever SELECT statements).
> and go from there with
>
> SELECT a.aisle_name, p.product_name, p.product_cost
> FROM products p JOIN aisles a ON p.aisle_id = a.aisle_id

That's the reverse of what I need. The data is already in this final form and I need to "unscramble" it.

> you will have to figure out how to get your source data into the
> SQLite schema you create

It's already in the schema I outlined above. But how to get it into the schema I want is my question.

> but that should be trivial depending on your programming environment.

But is it possible via SQL?

Thanks,
Tom

From: T&B <[EMAIL PROTECTED]>
Date: 14 March 2007 3:07:24 AM
To: sqlite-users@sqlite.org
Subject: [sqlite] Degrouping, desummarizing or integrating headings

Hi All,

I come across a fair bit of source data in summary format, where one field's values are used as a heading for a group of records to which it applies. For instance, this shopping list:

Aisle   Product  Cost
Dairy   Milk     $2
        Cream    $1
        Cheese   $3
Bakery  Sliced   $4
        Rolls    $3
        Cake     $2

How can I select the data from that table of 8 records so that the result is this table of 6 records?:

Aisle   Product  Cost
Dairy   Milk     $2
Dairy   Cream    $1
Dairy   Cheese   $3
Bakery  Sliced   $4
Bakery  Rolls    $3
Bakery  Cake     $2

Thanks,
Tom
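Tom's question — can the fill-down be done in SQL alone — can be answered with a correlated subquery over rowid: for each detail row, look back to the nearest earlier row whose Aisle is set. A sketch (Python's sqlite3 used only as a harness); it assumes the rows were inserted in file order, so that rowid preserves the heading/detail sequence — that assumption is essential:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Shopping_Grouped(Aisle TEXT, Product TEXT, Cost REAL);
INSERT INTO Shopping_Grouped(Aisle) VALUES('Dairy');
INSERT INTO Shopping_Grouped(Product, Cost) VALUES('Milk', 2);
INSERT INTO Shopping_Grouped(Product, Cost) VALUES('Cream', 1);
INSERT INTO Shopping_Grouped(Product, Cost) VALUES('Cheese', 3);
INSERT INTO Shopping_Grouped(Aisle) VALUES('Bakery');
INSERT INTO Shopping_Grouped(Product, Cost) VALUES('Sliced', 4);
INSERT INTO Shopping_Grouped(Product, Cost) VALUES('Rolls', 3);
INSERT INTO Shopping_Grouped(Product, Cost) VALUES('Cake', 2);
""")

# For every Product row, pull the most recent preceding non-NULL Aisle.
rows = con.execute("""
    SELECT (SELECT g2.Aisle
              FROM Shopping_Grouped AS g2
             WHERE g2.rowid <= g1.rowid AND g2.Aisle IS NOT NULL
             ORDER BY g2.rowid DESC LIMIT 1) AS Aisle,
           g1.Product, g1.Cost
      FROM Shopping_Grouped AS g1
     WHERE g1.Product IS NOT NULL
""").fetchall()
for r in rows:
    print(r)
```

The same SELECT wrapped in `INSERT INTO Shopping ... SELECT ...` would populate the degrouped table directly.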
Re: [sqlite] FTS: index only, no text storage - Was: [sqlite] FTS: Custom Tokenizer / Stop Words
On 3/13/07, Ralf Junker <[EMAIL PROTECTED]> wrote:
> Scott Hess wrote:
> > Keeping track of that information would probably double the
> > size of the index.
>
> With your estimate, the SQLite full text index (without document
> storage) would still take up only 50% of the documents' size. In my
> opinion, this is still a very good ratio, even if some specialized
> full text search engines apparently get away with less than 30%.

It's a pretty raw estimate, though, and assumes that we only bother to store the set of terms per document, without retaining ordering. Since it takes X space to store the terms with their list of docids, it should take approximately X space to store the docids with their list of terms, suitably encoded. Actually, a bit of a win could be had because the reverse mapping need not retain the positions, and the termids should encode more tightly than docids, since there are fewer of them.

> I am optimistic that the proper implementation will use even less
> than 50%:

Indeed :-). I've mostly been trying to stay on the positive side of the 80/20 rule, and avoid some of the complications that come along with some of the bigger dedicated systems.

> My modifications are completely rudimentary and not at all optimized -
> the column to store the document text still exists. The only
> difference is that it is not used - it stores a null value which could
> be saved. In fact, the entire FTS table (the one without the suffixes)
> would not be needed and would cut down storage space.

The overhead of the table should be pretty minimal, and it is providing the useful function of docid assignment. A complete implementation might be able to fold docid assignment into the internal data structures, but that might be counterproductive (it's surely useful to have docids assigned in the exact same way as sqlite assigns rowids).

> > A thing I've considered doing is to keep deletions as a special
> > index to the side,
>
> Would this open the door to "insert only, but no-modify and no-delete"
> indexes? I am sure users would like to pay this cost for the benefit
> of even smaller FTS indexes!

You're 90% there for INSERT-only - the above is a notion for how one could handle UPDATE and DELETE without storing the content data.

> > which would allow older data to be deleted during segment merges.
> > Unfortunately, I suspect that this would slow things down by
> > introducing another bit of data which needs to be considered during
> > merges.
>
> I found that _not_ adding the original text turned out to be a great
> time saver. This makes sense if we know that the original text is
> about 4 times the size of the index. Storing lots of text by itself
> is already quite time consuming even without creating a FTS index. So
> I do not expect really bad slowdowns by adding a docid->term index.

Are you doing your inserts in the implied transactions sqlite provides for you if you didn't open an explicit transaction? I've found that when doing bulk inserts, the maintenance of the content table is a pretty small part of the overall time, perhaps 10%.

> Snippets are of course nice to have out of the box as it is right now.
> But even without storing the original text, snippets could be created by
>
> 1. supplying the text through other means (additional parameter or
> callback function), so that not FTS but the application would read it
> from a disk file or decompress it from a database field.
>
> 2. constructing token-only snippets from the document tokens and
> offsets. This would of course exclude all non-word characters, but
> would still return legible information.

A use-case that was considered was indexing PDF data, in which case the per-document tokenization cost would probably be a couple seconds. If you ran a query which matched a couple thousand documents and proceeded to re-tokenize them for snippet generation, you'd be in deep trouble. This is somewhat addressable by providing scoring mechanisms and using subselects (basically, have the subselect order by score, then cap the number of results, and have the main select ask for snippets). A variant on that would be an index of a CD. In that case it's pretty much essential that the index be able to efficiently answer questions without having to seek all over the disk.

Option 2 has some attraction, though, because you have the option of transparently segmenting the document into blocks and thus not having to re-tokenize the entire document to generate snippets. This extends the b-tree characteristics a little further into the data. [Of course, you'd really just store things in a form which made snippets easy to generate without re-tokenizing at all, but that's beside the point.]

> > Being able to have an index without storing the original data was a
> > weak goal when fts1 was being developed, but every time we visited
> > it, we found that the negatives of that approach were substantial
> > enough to discourage us for a time. [The "we" in that sentence means
> > "me and the various people I run wacky ideas past."] I'm keeping an
> > eye out for interesting implementation strategies and the time to
> > explore them, though.
Re: [sqlite] FTS: index only, no text storage - Was: [sqlite] FTS: Custom Tokenizer / Stop Words
Hello Scott,

I was hoping that you would read my message, many thanks for your reply!

> UPDATE and DELETE need to have the previous document text, because the
> docids are embedded in the index, and there is no docid->term index
> (or, put another way, the previous document text _is_ the docid->term
> index).

This is very understandable given the present design.

> Keeping track of that information would probably double the
> size of the index.

With your estimate, the SQLite full text index (without document storage) would still take up only 50% of the documents' size. In my opinion, this is still a very good ratio, even if some specialized full text search engines apparently get away with less than 30%. I think you have done an enormous job on FTS2!

I am optimistic that the proper implementation will use even less than 50%: My modifications are completely rudimentary and not at all optimized - the column to store the document text still exists. The only difference is that it is not used - it stores a null value which could be saved. In fact, the entire FTS table (the one without the suffixes) would not be needed and would cut down storage space.

> A thing I've considered doing is to keep deletions as a special index
> to the side,

Would this open the door to "insert only, but no-modify and no-delete" indexes? I am sure users would like to pay this cost for the benefit of even smaller FTS indexes!

> which would allow older data to be deleted during segment merges.
> Unfortunately, I suspect that this would slow things down by
> introducing another bit of data which needs to be considered during
> merges.

I found that _not_ adding the original text turned out to be a great time saver. This makes sense if we know that the original text is about 4 times the size of the index. Storing lots of text by itself is already quite time consuming even without creating a FTS index. So I do not expect really bad slowdowns by adding a docid->term index.

> Of course, there's no way the current system could generate snippets
> without the original text, because doclists don't record the set of
> adjacent terms. That information could be recorded, but it's doubtful
> that doing so would be an improvement on simply storing the original
> text in the first place. The current system _does_ have everything
> needed to generate the offsets to hits even without the original text,
> so the client application could generate snippets, though the code is
> not currently in place to expose this information.

Snippets are of course nice to have out of the box as it is right now. But even without storing the original text, snippets could be created by

1. supplying the text through other means (additional parameter or callback function), so that not FTS but the application would read it from a disk file or decompress it from a database field.

2. constructing token-only snippets from the document tokens and offsets. This would of course exclude all non-word characters, but would still return legible information.

> Being able to have an index without storing the original data was a
> weak goal when fts1 was being developed, but every time we visited it,
> we found that the negatives of that approach were substantial enough
> to discourage us for a time. [The "we" in that sentence means "me and
> the various people I run wacky ideas past."] I'm keeping an eye out
> for interesting implementation strategies and the time to explore
> them, though.

Maybe my arguments could influence the opinion of "we"? I would love to see FTS without text storage, especially since I just lost a project to another FTS product because duplicating the data was unfortunately "out of disk space".

All the best and keep up your good work,

Ralf
Re: [sqlite] FTS: index only, no text storage - Was: [sqlite] FTS: Custom Tokenizer / Stop Words
Ion Silvestru wrote:
> Just a question: did you eliminate stop-words in your tests?

No, I did not eliminate any stop-words. The two test runs were equal except for the small changes in FTS 2. My stop-words question was not intended for source code but for human-language texts.

Ralf
Re: [sqlite] Pragma table_info(), why no fields like UNIQUE, AUTOINCREMENT
> You should also consider how your change might affect backwards
> compatibility. The last time that table_info() was modified, the
> Ruby-On-Rails community got really upset. I'm rather of a mind to
> leave table_info() alone.

Forgive my ignorance, I'm just a beginner in databases, but what about TABLE_INFO2()? (With the explicit restriction that it can be extended in the future when needed, so Ruby users leave it alone ;-) I really can't imagine that extending a function can cause serious compatibility issues.)

The alternative now is that I have to:
- build a table from table_info()
- query sqlite_master to get the SQL string with which the table was generated
- parse the SQL string from sqlite_master
- and add it to my edit grid from table_info

Or are there simpler ways?

--
cheers,
Stef Mientki
http://pic.flappie.nl
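Stef's workaround steps can be sketched in a few lines (Python's sqlite3 is used here purely for illustration; the table `t` is a hypothetical example). PRAGMA table_info() reports cid, name, type, notnull, default and pk, but flags like UNIQUE and AUTOINCREMENT indeed have to be fished out of the raw CREATE statement stored in sqlite_master:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE t(
    id   INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT UNIQUE NOT NULL DEFAULT 'x')""")

# Step 1: per-column metadata from PRAGMA table_info
cols = list(con.execute("PRAGMA table_info(t)"))
for cid, name, ctype, notnull, dflt, pk in cols:
    print(cid, name, ctype, notnull, dflt, pk)

# Step 2: the original CREATE statement from sqlite_master, which must
# be parsed by hand for UNIQUE / AUTOINCREMENT (a naive substring test
# here; a real parser would be needed for quoted identifiers etc.)
(sql,) = con.execute(
    "SELECT sql FROM sqlite_master WHERE type='table' AND name='t'"
).fetchone()
print('AUTOINCREMENT' in sql, 'UNIQUE' in sql)
```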
[sqlite] LEMON question
I want to know if I can publish here some LEMON PARSER examples that I have done... because as far as I know LEMON is maintained inside the SQLite project.

Best Regards
--
Cesar Rodas
http://www.cesarodas.com/
Mobile Phone: 595 961 974165
Phone: 595 21 645590
[EMAIL PROTECTED]
[EMAIL PROTECTED]
Re: [sqlite] date/time implementation question
Rafi Cohen wrote:
> Hi, I hope this question is not off-topic for this list. I'm
> receiving, occasionally, a .csv file from a source which I need to
> process according to some criteria and either insert, delete or update
> its rows into a database. One of its criteria is date comparison. In
> other words, proceeding differently when the date/time mentioned on a
> specific row is a past date or future date compared with the current
> date/time. My problem is the format in which I receive the date and
> time: "HH:MM:SS, --mm-dd". This means separate fields, one for time
> and one for date. If the format was "-mm-dd HH:MM:SS", I could call
> strptime to store this in a struct tm, then call mktime to get this in
> a time_t variable and then call difftime with the current date/time to
> make the comparison (I'm using C as the programming language on
> Linux). So, my question is: should I unify the 2 strings into one to
> obtain the second format and then proceed as I explained above, or is
> there a way to compare the date with the current date separately and
> the time with the current time separately in case the dates are equal?
> If the conclusion will be to unify the strings into one, should I
> create a table with a single date/time column or still keep the 2
> fields separately in my sql table?

Rafi,

It depends. :-) It depends on what you think is more important: execution speed, database size, simplicity of coding, etc...

Assuming the date format you gave has a typo and there is really only one '-' between the year and month in your existing fields, you should be able to do your date comparisons in sqlite. Use concatenation to build a single date and time string. With your date and time in ISO format you can then use string comparisons as date comparisons:

select
  case when (date || ' ' || time) < ?limit_date
    then field_one
    else field_two
  end
from my_table

HTH
Dennis Cote
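Dennis's trick works because ISO-8601 strings sort the same way the dates they encode do, so plain string comparison is a date comparison. A small runnable sketch of his CASE expression (Python's sqlite3 as a harness; the table, column and sample values are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE my_table(date TEXT, time TEXT, field_one TEXT, field_two TEXT);
INSERT INTO my_table VALUES('2007-03-10', '09:00:00', 'past', 'future');
INSERT INTO my_table VALUES('2007-03-20', '09:00:00', 'past', 'future');
""")

# Concatenate the separate date and time columns into one ISO string and
# compare it, as a string, against the cutoff.
limit = '2007-03-14 12:00:00'
rows = con.execute("""
    SELECT CASE WHEN (date || ' ' || time) < ?
                THEN field_one ELSE field_two END
      FROM my_table
""", (limit,)).fetchall()
print([r[0] for r in rows])  # ['past', 'future']
```

The same comparison works in a WHERE clause, and SQLite's date/time functions accept the concatenated "YYYY-MM-DD HH:MM:SS" form directly.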
[sqlite] date/time implementation question
Hi, I hope this question is not off-topic for this list. I'm recieveing, occasionally, a .csv file from a source which I need to process accordcing to some criteria and either insert, delet or update it's rows into a database. One of it's criterias is date comparison. In other words, proceeding differently when the date/time mentioned on a specific row is a past date or future date comparing with the current date/time. My problem is the format in which I recieve the date and time: "HH:MM:SS, --mm-dd". This means separate fileds, one for time and one for date. If the format was: "-mm-dd HH:MM:SS", I could call strptime to stor this in a struct tm, then cakk mktime to get this in a time_t varialbe and then call difftime with the current date/time to make the comparison (I'm using C as programmikng language on Linux). So, my question is: should I unify the 2 strings into one to obtain the second format and then proceed as I explained above, or is there a way to compare the date with the current date separately and the time with the current time separately in case the dates are equal? If the conclusion will be to uify the strings into one, should I create a table with a single date/time column or still keep the 2 fileds separately in my sql table? Thanks, Rafi.
Re: [sqlite] Pragma table_info(), why no fields like UNIQUE, AUTOINCREMENT
"Vivien Malerba" <[EMAIL PROTECTED]> wrote:
> On 3/13/07, Martin Jenkins <[EMAIL PROTECTED]> wrote:
> > Vivien Malerba wrote:
> > > I've already sent a proposal along with a patch some time ago about
> > > that, but nobody seemed to care, see
> > > http://www.mail-archive.com/sqlite-users@sqlite.org/msg21285.html
> >
> > Vivien, I can't see any patch attached to that post. Perhaps you
> > should resubmit it? And I'll guess the odds of any patch being
> > accepted are significantly improved if the patch both passes and
> > extends the test suite.
>
> The patch was not archived. I guess I'll work on it to follow your
> advice and make sure it passes the test suite. I can send it to you if
> you want to have a look at it, though.

You should also consider how your change might affect backwards compatibility. The last time that table_info() was modified, the Ruby-On-Rails community got really upset. I'm rather of a mind to leave table_info() alone.

--
D. Richard Hipp <[EMAIL PROTECTED]>
Re: [sqlite] FTS: index only, no text storage - Was: [sqlite] FTS: Custom Tokenizer / Stop Words
On 3/13/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote: Ion Silvestru <[EMAIL PROTECTED]> wrote: > To Ralf: > >As a side effect, the offsets() and snippet() functions stopped working, > >as they seem to rely on the presence of the full document text in the > >current implementation. > > Did you test "phrase" searching on the index-only version? Doesn't this > kind of search rely on offsets()? Phrase searches do *not* use the full document text. But UPDATE and DELETE do, ironically. Or at least they used to, unless Scott has changed that in FTS2. Indeed, phrase searches should continue to work, because we have the terms from the query, so we can look them up and compare their token positions in the document (offsets being the character positions of the tokens). UPDATE and DELETE need to have the previous document text, because the docids are embedded in the index, and there is no docid->term index (or, put another way, the previous document text _is_ the docid->term index). Keeping track of that information would probably double the size of the index. A thing I've considered doing is to keep deletions as a special index to the side, which would allow older data to be deleted during segment merges. Unfortunately, I suspect that this would slow things down by introducing another bit of data which needs to be considered during merges. Of course, there's no way the current system could generate snippets without the original text, because doclists don't record the set of adjacent terms. That information could be recorded, but it's doubtful that doing so would be an improvement on simply storing the original text in the first place. The current system _does_ have everything needed to generate the offsets to hits even without the original text, so the client application could generate snippets, though the code is not currently in place to expose this information.
Being able to have an index without storing the original data was a weak goal when fts1 was being developed, but every time we visited it, we found that the negatives of that approach were substantial enough to discourage us for a time. [The "we" in that sentence means "me and the various people I run wacky ideas past."] I'm keeping an eye out for interesting implementation strategies and the time to explore them, though. -scott - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] Degrouping, desummarizing or integrating headings
On 3/13/07, T&B <[EMAIL PROTECTED]> wrote:

Hi All, I come across a fair bit of source data in summary format, where one field's values are used as a heading for a group of records to which it applies. For instance, this shopping list:

    Aisle    Product   Cost
    Dairy
             Milk      $2
             Cream     $1
             Cheese    $3
    Bakery
             Sliced    $4
             Rolls     $3
             Cake      $2

How can I select the data from that table of 8 records so that the result is this table of 6 records?

    Aisle    Product   Cost
    Dairy    Milk      $2
    Dairy    Cream     $1
    Dairy    Cheese    $3
    Bakery   Sliced    $4
    Bakery   Rolls     $3
    Bakery   Cake      $2

This is a programming problem and not necessarily a SQLite problem. Nevertheless, you don't specify how you are getting this summary data... Is it in a database? Is it just a text file? Is it XML? You could create a schema out of this, normalizing aisles and products:

    CREATE TABLE aisles (aisle_id, aisle_name);
    CREATE TABLE products (product_id, product_name, product_cost, aisle_id);

and go from there with

    SELECT a.aisle_name, p.product_name, p.product_cost
    FROM products p
    JOIN aisles a ON p.aisle_id = a.aisle_id;

You will have to figure out how to get your source data into the SQLite schema you create, but that should be trivial depending on your programming environment. -- Puneet Kishor http://punkish.eidesis.org/ Nelson Inst. for Env. Studies, UW-Madison http://www.nelson.wisc.edu/ Open Source Geospatial Foundation http://www.osgeo.org/education/ - collaborate, communicate, compete = - To unsubscribe, send email to [EMAIL PROTECTED] -
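[Ed.: if the flat summary list is imported as-is — heading rows carrying only the aisle, product rows carrying only product and cost — the heading can also be "filled down" in plain SQL with a correlated subquery over rowid. A minimal sketch via Python's sqlite3 module; the table and column names are made up for illustration:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw (aisle TEXT, product TEXT, cost TEXT)")
conn.executemany("INSERT INTO raw VALUES (?, ?, ?)", [
    ("Dairy",  None,     None),   # heading row: only the aisle is filled
    (None,     "Milk",   "$2"),
    (None,     "Cream",  "$1"),
    (None,     "Cheese", "$3"),
    ("Bakery", None,     None),
    (None,     "Sliced", "$4"),
    (None,     "Rolls",  "$3"),
    (None,     "Cake",   "$2"),
])

# For each product row, pick up the nearest preceding non-NULL aisle.
rows = conn.execute("""
    SELECT (SELECT r2.aisle FROM raw r2
             WHERE r2.rowid <= r1.rowid AND r2.aisle IS NOT NULL
             ORDER BY r2.rowid DESC LIMIT 1) AS aisle,
           r1.product, r1.cost
      FROM raw r1
     WHERE r1.product IS NOT NULL
     ORDER BY r1.rowid
""").fetchall()

for r in rows:
    print(r)
```

[This relies on rowid preserving the original file order, which holds when rows are inserted in file order; 8 raw rows come out as the desired 6.]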
Re: [sqlite] Pragma table_info(), why no fields like UNIQUE, AUTOINCREMENT
On 3/13/07, Martin Jenkins <[EMAIL PROTECTED]> wrote: Vivien Malerba wrote: > I've already sent a proposal along with a patch some time ago about > that, but nobody seemed to care, see > http://www.mail-archive.com/sqlite-users@sqlite.org/msg21285.html Vivien, I can't see any patch attached to that post. Perhaps you should resubmit it? And I'll guess the odds of any patch being accepted are significantly improved if the patch both passes and extends the test suite. The patch was not archived. I guess I'll work on it to follow your advice and make sure it passes the test suite. I can send it to you if you want to have a look at it, though. Cheers, Vivien - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] Degrouping, desummarizing or integrating headings
Hi All, I come across a fair bit of source data in summary format, where one field's values are used as a heading for a group of records to which it applies. For instance, this shopping list:

    Aisle    Product   Cost
    Dairy
             Milk      $2
             Cream     $1
             Cheese    $3
    Bakery
             Sliced    $4
             Rolls     $3
             Cake      $2

How can I select the data from that table of 8 records so that the result is this table of 6 records?

    Aisle    Product   Cost
    Dairy    Milk      $2
    Dairy    Cream     $1
    Dairy    Cheese    $3
    Bakery   Sliced    $4
    Bakery   Rolls     $3
    Bakery   Cake      $2

Thanks, Tom - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] SQLite v/s SQLite3 Performance Assay
Hi Doug, My bad... the getTickCount() in the snippet is basically a wrapper over the ftime call, and the same is used on the Linux platform for timing:

    #include <sys/timeb.h>

    time_t getTickCount()
    {
        struct timeb tm = {0};
        ftime(&tm);
        return (tm.time * 1000) + (time_t)tm.millitm;
    }

Thanks & Regards Nitin K

On 3/13/07, Doug Nebeker <[EMAIL PROTECTED]> wrote: Be aware that the Windows GetTickCount call has a resolution of 10 to 15 ms on most machines, so that could throw your timings off if you're timing each individual test case as it appears below. To get better timer resolution, use QueryPerformanceCounter. Now, would that make SQLite3 look slower than v2? I wouldn't think so--you'd think it would skew the results equally for both tests. Doug This email was sent to you by Reuters, the global news and information company. To find out more about Reuters visit www.about.reuters.com Any views expressed in this message are those of the individual sender, except where the sender specifically states them to be the views of Reuters Limited. Reuters Limited is part of the Reuters Group of companies, of which Reuters Group PLC is the ultimate parent company. Reuters Group PLC - Registered office address: The Reuters Building, South Colonnade, Canary Wharf, London E14 5EP, United Kingdom Registered No: 3296375 Registered in England and Wales
Re: [sqlite] SQLite v/s SQLite3 Performance Assay
"Nitin Kashyap" <[EMAIL PROTECTED]> wrote: > > My intentions are to explore the reason behind these differences > and what can be done to counter them. I'm > seeking some pointers from the Community. > I have used SQLite for a lot of different things (as you would expect) over the course of many years. And my experience has been that SQLite3 is almost always faster than SQLite2. Sometimes remarkably faster. I'm guessing that the poor performance you are getting from SQLite3 is a test artifact of some kind. -- D. Richard Hipp <[EMAIL PROTECTED]> - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] Pragma table_info(), why no fields like UNIQUE, AUTOINCREMENT
Vivien Malerba wrote: I've already sent a proposal along with a patch some time ago about that, but nobody seemed to care, see http://www.mail-archive.com/sqlite-users@sqlite.org/msg21285.html Vivien, This mailing list does not pass attachments. A patch such as yours is best handled by submitting a feature request through the bug report tracker at http://www.sqlite.org/cvstrac/captcha?cnxp=/cvstrac/tktnew and attaching your code and test suite. You will also have to submit a dedication as described at http://www.sqlite.org/copyright.html. HTH Dennis Cote - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] FTS: index only, no text storage - Was: [sqlite] FTS: Custom Tokenizer / Stop Words
Ion Silvestru <[EMAIL PROTECTED]> wrote: > To Ralf: > > >As a side effect, the offsets() and snippet() functions stopped working, as > >they seem to rely on the presence of the full document text in the current > >implementation. > > Did you test "phrase" searching on the index-only version? Doesn't this > kind of search rely on offsets()? > Phrase searches do *not* use the full document text. But UPDATE and DELETE do, ironically. Or at least they used to, unless Scott has changed that in FTS2. -- D. Richard Hipp <[EMAIL PROTECTED]> - To unsubscribe, send email to [EMAIL PROTECTED] -
RE: [sqlite] SQLite v/s SQLite3 Performance Assay
Be aware that the Windows GetTickCount call has a resolution of 10 to 15 ms on most machines, so that could throw your timings off if you're timing each individual test case as it appears below. To get better timer resolution, use QueryPerformanceCounter. Now, would that make SQLite3 look slower than v2? I wouldn't think so--you'd think it would skew the results equally for both tests. Doug -Original Message- From: Nitin Kashyap [mailto:[EMAIL PROTECTED] Sent: Tuesday, March 13, 2007 2:51 AM To: sqlite-users@sqlite.org Subject: Re: [sqlite] SQLite v/s SQLite3 Performance Assay > Version 3 has a different default safety-level (default FULL) to > version 2 (default NORMAL). So if you didn't explicitly set the > safety-level during the tests, then version 3 was syncing the disk > more often than version 2. I think this might be why version 3 appears > slower in Test Case I (Inserts). > > The results of cases II to IV seem odd. Can you post the test code to > the list? > > Dan. Hi Dan, You seem to be right on the sync end, but that does not explain why the read is taking more time. Below are the snippets you requested.
Select Query:

    #define SQLQuery4 "select tbl01.code, * from tbl01, tbl02, tbl03, tbl04 " \
                      "where tbl01.code=tbl02.code01 " \
                      "and tbl01.code=tbl03.code01 " \
                      "and tbl01.code=tbl04.code01 " \
                      "order by tbl04.orderField "

    #define SQLQuery2 "select tbl03.code, * from tbl03, tbl04 " \
                      "where tbl03.code = tbl04.code03 " \
                      "order by tbl04.orderField "

SQLite2 Snippet:

    {
        end = getTickCount();
        /* Start Select: 2 tables */
        printf("Performing Simple Select of 2 Tables..."); fflush(stdout);
        beg = end;
        i = 0;
        snprintf(sqlQry, 1024, "%s", SQLQuery2);
        if( sqlite_compile(pSource, sqlQry, NULL, &pVm, &errMsg) == SQLITE_OK ) {
            while( sqlite_step(pVm, &numColumn,
                               (const char ***)&ppRowValues,
                               (const char ***)&ppColNames) == SQLITE_ROW ) {
                i++;
            }
            sqlite_finalize(pVm, &errMsg);
        } else {
            printf("err in sql at line %d: %s\n", __LINE__, sqlQry);
            if(errMsg != NULL) printf("errMsg: %s\n", errMsg);
            goto cleanUp;
        }
        end = getTickCount();
        printf("Done\n");
        printf("Time To Select 2 Tables with entries returned (%d),(%d): %ld\n",
               i, numColumn, (end-beg));
    }

SQLite3 Snippet:

    {
        end = getTickCount();
        /* Start Select: 2 tables */
        printf("Performing Simple Select of 2 Tables");
        beg = end;
        i = 0;
        snprintf(sqlQry, 1024, "%s", SQLQuery2);  /* posted snippet used SQLQuery1,
                                                     which is not defined above */
        if( sqlite3_prepare(pSource, sqlQry, -1, &pSqlStmt, &pzTail) == SQLITE_OK ) {
            while( sqlite3_step(pSqlStmt) == SQLITE_ROW ) {
                i++;
            }
            sqlite3_finalize(pSqlStmt);
        } else {
            printf("err in sql at line %d: %s\n", __LINE__, sqlQry);
            if(errMsg != NULL) printf("errMsg: %s\n", errMsg);
            goto cleanUp;
        }
        end = getTickCount();
        printf("Done\n");
        printf("Time To Select 2 Tables with entries returned (%d): %ld\n",
               i, (end-beg));
    }

Thanks & Regards Nitin K - To unsubscribe, send email to [EMAIL PROTECTED] -
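[Ed.: on the timing methodology itself, one way to sidestep coarse tick counters is a high-resolution monotonic clock plus repeated runs, reporting the minimum so scheduler and timer noise cancel out. A sketch of that idea in Python; the table name is borrowed from the snippets above:]

```python
import sqlite3
import time

def time_query(conn, sql, runs=5):
    """Run `sql` several times and report the best wall time —
    repeating and taking the minimum damps timer/scheduler noise."""
    best = float("inf")
    for _ in range(runs):
        t0 = time.perf_counter()      # high-resolution monotonic clock
        conn.execute(sql).fetchall()  # drain all rows, like the step loops
        best = min(best, time.perf_counter() - t0)
    return best

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl01 (code INTEGER)")
conn.executemany("INSERT INTO tbl01 VALUES (?)", [(i,) for i in range(1000)])
print("best of 5: %.6f s" % time_query(conn, "SELECT * FROM tbl01 ORDER BY code"))
```

[The same pattern applies in C with clock_gettime(CLOCK_MONOTONIC) on Linux or QueryPerformanceCounter on Windows.]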
Re: [sqlite] FTS: index only, no text storage - Was: [sqlite] FTS: Custom Tokenizer / Stop Words
To Ralf: >As a side effect, the offsets() and snippet() functions stopped working, as >they seem to rely on the presence of the full document text in the current >implementation. Did you test "phrase" searching on the index-only version? Doesn't this kind of search rely on offsets()? - To unsubscribe, send email to [EMAIL PROTECTED] -
Re[2]: [sqlite] FTS: index only, no text storage - Was: [sqlite] FTS: Custom Tokenizer / Stop Words
>Just a question: did you eliminate stop-words in your tests? Sorry, you specified that you indexed source code files, so no stop-words are applicable here. - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] FTS: index only, no text storage - Was: [sqlite] FTS: Custom Tokenizer / Stop Words
Thank you. Just a question: did you eliminate stop-words in your tests? >Concluding: Given the great database size savings possible by separating full >text index from data storage, I wish that >developers would consider adding such an option to the SQLite FTS interface. If such an option is added, I see a big future for using SQLite as a simple but powerful and easily customized (user tokenizers etc.) full-text search engine, and not only as a DB engine. Currently we don't have many options for a desktop full-text engine; there are some, like DTSearch, Onix, Lucene, but these are over-priced, can't be easily customized, or are too complex. - To unsubscribe, send email to [EMAIL PROTECTED] -
[sqlite] FTS: index only, no text storage - Was: [sqlite] FTS: Custom Tokenizer / Stop Words
>But what about: > >I am very interested to know if it would be possible to use an FTS indexing >module to store the inverted index only, but >not the document's text. This would save disk space if the text to index is >stored on disk rather than inside the database.

This is possible with just minor modifications to fts2.c (below). I commented out the instructions responsible for inserting and updating the text body in the %_content table. As a side effect, the offsets() and snippet() functions stopped working, as they seem to rely on the presence of the full document text in the current implementation. Nevertheless, I ran FTS2 over a collection of source code files, and the results are astonishing. With the original fts2.c, the database figures are as follows:

    Number of documents:                10739 files
    Total size of document text:          234 MB
    Total size of database:          ===> 295 MB <===
    Size of index within database:         61 MB
    Index / Text ratio:                    26 percent

With the modified fts2.c (no text stored), the database size was obviously much smaller:

    Number of documents:                10739 files
    Total size of document text:          234 MB
    Total size of database:          ===>  61 MB <===
    Index / Text ratio:                    26 percent

In addition to the database size savings, I can think of a number of other benefits in separating text and inverted index storage:

1. Indexing documents stored in another database would not need to duplicate storage. A small "FTS database" could be attached to the "data database" if necessary, so the data database stays smaller without the index. Deleting the FTS database would leave the data untouched.

2. Point 1 above would allow distributing CDs without FTS and letting the user create a small FTS index on local storage to speed up searching. This way more data can be shipped on single CD volumes.

3. Indexing compressed text would become possible. The current implementation does not allow text compression because the FTS tables always store uncompressed text.

4. It would ease maintenance and consistency of data as long as FTS is experimental. If data and FTS are separated, only the FTS index must be rebuilt if FTS changes, while the current implementation potentially requires upgrading entire tables to yet unknown formats.

5. FTS could be removed from a database without touching the data: only the FTS tables would have to be deleted.

Concluding: given the great database size savings possible by separating the full text index from data storage, I wish that the developers would consider adding such an option to the SQLite FTS interface.

Finally, here are the changes I applied to fts2.c as proof of concept:

    /* insert into %_content (rowid, ...) values ([rowid], [pValues]) */
    static int content_insert(fulltext_vtab *v, sqlite3_value *rowid,
                              sqlite3_value **pValues){
      sqlite3_stmt *s;
      int i;
      int rc = sql_get_statement(v, CONTENT_INSERT_STMT, &s);
      if( rc!=SQLITE_OK ) return rc;

      rc = sqlite3_bind_value(s, 1, rowid);
      if( rc!=SQLITE_OK ) return rc;

      /* Text body no longer stored:
      for(i=0; i<v->nColumn; ++i){
        rc = sqlite3_bind_value(s, 2+i, pValues[i]);
        if( rc!=SQLITE_OK ) return rc;
      }
      */

      return sql_single_step_statement(v, CONTENT_INSERT_STMT, &s);
    }

    /* update %_content set col0 = pValues[0], col1 = pValues[1], ...
     * where rowid = [iRowid] */
    static int content_update(fulltext_vtab *v, sqlite3_value **pValues,
                              sqlite_int64 iRowid){
      sqlite3_stmt *s;
      int i;
      int rc = sql_get_statement(v, CONTENT_UPDATE_STMT, &s);
      if( rc!=SQLITE_OK ) return rc;

      /* Text body no longer stored:
      for(i=0; i<v->nColumn; ++i){
        rc = sqlite3_bind_value(s, 1+i, pValues[i]);
        if( rc!=SQLITE_OK ) return rc;
      }
      */

      rc = sqlite3_bind_int64(s, 1+v->nColumn, iRowid);
      if( rc!=SQLITE_OK ) return rc;
      return sql_single_step_statement(v, CONTENT_UPDATE_STMT, &s);
    }

Ralf - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] Pragma table_info(), why no fields like UNIQUE, AUTOINCREMENT
Vivien Malerba wrote: I've already sent a proposal along with a patch some time ago about that, but nobody seemed to care, see http://www.mail-archive.com/sqlite-users@sqlite.org/msg21285.html Vivien, I can't see any patch attached to that post. Perhaps you should resubmit it? And I'll guess the odds of any patch being accepted are significantly improved if the patch both passes and extends the test suite. Martin - To unsubscribe, send email to [EMAIL PROTECTED] -
Re: [sqlite] Pragma table_info(), why no fields like UNIQUE, AUTOINCREMENT
On 3/12/07, Stef Mientki <[EMAIL PROTECTED]> wrote: If I ask for the table structure with pragma table_info(), I get of course the basic fields, like CID, Name, Type, and also SOME special values, like Null, DefaultValue, PrimaryKey. But NOT the following special values (and probably a lot more): Unique, AutoIncrement. I've already sent a proposal along with a patch some time ago about that, but nobody seemed to care, see http://www.mail-archive.com/sqlite-users@sqlite.org/msg21285.html Cheers, Vivien - To unsubscribe, send email to [EMAIL PROTECTED] -
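[Ed.: a quick sketch, via Python's sqlite3 module, of what table_info() does and does not report today, and where the missing pieces can be dug out without any patch — UNIQUE from the automatic index in index_list, AUTOINCREMENT only from the stored CREATE TABLE text:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE t (
    id   INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT UNIQUE NOT NULL DEFAULT 'x')""")

# table_info reports: cid, name, type, notnull, dflt_value, pk
for col in conn.execute("PRAGMA table_info(t)"):
    print(col)

# UNIQUE shows up indirectly: it creates an automatic index,
# visible through index_list (the third column is the unique flag).
idx = conn.execute("PRAGMA index_list(t)").fetchall()
print(idx)

# AUTOINCREMENT appears in no pragma at all; the only place to find
# it is the original CREATE TABLE text kept in sqlite_master.
sql = conn.execute(
    "SELECT sql FROM sqlite_master WHERE name = 't'").fetchone()[0]
print("AUTOINCREMENT" in sql)
```

[So an application can recover both properties today, just not through table_info() alone — which is presumably what Vivien's patch aimed to change.]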