BTW: DBSight uses the same approach that Erick describes.
Like Phil said, this will incur multiple joins for each document,
but it works well and is efficient for incremental indexing.
This approach also does not require changing the database schema.
--
Chris Lu
-
Instant Scalable Full-Text Search On Any Database
I've used the approach that Erick describes, and it
works well. Another approach is to create a single new
table in your database that holds all of the data you
want to index. This lets you copy the various
fields from other tables using separate SQL statements
before you index, rather than relying on one large join at index time.
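A rough sketch of that staging-table idea, assuming hypothetical
table and column names (search_staging, articles, authors) and a
recent Lucene API: populate the flat table with a couple of plain
SQL statements, then index it row by row.

import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

public class StagingTableIndexer {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection string; substitute your own.
        Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=mydb", "user", "pass");

        // Step 1: flatten the normalized tables into one wide staging
        // table, one simple statement per source table instead of one
        // giant multi-way join.
        try (Statement st = conn.createStatement()) {
            st.execute("INSERT INTO search_staging (id, title) "
                     + "SELECT id, title FROM articles");
            st.execute("UPDATE search_staging SET author_name = "
                     + "(SELECT MAX(a.name) FROM authors a "
                     + " WHERE a.article_id = search_staging.id)");
        }

        // Step 2: index the staging table row by row.
        IndexWriterConfig cfg = new IndexWriterConfig(new StandardAnalyzer());
        try (IndexWriter writer = new IndexWriter(
                     FSDirectory.open(Paths.get("index")), cfg);
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT id, title, author_name FROM search_staging")) {
            while (rs.next()) {
                Document doc = new Document();
                doc.add(new StringField("id", rs.getString("id"), Field.Store.YES));
                doc.add(new TextField("title", rs.getString("title"), Field.Store.YES));
                String author = rs.getString("author_name");
                if (author != null) {  // correlated subquery may yield NULL
                    doc.add(new TextField("author", author, Field.Store.NO));
                }
                writer.addDocument(doc);
            }
        }
        conn.close();
    }
}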
Mohammad,
This is the main idea, but things can get quite complicated.
In addition, do you need to do incremental indexing?
Do you need to delete duplicates?
How would you manage deleted documents in the database?
Will taking down the server while indexing affect you?
...
You are welcome to take a look at DBSight.
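On the incremental and deletion points, one common pattern (not
necessarily what DBSight does internally) is to key every Lucene
document by the row's primary key, then mirror database changes
with updateDocument and deleteDocuments; a minimal sketch with
hypothetical field names:

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

public class IncrementalSync {
    // Re-index one changed row: updateDocument first deletes any
    // existing document with the same "id" term, then adds the new
    // one, so duplicates never accumulate in the index.
    static void upsertRow(IndexWriter writer, String id, String title)
            throws Exception {
        Document doc = new Document();
        doc.add(new StringField("id", id, Field.Store.YES));
        doc.add(new TextField("title", title, Field.Store.YES));
        writer.updateDocument(new Term("id", id), doc);
    }

    // Remove a document whose row was deleted from the database.
    static void deleteRow(IndexWriter writer, String id) throws Exception {
        writer.deleteDocuments(new Term("id", id));
    }
}

Since Lucene searchers read a point-in-time snapshot of the index,
searching can continue while the writer applies these changes; you
just reopen the searcher after the batch commits, so indexing does
not require taking the server down.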
Well, don't do it that way.
I'm assuming that you have some SQL statement like "for each
entry in table 1, find all the related info", and what's failing
is retrieving that entire result at once.
So, try something like creating a SQL statement that selects
the IDs from table 1 and writes them to a file. Afterwards, read
the file back one ID at a time, run the "related info" query for
that ID, and index one document per ID.
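A sketch of that two-pass idea, with hypothetical table1/table2
names: pass one streams just the primary keys to a file, and pass
two runs the small "related info" query once per ID, so no result
set ever holds millions of joined rows.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;

public class TwoPassIndexer {
    // Pass 1: stream just the primary keys of table1 to a file.
    static void dumpIds(Connection conn) throws Exception {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT id FROM table1");
             PrintWriter out = new PrintWriter(new FileWriter("ids.txt"))) {
            while (rs.next()) {
                out.println(rs.getString(1));
            }
        }
    }

    // Pass 2: for each saved ID, gather the related info with a
    // small per-ID query and index one document per table1 entry.
    static void indexByIds(Connection conn, IndexWriter writer)
            throws Exception {
        try (BufferedReader in = new BufferedReader(new FileReader("ids.txt"));
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT t2.details FROM table2 t2 WHERE t2.t1_id = ?")) {
            String id;
            while ((id = in.readLine()) != null) {
                ps.setString(1, id);
                StringBuilder related = new StringBuilder();
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        related.append(rs.getString("details")).append(' ');
                    }
                }
                Document doc = new Document();
                doc.add(new StringField("id", id, Field.Store.YES));
                doc.add(new TextField("details", related.toString(),
                        Field.Store.NO));
                writer.addDocument(doc);
            }
        }
    }
}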
Hi all,
I am going to index our database. One approach is to join the
tables and then index the fields, but the data set is very large
(more than 3 million rows), so SQL Server fails to select it all
at once. I want to know if anyone has experience indexing this
much database content using Lucene.