Ahh :) I also meant to say that whilst the particular fields I am talking about in this question are the same, the data at the end of those rows is different, and will be treated differently.

I really abstracted my query away from what I am actually trying to do, and towards the more general question of how efficiently or inefficiently I am doing things.

This particular SQL query was originally - before I optimised it for the person here working with it - returning upwards of 100,000 rows of data, because it had been produced by somebody using the built-in data join wizard in MSSQL Server Management Studio. They would then post-process it in code (various flavours of .NET and C#) and come up with the results they were wanting... The query I hand-crafted for them, and tested, now returns exactly the data they wish in just enough rows to contain it all. No need to post-process it as such.

Regards,
David



On Fri, 21 Feb 2014, Ben Finney wrote:

David Crisp <[email protected]> writes:

In this case I am reading in data from a SQL database query (pymssql)

If you want to eliminate duplicates from your query, do so with ‘SELECT
DISTINCT’. Then you don't ever get the duplicate rows in the first place :-)
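
A minimal sketch of that approach with pymssql - the server credentials, table,
and column names below are placeholder assumptions, not from this thread:

    import pymssql

    # Hypothetical connection details - substitute your own.
    conn = pymssql.connect(server='dbserver', user='me',
                           password='secret', database='mydb')
    cursor = conn.cursor()

    # SELECT DISTINCT makes the server drop duplicate rows,
    # so no de-duplication pass is needed in Python afterwards.
    cursor.execute("SELECT DISTINCT field_a, field_b FROM my_table")
    for row in cursor:
        print(row)

    conn.close()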

--
\          “Ocean, n. A body of water occupying about two-thirds of a |
 `\     world made for man — who has no gills.” —Ambrose Bierce, _The |
_o__)                                        Devil's Dictionary_, 1906 |
Ben Finney

_______________________________________________
melbourne-pug mailing list
[email protected]
https://mail.python.org/mailman/listinfo/melbourne-pug
