Hello list, I have a dataset with about 4000 records, 30 columns, and
lots of duplicate data. I want to delete all the records that are
duplicates (both the database row and the map object have to be exactly
the same). I'd be interested in hearing how people would deal with this.
I'm thinking I could run an SQL statement and GROUP on every single
column, which would give me all of the unique records. If I include a
Count() column, then any count greater than 1 will represent a
duplicate. From there I should be able to query back to the original
database and delete all of the duplicates, no matter how many there are,
leaving only the unique records. I haven't yet found the easiest way to
query between the databases and delete the ones I want, but I will sum
up what people suggest and which method worked best for me. MI and MB
solutions would be great.
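For anyone curious how the GROUP-on-every-column idea plays out, here is a minimal sketch using Python's sqlite3 module (not MapInfo's SQL dialect, which differs; the table name "parcels" and its columns are made up for illustration). Grouping on all columns identifies each distinct record, and keeping only the lowest rowid per group deletes every extra copy:

```python
import sqlite3

# In-memory database with a hypothetical two-column table and
# deliberate duplicates.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE parcels (name TEXT, area REAL)")
cur.executemany(
    "INSERT INTO parcels VALUES (?, ?)",
    [("A", 1.0), ("A", 1.0), ("B", 2.0), ("A", 1.0)],
)

# Group by every column so fully-identical rows fall into one group;
# keep the first copy (lowest rowid) of each group and delete the rest.
cur.execute(
    """
    DELETE FROM parcels
    WHERE rowid NOT IN (
        SELECT MIN(rowid) FROM parcels GROUP BY name, area
    )
    """
)
con.commit()

print(cur.execute("SELECT * FROM parcels ORDER BY name").fetchall())
# -> [('A', 1.0), ('B', 2.0)]
```

The same Count()-based check the paragraph describes (HAVING COUNT(*) > 1 after the GROUP BY) would list the duplicated groups without deleting anything, which is handy for inspecting the data first.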
Thanks,
Ron
Ron Halliday, Cartographer
Portolan Geomatics
http://members.home.net/portolan/
----------------------------------------------------------------------
To unsubscribe from this list, send e-mail to [EMAIL PROTECTED] and put
"unsubscribe MAPINFO-L" in the message body, or contact [EMAIL PROTECTED]