Don't worry, this table is modest in size.
Any solution is good enough for me, as long as it gets the job done :-))

Thanks and BR
Loreia

On Fri, Aug 17, 2012 at 9:06 PM, Niphlod <niph...@gmail.com> wrote:

> np.
> BTW: there's no simpler way to do that. Just mind that if there are millions of
> rows (especially if the columns are not indexed) the query is going to take some time.
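> If the table really is large, indexing the columns first can help. A minimal
> sketch, assuming a Postgres backend and the finddup table from the example
> below (the index name is made up; db.executesql runs raw SQL on the DAL's
> connection):
>
> # hypothetical index covering the three columns used in the grouped count
> db.executesql('CREATE INDEX finddup_names_idx '
>               'ON finddup (f1_name, f2_name, f3_name);')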
>
>
> On Friday, August 17, 2012 5:09:40 PM UTC+2, Loreia wrote:
>>
>> Thanks a lot.
>>
>> BR
>> Loreia
>>
>> On Thursday, August 16, 2012 1:23:39 PM UTC+2, Niphlod wrote:
>>>
>>> Group by your unique columns, count the rows, and find the ones with count >
>>> 1.
>>>
>>> db.define_table(
>>>     'finddup',
>>>     Field('f1_name'),
>>>     Field('f2_name'),
>>>     Field('f3_name')
>>>     )
>>> fd = db.finddup
>>> count = fd.id.count()
>>> rtn = db(fd.id>0).select(fd.f1_name, fd.f2_name, fd.f3_name, count,
>>>     groupby=fd.f1_name|fd.f2_name|fd.f3_name, having=count>1)
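>>> Each returned row pairs the grouped field values with their tally; the count
>>> is read back through the count expression itself, as in the web2py book. A
>>> minimal sketch of walking the result:
>>>
>>> for row in rtn:
>>>     # row.finddup holds the grouped columns, row[count] the number of copies
>>>     print row.finddup.f1_name, row.finddup.f2_name, row.finddup.f3_name, row[count]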
>>>
>>>
>>>
>>> On Thursday, August 16, 2012 11:20:54 AM UTC+2, Loreia wrote:
>>>>
>>>> Hi,
>>>>
>>>> I am looking for a way to automatically find any possible duplicate
>>>> records (rows) in one table of my Postgres database.
>>>> Quick googling reveals it is easy to do with pure SQL, e.g. one way
>>>> specific to Postgres: http://stackoverflow.com/questions/1746213/how-to-delete-duplicate-entries-in-postgresql
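>>>> For reference, that pure-SQL route boils down to a grouped count; a sketch
>>>> of running it through the DAL's raw-SQL escape hatch (the table and column
>>>> names assume the finddup example above):
>>>>
>>>> dups = db.executesql("SELECT f1_name, f2_name, f3_name, COUNT(*) "
>>>>                      "FROM finddup GROUP BY f1_name, f2_name, f3_name "
>>>>                      "HAVING COUNT(*) > 1;")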
>>>>
>>>> But I would like to be able to do it with the web2py DAL (which would make
>>>> it database independent).
>>>> Is there a way to do it with web2py DAL?
>>>>
>>>> I just want to be able to find those duplicates; I don't need to delete
>>>> them.
>>>>
>>>> Thank you and best regards,
>>>> Loreia
>>>>
