Alex - Are you aware of the rewrite branch of pywikipedia? It is written to use the API consistently, and much of what you are doing in trunk is already implemented there. You might want to take a look at the rewrite branch and see if you can contribute to it.
Russ

----- Original Message -----
From: <[email protected]>
To: <[email protected]>
Sent: Tuesday, July 14, 2009 12:59 PM
Subject: [Pywikipedia-svn] SVN: [7062] trunk/pywikipedia/wikipedia.py

> Revision: 7062
> Author:   alexsh
> Date:     2009-07-14 16:59:57 +0000 (Tue, 14 Jul 2009)
>
> Log Message:
> -----------
> Change page.getRestrictions() API data from XML to JSON (had comments about
> get multiple pages data), function tested in blockpageschecker.py
>
> Modified Paths:
> --------------
>     trunk/pywikipedia/wikipedia.py
>
> Modified: trunk/pywikipedia/wikipedia.py
> ===================================================================
> --- trunk/pywikipedia/wikipedia.py	2009-07-14 12:13:56 UTC (rev 7061)
> +++ trunk/pywikipedia/wikipedia.py	2009-07-14 16:59:57 UTC (rev 7062)
> @@ -1308,28 +1308,44 @@
>              ('autoconfirmed' or 'sysop')
>              * expiry is the expiration time of the restriction
>          """
> +        #, titles = None
> +        #if titles:
> +        #    restrictions = {}
> +        #else:
>          restrictions = { 'edit': None, 'move': None }
>          try:
>              api_url = self.site().api_address()
>          except NotImplementedError:
>              return restrictions
> -        api_url += 'action=query&prop=info&inprop=protection&format=xml&titles=%s' % self.urlname()
> -        text = self.site().getUrl(api_url)
> -        if 'missing=""' in text:
> -            self._getexception = NoPage
> -            raise NoPage('Page %s does not exist' % self.aslink())
> -        elif not 'pageid="' in text:
> -            # I don't know what may happen here.
> -            # We may want to have better error handling
> -            raise Error("BUG> API problem.")
> -        match = re.findall(r'<protection>(.*?)</protection>', text)
> +
> +        predata = {
> +            'action': 'query',
> +            'prop': 'info',
> +            'inprop': 'protection',
> +            'titles': self.title(),
> +        }
> +        #if titles:
> +        #    predata['titles'] = query.ListToParam(titles)
> +
> +        text = query.GetData(predata, useAPI = True)['query']['pages']
> +
> +        for pageid in text:
> +            if text[pageid].has_key('missing'):
> +                self._getexception = NoPage
> +                raise NoPage('Page %s does not exist' % self.aslink())
> +            elif not text[pageid].has_key('pageid'):
> +                # Don't know what may happen here.
> +                # We may want to have better error handling
> +                raise Error("BUG> API problem.")
> +            if text[pageid]['protection'] != []:
> +                #if titles:
> +                #    restrictions[ pageid ] = { 'edit': None, 'move': None }
> +                #    for detail in text[pageid]['protection']:
> +                #        restrictions[ pageid ][ detail[ 'type' ] ] = [ detail[ 'level' ], detail[ 'expiry' ] ]
> +                #else:
> +                for detail in text[pageid]['protection']:
> +                    restrictions[ detail[ 'type' ] ] = [ detail[ 'level' ], detail['expiry'] ]
>
> -        if match:
> -            text = match[0] # If there's the block "protection" take the settings inside it.
> -            api_found = re.compile(r'<pr type="(.*?)" level="(.*?)" expiry="(.*?)" />')
> -            for entry in api_found.findall(text):
> -                restrictions[ entry[0] ] = [ entry[1], entry[2] ]
> -
>          return restrictions
>
>      def put_async(self, newtext,
>
>
> _______________________________________________
> Pywikipedia-svn mailing list
> [email protected]
> https://lists.wikimedia.org/mailman/listinfo/pywikipedia-svn

_______________________________________________
Pywikipedia-l mailing list
[email protected]
https://lists.wikimedia.org/mailman/listinfo/pywikipedia-l
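For anyone following the diff: the heart of the change is that the JSON response from action=query&prop=info&inprop=protection arrives as a 'query'/'pages' dict that can be walked directly, instead of regex-matching XML. This is a standalone sketch of that dict walk, not the pywikipedia code itself; the sample response below is made up for illustration (a real one comes back from api.php with format=json).

```python
# Hypothetical sample of the 'query'/'pages' portion of an
# action=query&prop=info&inprop=protection&format=json response.
sample_pages = {
    '736': {
        'pageid': 736,
        'ns': 0,
        'title': 'Example page',
        'protection': [
            {'type': 'edit', 'level': 'sysop', 'expiry': 'infinity'},
            {'type': 'move', 'level': 'autoconfirmed', 'expiry': 'infinity'},
        ],
    },
}

def parse_restrictions(pages):
    """Reduce a 'query'/'pages' dict to {'edit': ..., 'move': ...},
    mirroring the single-page path of the new getRestrictions()."""
    restrictions = {'edit': None, 'move': None}
    for pageid, page in pages.items():
        if 'missing' in page:
            # The real code raises NoPage here.
            raise KeyError('Page %s does not exist' % page.get('title'))
        for detail in page.get('protection', []):
            restrictions[detail['type']] = [detail['level'], detail['expiry']]
    return restrictions

print(parse_restrictions(sample_pages))
# {'edit': ['sysop', 'infinity'], 'move': ['autoconfirmed', 'infinity']}
```

Because the response is already a dict keyed by pageid, the commented-out `titles` branches in the diff would extend this to many pages per request almost for free, which is presumably why those comments were left in.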
