Thanks

Much simpler and more to the point: now when I want something via xpath I just ask the 
response for it directly, and the code reads like what it does. Makes sense.
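
For the record, here's roughly what my spider looks like after switching to the 
shortcut (same local test file as before, so adjust the path for your own XML):

# -*- coding: utf-8 -*-
import scrapy


class MyxmlSpider(scrapy.Spider):
    name = "myxml"

    # local test file from my earlier post; point this at your own XML source
    start_urls = [
        "file:///home/sayth/Downloads/20160123RAND0.xml",
    ]

    def parse(self, response):
        # response.xpath() replaces XmlXPathSelector(response).select()
        for ids in response.xpath('.//race/@id'):
            print(ids.extract())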

thanks again 

sayth

On Monday, 25 January 2016 22:28:44 UTC+11, Valdir Stumm Junior wrote:
>
> Hey!
>
> Since Scrapy 0.24 you don't need to instantiate selectors manually to deal 
> with the response. You can just call one of the shortcut methods: 
> response.xpath() or response.css().
>
> def parse(self, response):
>     for ids in response.xpath('.//race/@id'):
>         ...
>
> On Sun, Jan 24, 2016 at 1:54 AM, Sayth Renshaw <flebbe...@gmail.com> wrote:
>
>> Sorry, is it this: sel = Selector(xml_response)?
>>
>> The only issue is that it raises an error about xml_response:
>>
>>   File "/home/sayth/.virtualenvs/scrapy_xml/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 588, in _runCallbacks
>>     current.result = callback(current.result, *args, **kw)
>>   File "/home/sayth/Projects/conv_xml/conv_xml/spiders/myxml.py", line 14, in parse
>>     sel = Selector(xml_response)
>> NameError: global name 'xml_response' is not defined
>>
>>
>> On Sunday, 24 January 2016 14:35:42 UTC+11, Sayth Renshaw wrote:
>>>
>>> Hi
>>>
>>> I'm just putting together my first XML parser in Scrapy. It works, but I noticed a 
>>> deprecation warning in the logs and want to know what I should be doing going 
>>> forward. The warning and my code are below.
>>>
>>>
>>> 2016-01-24 14:32:04 [py.warnings] WARNING: /home/sayth/.virtualenvs/scrapy_xml/local/lib/python2.7/site-packages/scrapy/selector/unified.py:108: ScrapyDeprecationWarning: scrapy.selector.XmlXPathSelector is deprecated, instantiate scrapy.Selector instead.
>>>   for x in result]
>>>
>>> # -*- coding: utf-8 -*-
>>> import scrapy
>>> from scrapy.selector import Selector
>>> from scrapy.http import HtmlResponse
>>> from scrapy.selector import XmlXPathSelector
>>>
>>> class MyxmlSpider(scrapy.Spider):
>>>     name = "myxml"
>>>
>>>     start_urls = (
>>>         ["file:///home/sayth/Downloads/20160123RAND0.xml"]
>>>     )
>>>
>>>     def parse(self, response):
>>>         xpath = XmlXPathSelector(response)
>>>         for ids in xpath.select('.//race/@id'):
>>>             print(ids.extract())
>>>
>>> Thanks
>>>
>>>
>>> Sayth
>>>
>
> -- 
> Valdir Stumm Junior
> Developer Evangelist, Scrapinghub <https://scrapinghub.com>
> Skype: stummjr
> Twitter <https://twitter.com/stummjr> | Github <https://github.com/stummjr>
> Scrapinghub: Twitter <https://twitter.com/scrapinghub> | LinkedIn <https://www.linkedin.com/company/scrapinghub> | Github <https://github.com/scrapinghub>
>
> *We turn web content into structured data. Lead maintainers of Scrapy <http://scrapy.org>.*
>
