Hey Valdir, yes, once again you are right :). I was adding the callback 
argument and completely forgot the formxpath one. Thank you so much for the 
help, I really appreciate it!

On Thursday, December 24, 2015 at 15:57:09 UTC+1, Valdir Stumm Junior wrote:
>
> Hey, Mario! The code you posted here is missing the *formxpath* argument 
> that tells FormRequest which form it should use.
>
> Your spider code should look like this:
>
> def parse_app_lists(self, response):
>     """Returns page with 42 days results instead of default 7 days."""
>     yield FormRequest.from_response(response,
>                                     formdata={'RdoTimeLimit': '42'},
>                                     dont_filter=True,
>                                     formxpath="(//form)[2]",
>                                     callback=self.parse_page)
>
> def parse_page(self, response):
>     open_in_browser(response)
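>
> If you prefer selecting the form by position rather than by XPath, the 
> formnumber argument (0-indexed) should do the same job; this is just a 
> sketch I have not run against that page:
>
>     yield FormRequest.from_response(response,
>                                     formdata={'RdoTimeLimit': '42'},
>                                     dont_filter=True,
>                                     formnumber=1,  # the second form, same as "(//form)[2]"
>                                     callback=self.parse_page)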
>
>
>
> On Thu, Dec 24, 2015 at 12:46 PM, Mario <laki-paki...@hotmail.com> wrote:
>
> Hi Valdir,
>
> The code that you provided works great when testing in the shell, but it 
> doesn’t work when I run the actual spider. Here’s the part of the spider 
> code I’ve implemented after your suggestion:
>
> def parse_app_lists(self, response):
>     """Returns page with 42 days results instead of default 7 days."""
>
>     yield FormRequest.from_response(response,
>                                     formdata={'RdoTimeLimit': '42'},
>                                     dont_filter=True,
>                                     callback=self.parse_page)
>
> def parse_page(self, response):
>     open_in_browser(response)
>
> The output that gets opened in Firefox looks like this:
>
> [image: ss]
>
> So basically it doesn’t get loaded for some reason.
>
> I’ve also set the DOWNLOAD_DELAY=2 so forms can get loaded properly.
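>
> For reference, here is a minimal sketch of how that delay can be set per 
> spider; the class and spider names below are only placeholders:
>
> import scrapy
>
> class PlanningSpider(scrapy.Spider):   # placeholder class name
>     name = 'planning'                  # placeholder spider name
>     custom_settings = {
>         'DOWNLOAD_DELAY': 2,           # seconds to wait between requests
>     }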
>
> On Thursday, December 24, 2015 at 14:53:47 UTC+1, Valdir Stumm Junior wrote:
>
> Hey! I think that FormRequest is using the first form on the page (that 
> text field from the top). You can tell FormRequest to use the second form 
> through an xpath expression, like this:
>
> fr = FormRequest.from_response(response,
>                                formdata={'RdoTimeLimit': '42'},
>                                dont_filter=True,
>                                formxpath="(//form)[2]")
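>
> To double-check which form that XPath picks, here is a quick sketch you can 
> run in the same shell session (the names and actions come from whatever the 
> page actually serves):
>
> response.xpath('//form/@action').extract()             # actions of every form, in order
> response.xpath('(//form)[2]//input/@name').extract()   # input names inside the second form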
>
> On Thu, Dec 24, 2015 at 11:13 AM, Mario <laki-paki...@hotmail.com> wrote:
>
> Hi guys,
>
> Here’s what I’m trying to accomplish:
>
> [image: ss]
>
> This is the URL: http://www.eplanning.ie/CarlowCC/SearchListing/RECEIVED
>
> Step 1 (from the image): select 42 days instead of the default 7 days.
>
> Step 2 (from the image): click on the Search button to get the results.
>
> I realize that this is possible with Selenium, but I’m almost certain that 
> it can be done with Scrapy by submitting the proper form(s).
>
> I’ve tried mimicking the requests in the Scrapy shell; here’s just one of 
> the many calls I used to select the 42-day limit:
>
> scrapy shell 'http://www.eplanning.ie/CarlowCC/SearchListing/RECEIVED'
>
> from scrapy.http import FormRequest
>
> fr = FormRequest.from_response(response,
>                                formdata={'RdoTimeLimit': '42'},
>                                dont_filter=True)
>
> fetch(fr)
>
> view(response)
>
> Here’s the output:
>
> [image: ss]
>
> Thanks in advance for any help, and merry Christmas everybody!
>
> -- 
> Valdir Stumm Junior
> Developer Evangelist, Scrapinghub <https://scrapinghub.com>
>
> *We turn web content into structured data. Lead maintainers of Scrapy 
> <http://scrapy.org>.*
>
