Thanks so much for your reply, Asheesh - you even took the time to record a 
screencast!

Quick questions about your installation workflow:
Why did you run "pip install --editable scrapy", then run several tests and 
install dependencies?

Can we just:
1. fork the project  
2. run pip install -r requirements.txt to install the Python dependencies
3. run python setup.py install
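Concretely, I was imagining something along these lines (the exact commands 
and URL are my guess, not taken from the docs):

```shell
# Hypothetical from-source setup -- commands and fork URL are guesses on my part.
git clone https://github.com/<your-username>/scrapy.git
cd scrapy
pip install -r requirements.txt   # install the Python dependencies
python setup.py develop           # or: python setup.py install
```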

Also, in C++, when we change certain parts of the source code we can run a 
makefile to recompile it. For Python-based projects such as scrapy, when I 
make changes to the project, will the interpreter automatically pick up the 
changes? Or do I have to go through some process to update it? 


On Sunday, January 4, 2015 3:18:48 PM UTC+8, Asheesh Laroia wrote:
>
> Hi Yan! Thanks for joining the scrapy users list, and it's lovely to see 
> someone interested in helping the project out.
>
> Let me try to answer your questions one by one:
>
> On Sat, Jan 3, 2015 at 12:24 AM, Yan Yi <[email protected]> wrote:
>
>>
>> 1. How do I run scrapy from source after forking the project? 
>> The "INSTALL" text file simply points me to the online documentation for 
>> standard installation.  Makefile.buildbot has a section saying python 
>> extras/makedeb.py build. Tried to run makedeb.py build but I got an 
>> error saying no module named scrapy. Not sure why the scrapy source has 
>> scrapy as a dependency.
>>
>>
> My usual strategy is to make a virtualenv, then run "pip install -e .".
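> For example (roughly what the screencast walks through, run from inside 
> the scrapy checkout):
>
> $ virtualenv .
> $ ./bin/pip install -e .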
>
> I made an ASCII-art screencast here of me doing that, but it is 
> accidentally super long. You can watch me try a bunch of things and fail a 
> lot, which will probably be instructive.
>
> Here's the link: https://asciinema.org/a/15161
>
> That took me way more work than I expected! I suspect some docs have 
> become out of date.
>
> Some things to note:
>
> * I created the virtualenv in "." (the current directory) and therefore 
> invoke pip from the ./bin/ directory.
>
> * https://oh-bugimporters.readthedocs.org/en/latest/intro.html is a 
> separate project I work on that depends on Scrapy, so its development 
> environment setup tips might prove useful to you, too.
>
> * Sometimes I seem to do nothing for 10 seconds at a time or longer; this 
> is because I'm installing things in a different terminal. Please feel free 
> to use the timing bar at the bottom to skip past the parts where I'm doing 
> nothing.
>
>  
>
>> 2. I could not find process_spider_exception() method in 
>> master/scrapy/middleware.py. 
>> Am I looking in the wrong place?
>>
>
> My usual way to find code like this is to use the GitHub web search, or 
> this command line tool:
>
> $ git grep "def process_spider_exception" | cat
> scrapy/contrib/spidermiddleware/httperror.py:    def process_spider_exception(self, response, exception, spider):
> scrapy/core/spidermw.py:        def process_spider_exception(_failure):
>
> So one of those seems to be the answer!
>
>  
>
>>
>> 3. How do I begin fixing this bug? A few tips on the direction I'd need 
>> to take would be great.
>>
>
> I haven't contributed to Scrapy, but from my experience with similar 
> projects, the answer is usually:
>
> * Write a test case that indicates that the bug is real -- so the test 
> should *fail* when you run it
>
> * Hack up the code until it passes
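> In sketch form, with toy code (not the real middleware -- just to 
> illustrate the loop):
>
>     def process_spider_exception(exception):
>         return None  # the buggy behaviour you believe is wrong
>
>     def test_exception_is_reported():
>         assert process_spider_exception(ValueError("boom")) is not None
>
> The test fails against the current code and passes once your fix lands; 
> after that it stays on as a regression test.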
>
> I realize you probably wanted more help than that! I don't have a huge 
> amount of experience with the Scrapy codebase, so that's the help I can 
> provide.
>
> Make sure to read through 
> http://doc.scrapy.org/en/latest/contributing.html and other docs in the 
> doc.scrapy.org site.
>
> Cheers, and welcome aboard!
>

-- 
You received this message because you are subscribed to the Google Groups 
"scrapy-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/scrapy-users.
For more options, visit https://groups.google.com/d/optout.