I get "ImportError: No module named Amazon1.items" when I try to start the crawler.

This is my setup:

    spiders folder
        __init__.py
        __init__.pyc
        items.py
        pipelines.py
        settings.py
        settings.pyc

    spider folder
        __init__.py
        __init__.pyc
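For reference, this is the layout that "scrapy startproject Amazon1" normally generates. For "from Amazon1.items import Amazon1Item" to resolve, items.py has to live in the Amazon1 package directory next to settings.py, not inside the spiders folder (the spider filename below is just a placeholder):

    Amazon1/                  # project root -- run scrapy from here
        scrapy.cfg
        Amazon1/              # project package
            __init__.py
            items.py
            pipelines.py
            settings.py
            spiders/
                __init__.py
                amazon1_spider.py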



My items.py:

    from scrapy.item import Item, Field

    class Amazon1Item(Item):
        title = Field()
        link = Field()
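A quick way to sanity-check the item on its own (assuming the corrected import above; the values here are made up) is that Scrapy items behave like dicts:

    from Amazon1.items import Amazon1Item

    # fields are set and read like dictionary keys
    item = Amazon1Item(title="Some product", link="http://www.amazon.com/...")
    print item["title"]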


My pipeline:

    class Amazon1Pipeline(object):

        def process_item(self, item, spider):
            return item
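Side note, unrelated to the ImportError: the pipeline only runs if it is enabled in settings.py. In recent Scrapy versions that is a dict mapping the class path to a priority (older versions took a plain list of class paths):

    ITEM_PIPELINES = {
        'Amazon1.pipelines.Amazon1Pipeline': 300,
    }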

My spider:

    from scrapy.spider import BaseSpider
    from scrapy.selector import HtmlXPathSelector
    from scrapy.http import Request
    from Amazon1.items import Amazon1Item


    # note: the spider class must not reuse the name Amazon1Item,
    # or it would shadow the item class imported above
    class Amazon1Spider(BaseSpider):
        name = "amazon1"
        allowed_domains = ["amazon.com"]
        start_urls = ["http://www.amazon.com/b?ie=UTF8&node=9187220011"]

        def parse(self, response):
            # the class is HtmlXPathSelector, not HtmlXPathSelectorPath
            hxs = HtmlXPathSelector(response)
            print hxs.select('//span[@class="a-color-price a-text-bold"]/text()').extract()
            items = []
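Since parse() currently just prints and leaves items empty, here is a rough sketch of how it could fill and return them with the same old-style BaseSpider API. The container XPath and class name are placeholders I made up, not selectors verified against Amazon's markup:

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        items = []
        # placeholder container XPath -- adjust to the real page structure
        for product in hxs.select('//div[contains(@class, "product")]'):
            item = Amazon1Item()
            item['title'] = product.select('.//h2/text()').extract()
            item['link'] = product.select('.//a/@href').extract()
            items.append(item)
        # a BaseSpider-era parse() can return a list of items
        return items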



What am I doing wrong?
