Are you using pkgutil.get_data()? What exception do you get?
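For anyone following along, a minimal runnable sketch of the pkgutil.get_data() approach. The "myproject"/"conf/cities.txt" layout follows the snippets later in this thread; here a throwaway package is built in a temp directory so the call can be demonstrated end to end:

```python
import os
import pkgutil
import sys
import tempfile

# Build a throwaway package with a bundled data file so the call below is
# runnable; in the real project "myproject" would be the deployed egg.
tmp = tempfile.mkdtemp()
conf = os.path.join(tmp, "myproject", "conf")
os.makedirs(conf)
open(os.path.join(tmp, "myproject", "__init__.py"), "w").close()
with open(os.path.join(conf, "cities.txt"), "w") as f:
    f.write("Berlin\nMontevideo\n")
sys.path.insert(0, tmp)

# pkgutil.get_data() asks the package's loader for the file, so it works
# whether the package is a plain directory or a zipped egg.
raw = pkgutil.get_data("myproject", "conf/cities.txt")
cities = raw.decode("utf-8").splitlines()
print(cities)  # ['Berlin', 'Montevideo']
```

Note that the second argument is a path relative to the package directory, and the call returns bytes (or None if the loader can't find the file), so decode it yourself.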
On Mon, Mar 31, 2014 at 2:18 PM, David McClure <davidwilliammccl...@gmail.com> wrote:

> Hi Pablo,
>
> First of all, thanks so much for all the work on Scrapy! It's a fantastic
> tool. Anyway, I'm actually running into this problem too: I'm writing a
> crawler that needs to read an XML configuration file, which works fine
> when the spider is run from the command line. But when deployed to
> Scrapyd, it's as if the configuration file doesn't exist. My setup.py
> file looks like this:
>
> ```
> from setuptools import setup, find_packages
>
> setup(
>     name = 'myproject',
>     version = '0.1',
>     packages = find_packages(),
>     package_data = {
>         'myproject': ['content/*.xml', 'content/category/*.xml']
>     },
>     entry_points = {
>         'scrapy': ['settings = myproject.settings']
>     },
>     zip_safe = False
> )
> ```
>
> where content/ is inside the myproject module. And when I manually
> inspect the built package that deploys to Scrapyd, the XML files are
> included. Any idea what could be going on here?
>
> Thanks!
> David
>
> On Thursday, March 14, 2013 9:52:46 AM UTC-7, Pablo Hoffman wrote:
>>
>> This is a snippet I've used to make deploy work with static files:
>>
>> from setuptools import setup, find_packages
>>
>> setup(
>>     name='project',
>>     version='1.0',
>>     packages=find_packages(),
>>     package_data={
>>         'myproject': ['conf/*.json', 'conf/cities.txt'],
>>     },
>>     entry_points={
>>         'scrapy': ['settings = myproject.settings']
>>     },
>>     zip_safe=False,
>> )
>>
>> The conf/ dir is inside the myproject folder.
>>
>> And then you use myproject.__file__ to access the folder in your code.
>>
>> Pablo.
>>
>> On Sat, Feb 16, 2013 at 10:55 PM, Thimo Brinkmann <thimo.b...@googlemail.com> wrote:
>>
>>> Hey guys,
>>>
>>> I am having a really hard time deploying to scrapyd because of the
>>> static configuration files (let's call them dictionary files) that
>>> need to be used.
>>> I've played around with MANIFEST.in, pkgutil.get_data, StringIO and
>>> whatever else I could find for more than 4 hours now, but I really
>>> can't get it to work. Does anyone have a working example?
>>>
>>> I want to load two static JSON files from scrapyd, but it never seemed
>>> to find the files, whatever referencing method I used. Normally I used
>>> open() followed by the filename, and I tried putting the files in the
>>> egg's root as well as in the project's egg folder, but in no case were
>>> the files found. If anyone knows how to do this with a full example, I
>>> would be very grateful.
>>>
>>> Kind regards,
>>> Thimo
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "scrapy-users" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to scrapy-users...@googlegroups.com.
>>> To post to this group, send email to scrapy...@googlegroups.com.
>>> Visit this group at http://groups.google.com/group/scrapy-users?hl=en.
>>> For more options, visit https://groups.google.com/groups/opt_out.
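Pablo's myproject.__file__ suggestion above can be sketched like this. The package and file names are the ones from his snippet, and the approach assumes zip_safe=False (as in both setup.py files above) so the deployed egg is unpacked to a real directory that open() can read from; here a stand-in package is created in a temp directory so the snippet is self-contained:

```python
import os
import sys
import tempfile

# Stand-in for the deployed egg: a package directory with a conf/ folder.
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "myproject", "conf"))
open(os.path.join(tmp, "myproject", "__init__.py"), "w").close()
with open(os.path.join(tmp, "myproject", "conf", "cities.txt"), "w") as f:
    f.write("Berlin\nMontevideo\n")
sys.path.insert(0, tmp)

import myproject

# Resolve conf/ relative to the package itself, never the current working
# directory -- scrapyd runs spiders from its own directory, so a bare
# open("conf/cities.txt") will not find the bundled files.
conf_dir = os.path.join(os.path.dirname(os.path.abspath(myproject.__file__)), "conf")
with open(os.path.join(conf_dir, "cities.txt")) as f:
    cities = f.read().splitlines()
print(cities)  # ['Berlin', 'Montevideo']
```

This is likely why the files "don't exist" when deployed even though they are in the egg: relative paths resolve against scrapyd's working directory, not the package.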