1. Your approach is right; it may just be a bit more involved than 
post-processing the files afterwards.
2. I think FeedExporter 
<https://github.com/scrapy/scrapy/blob/master/scrapy/contrib/feedexport.py#L134> 
is basically a pipeline, so you can reuse the item exporters the same way 
(a minimal sketch is below this list).
3. You want to extend the crawl command with a custom one 
<http://doc.scrapy.org/en/latest/topics/commands.html#custom-project-commands> 
(see the command sketch after this list).
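
For 1) and 2), a pipeline that keeps one exporter per item type is 
enough. A minimal sketch, assuming item classes like BookItem / 
AuthorItem / PublisherItem and hard-coded CSV output (in newer Scrapy 
versions the exporters live in scrapy.exporters instead of 
scrapy.contrib.exporter):

from scrapy.contrib.exporter import CsvItemExporter

class PerTypeExportPipeline(object):
    # Sketch: write each item class (book, author, publisher) to its own file.

    def open_spider(self, spider):
        self.files = {}
        self.exporters = {}

    def process_item(self, item, spider):
        key = type(item).__name__.lower()   # e.g. 'bookitem'
        if key not in self.exporters:
            f = open('%s.csv' % key, 'wb')  # one file per item type
            exporter = CsvItemExporter(f)
            exporter.start_exporting()
            self.files[key] = f
            self.exporters[key] = exporter
        self.exporters[key].export_item(item)
        return item

    def close_spider(self, spider):
        for exporter in self.exporters.values():
            exporter.finish_exporting()
        for f in self.files.values():
            f.close()

Swapping CsvItemExporter for JsonLinesItemExporter (or choosing one 
based on a setting) gives you the CSV/JSON flexibility.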
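For 3), the custom command can subclass the stock crawl command, add 
its own option and push the value into the settings, where a pipeline 
can read it via crawler.settings in from_crawler(). A rough sketch; the 
module path, option name and setting name are made up, and the 
optparse-style add_options matches the Scrapy versions of this era:

# myproject/commands/crawl_export.py
# settings.py needs: COMMANDS_MODULE = 'myproject.commands'
from scrapy.commands.crawl import Command as CrawlCommand

class Command(CrawlCommand):
    # Usage: scrapy crawl_export <spider> --export-format=json

    def add_options(self, parser):
        CrawlCommand.add_options(self, parser)
        parser.add_option('--export-format', dest='export_format',
                          default='csv',
                          help='format for the per-type export pipeline')

    def run(self, args, opts):
        # expose the option to pipelines through the settings object
        self.settings.set('PER_TYPE_EXPORT_FORMAT', opts.export_format)
        CrawlCommand.run(self, args, opts)

The pipeline above could then pick CsvItemExporter or 
JsonLinesItemExporter depending on 
crawler.settings.get('PER_TYPE_EXPORT_FORMAT').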

On Wednesday, August 6, 2014 05:31:51 UTC-3, Gabriel Birke wrote:
>
> I have a spider that returns different item types (books, authors and 
> publishers) and would like to export each item type to its own file, being 
> flexible about the format (CSV or JSON). My first approach would be to use 
> a pipeline class, but then I lose the easy functionality of specifying the 
> feed exporters on the command line. Now here is a bunch of questions:
>
> 1) Is this the right approach or should I implement this differently?
> 2) How can I reuse the exporters in a pipeline?
> 3) How can I pass command line arguments to my pipeline class?
>
