Hi Everyone,

Many thanks to all who could make it to last Thursday's Python Northwest
meeting for a great evening.  Below is a potted summary.  Next meeting will
be a coding meeting on 15th March.  See you there ...

"""
In a break from the norm, we went round the table and talked about
interesting Python libraries we'd recently seen or used.  In order:

Safe presented pyPdf (http://pybrary.net/pyPdf/), a library for
manipulating PDF files (e.g. chopping and combining pages, extracting text
and decrypting files) and commented on its usefulness in unit testing PDF
report generation.  The pyPdf library is not designed for generating PDFs,
so reportlab (http://www.reportlab.com) was also presented, and in
particular, a demonstration of its ability to generate a wide variety of
barcode formats.

Dave loves Lego!  Take a look at http://bricxcc.sourceforge.net/nbc/ for
some of the stuff he's been playing with and his quest to make it all
programmable from Linux.  Dave also runs a server which stores his better
half's comics in SVG format and delivers them as PNGs (which are then
stored as blobs in a database) ... he attempted to show us how his server
would crash as soon as he requested a new comic for conversion to PNG, but
alas his demonstration worked perfectly.

Daley gave us a well-received demonstration of Lettuce (
http://packages.python.org/lettuce/index.html), a Python port of Cucumber,
the Ruby-based Behaviour Driven Development tool.  Expected behaviours are
described in a "Feature File" in a natural language called Gherkin.  The
steps for these behaviours are expressed in Python in a "Steps File".
Lettuce runs tests based on these two files.  Apart from amusing us with
the vegetable references, Daley highlighted how, in an Agile environment, the
Feature File could be edited by non-programmers such as testers and
end-users to describe requirements and validate code delivered by
programmers.
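
As a toy illustration of the idea (a hypothetical regex-based matcher
written for this summary, not Lettuce's actual implementation), binding a
natural-language step to a Python function might look like:

```python
import re

# A line from a hypothetical Gherkin "Feature File" (plain natural language).
feature_line = "Given I have entered 50 into the calculator"

# The "Steps File" binds patterns to Python functions.  This decorator is a
# toy stand-in to show the idea; Lettuce provides its own step decorator.
STEPS = []

def step(pattern):
    def decorator(func):
        STEPS.append((re.compile(pattern), func))
        return func
    return decorator

@step(r'I have entered (\d+) into the calculator')
def have_entered(number):
    return int(number)

def run_step(line):
    # Find the first registered step whose pattern matches the line
    # and call it with the captured groups.
    for pattern, func in STEPS:
        match = pattern.search(line)
        if match:
            return func(*match.groups())
    raise LookupError("No step matches: %r" % line)

print(run_step(feature_line))  # prints 50
```

The point Daley made is that only the natural-language half needs touching
by testers and end-users; the pattern-to-function binding stays with the
programmers.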

Jonathan presented Scrapy (http://scrapy.org/), a web crawling and scraping
framework based on Twisted (http://twistedmatrix.com/trac/).  We were shown
some demo code showing how the crawler is built from pipelines for
sequential processing, with control over many aspects of the crawler's
behaviour.  Scrapy also notably has an interactive console for scraping
experiments.  The scraper is based on XPath, which everyone agreed was way
down their list of favourite tools.  We discussed combining the
Scrapy crawler with either Beautiful Soup (
http://www.crummy.com/software/BeautifulSoup/) or lxml (http://lxml.de/).
 We were also shown some RRDtool (http://oss.oetiker.ch/rrdtool/) diagrams
of server load while the web crawler was active which prompted Robie to
mention du2rrd (http://oss.oetiker.ch/optools/wiki/du2rrd), a useful
sysadmin tool for disk monitoring.
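
You don't need Scrapy installed to get a feel for the XPath part: the
standard library's xml.etree.ElementTree supports a limited XPath subset.
A rough sketch (not Scrapy's own selector API, and using a made-up page):

```python
import xml.etree.ElementTree as ET

# A tiny, well-formed page standing in for a crawled document.
page = """
<html>
  <body>
    <ul>
      <li><a href="/python">Python</a></li>
      <li><a href="/scrapy">Scrapy</a></li>
    </ul>
  </body>
</html>
"""

root = ET.fromstring(page)

# ElementTree's limited XPath: descendant search with .//, plus
# attribute access on the matched elements.
links = [(a.get("href"), a.text) for a in root.findall(".//li/a")]
print(links)  # [('/python', 'Python'), ('/scrapy', 'Scrapy')]
```

Real pages are rarely well-formed XML, which is where Beautiful Soup or
lxml come in as the parsing layer under the same kind of queries.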

Later in the pub, Ben and Robie hashed out why Tau was right and Pi was
wrong (http://tauday.com/), and Ben introduced us to GeoGebra (
http://www.geogebra.org/cms/) and the naturally beautiful sunflowers
produced when using Phi (http://en.wikipedia.org/wiki/Golden_ratio) as an
input.
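
The sunflower pattern comes from the standard phyllotaxis model (Vogel's):
seed i sits at radius proportional to the square root of i, rotated by the
golden angle, which is derived from Phi (and is naturally expressed as a
fraction of Tau).  A minimal sketch in plain Python rather than GeoGebra:

```python
import math

PHI = (1 + math.sqrt(5)) / 2   # the golden ratio, ~1.618
TAU = 2 * math.pi              # Tau: one full turn

# The golden angle: divide a full turn in the golden ratio and take
# the smaller piece, ~137.5 degrees.
GOLDEN_ANGLE = TAU * (1 - 1 / PHI)

def sunflower(n):
    """Vogel's model: seed i at angle i * golden angle, radius sqrt(i)."""
    points = []
    for i in range(n):
        theta = i * GOLDEN_ANGLE
        r = math.sqrt(i)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

print(round(math.degrees(GOLDEN_ANGLE), 4))  # 137.5078
seeds = sunflower(500)
```

Plot those points and the familiar interlocking spirals appear; any other
angle quickly degrades into spokes or rings.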

A huge thanks to all who came and joined in and made it a great evening!
"""

All the best,

Safe


Safe Hammad
http://safehammad.com
@safehammad

-- 
To post: [email protected]
To unsubscribe: [email protected]
Feeds: http://groups.google.com/group/python-north-west/feeds
More options: http://groups.google.com/group/python-north-west
