In the spirit of peace, love, and understanding, I'll start by saying that 
I appreciate Marnen's philosophy, and I agree that "if the test has any 
potential of touching the database, it is *absolutely necessary all the 
time* to reset the DB to a known state."

But in my application, I have a few huge tables that are used 
exclusively as reference data, are required for running any meaningful 
tests and are never modified by the application.

For example, one is a TimeDimension table with one entry per hour for 10 
years (that's 85K+ rows).  Each row is a "highly decorated TimeStamp" 
which maps a timestamp to day of week as a string, whether or not it's a 
holiday and 17 other columns of dimensional goodness.  It would take a 
LONG time to set it up and tear it down for each test.  In comparison to 
loading the table, MySQL's rollback mechanism is breathtakingly fast.
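To make "highly decorated TimeStamp" concrete, here's a rough sketch in plain Ruby of the kind of row such a table holds. The column names here are my own illustration, not the actual schema (which has 20-odd columns):

```ruby
# Illustrative sketch of one row of a time-dimension table: a plain
# timestamp expanded into pre-computed dimensional attributes.
def time_dimension_row(time)
  {
    timestamp:   time,
    day_of_week: time.strftime('%A'),   # "Monday", "Tuesday", ...
    month_name:  time.strftime('%B'),   # "January", "February", ...
    hour_of_day: time.hour,
    weekend:     time.saturday? || time.sunday?
    # ...plus holiday flags and the rest of the dimensional columns
  }
end

# One row per hour for 10 years is on the order of 87,600 rows:
start  = Time.utc(2010, 1, 1)
sample = (0...24).map { |h| time_dimension_row(start + h * 3600) }
```

The point of precomputing all of this is that queries can join against it instead of calling date functions row by row.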

[Before you ask why the heck a giant table is preferable to simply using 
the functions associated with Date and Time, I'll refer you to:
  http://www.ralphkimball.com/html/booksDWT2.html
  http://philip.greenspun.com/sql/data-warehousing.html
]

Apart from the large "read-only" tables, I avoid preloaded fixtures for 
all the reasons Marnen and others have pointed out: they're "brittle", 
they tend to obscure tests by separating "expected" and "observed" 
results, and so on.

But in this case, it really makes sense to preload the DB with large 
static tables.  The alternative -- reloading them for every test -- 
would not give me any more insight into the business logic of my 
application, and would be prohibitively slow.
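For what it's worth, the setup that makes this work is roughly the following sketch against the Rails-era TestCase API (the fixture names are made up, and it assumes the reference tables were seeded into the test DB once, outside the test run):

```ruby
# Sketch for test/test_helper.rb. Assumes the big reference tables
# (e.g. time_dimensions) were loaded into the test database once --
# say via a rake task -- and use a transactional engine like InnoDB
# so MySQL can actually roll back. Fixture names are illustrative.
class ActiveSupport::TestCase
  # Wrap each test in a transaction and roll it back afterward;
  # the preloaded reference tables are never touched.
  self.use_transactional_fixtures = true

  # Load only the small, per-test fixtures -- NOT the reference
  # tables, so they are neither truncated nor reloaded between tests.
  fixtures :users, :orders
end
```

The key point is that the fixture list never mentions the big tables, so each test pays only the (fast) cost of a transaction rollback rather than an 85K-row reload.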

Peace out.

- ff

-- 
You received this message because you are subscribed to the Google Groups "Ruby 
on Rails: Talk" group.
