Hi All,

While we're talking about methodology, here's a brain dump about requirements and functional specifications for Spectra that I've had sitting around in draft for a month or so (this is not official Macromedia stuff).  It's a combination of my Spectra development experience and ideas from the book Practical Software Requirements, Kovitz, Manning Publications, ISBN 1-884777-59-7 (truly excellent book).  The brain dump covers in detail the tasks to get through at the pointy end (start) of a Spectra project - it fits in pretty well with the official methodology.  It looks like a lot, but I really believe it's not worth cutting corners at this stage of the project - do the whole thing on a whiteboard in 3 days if necessary, but do it.  Very interested to hear your (constructive) comments/feedback - it's definitely a work in progress.
 
Cheers,

Robin Hilliard
Senior Product Support Engineer - Asia Pacific
Macromedia, Inc.


Task 1 - Platform

Very little to do with actual requirements.  We know we're using Spectra, so get a box, install the O/S (or whatever SOE (standard operating environment) is required by the hosting organisation), CF and Spectra, apply all hotfixes, service packs and security bulletins, test it, then ghost the disk image so that you can build production or staging any time you want.  Keep the tapes/CDs in a safe!  You need to do the same for the DB server.  Set up the production and development architecture (including source control).  After this it should be a no-brainer for the app to install and run on top.  A small amount of documentation about the platform (hardware and network configuration, O/S versions, configuration cookbook) should be prepared for inclusion in the technical specification later on.  The documentation should be sufficiently detailed to enable a skilled administrator not involved in the original platform set-up to reproduce the same configuration (in case you're HBAB - "hit-by-a-bus").  It would be well worth testing this for real.

Task 2 - Functional Requirements (parallel with Task 1)

I am a big fan of Practical Software Requirements, Kovitz, Manning Publications, ISBN 1-884777-59-7 (thanks Graham).  He separates functional requirements (what should happen in the real world once the system is in place) from functional specifications (what the outward behaviour of the system needs to be to make this happen).  Functional requirements have to be defined in terms of real world things and events, so that the business stakeholders can understand and sign off the requirements (who are your stakeholders - well, they sign the cheques!).  They also allow the designers/developers to really understand why the system is being built and what's important to the customer.  Here is a rough list of these things (a glossary) that need to be described before we can write requirements for a typical Spectra (i.e. content management) system:

  • Site Users - how many; who are they; categories of user; why will they use the site; when do they use the site; how often (scalability requirements come from this - the customer must have some idea how many users they need for their business case to work); are there particular combinations of things they will regularly want to do (do not go into a full "use case" narrative at this point!); skill level; location; internet connection details (browser, platform, speed).
  • Content Creators - how many; who are they; roles & responsibilities (e.g. create, edit, mentor, approve, monitor, escalate); skills (can they write HTML?); what sort of mistakes can they make; location; internet connection details.
  • Syndication Sources - OK to be technical - file formats & protocols; how often are they updated; where do they source their content; if sourced from the real world, what sort of delay is there between the real-world event and the content becoming available.  This could also include existing databases that are going to be imported into the content management system in a one-off migration (which is a type of syndication).
  • Syndication Consumers - similar to Site Users, except more technical details about format and protocol they want content in.
  • Content Object Model - this is the "virtual" content model that the system "realises" in the same way that Word realises a document or Excel realises a spreadsheet.  What are the types of content the system is to manage?  Examples of these include images, files, events, products, product families, brands, news items, suppliers, offices, case studies, FAQs, organisations, services, entire web pages, subscriptions, users and promotions.  Normal database modelling techniques are used at this point.  For each type of content:
    • Explain what it is;
    • List its properties (name, colour, description,...);
    • Estimate number of instances;
    • Rate of updates;
    • Who owns/is responsible for the content;
    • Any business directions that might cause the content description to change and how;
    • Relationships with other types of content and cardinality of relationships;
    • Any analogs of the content type in the real world or other systems and how the content is related to the real world analog (part number, event date and name, primary keys/uids). 
  • For each property give:
    •  Some sample values;
    •  Valid ranges (e.g. 1 < x < 99, in {red, green, blue}, text up to 1000 characters). 
  • Another important part of the content model is how content items are categorised.  Examples of categories include region, brand, event type and product category.  What categories can be allocated to a content type?  Can more than one value from a category be assigned to a content object?  In an E-R or UML diagram categories can be drawn as a content type with a yes/no field for each category value, and relationships with content object types.  If you start associating properties with a category other than its name, it is probably another content object type.  Categories are used to define query requirements in the next step (e.g. retrieve products by product type, retrieve all BBQ events). 
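To make the glossary concrete, here's a sketch (in Python, purely illustrative - the "Event" content type, its property names, sample values and valid ranges are all invented, not from any real project) of what one glossary entry might capture:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical glossary entry for an "Event" content type.  The Region
# category values below are invented examples.
VALID_REGIONS = {"ANZ", "Asia", "Americas", "Europe"}

@dataclass
class Event:
    name: str            # sample: "Sydney BBQ Demo"; text up to 100 characters
    description: str     # text up to 1000 characters
    event_date: date     # real-world analog: the date the event actually runs
    region: str          # exactly one value from the Region category
    brand_ids: list = field(default_factory=list)  # many-to-many with Brand

    def validate(self):
        """Check the valid ranges listed for each property."""
        errors = []
        if not (0 < len(self.name) <= 100):
            errors.append("name must be 1-100 characters")
        if len(self.description) > 1000:
            errors.append("description must be at most 1000 characters")
        if self.region not in VALID_REGIONS:
            errors.append("region must be one of %s" % sorted(VALID_REGIONS))
        return errors
```

Writing even this much down per content type forces the "who owns it, how many instances, what are the valid ranges" questions to be answered rather than deferred to the build.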
Make sure that the frontline business experts are involved in this process and review this glossary for wrong assumptions or mistakes - it's very, very important!  Only now are we ready to state what the requirements for the system are, in the terms of the things we've just described.  Requirements should have IDs so that they can be checked off later.  Update the glossary as necessary; don't put in a requirement mentioning something that hasn't been defined.  The following groups of requirements should be considered:
  • Queries the Site Users can make against the Content Object Model:
    • Pseudo-select statements are fine as long as the business stakeholders can understand them - statements may include properties from the user profile to allow for personalisation;
    • Which users can use the query (security);
    • How quickly does the query response need to arrive;
    • How are the results filtered (by user type, current region etc);
    • How current must the results be (the delay between a content update and its availability in query results);
    • How often will it be used, and
    • Sample output from the query (rough - this is just to inform the people designing the interface later on). 
Watch out for complex queries that may need extra data and processing, and make sure they are fully described - e.g. full text search, or a geographic locator that sorts suppliers from category x by distance from the user's postcode.  Get real users to review the results on paper - are they useful results?
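As an illustration of the level of detail that works well, a query requirement like "retrieve the teasers for the three most recent news items in the user's region" can carry a pseudo-select, and its intent can be sanity-checked against sample data on paper.  The sketch below is hypothetical - the property names and data are invented for illustration:

```python
# Pseudo-select as it might appear in the requirement (readable by the
# business stakeholders):
#   SELECT teaser FROM NewsItem
#   WHERE region = user.region         -- personalisation from the user profile
#   ORDER BY publish_date DESC LIMIT 3

def recent_news_teasers(news_items, user_region, limit=3):
    """Return the teasers for the user's region, newest first."""
    in_region = [n for n in news_items if n["region"] == user_region]
    in_region.sort(key=lambda n: n["publish_date"], reverse=True)
    return [n["teaser"] for n in in_region[:limit]]
```

Running sample data through a toy like this with real users is exactly the "are they useful results?" check above, before any site code exists.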
  • What information does the site implicitly gather about the user?  How long is this stored? How is it reported?
  • Site presentation.  Let the graphic designers do what they're paid for.  We can describe requirements for branding, navigability and visual impression without designing the site for them.  Describe rules/preferences for how logos etc appear on the site and corporate colours, and refer to existing corporate guidelines if appropriate.  Navigability is driven by the way users use the site - what things do they do, in what order/combination, how often?  Should the site structure (sometimes called "site taxonomy") be based on a particular set of categories from the content object model, or should that be left to the designers to best match how users use the site?  A very good idea is to test your site taxonomy on paper or a prototype with people in your target user group to see if it makes sense.  Show them a hierarchy of categories and ask "where would you look for an article on better pest control?" or whatever.  A technique used by Jason Davey (Artistic Director at ZIVO, then Bullseye) to specify and test "visual impression" was to ask the marketing stakeholders to identify the top three adjectives they would like site users to associate with their site (e.g. reliable, approachable, fun).  Once this requirement was signed off they would mock up different "look and feel" prototypes and test them with a group of site users, who were given a multiple choice questionnaire and asked to rank the adjectives they associated with each prototype to find the best match.  It goes to show that you can quantify and test anything if you want to.
  • Operations Content Producers can perform on the Content Object Model.  Well, this is probably why the client bought Spectra - they've seen the demo.  Needless to say they can add, edit, delete and categorise content items, and they can get around the site in design or display mode.  After that, things get more tricky and likely to be overlooked.  Things to consider are:
    • Who is allowed to edit, review and approve the various content types and categories, and who can edit these user permissions;
    • Can Content Producers modify rules by which content is selected? If yes, name the rules, describe the query and how it can be changed (examples might be select any number of any type of content, to the most recent three items of some content type filtered by a category specified by the Content Producer).  You can think of it as having a "rule" content type;
    • Are there any extra queries beyond those available to Site Users, e.g. search for any object by name, type and category?  Maybe a low skill business user from a regional office needs to be able to see all the suppliers in their region and update them with a dedicated tool.  How do you find content that is due for review, or old versions of a content item, and
    • How do Content Producers make changes "go live"?
  •  Content Object Model behaviour rules. Closely related to the previous section:
    • What is the lifecycle of a content item - edit->finished->archived->deleted;
    • What state does content need to be in for it to be visible in Site User queries, edited, reviewed or sent live;
    • Are past versions of objects stored somehow;
    • Are changes audited;
    • What is the lifecycle of a change (it may be different to a content item). Do changes go live immediately, do they need approval, do they involve more than one content object, and
    • If updating an existing object, does the object remain on the live site whilst the revised version is edited and approved?
  • Data entry procedures.  Requirements that lay out how the Content Object Model should be updated by the Content Producers in cooperation with the System to achieve a correspondence with an event or entity in the real world:
    • How quickly should real world news/events/requests etc be input by the Content Producer; what is their responsibility to check authenticity, legal requirements etc outside the System, and
    • What validation must be carried out by the system on the data entered (spell check, valid ranges as per content model) to mitigate risk of entry errors commonly made by Content Producers?
  • Syndication mappings:
    • How is content from syndication sources mapped into the Content Object Model, and
    • How is content in the Content Object Model mapped to Syndication Consumers (site partners)?
  • Syndication behaviour rules:
    • Rules to select content for syndication;
    • Validation, if any, of inbound content, what should happen if content is rejected;
    • Schedules for push/pull of content, and
    • Security.
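The lifecycle and behaviour rules above lend themselves to a small state-machine description.  Here's a minimal sketch - the states and transitions are assumptions for illustration, not the official Spectra lifecycle:

```python
# Content item lifecycle: edit -> finished -> archived -> deleted, with a
# rejected review able to bounce a finished item back to edit.  All names
# here are invented for the example.
TRANSITIONS = {
    "edit":     {"finished"},
    "finished": {"edit", "archived"},  # rejected review sends it back to edit
    "archived": {"deleted"},
    "deleted":  set(),
}

# Behaviour rule: which states are visible in Site User queries?
LIVE_STATES = {"finished"}

def can_transition(current, target):
    """True if the lifecycle rules allow moving from current to target."""
    return target in TRANSITIONS.get(current, set())

def visible_to_site_users(state):
    return state in LIVE_STATES
```

Writing the rules down this explicitly flushes out the questions in the list above - for example, whether an old "finished" version stays live on the site while the revised copy is being edited and approved.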
There will probably be requirements for functionality that doesn't fit under the content management requirement headings above.  It is important to identify this functionality and cover its requirements separately.  Some requirements may turn out to be for other systems, or may be better served by organisational or training solutions that have nothing to do with I.T.  Resist the urge to shoehorn everything into a content management problem - as they say, when you've just bought a hammer, everything looks like a nail.  How would you approach writing requirements for such functionality?  What questions do you need to answer - how do you frame the problem?  Kovitz identifies five common "problem frames" with guidelines as to what sort of questions need to be answered to formulate requirements for a solution to such problems.  The frames can be used "to recognise familiar problems when you see them and gain a head start on unfamiliar problems by varying the familiar".  Here is a brief description of each problem frame (Kovitz, pages 74-75):
  • Software that solves an information problem answers queries about a certain part of the real world.  Documenting an information problem involves describing the types of information requests to be satisfied, the part of the real world to which the requests apply, and how the software can get access to that part of the real world.
  • In a control problem, the software is responsible for ensuring that some part of the world behaves in accordance with specified rules.  Documenting a control problem involves describing the objects that inhabit that part of the world and the causal rules they obey, the rules according to which they are supposed to behave, and the phenomena shared with the software through which the software can monitor the state of the world and initiate causal chains that result in the rules being followed.
  • To solve a transformation problem, the software generates output data that maps to input data in accordance with specified rules.  Documenting a transformation problem involves describing the entire set of all possible inputs and the mapping rules that indicate, for each possible input, the correct output.
  • In a workpiece problem, the software serves as a tool for creating workpieces.  Documenting a workpiece problem consists of describing the objects to exist within the computer and the operations that users can perform on them.
  • Finally, in a connection problem the software must simulate or make do with a connection between domains that do not really share phenomena...[in] one form of connection problem the principal information to document is the delay and distortion characteristics of the connection domain, and the behavioural characteristics of the domain of interest, so that the system can detect invalid data received from the connection domain.
The content management requirements outline I described above was composed of several such frames.  Likewise, when faced with an unfamiliar problem try to recognise these frames and use them as guidelines to determine what questions should be answered.  For instance, a user registration page allowing users to create or update their user profile might be considered a workpiece problem with the user profile as the workpiece.  If there were real world details such as credit card and tax file numbers being entered there may also be some elements of a connection problem involved with the user as the fallible connection. Updating information in a legacy customer database based on this information would involve a transformation problem, whereas automatically creating a job in a call centre workflow to get staff to send cards to customers on their birthdays would be a control problem.  The ability to query the accounts system to find out why we're spending so much #%@$%! money on cards and postage would be an information problem...
 
The stakeholders and business experts may not know what they want in all cases.  The only way to really resolve these situations is to propose some options, then build and test prototypes of the options in a time-boxed mini-project to help them come to an informed decision.  Once the requirements are finalised the business should test them to make sure that they support the business model, or do whatever is required to justify shelling out lots of money to build the system.
 
If the project is to be broken into multiple releases (highly recommended - remember, it may mean more releases, but each release is shorter and involves less risk), each requirement should be prioritised as to which release it should be in.  My strong recommendation for a Spectra project is to implement the content data entry and inward syndication functionality in the first release, and leave the first elements of the public site to the second release.  The advantages of this approach are:
  • It allows content entry to proceed in parallel with the second release, taking it off the critical path.  On a content rich site it can take longer to enter and review the content than it takes to write the site code;
  • It is a good test of the platform and the Content Object Model, without the pressure of the site being public;
  • The Content Producers get exposure and training on the system and have time to organise before the public launch, and
  • Development of the public site in the second release will be much easier with live data available for testing.
As with all useful documentation, presentation isn't as important as correctness, clarity and completeness.  Sign off on whiteboard printouts if necessary; as long as the requirements are understood by both the developers and the business, the document is achieving its purpose.  Once the prioritised requirements document is signed off, the Functional Specification of the first release can commence.
 
Task 3 - Functional Specification (augmented for each release)
 
The functional specification describes the nitty gritty of the outward behaviour (screens, file formats, navigation, clicking here does this) of the System.  Its audience is the authors of the technical specification and the developers.  It is not a document for the business readers - they will be involved in testing the specification later on, but they do not have the UI or analysis skills to participate directly in its creation and review - that's why they hired the developers.  As long as the system meets the requirements listed in the previous document they should be happy. 
 
A syndrome to recognise and avoid is the "all documentation is a PowerPoint presentation for the business stakeholders" syndrome.  In extreme cases this can lead to starting the build with a glossy 9-slide-animated-bullets spectacular technical specification which of course is completely ignored.  The functional and technical specifications are working documents.  Stakeholder communication is an important, separate issue - by all means keep the business community informed of developments and involve them in testing as appropriate, but don't hijack the specification documents for this purpose.
 
Traditional website specifications would mostly be made up of screen descriptions, where the links on the screen go to (commonly called "control action response" tables) and a site map to tie the screens together.  The database structure would be left for the developer to infer during the technical specification, or more usually during the coding right up to the end of testing!  Typically there would be some redundancy in the descriptions (i.e. navigation menus repeated, layouts for product descriptions appearing on multiple pages).  Another problem with this type of description is that as web sites become more dynamic a page-by-page/sitemap spec is progressively less useful because it can only describe a snapshot of the site in one configuration instead of focusing on how the site is to be made dynamic.  It's like trying to document the requirements for Microsoft Word by providing a few finished documents written in Word and leaving it up to the reader to work out the details of the tool that put the documents together.
 
Now to my point: the Spectra architecture is not just about saving coding time.  It can save time documenting your functional specification as well.  Since we know we are developing a Spectra site at this stage, it is sensible to make use of Spectra concepts such as content types, methods, metadata and containers that make the document more succinct and easy to understand.  They also make it easier for web designers to understand how the page will be constructed, so they can factor that into their HTML during the build.  A basic framework (subject to my later comments about factoring out repeated functions) for a Spectra functional specification follows:
  • For each page:
    • Page name used in cross-references;
    • URL if necessary to specify for navigability;
    • Page layout - fields, buttons, containers need to be labelled or numbered.  The contents of a container can be left empty (just draw a box) as the method output is described elsewhere;
    • Control definitions - names, type, how are their values and available options populated.  For a container you describe the rule that populates the container, e.g. select the teasers for the most recent 3 news items - if the rule is used more than once in the site describe it separately and include a reference to it from the container description;
    • Control Action Response (CAR) diagram - a table with three columns (Control, Action and Response).  Here you detail exactly how each control is supposed to respond when you click, hover over, enter text into, etc. the control (any changes to the content model or other internal states of the system resulting from the user's actions should be described), and
    • References to related requirements in the requirements document.
  • For each rule (if rules are used in multiple places) - a name; a description of the rule's select logic, including the user-defined parameters; and a page-style description (screen layout, CAR diagram etc) of the interface the user uses to set those parameters.
  • For each content type:
    • Content type name used in cross references;
    • Name of original content type(s) in the requirements specification (it may not be a 1-1 mapping);
    • Description - some types are self-explanatory, other types may have taken some discussion to work out what they are - and aren't.  List the rules for what makes something an "X" - give examples, use terminology from the requirements document;
    • For each property:
      • Name used in cross references;
      • Type (text, number, date) - if an enumerated type is used in more than one place (e.g. person title, country), refer to the separate definition of the enumerated type;
      • Size (number of characters), and
      • Description.
    • For each method (if types are tables and objects are rows, you can think of a method as a complex calculated field in each row - a useful way to explain it to non-Spectrites):
      • Same as a page, except there is no URL, and
      • Include edit methods.
  • For each wizard (PLP):
    • Name used in cross-references;
    • List of properties referred to and updated during execution of the PLP;
    • A step flowchart with conditional logic at branch points expressed in terms of PLP and other properties, and
    • Page description for each page in the PLP.  Make sure any changes to the content object model or other internal states of the system are described.
  • For each workflow:
    • Name used in cross-references;
    • Type of workflow "artefact" (the thing/content object being passed through the workflow);
    • A dependency diagram showing which steps must be finished for a subsequent step to proceed;
    • For each workflow step:
      • Rules for allocating workflow step to a person;
      • How is the person informed of their new work (cross-reference to a task-list screen, for example);
      • Page description or cross-reference to PLP, and
      • Rules for when a workflow step is complete.
  • For each enumerated type (sometimes called "code-table" or "lookup-table"):
    • Name used in cross-references, and
    • List of keys (short values stored in database, passed in form fields etc), and display values (strings used in drop-downs etc).  May require short and long versions of display values, or strings in multiple languages.
  • List and describe the internal system states that aren't covered by the content object model but are referred to by pages, PLPs and other interface components.  These include user profiles, sessions, site "modes" (design mode, no frames mode) and "flags" (are we logged in?).  This may sound tech-spec-ish, but like the content model you can think of these things as being part of a "realised" domain outside of the system, accessible only via the system.  You have to describe these things if you want to describe personalisation, security and site navigation behaviours in a rigorous, meaningful way.  State diagrams can be useful to help developers understand how these states are changed, by summarising the events that have an effect on the state in one place.
  • For each scheduled task:
    • The schedule for when it runs, and a description of what it does.
  • For each function not described so far:
    • A clear description of the outward behaviour of the interface (e.g. file format), and changes to the content object model or other internal states of the system that are triggered by outside actions.  Use whatever notation is appropriate to explain the behaviour clearly to the reader (CAR diagram, state diagram, parsing tree etc).  Don't get into describing the implementation details - that's for the technical spec.
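To show the level of detail intended for the enumerated type entries above, here is a hypothetical "code table" sketch - the keys, languages and display strings are invented for illustration:

```python
# Enumerated type "Country": short keys as stored in the database and passed
# in form fields, with per-language display strings for drop-downs etc.
COUNTRY = {
    "AU": {"en": "Australia", "ja": "オーストラリア"},
    "NZ": {"en": "New Zealand", "ja": "ニュージーランド"},
    "JP": {"en": "Japan", "ja": "日本"},
}

def display_value(code_table, key, language="en"):
    """Resolve a stored key to the display string for the user's language."""
    return code_table[key][language]
```

Defining the enumerated type once and cross-referencing it from every property that uses it keeps the "everything defined once and only once" property the specification needs.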
The functional specification is a dry facts document - "this page contains these controls populated from these fields in this content object type."  It is used as a look-up reference by people coding user interfaces and file generators/parsers.  Accurate cross references, lack of ambiguity and everything defined once and only once are important attributes of a good functional specification.
 
To get the most benefit out of preparing a functional specification you need to test it.  Otherwise mistakes made in the functional specification will be propagated through into the build and the effort to write the functional specification will have been largely wasted.  Key scenarios should be picked to illustrate how the interface realises all the requirements in the requirements document to be implemented in the current release.  Mock-ups (i.e. storyboards showing sequences of screens with populated containers and exhibiting the required navigation behaviour) of these scenarios should be created and shown to end users to get feedback on the interface.  The final scenario mock-ups can be included as an appendix in the functional specification and used to communicate progress to the business stakeholders.  Again, document signoff is critical before work proceeds.
 
I've made a big deal about document signoffs.  In the real world clients will invariably have changes to make after signoff.  This will be minimised if release scopes are kept small and comprehensible, but it will still happen.  Acknowledging this, a formal change process should be put in place which recognises both the need to make such changes and the impact that each change has on development work in progress.  Each change needs a scope, an impact analysis (which documents and code need updating) and a quote for the extra labour costs.  If this expectation is set up front with the client and the process is followed, I have found that the project runs much more smoothly and everyone stays happy.
 
Tech spec thoughts to follow at a later date... (yes there's more to think about - oh joy!)
