Exactly how can someone develop a client for blind and partially
sighted people while staying within your new rules? Or are they stuck
with using a screen reader on the Twitter web pages (which are
notoriously inaccessible)?

Because speech synthesis is slower than reading, it will be necessary
to drop some of the proprietary notices and marks rather than
repeating them every time, thereby breaking term 4B. And exporting
tweets to a datastore to drive speech synthesis might break 4A.

I am not sure what 5C means in practice, but it would be important to
reproduce every part of the Twitter service that makes sense when done
through sound and typing rather than vision.

Most disability aids are distributed through charities or social
service programs, where people are paid to distribute them and to
teach users how to operate them. That breaks 5B.

It is common to let disabled users store pre-written pieces of text
that they can send with a single keystroke (or a mouth blow, if they
cannot type on a normal keyboard). That would break 5D.
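As a sketch of the kind of feature meant here (the key bindings, phrases, and function names are illustrative; a real client would authenticate with OAuth before posting to the Twitter API):

```python
# Minimal sketch of a canned-phrase feature for an accessibility client.
# A single keystroke -- or a mouth-blow switch mapped to a key code --
# selects a stored piece of text to send.

CANNED_PHRASES = {
    "1": "Yes, please.",
    "2": "No, thank you.",
    "3": "I need assistance with this.",
}

def phrase_for_key(key):
    """Return the stored text bound to a keystroke, or None if unbound."""
    return CANNED_PHRASES.get(key)

def send_tweet(text):
    # Placeholder for the real API call, e.g. an authenticated
    # POST to the statuses/update endpoint with status=text.
    print("Sending:", text)

if __name__ == "__main__":
    text = phrase_for_key("1")
    if text is not None:
        send_tweet(text)
```

Under term 5D as read above, even this trivial convenience — sending stored rather than freshly typed text — would apparently be disallowed.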

Twitter developer documentation and resources: http://dev.twitter.com/doc
API updates via Twitter: http://twitter.com/twitterapi
Issues/Enhancements Tracker: http://code.google.com/p/twitter-api/issues/list