This is what the list is all about--teaching each other what we know as
individuals.
Maria Campbell
[email protected]
When the power of love overcomes the love of power, the world will know peace.
--Attributed to Jimi Hendrix
On 2/5/2016 3:54 AM, David Moore wrote:
Hi Joseph,
Thank you for that. When I teach or tutor, I find out as much as I can
about the person, and I want my student to be able to go out and teach
others what I am teaching him. That is how I rate how good a teacher I
am. Have a great one.
*From:* Joseph Lee <mailto:[email protected]>
*Sent:* Thursday, February 4, 2016 1:21 PM
*To:* [email protected] <mailto:[email protected]>
*Subject:* Re: Improving my teaching approach and/or sensitivity
Hi,
In many GUIs (graphical user interfaces), people use a mouse pointer
(or a touchscreen) to interact with visual elements. By convention,
the left mouse button performs the primary action, such as selecting
and activating elements (single-click and double-click, respectively,
on Windows), and the right mouse button performs the secondary action,
such as opening context menus. On touchscreens, a tap performs the
primary action, and tap and hold usually performs the secondary
action.
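To make the convention concrete, here is a minimal sketch (not from the original message; the device and gesture names are invented for illustration) of how different gestures on different devices map onto the same two abstract actions:

```python
# Hypothetical sketch of the primary/secondary action convention
# described above; device and gesture names are illustrative only.
ACTION_MAP = {
    ("mouse", "left_click"): "primary",        # select / activate
    ("mouse", "right_click"): "secondary",     # open context menu
    ("touchscreen", "tap"): "primary",
    ("touchscreen", "tap_and_hold"): "secondary",
}

def action_for(device, gesture):
    """Return the abstract action a gesture maps to, or None if unknown."""
    return ACTION_MAP.get((device, gesture))
```

Note that two very different gestures (a left click and a tap) resolve to the same abstract "primary" action, which is the point of the convention.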
Now some of you may ask, “how could two input devices perform the same
action?” The best way to describe this is that operating systems don’t
care what the input devices are as long as they support the same set
of behaviors (a concept called abstraction; I’ll talk about what
really goes on behind the scenes later).
As for Brian’s question: Empathy is what I think Nicole is trying to
say. However, we cannot forget that we, the screen reader users, also
contribute to some of the difficulties experienced by tutors:
·Information blackout and missing puzzle pieces: We blind people are
among the most affected by information blackout (not getting crucial
information in time, getting incomplete information, having a skewed
set of data to work with, and so on). Often, we say we have complete
knowledge of screen reading technology when in fact we don’t, often
because we did not get crucial information about the software we’re
working with or approached a concept with missing pieces.
·Insistence on using alternatives: For certain tasks, it is crucial to
use alternatives such as the mouse commands provided by JAWS (the JAWS
cursor) and other features. However, we cannot say a
screen-reader-specific feature is the only way to cross the desert
(for this reason, I’m a strong opponent of Research It; although it is
useful for many, we can find the same information in other ways, such
as Google, news websites, and visiting the sources Research It uses;
for resident NVDA users who are asking for a Research It-like feature,
I’ll not accept such an add-on into our community add-ons site).
·Being teachable: An indirect learning outcome of tutoring is for
students to teach concepts and applications themselves. Some are good
at this, while others may need more time to become proficient at it.
One thing I’m worried about is our tendency to “just eat whatever is
given” without cooking something new. I believe that, just as a clean
dish is one that is willing to let go of what is contained within, we
screen reader users (and students who are tutored by seasoned screen
reader users) should be prepared to teach at a moment’s notice (this
involves continuous refinement, practice, a good understanding of the
concepts taught by tutors, and so on; there is a specific concern I’d
like to bring up below).
I think the ultimate question we ought to ask ourselves is, “what can
we do to help tutors beyond showing knowledge acquisition?” I think
what will bring a smile to tutors’ faces is students who are good at
teaching others what they’ve learned, more so now that we are more
interconnected and blindness-related topics such as screen readers are
receiving more mainstream coverage.
Footnotes:
1.Input abstraction (for resident programmers): Many operating systems
can perform the same tasks from different devices thanks to a concept
called abstraction (technically, this is called “hardware
abstraction”). An operating system (in this case, the operating system
kernel) exposes a set of APIs that device drivers are expected to
implement. For example, an operating system may let a user perform the
primary action from a number of input devices, including mouse clicks,
tapping the touchscreen, pressing ENTER on a keyboard, and so on.
Although different drivers work with different hardware, all of them
(the mouse pointer driver, the touchscreen processor, the keyboard
driver, and so on) will let the operating system see that the user
wishes to perform the primary action (to a user, it doesn’t matter
which input device is used as long as the primary action is performed).
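The driver/kernel relationship in the footnote above can be sketched in code. This is a toy illustration, not a real kernel API: the class and method names are invented, and each driver simply pretends the user just produced its device's primary-action gesture.

```python
from abc import ABC, abstractmethod

# Illustrative sketch of hardware abstraction: the "kernel" defines an
# interface, and each device driver implements it in its own way.
class InputDriver(ABC):
    """Interface every input driver is expected to implement."""

    @abstractmethod
    def poll(self):
        """Return the abstract action the user performed, e.g. 'primary'."""

class MouseDriver(InputDriver):
    def poll(self):
        # Hardware detail (a left click) is hidden behind the interface.
        return "primary"

class KeyboardDriver(InputDriver):
    def poll(self):
        # A press of ENTER is reported as the same abstract action.
        return "primary"

def dispatch(drivers):
    """The OS sees only abstract actions, never which device produced them."""
    return [driver.poll() for driver in drivers]

print(dispatch([MouseDriver(), KeyboardDriver()]))  # ['primary', 'primary']
```

Both drivers report the same abstract action, so the layers above the kernel never need to know whether the user clicked, tapped, or pressed a key.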
2.In regards to teaching, sharing content, and showing expertise:
Teaching isn’t an easy job; it requires patience, practice, and
building expertise. Even producing tutorials requires significant
investment. When it comes to sharing content, I believe that people
won’t send feedback unless it is widely circulated (hence, I tend to
disagree with those who say certain tutorials must be accessible to
members of certain lists only).
Hope this helps.
Cheers,
Joseph
*From:* Cindy Ray [mailto:[email protected]]
*Sent:* Thursday, February 4, 2016 9:25 AM
*To:* [email protected]
*Subject:* Re: Improving my teaching approach and/or sensitivity
I don’t think he said that the clicking should be obvious if you
hadn’t used a mouse. I think he said we needed to know it and maybe
understand it. Can’t remember for sure. I know what click and double
click are, but I don’t know what it means to right click or left click
either. Course I may not be a great user either. LOL.
Cindy
*From:* Jean Menzies [mailto:[email protected]]
*Sent:* Thursday, February 4, 2016 11:14 AM
*To:* [email protected] <mailto:[email protected]>
*Subject:* Re: Improving my teaching approach and/or sensitivity
Hi Brian,
First, perhaps a better way than asking “How blind are you?” might be
to simply ask straight up whether the person has any useful residual
vision that would be helpful when learning the computer. They will
know the answer. lol.
As for directional elements, I am congenitally blind and have no
problem with that so far as it goes. However, because JAWS works in a
linear fashion, the visual layout doesn’t always match up. For
example, when people tell me to click on a link on the left of the
page, that has no meaning so far as JAWS is concerned. So, that kind
of direction is pointless. Yes, there are arrows to move left, right,
up, and down, but that is about as far as the directional, visual
concept of layout is important to me.
And, you said:
I mean, I realize that a screen reader user does not literally click
or right click, but they ought to know that click translates to
select (most of the time), double click translates to activate ...
Gee, huh? I’ve been using JAWS since 2001 and am a fairly decent user.
I didn’t know that. I thought click was like pressing enter or
spacebar to activate something. I thought double click was like right
clicking. And speaking of “clicking”, I still don’t get left and right
clicks per se. I know that right click is like bringing up the context
menu, but I’m not sure what a left click really is.
I just was wondering why you thought this concept of “clicking” should
be obvious to anyone who has never used a mouse.
Jean
*From:* Brian Vogel <mailto:[email protected]>
*Sent:* Thursday, February 4, 2016 7:35 AM
*To:* [email protected] <mailto:[email protected]>
*Subject:* Improving my teaching approach and/or sensitivity
Hello All,
I have recently been e-mailing back and forth with several
members here off-forum about topics and issues that go beyond the
scope of discussion here. In the course of a specific exchange, and
from the previous occurrence here of someone telling me, "that's a
sighted answer," I composed the following in an e-mail, which I'll
share here verbatim:
--------------------------------
I actually try to avoid purely visual descriptions to the
extent I can. You may find the following amusing, and it took me a
long time to get comfortable asking it, but the first question I ask
any of my clients when we start tutoring is, "How blind are you?" I
often have very sketchy information about what residual vision, if
any, they have and it's critical to know that (and whether it will
remain) as far as how to approach certain things. I then follow up
with, "Has your vision always been this way or could you see
previously?" Both of these answers factor into whether I ever
mention specific colors, for instance, because the actuality, as
opposed to the abstract concept, of color is meaningless to those
who've never had the sensory experience of color. Everyone, though,
has to have the concepts of left, right, up, down in both the vertical
and horizontal planes, so I don't hesitate to say something like "at
the lower right" because I know that that translates in a very
specific way once you have any orientation at all to "how you get
where" in relation to your own computer screen. If this is a bad
idea, for reasons I can't fathom as a sighted person, I welcome
suggestions as to what is more appropriate and efficient for
communicating location information for access. Mind you, I do use
specifics like "in the main menu bar," "in the insert ribbon," "4th
button over by tabbing," etc.
I've never understood "the furor" that some people get
into over the use of common computer actions like click, right click,
triple-finger double-tap, etc. I mean, I realize that a screen reader
user does not literally click or right click, but they ought to
know that click translates to select (most of the time), double click
translates to activate, there exists a "right click" function to allow
you to bring up context menus (which are often a godsend), etc. This
is a situation where I actually feel it's incumbent on the student to
ask if they do not understand what a specific "sighted" reference
(which is what they'll always be hearing from anyone other than a
fellow screen reader user) translates to in "screen-readerese." You're
never going to get a sighted assistant telling you to "press spacebar
to select/activate" something, they'll tell you either to select it or
to click on it. If you go to training classes for non-screen reader
software you absolutely have to know and understand how common
computing control jargon "translates" for you. Mind you, if I've got
an absolute beginner I teach the translation at the outset but what I
don't do is use screen readerese unless it's essential. I think that
limits independence rather than building it.
--------------------------------
Just as I said yesterday that it is members of the cohort here, not I,
who are best able to determine if a given document is accessible via
JAWS, the cohort here is also better able to instruct me in where my
assumptions, presumptions, and techniques may either be completely
wrong or in need of some improvement.
The only thing I will ask is that if something in the above is
considered really offensive, please don't excoriate me about that, but
make me aware that it is offensive and why. I am honestly trying to
get better at what I do both as a tutor and as a sighted person
working with people with visual impairments. I know that my frame of
reference is different than yours, or at least could be, and that it
may be in need of adjustment. The only way I can make that adjustment
is to put my thoughts out there and ask for help.
I'll close with a quotation from Carlin Romano that I think has direct
parallels here, "When intellectuals take their ideas to the mass
market, they are not just doing a good deed for the mass market. They
are doing a good thing for themselves. The mass marketplace of ideas
proves to be a better critic of big assumptions in any field than is
the specialized discipline, or one's peers."
Brian