Derek,

> Forced usage?  There are cases where decimal or hex is the natural
> representation and there are the gray areas.  People will vary in their
> preferences on which way to jump in the gray cases.

What I meant was that there needs to be a distinction between what could be seen as
truly natural due to some isomorphism or congruence and usage by rote learning
and/or diktat.  If a teacher/mentor trains a person to believe that you have to
use a certain representation in a certain context it becomes natural for that
person, even if for everyone else it is completely unnatural because there is no
underlying rationale.

It's at this point that I wish I had an online copy of Marian Petre's cartoon on
"natural".

> Guidelines might push people in a certain direction (if only because
> they get fed up of having to explain to their boss why some tool keeps
> flagging some construct where they decided to jump in the opposite
> direction).  By its nature guideline wording is best when concise.  This
> means that the general cases get handled well and the edge cases
> tag along (right, wrong, or indifferently).

Guidelines that encapsulate some neat isomorphism or congruence (such as using
hexadecimal for bitmaps) are clearly useful to support usage and
training/learning, no problem there.  However, do people need to know the
rationale to really appreciate and use the guideline properly?  I would guess
(others may know of the evidence for or against) that guidelines are most useful
for experts who have previously understood the rationale of the guideline and are
using the guideline as an aide-memoire.  But as you say, they can have political
usefulness within organizations for dealing with people who do not understand the
issues at all.
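
A minimal sketch of the sort of isomorphism I have in mind -- the flag names and
values here are invented purely for illustration:

    /* Each hex digit corresponds to exactly four bits, so the bit
       pattern can be read off the literal without any arithmetic. */
    #define FLAG_READ    0x01   /* bit 0: 0000 0001 */
    #define FLAG_WRITE   0x02   /* bit 1: 0000 0010 */
    #define FLAG_EXECUTE 0x04   /* bit 2: 0000 0100 */
    #define ALL_FLAGS    0x07   /* bits 0-2: 0000 0111 */

The same values written in decimal (1, 2, 4, 7) hide the bit structure; the
correspondence has to be worked out rather than seen.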

> > > Arrays are normally used to represent a sequence of the same kind of data.
> > > So mixing decimal and hex is suspect.
> >
> >Why?
>
> Was a 0x missed off by mistake, do readers of the source see a 0x where
> none exists (and deduce the incorrect value)?

Good point.  So the guideline is: use the same base for all initializers in an
array.  The rationale is that since the items are homogeneous, so should be the
representation of the initializers.
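
To make the point concrete, a sketch of the kind of initializer list in
question (the values are made up for the example):

    /* Suspect: one decimal element among hex initializers.  Was the 0x
       missed off the third element, i.e. should 32 have been 0x32 (50)?
       A reader cannot tell from the source alone. */
    static const unsigned char lookup[]    = { 0x10, 0x2f, 32, 0x40 };

    /* Following the guideline, all initializers use the same base. */
    static const unsigned char lookup_ok[] = { 0x10, 0x2f, 0x20, 0x40 };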

> >I wonder if you are trying to impose a deeper meaning to the
> >representation where there is in fact none other than learned usage.
>
> I don't understand your point here.

I was talking cr@p, your point was a good one.

> Neither, we are dealing with the poorly educated, probably of averaging
> ability, professional (well they do get paid).
>
> Are there any real amateurs out there these days?  Why would somebody
> program for nothing when he/she could probably bluff their way into a
> position  where they were paid to program?

This gets onto a completely different moral, political and ethical issue of
professionalism, probably not appropriate for this discussion list.  The question
of principle is whether anyone should be allowed to develop software systems if they
do not have properly accredited education and training, with compulsory registration
-- cf. accountants, solicitors, medics, etc.

> >Are we trying to support new learners or understand the depths of the
> >seasoned professional?  This is an important issue since the two are
> >very different domains.
>
> I was consulting at a bank in the city a few months ago.  A few rows
> down from me in the development group was a person writing some
> Java.  She had an introductory Java book open next to her terminal
> and would think nothing of shouting  questions to somebody in the
> next row (I don't think she knew much about programming, let alone
> Java).  What amazed me was the fact that she was not fired within a
> day or two, for incompetence.  Computing is an industry where the
> requirement to continually learn new things means those without
> ability can hide their failings.

This is strong anecdotal evidence for requiring certification of all
practitioners.  Of course this will never happen because of the perceived (not
provably real) IT skills shortage.  I bet if all the incompetent programmers were
removed from projects and only the good people were working on them, we would see
a massive decrease in the IT skills shortage.

> In some development groups I have known it was an open question whether
> firing the 10% 'cleverest' programmers or the 10% most incompetent
> programmers would have made the biggest difference.
>
> Software development and maintenance is usually a group activity.
> Yes, talented people can write understandable source.  But it takes
> longer and requires iterations of development, they invariably start off
> with the more complicated version first.  Given the pressure to get to
> market there is rarely time to iterate.

I am not convinced by this, but I have no evidence other than personal experience
and belief.  It is true that some of the best individual programmers are totally
unable to work in teams, and all too often projects cannot be structured to permit
individual working.  But I am really not convinced by the general arguments you
refer to about time to market.  Certainly sales and marketing people often set
impossible deadlines and then everyone has to hack around stupid commitments, but
this is probably the fault of the technical management for not getting realistic
product strategies in place.

Going off the discussion group topic again, sorry.

The real point is to do with methodology.  Someone once said something along the
lines of:  No engineered large system ever works, a working large system always
evolves from a working small system.

Now I am not a total advocate of the Extreme Programming school, but the idea of
system evolution/prototyping backed up by properly structured testing regimes is
in my experience the only way that real working systems get constructed.  I know
the Software Engineering orthodoxy says otherwise, but for the small to medium size
systems I have been involved with that orthodoxy would have led to failure.
I have not been on any very large systems, so maybe things would work there.

The programming language and its expressivity, and the ability to have all
documentation generated out of the source code, are for the projects I have been
involved in the primary indicators.  Doxygen, Doc++ and JavaDoc have shown how
these sorts of things can be done.
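
For example, the kind of tagged comment block these tools pick up -- the
function here is invented purely for illustration:

    /**
     * @brief  Convert a bitmap of option flags to a printable string.
     * @param  flags  Bitwise OR of the FLAG_* constants.
     * @return Pointer to a statically allocated string; not thread safe.
     *
     * Doxygen extracts tagged comments like this from the source and
     * generates the reference documentation from them, so the code and
     * its documentation cannot silently drift apart.
     */
    const char *flags_to_string(unsigned int flags);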

The problem with all this is that I cannot see people doing proper experiments to
really get at the variables and dimensions of this problem.  Everything is by
anecdote, belief and advocacy.

> My own position is that Mr/Mrs/Miz average is the person who needs
> to be studied.  A position I have followed in my guidelines (and expect to
> get a lot of stick over).

Isn't this putting the solution before the question?  What I think we need to know
is what makes some people good at systems development, and why some are good as
individuals and others good in teams.  What makes people bad at programming even
though they think they are good?
This necessitates working with all abilities, including Ms/Mr Average.


Russel.
======================================================================
Professor Russel Winder         Professor of Computing Science
Department of Computer Science  Fax: +44 20 7848 2851/+44 20 7848 2913
King's College London           [EMAIL PROTECTED]
Strand, London WC2R 2LS, UK     http://www.dcs.kcl.ac.uk/staff/russel/
