On 7/16/2012 11:22 AM, Pascal J. Bourguignon wrote:
BGB <[email protected]> writes:

general programming probably doesn't need much more than pre-algebra
or maybe algebra level stuff anyways, but maybe touching on other
things that are useful to computing: matrices, vectors, sin/cos/...,
the big sigma notation, ...
Definitely.  Programming needs discrete mathematics and statistics much
more than the mathematics that is usually taught (which is more useful,
e.g., to physics).

yes, either way.

college experience was basically like:
go to math classes, which tend to be things like Calculus and similar;
brain melting ensues;
no degree earned.

then I had to move, and the college here would require taking a bunch of additional classes, and I would still need the math classes, so trying to continue didn't seem terribly worthwhile.


but, a person can get along pretty well provided they get basic
literacy down fairly solidly (can read and write, and maybe perform
basic arithmetic, ...).

most other stuff is mostly optional, and won't tend to matter much in
daily life for most people (and most will probably forget it soon enough
anyways, once they no longer have a school trying to force it down
their throats and/or needing to "cram" for tests).
No, no, no.  That's the point of our discussion.  There's a need to
increase "computer"-literacy, actually "programming"-literacy of the
general public.

well, I mean, they could have a use for computer literacy, ... depending on what they are doing. but, do we need all the other stuff, like "US History", "Biology", "Environmental Science", ... that comes along with it, and which doesn't generally transfer from one college to another?...

they are like, "no, you have World History, we require US History" or "we require Biology, but you have Marine Biology".

and, one can ask: does your usual programmer actually even need to know who the past US presidents were and what things they were known for? or the differences between ruminant and equine digestive systems regarding their ability to metabolize cellulose?

maybe some people have some reason to know, most others don't, and for them it is just the educational system eating their money.


The situation where everybody would be able (culturally, with basic
know-how, and with the help of the right software tools and systems) to
program their applications (i.e. something totally contrary to the
current Apple philosophy), would be a better situation than the one
where people are dumbed-down and are allowed to use only canned software
that they cannot inspect and adapt to their needs.

yes, but part of the problem here may be more about the way the software industry works, and general culture, rather than strictly about education.

in a world where typically only closed binaries are available, and where messing with what is available may put a person at risk of legal action, it isn't really a good situation.

likewise, the main way newbies tend to develop code is by copy-pasting from others and by making tweaks to existing code and data; again, both of these may put a person at legal risk (due to copyright, ...), and often result in people creating programs which they don't actually have the legal right to possess, much less distribute or sell to others.


yes, granted, it could be better here.
FOSS sort of helps, but still has limitations.

something like the ability to move code between a wider range of "compatible" licenses, or to safely discard the license for "sufficiently small" code fragments (< 25 or 50 or 100 lines or so), could make sense.


all this is in addition to technical issues, like reducing the pain and cost involved in making changes (often, the user has to be able to get the program to rebuild from source before they have much hope of being able to mess with it, limiting this activity more to "serious" developers).

likewise, it is very often overly painful to make contributions back into community projects, given:
usually only core developers have write access to the repository (for good reason);
fringe developers typically submit changes via diff patches;
usually this itself requires communication with the developers (often via subscribing to a developer mailing list or similar);
never mind the usual hassles of making the patches "just so", so that the core developers will actually look at them (they often get fussy over things like which switches they want used with diff, ...);
...

ultimately, this may mean that the vast majority of minor fixes will tend to remain mostly in the hands of those who make them, and not end up being committed back into the main branch of the project.

in other cases, it may lead to forks, mostly because non-core developers can't really deal with the core project leader, who lords over the project or may just be a jerk-face, or a group of people may want features which the core doesn't feel are needed, ..., causing the core project to splinter into several independent forks (which rarely, if ever, re-integrate).


the problem then is that there is no clear/general solution to these sorts of issues.

so, the barrier to entry is fairly high, often requiring people who want to contribute to a project to share the project leader's vision, sometimes leading to an "inner circle of yes-men", and often leaving the core developers unreceptive to, and sometimes adversarial toward, the positions held by groups of fringe users.


part of this though may have to do with the tight integration and centralization inherent in many projects (where minor changes actually have to be made to the core in order to be shared).

a partial solution (often emerging as projects become large) is that extensibility mechanisms and separation of concerns become more important, and changes/extensions can be made without needing to alter the core (or cores), though with the partial downside that, due to size, the architecture of the project is often largely frozen (a rough sketch of the plug-in idea follows below).

so, maybe a project life-cycle exists:
formative: single developer playing with code, architecture is fairly loose;
stabilizing: single developer, more serious/focused development, core architecture tends to solidify;
dictatorial: core leader surrounded by a few core developers, and direct interaction with fringe developers and users (forks seem more likely at this stage);
community: centralization loosens, usually a larger and more diverse community forms, and things like APIs and plug-ins largely displace the use of hard-coding for the addition of new features, ...

the above process may repeat, with the new projects either being plug-ins or building on the provided APIs (of the original project), as the original project tends to become larger and more general purpose (it transitions more into being a platform).

usually, at this point, what is left of the original architecture is "set in stone", and may be regarded as an "inherent truth" of this sort of system.
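
to make the plug-ins/APIs point a bit more concrete, here is a rough sketch of the sort of thing I mean (Java, with all the names made up for illustration): the core only knows about a small interface, and loads extensions by class name, so a fringe developer can add a feature without ever touching or rebuilding the core:

import java.util.ArrayList;
import java.util.List;

// hypothetical core-side plug-in API: the core only knows about this interface
interface Plugin {
    String name();
    String process(String input);   // e.g. transform a document, add a command, etc.
}

// an example extension; in practice this could live in a separate jar
// maintained by a fringe developer, outside the core project
class ShoutPlugin implements Plugin {
    public String name() { return "shout"; }
    public String process(String input) { return input.toUpperCase() + "!"; }
}

public class PluginHost {
    public static void main(String[] args) throws Exception {
        // in a real system the class names would come from a config file or a
        // plug-ins directory; hard-coding one here keeps the sketch self-contained
        List<Plugin> plugins = new ArrayList<>();
        for (String className : new String[] { "ShoutPlugin" }) {
            plugins.add((Plugin) Class.forName(className)
                                      .getDeclaredConstructor()
                                      .newInstance());
        }

        // the core drives the plug-ins without knowing anything about their internals
        for (Plugin p : plugins) {
            System.out.println(p.name() + ": " + p.process("hello world"));
        }
    }
}

the flip side being, once people depend on an interface like this, it is what it is: new features no longer need commit access to the core, but the core's architecture is more or less frozen.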


Furthermore, besides the general public's need to be able to do
some programming, non-CS professionals also need to be able to write
programs.  Technicians and scientists in various domains such as
biology, physics, etc., need to know enough programming to write honest
programs for their needs.  Sure, they won't have to know how to write a
device driver or a Unix memory-management subsystem.  But they should be
able to design and implement algorithms to process their experiments and
their data (and again, with the right software tools, things like
Python sound good enough for this kind of user; I kind of agree with
http://danweinreb.org/blog/why-did-mit-switch-from-scheme-to-python).

it depends some.

but, yeah, it could be easier...



so, the main goal in life is basically finding employment and basic
job competence, mostly with education being a means to an end:
getting a higher-paying job, ...
Who said that?

I think this is a given.

people need to live their lives, and to do this, they need a job and money (and a house, car, ...).

likewise for finding a mate: often, potential mates may make decisions based largely on how much money and social status a person has, so a person who is less well off will be overlooked (well, except by those looking for short-term hook-ups and flings, who usually care more about looks and similar, and typically just go from one relationship to the next).

meanwhile, employers are like "we want such-and-such a degree, and such-and-such skills and work experience, ...". so, a person goes for the degree, to get a job, so that they can have a house, car, ..., such that potential mates will actually take them seriously enough to consider talking to them, ...


(so, a person pays the college, goes through a lot of pain and hassle, gets
a degree, and an employer pays them more).
You wish!

this is the usual idea; whether or not it is actually true is a separate issue.


probably focusing more on the "useful parts" though.
No, that's certainly not the purpose of high-school education.

usually it seems more about a combination of:
keeping students in control and under supervision;
preparing them for general "worker drone" tasks, by giving them lots of busywork ("gotta strive for that A" => "be a busy little worker bee in the office");
...


now, in how many types of jobs will a person actually need to be able to recite all 50 states and their respective capital cities? or the names of the presidents and what they were most known for during their terms in office?

probably not all that many...

surely there are both more productive and educational things they could be doing with their time.

maybe knowing about past wars could be useful, dunno; one hears more about them in general society than in class though (including some which happened but apparently weren't worth mentioning in class, like the Korean War and Gulf War, ...).

a person can watch the History Channel and see lots about WWII, never mind the wacky stuff which IMO doesn't belong (like "Ancient Astronauts" and similar; stuff like this damages the credibility of their more serious shows...).


On the other hand, an awful lot of classes and college degree
programs seem to think that coding in Java is all there is, and we're
seeing degrees in game design (not that game design is simple,
particularly if one goes into things like physics modeling, image
processing, massive concurrency, and so forth).
Indeed.  The French manual makes mention only of languages in
the Algol family.  It would be better if they also spoke of Prolog,
Haskell, and of course Lisp too.  But this can be easily corrected by
the teachers, if they're "good" enough.
yes, but you can still do a lot with Java (even if it is hardly my favorite
language personally).

throw some C, C++, or C# on there, and it is better still.
No.  Java is good enough to show off the Algol/procedural and OO
paradigms.  There's no need to talk about C, C++ or C# (those languages
are only useful to professional CS guys, not to the general public).
(And yes, I'd tend to think Python would be better for the general
public than Java).

they would need to know those languages if they are to have much hope of ever tinkering with or contributing to larger projects, which would likely be the main use case for end-users knowing programming (so they can add a feature to an app themselves, and get it made available to their friends, rather than complaining about it on the user forums in the hope that one of the developers will hear them and decide to implement it).


What you could throw in is some Lisp, some Prolog, and some
Haskell.  Haskell could even be taught in Maths instead of in CS ;-)

The point here is to teach the general public (e.g. your future
customers and managers) that there are other languages than the
currently-popular Algol-like ones, and that languages in the Lisp, logic,
or functional families are also useful tools.

math people are not really general users, but rather their own niche.


a user writing plug-ins for their web-browser or Photoshop or Office or similar is a much more likely "general" use-case.


a problem with most other further-reaching languages is:
it is often harder to do much that is useful with them (smaller communities,
often deficiencies regarding implementation maturity and library
support, ...; see 1 below);
This is irrelevant.


again, this is a roadblock to many people adopting languages, especially obscure ones.


it is harder still for people looking to find a job, since few jobs
want these more obscure languages;
This is totally irrelevant to the question of educating the general
public and giving them a CS/programming culture.

the general public does what it does mostly for jobs...

it is little different whether it is programming or using MS Office...



a person trying to "just get it done" may have a much harder time
finding code to just copy/paste off the internet (or may have to go
through considerably more work translating it from one language to
another; see 2 below);
This is irrelevant.  The question is for them to know what CS can do for
them, and to know that they can hire a professional CS/programmer to do the
hard work.

this is what most end-user programmers do though...

it doesn't matter if they are web designers, or game modders, or making a browser toolbar, or making Flash animations/games, or whatever.


1: it is not a good sign when one of the first major questions usually
asked is "how do I use OpenGL / sound / GUI / ... with this thing?",
which then either results in people looking for 3rd party packages to
do it, or having to write a lot of wrapper boilerplate, or having to
fall back to writing all these parts in C or similar.
This is something that is solved in two ways:

- socially: letting the general public have some consciousness of what
   CS is and what it can do, so that they can contract a professional
   CS/programmer to solve their problem, just like they do e.g. with an MD
   when they have a health problem.  And the MD doesn't use the same
   tools and the same drugs as you do at home: he has his own
   specialized and powerful tools and drugs.

- technically: providing programmable software systems that are more
   easily usable by the general public.  I.e. NOT C, more Lisp, Python,
   and integrated programming environments like indeed HyperCard.

maybe JavaScript?...
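
(re footnote 1 above: to give a rough idea of what the "wrapper boilerplate" tends to look like, here is a hypothetical sketch using Java/JNI; the library name and functions are made up, and the matching C glue code would still have to be written and compiled per-platform before any of this actually runs, which is rather the point:)

// hypothetical JNI wrapper for a native sound library ("sample" is a made-up name);
// the Java side is only declarations, so most of the real work (and pain) is still
// in the separate C glue code that has to be written and built for each platform
public class NativeSound {
    static {
        // expects libsample.so / sample.dll to be somewhere on the JVM's library path
        System.loadLibrary("sample");
    }

    // each declaration needs a matching C function, e.g.
    //   JNIEXPORT void JNICALL Java_NativeSound_playFile(JNIEnv*, jclass, jstring, jfloat);
    public static native void playFile(String path, float volume);
    public static native void stopAll();

    public static void main(String[] args) {
        playFile("beep.wav", 0.5f);   // fails at runtime until the native library exists
    }
}

multiply that by every entry point in something like OpenGL, and it becomes clearer why people either go hunting for an existing 3rd-party binding or just give up and write those parts in C.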


2: it is much easier to copy/paste between languages if both have the
same basic syntax. whether or not this is "good", it is a bit of a
productivity feature, as then a person doesn't have to find code in
the same language, but only in a language with a similar-enough
syntax that they can do quick changes or use some find/replace magic
or similar (for example, although C++ and Java are very different, the
similar syntax doesn't completely rule out the ability to use
copy/paste and some manual editing to convert between them, rather
than having to rewrite the code from scratch).
This is totally irrelevant, i.e. it's only a professional CS/programmer
problem.

But otherwise, indeed, tools could be developed to translate from one
high-level general-public language to another, but this is something
that would only be needed in the new world.


I disagree here.

I have seen many small/hobby projects which make heavy use of copy/pasted code (and often data as well), and this is actually how many people learn to write serious code, so it is hardly a "professionals only" problem.
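
as a contrived example of the footnote-2 point, a snippet like the following moves between Java and C++ with little more than some find/replace and manual touch-ups (the comments note roughly what would change going to C++):

// a trivial made-up snippet; the body is nearly line-for-line the same in C++
public class Stats {
    static int sumOfSquares(int[] xs) {              // C++: int sumOfSquares(const std::vector<int>& xs)
        int total = 0;
        for (int i = 0; i < xs.length; i++) {        // C++: i < (int)xs.size()
            total += xs[i] * xs[i];
        }
        return total;
    }

    public static void main(String[] args) {         // C++: int main()
        int[] xs = { 1, 2, 3, 4 };                   // C++: std::vector<int> xs = { 1, 2, 3, 4 };
        System.out.println(sumOfSquares(xs));        // C++: std::cout << sumOfSquares(xs) << "\n";
    }
}

try the same trick between, say, C++ and Prolog or Scheme, and the find/replace magic mostly stops working, which is part of why people gravitate toward languages that look like what they already have.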

