>> Socially I think we allow computers to hand-cuff us.

> Socially? As in people don't get out enough and meet in
> meat-space? Or our interactions, even on-line?

Well, I was thinking more about social infrastructure. Things like bank
personnel being able to help you in any way if their computers go out.
I understand why it's become that way, but it'd be nice to think that
there would be some kind of backup system -- maybe they could make a
phone call to a central authority for the bank who could verify
your available funds, etc. Or the personnel at your own bank
might be able to manually give you a withdrawal -- right now all they
can do on paper is take a deposit. I think that generally speaking,
once the computers go down they don't even have access to the cash
drawer at most banks. I could be wrong about that, but that's my
impression.

This is just one example -- although the majority of examples stem
from poor user interfaces rather than from over-dependence on machines
in general. There's one example of a poor user interface in The
Inmates Are Running the Asylum: a system for displaying movies to
different passengers on an intercontinental flight. The system
required (hand-cuffed) the stewardesses to collect the money and enter
it into the system before the passenger could be given their movie,
which the average programmer thinks is perfectly logical (and I don't
blame us). The problem is that the assumption was made without an
understanding of the environment the stewardesses work in -- an
environment in which it's much easier just to give the passenger the
movie and collect the cash later. The interface was so
problematic for them that the stewardesses would regularly sabotage
the system so that nobody could get any of the movies (and hence the
airline lost a lot of money), because being handcuffed by the interface
in that way was such a hassle for them.

>> Well, maybe if the optimization is as swell as it's said
>> to be.  I can't help but feel that even as smart as
>> computers are, there are areas that a human could see a
>> "pattern" before the computer could. Or whatever.
>>
>> Is this a response to my comment about why I'm not
>> bothered by the fact that the ColdFusion server generates
>> java code? (another good example of which is that the
>> server used to generate C++ code (at least I thought I
>> remembered somebody saying such), and my knowledge
>> of C++ wasn't helpful when I worked with ColdFusion
>> then either)

> More along the lines of "over helpful" generation.  The
> old mac OS always kind of bugged me.  It was SO arcane
> to do underlying stuff, ya know? I guess that was cool
> too.  But you ever feel so abstracted that you are no
> longer in control? I guess that's bad design or
> interface or something more than abstraction...

Ahh... Rarely. :) But that's another limitation of generated code.
It's one thing to have an entire language that is an abstraction, such
as CFML, but the amount of time that goes into that code
generator is astronomical in comparison to the amount of time that
goes into developing the code generators we would use. Of course,
our code generators are for more specific purposes, that's true, but I
still shy away from it on the thinking that it's liable to
handcuff me in some way, re: what you just described about the old
Mac OS or the couple of examples above. Although there have been a
couple of occasions on which I felt there was too much abstraction in
something, more frequently I find myself limited and/or required
to do more work as a result of a lack of abstraction.

> The computer still doesn't know the goal (yet), so it
> has to consider all options, picking what it thinks you
> want. Your comment about hoping the macromedia engineers
> thought about this stuff... some dude somewhere put some
> logic in there; there is no law of nature stating that it
> doesn't matter once you're at a higher level. Man, that made
> sense.

Yeah, to date the people responsible for ColdFusion seem to have been
pretty effective at identifying good abstractions (for the majority of
their market) and making them work well. Not that I haven't had the
occasional complaint. :) But I really have to give them credit for
making my life easier. :)

> Maybe you're right, and it's a moot point, but I think
> understanding something to its core is worthy. Actually
> considering the difference of running "the same code" on
> a 64 bit or a 32 bit.  It's nice to know it should "just
> work", but I really like that intuitive guess type stuff
> that happens when you start understanding the nature of
> something.

> "How did you know to look there to fix that?"
> "I dunno, it just made sense."

Oh, there's certainly value in understanding some of the lower theory,
but an individual programmer can only learn so much. Not that an
individual programmer can't learn whatever they want to learn, but you
know, there are only so many hours in a day. :) In the long run, you
have to weigh the value of having that low-level insight against the
value of getting work done during the hours that you would be
studying, and there's enough low-level material, imo, that you could
study forever before you got started on an actual project. :P

> Sometimes stuff doesn't work, even tho we're told at the
> high level it should.  Then what. :-P  Ya gotta dig in.
> If you dig that kind of stuff. I guess you could also just
> say, "hey person whose thing my thing doesn't work with,
> why aren't you "standard"?". Or wait for the person whose
> job it is to do that part to figure it out.

Yep, and people make those kinds of decisions fairly frequently in
this industry.

> I guess our ideas of optimization are different.

> When I think of optimization, it's not necessarily
> "speed".  There are so many areas to optimize, many of
> which have nothing to do with processors or memory.
> And much optimization is useful later on.

> I would think.  At least it seems kind of evolving, or
> whatever.

This just means we need to define our terms. :) When I talk about
optimization I generally limit that word to issues of mechanical
efficiency, unless we're talking specifically about a user-interface
issue. Issues of human efficiency when using the application are, in my
experience, usually discussed under the heading "usability" or
sometimes "human factors". These issues are of course completely
separate from mechanical optimization -- they may affect one another
in the system (an efficient interface for a user may be inefficient
for the machine, or vice versa) but they are by themselves radically
different types of problems and must be addressed with radically
different types of analysis.

When I'm talking about 20 years into the future and two pieces of
software operating with the same efficiency, I'm holding everything
else equal (user interface, feature completeness of the application,
etc) as is necessary to evaluate the individual aspect. Once you've
examined the individual aspect separately from the whole, then you can
better examine their interaction (much like programming). :)

That's how I come to the conclusion that we're in transition -- the
Star Trek computer is optimal for humans, although it may not be as
mechanically efficient as today's machines (actually I would expect it
to be less efficient). It's optimal for human use because it accepts
input in a way that is natural for humans (conversational speech).
It's mechanically suboptimal because, with much more powerful hardware
by then, a human won't notice the difference between mechanically
optimal and mechanically suboptimal software.

>> All that being said of course, anyone can screw up a
>> good thing and it's not very difficult to accomplish.
>> There are lots of times that

> Ha!! That kills me. Listen to this:
> I had the bright idea to instead of having tables
> with different data-types, I'd have tables all of
> one data-type, and use a key and another table to
> keep track of what was where or whatever.

> Long story short, it's death by a thousand queries.
> I had to make some cache tables in the end, just to
> keep it all together. Bleh. I'd had some SQL
> generating stuff already tho so the cache wasn't
> too hard to wrangle. And now everything is a lot
> faster, so long as I can get my
> cache-keeper-up-to-dater working optimally.

I've contemplated similar systems myself. I would assume that a view
would make it a lot easier to manage, and actually, if you used a
DAO-type object, the end result wouldn't be much different from the
DAOs we use now. Although it's ultimately got to be a little less
efficient regardless, because then you've got to either join multiple
tables or have one table where only 2 columns are used at a time
(propertyname/id and value) out of x number of different type columns.

I think the jury's still out (at least it is for me) on the benefit of
that abstraction.
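For anyone following along, here's a minimal sketch of the kind of
single-value-type key/value table being described (often called
entity-attribute-value, or EAV), using SQLite and Python purely for
illustration -- the table, column, and sample names are all hypothetical,
not from anyone's actual schema:

```python
import sqlite3

# Hypothetical illustration of the "one table of key/value pairs" design
# discussed above (entity-attribute-value, or EAV).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One generic table instead of typed columns per entity
    CREATE TABLE attributes (
        entity_id    INTEGER,
        propertyname TEXT,
        value        TEXT
    );
""")
conn.executemany(
    "INSERT INTO attributes VALUES (?, ?, ?)",
    [(1, "firstname", "Isaac"), (1, "city", "Charlottesville"),
     (2, "firstname", "Pete"),  (2, "city", "Austin")],
)

# Reassembling one logical record means pivoting rows back into columns --
# the "death by a thousand queries" problem; a view (or a cache table kept
# in sync, as described above) hides the pivot from the calling code.
conn.execute("""
    CREATE VIEW person AS
    SELECT entity_id,
           MAX(CASE WHEN propertyname = 'firstname' THEN value END) AS firstname,
           MAX(CASE WHEN propertyname = 'city' THEN value END) AS city
    FROM attributes GROUP BY entity_id
""")
rows = conn.execute("SELECT firstname, city FROM person ORDER BY entity_id").fetchall()
print(rows)  # [('Isaac', 'Charlottesville'), ('Pete', 'Austin')]
```

The view is what lets a DAO-type object stay oblivious to the storage
layout, but every column you add to it is another CASE branch, which is
where the efficiency cost creeps in.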

>> Sorta saying it's all data, but some data is much
>> easier to parse than other data is. By "much" I
>> mean astronomically.
>>
>> Oh. Okay... Yes, admittedly. :)
>> Hence much of the reason behind XML.

> Indeed. I thought that was all the reason. ;-)

There are some other more specific reasons, but I believe they all
hinge on its parseability. :) Of course this then becomes the reason
it's used for syndication like RSS and for web services: no matter how
much more mechanically efficient relational databases are, they don't
speak to the ability of a foreign or unknown system to parse the data.
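That foreign-system point is easy to demonstrate -- a consumer needs
nothing but a generic XML parser, no knowledge of the producer's
database at all. A quick sketch in Python (the feed fragment is made up
for illustration):

```python
import xml.etree.ElementTree as ET

# A hypothetical RSS-style fragment. The consuming system knows nothing
# about how the producer stores this data -- only the tag names matter.
feed = """
<rss><channel>
  <item><title>First post</title><link>http://example.com/1</link></item>
  <item><title>Second post</title><link>http://example.com/2</link></item>
</channel></rss>
"""

# Any generic XML parser can walk the tree and pull the fields back out.
root = ET.fromstring(feed)
titles = [item.findtext("title") for item in root.iter("item")]
print(titles)  # ['First post', 'Second post']
```

Swap in any other XML-aware language or toolkit and the consuming code
looks about the same, which is exactly the interoperability win.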


s. isaac dealey     434.293.6201
new epoch : isn't it time for a change?

add features without fixtures with
the onTap open source framework

http://www.fusiontap.com
http://coldfusion.sys-con.com/author/4806Dealey.htm


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~|
Message: http://www.houseoffusion.com/lists.cfm/link=i:4:237646
Archives: http://www.houseoffusion.com/cf_lists/threads.cfm/4
Subscription: http://www.houseoffusion.com/lists.cfm/link=s:4
Unsubscribe: http://www.houseoffusion.com/cf_lists/unsubscribe.cfm?user=89.70.4
Donations & Support: http://www.houseoffusion.com/tiny.cfm/54
