>> The comment I forgot to make in that post was that really
>> I think the trick is to find a handful of technologies you
>> like that are on the up-swing (since they all rise and
>> fall) and stick with them as long as you can. This should
>> make it easier to add other complementary skills as the
>> demand for them increases and mitigate the risks involved
>> in devaluation of any individual skill as a result of
>> increased supply or waning demand. As an individual I'm
>> personally probably more invested in ColdFusion than
>> anything else, potentially over-invested actually.

> Ok. But your heavy involvement in CF perpetuates it as
> well.

Yes. :)

>> > I guess we shouldn't be too reliant on computers,
>> > neh? (-:
>>
>> My opinions on that subject tend to be pretty unpopular.
>> :)

> Swinging to what side? More so or less?

Socially I think we allow computers to handcuff us.

>> > I still put forth that generated code is generated code,
>> > why shy away from generated code? So long as it's well
>> > formatted (don't look at me, you saw my regex ;) you
>> > should be ok, I reckon.
>>
>> It's a question of who's generating it and why. :)
>>
>> To me the fact that my coldfusion templates or CFC's are
>> generated Java is transparent. I know it's there, but I
>> don't have to care too much about what's being generated,
>> beyond knowing that it is generated and having some
>> understanding of the problems that can be caused by that.

> Well, maybe if the optimization is as swell as it's said
> to be.  I can't help but feel that even as smart as
> computers are, there are areas that a human could see a
> "pattern" before the computer could. Or whatever.

Is this a response to my comment about why I'm not bothered by the
fact that the ColdFusion server generates Java code? (Another good
example: the server used to generate C++ code, or at least I thought I
remembered somebody saying so, and my knowledge of C++ wasn't helpful
when I worked with ColdFusion then either.)

> Guess the argument about optimization has some validity,
> yet I can't help see history repeat itself. Every few
> years there's this idea that it doesn't matter, we're
> getting bigger, faster processors, more RAM, etc.. Yet
> the real idea is to conserve energy. Sorta. I guess make
> less go further.  That's never going to change, no
> matter how much power there is. It's the nature of
> power - corruption and responsibility aside.

No, not entirely. The issue is that we're still in transition.
Hardware progress is not as fast as many of us would like, and
sometimes we jump the gun, wanting the Star Trek computer that we just
tell what to do and it does it. So if I build an application today and
I fail to optimize it, then my application is going to be slow in
comparison to another application which accomplishes the same task.
(Incidentally, I spend quite a bit of my programming time thinking
about the optimization of my software -- I may not always get it
right, but I do have a reasonable handle on the concepts.)

Skip forward 20 years.

Twenty years from now, if you load up the same two applications, you
won't be able to tell the difference between them. Yes, one of them is
still inefficient, but to the person using them there is no tangible
difference, because advances in hardware have caused the slow
application to perform as quickly as the efficient one. So both
applications have the same value (including monetary value) in the
market.

When we optimize software, we're not doing it for the future; we're
doing it to compete in today's market. And because "today's market" is
always becoming "tomorrow's market," we're always shooting at a moving
target. So there's a balancing act between how much time we spend
optimizing an application to make it blazingly fast today, and how
much time we carve away from the optimization game in favor of work
that will be more important in the market in years to come, such as
usability, extensibility and new features.

Extensibility in particular is one of those "future value" prospects.
Pretty much without fail, something which makes your application
extensible will cost you some efficiency. In some cases it may be a
thoughtful application of XML, which, as Joel points out in the
article you posted, is always going to be slow compared to a database
(in today's market, and for a good while yet -- although if you store
the XML in a file and/or in memory, you can get some of that back by
not needing a network trip through the database port). Other times it
may be the division of logical functionality into separate objects
which then have to be instantiated. Don't get me wrong, I love
objects, but object instantiation is always slower than using
something that already exists in memory. Thus each time you find an
object doing too much and you separate it into two or more objects to
handle different tasks, you're increasing the load on the machine,
even though you haven't added much, if any, more code.

All that being said, of course, anyone can screw up a good thing, and
it's not very difficult to accomplish. There are lots of times that
I've seen someone generate hundreds or thousands of queries in a given
page request because they used a loop over one query to drive
additional queries, instead of using a single more complex query with
join statements and possibly grouped output. Hundreds or thousands of
queries are always going to be slower (even if the individual queries
are blazing fast) for two reasons. One is that it's another "Schlemiel
the Painter's Algorithm," because the database then has to scan the
index for the table once for each subordinate query, instead of
combining the index scanning into a single execution plan that can
"kill two birds." The other reason is that it requires more network
trips through that port (and consequently more translation of database
data into native memory objects -- in our case, Java objects).

Using one query to drive multiple smaller queries (and yes, I will
admit to having done this in the past :) is one of the slowest
performing and least scalable constructs a person can write. Sometimes
they're necessary (data from different servers on different platforms
-- even linked servers in Oracle or SQL Server can't really help with
this, because then they have to open a port to the other server too).
Sometimes it's not a problem (i.e. the master query will never produce
more than 3-5 records because it's an MRU feature -- though I wouldn't
use this by itself as an excuse to do it). Sometimes people want to
use objects, trading the speed of a single query for extensibility
(although conceptually this may be a nice idea, imo our hardware just
isn't ready for it).

> You may have seen this, but I liked it alot:
> http://www.joelonsoftware.com/articles/fog0000000319.html

Decent article. What Joel fails to mention is that a number of
languages (ColdFusion being one of them) obscure the relationship
between the high-level concepts and certain low-level concepts like
string termination. We know that ColdFusion code becomes generated
Java, this is true. We also know how the Java String object functions,
because there's plenty of readily available information about it.

What we don't know is how the ColdFusion "typeless" variable class
functions. We know that the language provides us with methods of
testing those typeless variables for common patterns like date,
numeric or UUID, but we don't know how that comparison functions, and
ultimately a date is a string, a numeric is a string and a UUID is a
string.

String, string, string. :) At least, they are to us. Under the hood,
I'm hoping (without knowing) that the engineers at Macromedia figured
out a way to store numeric values as numbers for the sake of
efficiency in their typeless variable, because I know when I type
"x = 3" in CFML that the mere fact that the variable x is abstracted
by this "typeless" variable class means it's less efficient than a
Java integer.
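To illustrate the cost I'm talking about -- this is emphatically not
how Macromedia actually implemented it (that's exactly what we can't
see), just a hypothetical sketch of a string-backed typeless variable
in Java, where every numeric test and every addition pays for a parse
that a plain int never would:

```java
public class Typeless {
    private final String value;            // everything is stored as text

    public Typeless(Object v) { this.value = String.valueOf(v); }

    // a pattern test like CFML's isNumeric(): attempt a parse, catch failure
    public boolean isNumeric() {
        try { Double.parseDouble(value); return true; }
        catch (NumberFormatException e) { return false; }
    }

    // every arithmetic use re-parses both operands -- the hidden tax
    public Typeless plus(Typeless other) {
        double sum = Double.parseDouble(value) + Double.parseDouble(other.value);
        // render back to text, trimming the ".0" for whole numbers
        String s = (sum == Math.floor(sum)) ? String.valueOf((long) sum) : String.valueOf(sum);
        return new Typeless(s);
    }

    @Override public String toString() { return value; }
}
```

If the engine stores numbers as numbers internally, most of that parse
cost disappears, which is precisely why I hope they did.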

I'm willing to accept that abstraction because I believe (as do a
great many other folks) that the programming time saved by that
abstraction is more valuable than the machine power saved by the Java
integer. And here we get to one of my pet peeves. :) This is when
otherwise smart developers point to Joel's "law of leaky abstractions"
article (
http://www.joelonsoftware.com/articles/LeakyAbstractions.html ) and
say "see! You shouldn't abstract, you should just code!".

I got this once in response to my SQL Abstraction API, and I guess the
jury's still out, but I'd rather try the abstraction and fail than
fail by default because I didn't test the theory at all. The fact that
it's less efficient is a given. The idea that it's unviable because
it's less efficient is pure assumption without testing it. As such,
those folks who point to the law of leaky abstractions as a reason not
to do things like SQL language abstraction or web services or XML in
general are arguing against any kind of progress at all. If we all
took their advice, we'd all be trying to create eBay in assembler, and
failing miserably, because greater complexity in software requires
greater abstraction. The programmers at eBay would never get to a
point where they could write code to display anything to the user,
because they'd be too busy writing code to allocate memory for the
auction timers. Or we could all just go back to using slide rules.

>> well be. Thus a sweeping change would need to occur
>> in each bean or require a change to the generator
>> and a re-build of its generated code. I personally
>> find it easier to simply create objects which are
>> flexible enough to not require generation and use
>> composition or inheritance to allow me to make
>> sweeping changes. The sweeping change then is a
>> line or two of code, instead of a larger
>> modification to the generator and a rebuild.

> So what, you use the root java "object"? ;-)
> Seriously, you have to store the information
> somewhere -  I don't know of any ESP
> generated code.

Umm... no. I use composition and inheritance, mostly. When you do
enough of those two things in the right places, the ability to make
sweeping changes comes pretty naturally, because a change to one
object then affects any object which extends or uses it. I will admit
that it's a double-edged sword, because changing the private methods
of extended classes can cause the change (or problems from the change)
to ripple through the inheritance chain. The same is true of changing
the arguments of public methods. But it can be a real boon if you do a
lot of brainstorming on use cases and structural planning up front.
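As a small (and contrived) Java sketch of that "sweeping change" point
-- the class names here are invented for illustration -- change the
one line in the base Formatter and everything that extends or composes
it picks the change up automatically:

```java
import java.util.*;

// base object owns one behavior; the "sweeping change" is this one line
class Formatter {
    public String format(String field, String value) {
        return field + ": " + value;   // edit once, every caller changes
    }
}

// inheritance: specializes the behavior without copying the shared logic
class UpperFormatter extends Formatter {
    @Override public String format(String field, String value) {
        return super.format(field.toUpperCase(), value);
    }
}

// composition: uses a Formatter rather than re-implementing formatting
class Entity {
    private final Formatter fmt;
    private final Map<String, String> fields = new LinkedHashMap<>();
    Entity(Formatter fmt) { this.fmt = fmt; }
    void set(String key, String value) { fields.put(key, value); }
    String render() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : fields.entrySet())
            sb.append(fmt.format(e.getKey(), e.getValue())).append('\n');
        return sb.toString();
    }
}
```

No generator, no rebuild: a new formatting rule in Formatter reaches
UpperFormatter through inheritance and Entity through composition.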

>> > But that's coming from someone who, using line breaks and
>> > MS Word, smashed several hundred pages of mish-mash
>> > into
>> > a C^HSV. They weren't commas, so I deleted the C, see?
>> > :P  It would be a dream to get a bunch of programmatically
>> > generated documents compared to that.  Programs you can
>> > reverse engineer, whatnot. People are so random, sorta.
>> > [...]
>>
>> Ya lost me. :)

> Sorta saying it's all data, but some data is much
> easier to parse than other data is. By "much" I
> mean astronomically.

Oh. Okay... Yes, admittedly. :)
Hence much of the reason behind XML.


s. isaac dealey     434.293.6201
new epoch : isn't it time for a change?

add features without fixtures with
the onTap open source framework

http://www.fusiontap.com
http://coldfusion.sys-con.com/author/4806Dealey.htm


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~|
Message: http://www.houseoffusion.com/lists.cfm/link=i:4:237437
Archives: http://www.houseoffusion.com/cf_lists/threads.cfm/4
