Well, I used HotDog, Dreamweaver, InterDev (back in the ASP days!) – but you 
can't say the tooling's good just because there's lots of it. It needs to 
actually solve the problems at hand, and my experience in any fast-moving area 
is that the tooling takes a while to catch up. First the landscape has to 
settle down, then the tooling needs to mature to reflect the best 
practices/patterns for how people are actually using the technology.

Re technical debt: to be clear for readers following along, I would say that 
technical debt comprises those compromises/choices made during 
development/stand-up that you know will have to be remediated later, or that 
will cause costs to be incurred elsewhere to support the solution (throwing 
your costs "over the fence").

Technical debt can then be divided into a couple of camps:

-          accepted technical debt: management has accepted the debt (and the 
future remediation or ongoing cost) because the alternatives are unpalatable. 
For example:

o   we may still be deploying applications on Windows Server 2003, even though 
we're paying for a custom support agreement, because the alternative 
(upgrading the whole application ecosystem to a supported OS) is simply not 
doable (technically, financially, whatever) in the current budget cycle. We may 
have to buy a new version of the app, or upgrade our tooling (monitoring, 
backup, security suite etc.), or retest everything on a 64-bit platform.

o   we may still be buying obsolete, expensive WAN connections simply because 
we have existing agreements with our various suppliers that allow us to meet 
application performance and availability SLAs, and that have known 
outcomes/issues, because the alternative (re-tendering for WAN connections) is 
unpalatable.

In both cases, everything will need to be overhauled/remediated over time, and 
we're paying less now knowing we'll pay more in future. But the cost is 
accepted (I've put some toy numbers on this towards the end of this mail).


-          unacknowledged technical debt: in this case, the true cost of the 
solution isn't known and accepted upfront. Choices are made (even with the best 
intentions) to meet project timelines/budgets that don't cater for the full set 
of use cases. Generally I would say it's harder to ignore this, or "slip it 
past the steering committee", when you have a centrally delivered, web-based 
solution. Much of the infrastructure complexity "goes away" – servers, storage 
and bandwidth are easy to procure in data centres, and UAT testing is 
relatively straightforward. When you have a thick-client deployment, the 
infrastructure comes back into play (OSes, platform dependencies, 
distribution/deployment systems, end-to-end monitoring etc.) and UAT tends to 
be truncated. No one goes looking for all the corner cases of "certain hardware 
models, with certain OSes, with certain other apps installed, and certain 
peripherals attached" (see the sketch just below for why), and even if a 
problem does manifest itself, we try to brush it under the carpet, because 
flying out to the back of nowhere to troubleshoot an end user's problem isn't 
viable. So it becomes a BAU/Operations problem (and cost). Most crucially, 
every time you deploy another thick-client app, you build on, and lock in, all 
the stuff that currently exists and is hard to change – OS version, supporting 
frameworks (JVMs, .NET, Citrix clients, AV, backup), firewall rules, deployment 
technologies, network, telco, helpdesk. Trying to upgrade any of that, across 
tens of thousands of clients with a thousand apps, is very expensive. [1]
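
As an aside, those corner-case combinatorics aren't hand-waving. Here's a 
back-of-the-envelope sketch (TypeScript, in keeping with the thread topic; 
every number is invented purely for illustration) of why nobody ever tests the 
full thick-client matrix:

// Illustrative only – all of these counts are made up.
// Size of the "full" UAT matrix for one thick-client app release.
const hardwareModels  = 12;  // distinct desktop/laptop SKUs in the fleet
const osBuilds        = 4;   // OS versions/patch levels still in service
const coexistingApps  = 50;  // other installed apps that could conflict
const peripherals     = 8;   // printers, scanners, smartcard readers, etc.

const combos = hardwareModels * osBuilds * coexistingApps * peripherals;
console.log(`Configurations to test: ${combos}`);   // 19,200

// Suppose each configuration takes ~2 hours to stand up and verify:
const testerHours = combos * 2;                     // 38,400 hours
console.log(`~${Math.round(testerHours / 1600)} person-years of UAT`); // ~24

Nobody signs off on ~24 person-years of UAT for a single app, so the matrix 
gets sampled, and the unsampled corners become Operations' problem.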

Now, to be fair, I will say that I've pretty much only worked in large(r) 
enterprise-type environments, so my experience may not be representative of 
what's out there in the mainstream. In simpler environments, this may all be a 
moot point. But in large, complex environments we have decades of accumulated 
debt – stuff that would go away if we had unlimited time, resources and money. 
But we don't. The most crucial constraint is probably time: there are only 365 
days in a year, and that limits change windows. Second is money: we spend 
$1.1–1.2 billion a year on tech. That's not insignificant, yet we still have 
16-bit apps, and we're running things that aren't supported or are no longer 
manufactured. Life is crazy ☺
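
To put toy numbers on the "pay less now, pay more later" trade-off from the 
accepted-debt example earlier (same caveat: these figures are hypothetical, 
not ours):

// Illustrative only – hypothetical figures.
// Cumulative cost of staying on a custom support agreement vs migrating now.
const supportFeeYear1  = 2.0;   // $m/yr for custom support, rising 25% p.a.
const supportGrowth    = 1.25;
const migrationCost    = 8.0;   // $m one-off to upgrade/remediate today
const supportedRunCost = 0.5;   // $m/yr run cost on a supported platform

let stay = 0;                   // cumulative cost of deferring
let migrate = migrationCost;    // cumulative cost of paying up front
let fee = supportFeeYear1;
for (let year = 1; year <= 8; year++) {
  stay += fee;
  fee *= supportGrowth;
  migrate += supportedRunCost;
  const flag = stay > migrate ? "  <- deferring is now dearer" : "";
  console.log(`Year ${year}: stay=$${stay.toFixed(1)}m  migrate=$${migrate.toFixed(1)}m${flag}`);
}

With these numbers the lines cross around year four. The point of "accepted" 
debt is that management has seen that crossover coming and signed up for it 
anyway.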

Regards
Ken

[1] Relatively speaking. It might be, say, a $30–50m project, which isn't 
expensive in banking per se (upgrading a major channel system would be 5x 
that, and a core systems refresh would be 10x that, at least), but replacing 
an end-user computing environment doesn't really provide much business benefit 
compared to those other things.



From: ozdotnet-boun...@ozdotnet.com On Behalf Of Nathan Schultz
Sent: Tuesday, 22 November 2016 4:05 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: [OT] node.js and express

Ken, I'm curious as to why you think there is less technical debt in 
web applications?

I agree that the web is less mature - but it's not because of a lack of time 
or tooling. A mate of a mate made millions making web-development software in 
the mid-'90s (HotDog software), and I was doing web apps in Visual InterDev 
(which predates Visual Studio).

On 22 November 2016 at 12:13, Ken Schaefer <k...@adopenstatic.com> wrote:
A couple of possible reasons:


-          All the emphasis is on centrally delivered applications (aka 
web-based), so that's where all the innovation and change is happening. It 
will take time for maturity and tooling to catch up.

-          It’s harder to bypass the full technical cost of development when 
something’s centrally delivered. It’s easier to incur “technical debt” when you 
build a little thick-client app – the real cost of the app gets buried in IT 
operations.

Cheers
Ken

From: ozdotnet-boun...@ozdotnet.com On Behalf Of Greg Low
Sent: Tuesday, 22 November 2016 2:33 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: RE: [OT] node.js and express

But that's a centralized vs distributed argument. I understand that. But why 
exactly does a centralized development process have to be orders of magnitude 
slower than a distributed one? I just think the tooling has let us down, big 
time.

Regards,

Greg

Dr Greg Low

1300SQLSQL (1300 775 775) office | +61 419201410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me


