Kim writes,

> From: Kim Holburn <[email protected]>
> Subject: [LINK] Cory Doctorow: ‘Enshittification’ is coming for absolutely 
> everything
>
> https://doctorow.medium.com/my-mcluhan-lecture-on-enshittification-ea343342b9bc
>
> What’s enshittification? It’s my theory explaining how the internet was
> colonized by platforms, and why all those platforms are degrading so quickly
> and thoroughly, and why it matters —
>
> We can reverse the enshittification of the internet. We can halt the creeping
> enshittification …
> https://pluralistic.net/2024/01/30/go-nuts-meine-kerle#ich-bin-ein-bratapfel



Forcing AI on developers is a bad idea that is going to happen (aka,
enshittification?)

We’ve still got time to make it better before it does

By Rupert Goodwins | Mon 12 Feb 2024 | Opinion
https://www.theregister.com/2024/02/12/opinion_column_on_forcing_ai_features_on_developers/


There is a thing that companies do, a pathological behavior that makes 
customers unhappy and makes things worse in general.

It is so widespread and long-running that it should have its own name, much as
an unpleasant medical condition does. It does not, but you'll recognize it
because it has blighted your life often enough: it's the unwanted new feature.

The Hacker's Dictionary, with its roots in the 1970s, came close with "creeping
featurism," but that doesn't really capture the malignancy the effect is
capable of.

It can trip up muscle memory of daily tasks, it can get in your face by 
insisting you explore and enjoy its brand-new goodness, it can even make you 
relearn a task you had mastered. Bonus points if it is difficult to disable or 
shut up, and double score if that's impossible.

For maximum effect, though, it should make your job impossible and force you to 
abandon a tool on which you base your professional life.

Take a bow, JetBrains, maker of IDEs. The introduction of a non-removable AI
Assistant plug-in into the daily working lives of developers is such a bad
idea, there's a good chance the whole class of phenomena could end up being
named JetBrains Syndrome.

This has absolutely nothing to do with the quality of the AI assistance 
proffered, and everything to do with understanding the practicalities of being 
a developer. Every single user of JetBrains' products is under time pressure to 
produce working code as part of a project. There will be time to learn about 
new features, but that time is not today.

If you have an ounce of common sense or a microgram of experience, the time for 
new features is when you're ready, probably after others have gone in and 
kicked the tires. That AI assistant may be fabulous, or it may be an intrusive, 
buggy timesink that imposes its own ideas on your work. AI assistants have form 
for that.

It's also nothing to do with whether the plug-in is quiescent and won't do
anything until activated, as JetBrains says, nor does it matter that it won't
export code to places unknown for AI learning purposes. It might activate
itself in the future, or its behavior may change: neither would matter if the
code simply weren't there in the first place. But with a non-removable
plug-in, that is not an option.

For devs who want to investigate AI responsibly, in their own time, that's a 
red flag. Just not as red and flaggy as introducing an AI module into a 
development environment used in companies with strict "No AI coding" policies, 
though. 'Yes, there's AI, but trust us, it's turned off'? Who'd want to be the
developer having to make that argument to management?

This is just plain weird. JetBrains' own developers are, well, developers.
They'll have direct experience of the pressures and factors in devlife that
make non-optional creeping featurism such a stinker of an idea. Think
security, IP risk and code quality policies. It's the same unsolvable paradox
that says everyone involved in making customer support such a horrible
experience must themselves endure horrible customer support. Why don't they
make their bit better?

The kindest answer in JetBrains' case is that through lack of consultation, 
knowledge or foresight, it just didn't know that there were no-AI policies in 
place in some corporate dev teams. That's kinder than "we knew, but marketing 
made us do it." Let's assume AI assistance in development isn't just marketing, 
then: how can companies like JetBrains, and indeed everyone working in software 
creation, make no-AI policies unnecessary?

In the words of Philip K Dick's android hunter Rick Deckard, "Replicants are 
like any other machine, they're either a benefit or a hazard. If they're a 
benefit, they aren't my problem" – if you want chapter and verse on the 
reality-warping nature of AI, PKD is your go-to visionary. We don't know where 
dev AI fits on that scale, but we can change the landscape to help us find out.

Take the worry that using AI code means using code that the developer doesn't 
fully understand, with implications for security, reliability and 
maintainability.

True enough, but AI code isn't automatically worse here than some of the stuff
that is entirely the work of carbon-based life-forms. Cut and paste, use of
external functions and libraries that aren't fully understood, and
"it-seems-to-work-itis" are all guilty, and we have techniques to deal with
them such as audits, walk-throughs and the horror that is properly policed
documentation protocols. AI code won't get a magic pass.

The specter of new IP infringement law is a good one. Lawyers use it in
bedtime stories to frighten their children away from the nightmare of becoming
programmers. But here, dev AI has a chance of getting ahead of its generalist
cousins, as its training data sets will be predominantly open source.

That doesn't obviate the problem, but it is a far more permissive and
transparent world than text or image corpora, and an organization that already
allows the use of open source code will have an easier time digesting an AI
trained only on open source.

The final problem for no-AI policies is one Philip K Dick so often noted: if
you can't tell the difference between the natural and the artificial, is there
a meaningful difference at all? We can't tell. Is there AI-generated code in
the systems you're using to read this right now, in the browser or the cloud
or the local platform?

In the massively dynamic, interrelated mass of components that make up
computing in 2024, demanding purity of just one component may make little
odds. If it does matter, then we should look back to the generalist AI
systems, which are embarking on embedding fingerprints in AI-generated content
that mark it as such.

Easy enough to do in source code, far more difficult in an executable. Should
Could we? Is it even ethical not to?
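To see why source is the easy case, here is a minimal sketch of what a
source-level fingerprint might look like: a structured marker comment that a
generator inserts and an audit tool scans for. The "@ai-generated" marker, the
tag/scan helpers and their fields are all invented for illustration here; no
real vendor's scheme is implied.

# Hypothetical source-level AI fingerprint: a structured marker comment
# that a code generator could insert and an audit tool could scan for.
# The marker format is invented for illustration only.
import re
import sys
from pathlib import Path

# Example marker line: "# @ai-generated model=example-model date=2024-02-12"
MARKER = re.compile(r"#\s*@ai-generated\b(?P<attrs>[^\n]*)")

def tag(code: str, model: str, date: str) -> str:
    """Prepend a fingerprint comment to a generated snippet."""
    return f"# @ai-generated model={model} date={date}\n{code}"

def scan(root: Path) -> list[tuple[Path, int, str]]:
    """Report every fingerprinted line under a source tree."""
    hits = []
    for path in root.rglob("*.py"):
        for lineno, line in enumerate(
                path.read_text(errors="ignore").splitlines(), 1):
            m = MARKER.search(line)
            if m:
                hits.append((path, lineno, m.group("attrs").strip()))
    return hits

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for path, lineno, attrs in scan(root):
        print(f"{path}:{lineno}: {attrs}")

The same trick does not survive compilation: comments are the first thing a
compiler throws away, which is exactly why the executable case is so much
harder.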

Whether you use it or not, AI code is going to be increasingly part of the 
environment in which your product will exist, so setting rules for identifying 
and managing it is the most important foundation. You can't fudge, you can't
force, you can't be ambiguous. As JetBrains has just most helpfully found out. ®

--
_______________________________________________
Link mailing list
[email protected]
https://mailman.anu.edu.au/mailman/listinfo/link
