Send Link mailing list submissions to
        [email protected]

To subscribe or unsubscribe via the World Wide Web, visit
        https://mailman.anu.edu.au/mailman/listinfo/link
or, via email, send a message with subject or body 'help' to
        [email protected]

You can reach the person managing the list at
        [email protected]

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Link digest..."


Today's Topics:

   1. Does current AI represent a dead end? (Kim Holburn)
   2. Re: Does current AI represent a dead end? (Roger Clarke)
   3. U.S. Solar Manufacturing Surges (Stephen Loosley)
   4. Re: Does current AI represent a dead end? (Kate Lance)


----------------------------------------------------------------------

Message: 1
Date: Thu, 5 Dec 2024 15:33:23 +1100
From: Kim Holburn <[email protected]>
To: Link mailing list <[email protected]>
Subject: [LINK] Does current AI represent a dead end?
Message-ID: <[email protected]>
Content-Type: text/plain; charset=UTF-8; format=flowed

https://www.bcs.org/articles-opinion-and-research/does-current-ai-represent-a-dead-end

Many of these neural network systems are stochastic, meaning that providing
the same input will not always lead to the same output. The behaviour of such
AI systems is 'emergent', which means despite the fact that the behaviour of
each neuron is given by a precise mathematical formula, neither this behaviour
nor the way the nodes are connected are of much help in explaining the
network's overall behaviour.
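
A minimal toy sketch of that stochasticity, assuming a plain temperature-based
softmax sampler (the sampler and the numbers are illustrative, not the
article's): the same input logits give different tokens across runs.

  import math
  import random

  def softmax(logits):
      exps = [math.exp(x) for x in logits]
      total = sum(exps)
      return [e / total for e in exps]

  def sample_token(logits, temperature=1.0):
      # Scale by temperature, convert to probabilities, then sample.
      probs = softmax([x / temperature for x in logits])
      r = random.random()
      cumulative = 0.0
      for token, p in enumerate(probs):
          cumulative += p
          if r < cumulative:
              return token
      return len(probs) - 1

  # Identical input, repeated calls: the sampled token varies run to run.
  logits = [2.0, 1.5, 0.3]
  print([sample_token(logits) for _ in range(10)])

Each individual arithmetic step is exact; it is the random draw that makes the
overall behaviour non-deterministic.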

...

This idea lies at the heart of piecewise development: parts can be engineered
(and verified) separately and hence in parallel, and reused in the form of
modules, libraries and the like in a 'black box' way, with re-users being able
to rely on any verification outcomes of the component and only needing to know
their interfaces and their behaviour at an abstract level. Reuse of components
not only provides increased confidence through multiple and diverse use, but
also saves costs.
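
For contrast, a small sketch of the conventional 'black box' reuse described
above, using a hypothetical sorting component (not from the article): the
re-user relies only on the documented interface and its verified contract,
never on the internals.

  # Conventional piecewise development: a component with a documented
  # interface is verified in isolation, then reused as a black box.

  def stable_sort(records, key):
      """Return records ordered by key; equal keys keep their input order."""
      return sorted(records, key=key)   # internals irrelevant to re-users

  def test_stable_sort():
      data = [("b", 2), ("a", 1), ("c", 1)]
      result = stable_sort(data, key=lambda r: r[1])
      assert result == [("a", 1), ("c", 1), ("b", 2)]  # sorted and stable

  test_stable_sort()  # verified once; every caller relies on the contract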

...

Current AI systems have no internal structure that relates meaningfully to
their functionality. They cannot be developed, or reused, as components. There
can be no separation of concerns or piecewise development. A related issue is
that most current AI systems do not create explicit models of knowledge; in
fact, many of these systems developed from techniques in image analysis, where
humans have been notably unable to create knowledge models for computers to
use, and all learning is by example ('I know it when I see it'
<https://www.acluohio.org/en/cases/jacobellis-v-ohio-378-us-184-1964#:~:text=TheU.S.SupremeCourtreversed,tohardcorepornography...>).
This has multiple consequences for development and verification.

...

Systems are not explainable, as they have no model of knowledge and no
representation of any 'reasoning'.

....

Verification comes with a subset of issues following from the above. The only 
verification that is possible is of the system in its 
entirety; if there are no handles for generating confidence in the system 
during its development, we have to put all our eggs in the 
basket of post-hoc verification.
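
A brief sketch of what that whole-system, post-hoc verification amounts to,
with a stand-in black-box "model" and a placeholder accuracy threshold (both
assumptions, not from the article): all we can do is measure end-to-end
behaviour on held-out cases.

  # Post-hoc verification of an opaque system: no component-level checks,
  # only whole-system evaluation against a held-out test set.

  def evaluate(system, test_cases, required_accuracy=0.95):
      correct = sum(1 for x, expected in test_cases if system(x) == expected)
      accuracy = correct / len(test_cases)
      return accuracy, accuracy >= required_accuracy

  def model(x):                  # stand-in for any opaque, end-to-end system
      return x.strip().lower()

  tests = [("  Hello ", "hello"), ("WORLD", "world"), ("Ok", "ok")]
  accuracy, passed = evaluate(model, tests)
  print(f"accuracy={accuracy:.2f}, passed={passed}")

Such a check says nothing about why the system behaves as it does; it only
bounds how often it was right on the cases we happened to test.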

...

So, is there hope? I believe, though I would be happy to be proved wrong on
this, that current generative AI systems represent a dead end, where
exponential increases of training data and effort will give us modest
increases in impressive plausibility but no foundational increase in
reliability. I would love to see compositional approaches to neural networks,
hard as it appears.

-- 
Kim Holburn
IT Network & Security Consultant
+61 404072753
mailto:[email protected]  aim://kimholburn
skype://kholburn - PGP Public Key on request


------------------------------

Message: 2
Date: Thu, 5 Dec 2024 16:13:42 +1100
From: Roger Clarke <[email protected]>
To: [email protected]
Subject: Re: [LINK] Does current AI represent a dead end?
Message-ID: <[email protected]>
Content-Type: text/plain; charset=UTF-8; format=flowed

Music to *my* ears, at least.

http://rogerclarke.com/EC/AII.html#CML (2019)
http://rogerclarke.com/EC/AIEG.html#RF (2020)

But also:
http://www.rogerclarke.com/EC/RGAI.html#GAIC (2024)

_________________

On 5/12/2024 15:33, Kim Holburn wrote:
> https://www.bcs.org/articles-opinion-and-research/does-current-ai-represent-a-dead-end

-- 
Roger Clarke                            mailto:[email protected]
T: +61 2 6288 6916   http://www.xamax.com.au  http://www.rogerclarke.com

Xamax Consultancy Pty Ltd      78 Sidaway St, Chapman ACT 2611 AUSTRALIA 

Visiting Professorial Fellow                          UNSW Law & Justice
Visiting Professor in Computer Science    Australian National University



------------------------------

Message: 3
Date: Thu, 05 Dec 2024 17:27:56 +1030
From: Stephen Loosley <[email protected]>
To: "link" <[email protected]>
Subject: [LINK] U.S. Solar Manufacturing Surges
Message-ID: <[email protected]>
Content-Type: text/plain; charset="UTF-8"

U.S. Solar Manufacturing Surges

By Michael Bates, December 4, 2024
https://solarindustrymag.com/u-s-solar-manufacturing-surges


The U.S. added a record-breaking 9.3 GW of new solar-module manufacturing 
capacity in the third quarter, including five new or expanded factories in 
Alabama, Florida, Ohio and Texas.

Total U.S. solar module manufacturing capacity is now nearly 40 GW.

The latest U.S. Solar Market Insight Q4 2024 report from the Solar Energy 
Industries Association (SEIA) and Wood Mackenzie shows that at full capacity, 
U.S. solar module factories can produce enough equipment to meet nearly all 
demand for solar in the United States.


Notably, solar cell manufacturing resumed in the third quarter as silicon cells 
were manufactured in the U.S. for the first time since 2019.

The U.S. solar industry installed 8.6 GW of new electricity generation capacity 
in Q3, representing a 21% year-over-year increase and the largest third quarter 
ever for the industry.

The utility-scale segment led the industry, with 6.6 GW of new projects coming 
online. Utilities and businesses are driving this growth as they procure 
significant levels of solar to meet rising demand for electricity. 

The commercial and community solar markets also experienced strong gains in the 
third quarter, growing by 44% and 12% year-over-year, respectively.

Texas continues to lead the nation in solar deployment, adding 2.4 GW of 
capacity in Q3. The Lone Star State accounts for 26% of all new capacity to 
come online so far in 2024. Florida has installed the second-most solar 
capacity in 2024, and nearly 30,000 Florida households have installed solar 
this year.

In the last two years, 1.4 million American households have used federal 
incentives to install solar and lower their energy costs.

"Our current outlook for the next five years has the U.S. solar industry
growing 2 percent per year on average, reaching a cumulative total of nearly
450 GW by the end of 2029," says Michelle Davis, head of solar research at
Wood Mackenzie and lead author of the report.

"Demand for solar remains robust, and annual installation forecasts would be
higher if not for limitations the industry faces, including those related to
interconnection, labor availability, supply constraints, and policy."

Total solar deployment in 2024 is again expected to exceed 40 GW, followed by 
annual installation volumes of at least 43 GW for the remainder of the decade.
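
A quick back-of-the-envelope check of those forecast figures (the starting
cumulative capacity of roughly 230 GW at the end of 2024 is an assumption, not
stated in this excerpt):

  # Rough check of the SEIA/Wood Mackenzie forecast: about 43 GW installed
  # per year, growing roughly 2% annually, on top of an assumed ~230 GW of
  # cumulative U.S. capacity at the end of 2024.
  cumulative_gw = 230.0   # assumed end-of-2024 starting point
  annual_gw = 43.0        # "at least 43 GW" per year
  for year in range(2025, 2030):
      cumulative_gw += annual_gw
      annual_gw *= 1.02   # ~2 percent average annual growth
  print(f"Approximate cumulative capacity, end of 2029: {cumulative_gw:.0f} GW")

Under those assumptions the numbers hang together, landing near the report's
figure of nearly 450 GW cumulative by the end of 2029.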

--



------------------------------

Message: 4
Date: Thu, 5 Dec 2024 20:54:45 +1100
From: Kate Lance <[email protected]>
To: Kim Holburn <[email protected]>
Cc: [email protected]
Subject: Re: [LINK] Does current AI represent a dead end?
Message-ID: <Z1F4ZRA+Toqi7iOP@plum>
Content-Type: text/plain; charset=utf-8

Ed Zitron has been debunking the whole AI 'economy' for a while now
and he's delightfully readable:

https://www.wheresyoured.at/godot-isnt-making-it/

"The entire tech industry has become oriented around a dead-end technology that
requires burning billions of dollars to provide inessential products that cost
them more money to serve than anybody would ever pay. 

The obscenity of this mass delusion is nauseating: a monolith to bad
decision-making and the herd mentality of tech's most powerful people, as well
as an outright attempt to manipulate the media into believing something was
possible that wasn't. And the media bought it, hook, line, and sinker.

Hundreds of billions of dollars have been wasted building giant data centers to
crunch numbers for software that has no real product-market fit, all while
trying to hammer it into various shapes to make it pretend that it's alive,
conscious, or even a useful product. 

There is no path, from what I can see, to turn generative AI and its associated
products into anything resembling sustainable businesses, and the only path
that big tech appeared to have was to throw as much money, power, and data at
the problem as possible, an avenue that appears to be another dead end."

Lots more good stuff, especially on NVIDIA's chip problems.

Regards,
Kate



On Thu, Dec 05, 2024 at 03:33:23PM +1100, Kim Holburn wrote:
> https://www.bcs.org/articles-opinion-and-research/does-current-ai-represent-a-dead-end


------------------------------

Subject: Digest Footer

_______________________________________________
Link mailing list
[email protected]
https://mailman.anu.edu.au/mailman/listinfo/link


------------------------------

End of Link Digest, Vol 385, Issue 5
************************************
