I read through that, and my rule of thumb is that anyone offering investment advice had better have $100M in net worth before I pay any attention to what they have to say. If his AI isn't working to get him $100M, it's definitely not going to help me.
The fundamental problem with AI, though, isn't whether you can make it work or not. The fundamental problem is liability.

Suppose I can build an AI program that has a 99.9% success rate at driving a vehicle. It has crashes/accidents at a rate of 0.1%, which is far better than 99% of the humans out there who drive cars. Note that the very paper you quoted talks about safety protocols in AI to REDUCE the chance of problems, not eliminate them. So already, that paper is assuming that any AI system is going to have an error rate.

You can argue that more lives would be saved if the government passed laws requiring every driver to stop driving and substitute my AI. And it would absolutely reduce accidents and save lives; that can be proven. But what you CANNOT do is find a car company that would build a car with that AI under that kind of legal system, because if drivers are required by law to give control over to an AI, then liability for the 0.1% of accidents that DO happen is going to fall on the car company. And with the number of cars out there and the mileage driven by them, that 0.1% would be billions of dollars a year in payouts to cover AI errors.

Uber got its hands burned by the Elaine Herzberg accident and sold off its AI vehicle control division. They learned what I had been posting on car forums for years before anyone started testing AI-controlled vehicles: there WILL be accidents. You are also NEVER going to find a car buyer who will buy a car that requires them to give up control over the vehicle, yet makes them personally liable for any accidents (errors) the AI has with that vehicle.

So AI for vehicle control is dead. And by extension, if you think about it, because of that same problem, AI control of anything else having to do with health and human safety is also dead. Forget an AI-controlled surgeon, for example. Or an AI-controlled electrical grid, or nuclear reactor, or anything else. This is why we don't have AI-controlled jet planes.
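The scale of that exposure is easy to sketch with a back-of-envelope calculation. Every number below is an assumption chosen purely for illustration (the crash count, the improvement factor, and the average payout are not sourced figures), but it shows how even a drastically safer fleet still adds up to billions a year once the maker holds the liability:

```python
# Back-of-envelope: fleet-wide liability under mandatory AI driving.
# ALL figures below are illustrative assumptions, not sourced statistics.

human_crashes_per_year = 6_000_000   # assumed annual US crash count, human drivers
ai_improvement_factor = 100          # assume the AI crashes 100x less often
avg_payout_per_crash = 50_000        # assumed average liability payout, USD

# With drivers barred from intervening, every remaining crash is the
# manufacturer's liability rather than an individual driver's.
ai_crashes_per_year = human_crashes_per_year / ai_improvement_factor
annual_liability = ai_crashes_per_year * avg_payout_per_crash

print(f"AI crashes/year: {ai_crashes_per_year:,.0f}")       # 60,000
print(f"Annual fleet liability: ${annual_liability:,.0f}")  # $3,000,000,000
```

Even a hundredfold safety improvement over human drivers leaves the manufacturer holding a multi-billion-dollar annual bill under these assumed numbers, which is the core of the argument above.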
Even though they have autopilots, they still require a human's butt to warm a chair in the cockpit. Because of liability.

You may be able to sell an AI-controlled weapon, since weapons by definition have no liability. For now. There is a huge movement to make gun makers liable, though.

But the more you think about this, the more you realize the inherent problems in AI. Put an AI in charge of customer service for, let's say, a cellular phone company, and you WILL see a rise in complaints, a tarnished reputation for that company, and decreased sales as a result. The business owner is eventually going to see the AI as a liability and jettison it.

I've been involved in high tech for years now, and every few years there is ALWAYS some new technology that everyone in high tech thinks is going to fundamentally change things. AI is just the latest in a long list of these. The ONLY new tech I've ever seen in my life that did fundamentally change things is communications tech: the Internet, cell phone texting, and so on. The new tech that lasts and becomes dominant is ALWAYS tech that facilitates human-to-human communication. Bet on that and you will never lose. This very mailing list is proof of that.

Ted

-----Original Message-----
From: PLUG <[email protected]> On Behalf Of John Sechrest
Sent: Wednesday, March 29, 2023 10:23 PM
To: Portland Linux/Unix Group <[email protected]>
Subject: Re: [PLUG] Google Bard - entry level sys-admin, learning fast?

Let me suggest that much of our understanding of reasoning is not in the boxes that we think it is in.

Let me point you to this paper:
https://yoheinakajima.com/task-driven-autonomous-agent-utilizing-gpt-4-pinecone-and-langchain-for-diverse-applications/

and let me suggest that you look at what he is doing with http://yohei.me and his Twitter address http://twitter.com/yoheinakajima

A bit shift has happened. We are still in the process of understanding what it means.
On Wed, Mar 29, 2023 at 10:05 PM MC_Sequoia <[email protected]> wrote:

> "Career sys-admins, take note. You may want to retrain as a career
> re-trainer; many sys-admins may soon be looking for new careers."
>
> Color me very skeptical. One large blind spot of AI is context and/or
> situational understanding.
>
> A few examples:
>
> The AI that made a digital stick person fall on its face, then stand
> up on its head and fall over again, then repeat, as the fastest way for
> the person to get from point A to point B.
>
> The AI security camera that was defeated by a group of Marines who
> snuck up on it by posing as a cardboard box, bush, or trashcan, or just by
> doing somersaults and not moving like a normal human being would.
>
> If I had a nickel for every time some newbie blew something up,
> whether they be a sys-admin, developer, engineer, etc., who didn't
> understand the codebase, the problem, the network, the workflow, the use
> case, etc.
>
> There are many ways to solve any given computer problem, and usually we
> humans have preferences in terms of efficiency, cost, elegance,
> simplicity, service disruption, time, etc.
>
> To me, that is the mark of intelligence. It's not just solving a given
> problem but solving it in a way that takes many of those into account
> and/or in order of priority and/or preference.
>
> I mean, has any problem posted on the PLUG list ever been solved
> without a fairly robust discussion regarding taking at least a few
> things into consideration?

--
JOHN SECHREST
Founder, Seattle Angel Conference
TEL (541) 250-0844
EMAIL [email protected]
Schedule a meeting: http://sechrest.youcanbookme.com/
http://seattleangelconference.com
@nwangelconf

An investor-driven event bringing together new investors and new entrepreneurs to expand the startup ecosystem.
