Venky :
Thank you for your comments. It is fine if you are unable to continue
the discussion. I understand.
To summarize my position : there is just no reason to believe that a
Singularity could happen. The Singularity is still very hypothetical
(more or less in the realm of science fiction).
I have discussed this issue with the people at the Singularity meetup.
None of them is able to tell me how exactly AI could engage in
"continuous self-improvement", an idea which lies at the heart of the
Singularity argument. I believe that "continuous self-improvement" is a
purely hypothetical possibility.
So what is to be done? Given that the Singularity is a hypothetical
possibility, the argument from the organizational perspective is that
businesses cannot be expected to spend resources on such possibilities.
Instead, it is the role of the government to resolve such issues via
regulation. An example of how the government currently addresses the
possibility of danger to humans from AI is the deployment of robots. If
somebody deploys a robot that could potentially injure a human being,
the government ensures that they put in the appropriate controls. There
are checks and balances between businesses (and individuals) and
government to ensure that neither steps beyond the boundaries within
which they may operate.

All:
I would say that mine is a pretty black-and-white (I would even say
bold) position, and it is based on my belief that I can identify
significant fundamental flaws in the arguments advanced by the major
proponents of the idea of a Technological Singularity. These arguments
run into many pages.
pages. I have gone beyond the call of duty to respond to people at the
Singularity Institute. I am doing this as a public service. I have met
some of the people at the Singularity meetup, including one person who
teaches there. I do not want them to be falsely informed about the idea
(not that I am saying that they are) and do not want them to spend years
on this only to discover later on that the idea was not viable in the first
place. The case for Technological Singularity is made in several books,
articles and essays. It is not likely that everyone would be convinced
by a simple three-line response to all the arguments made by Vinge,
Kurzweil, Yudkowsky and co. I am willing to respond to anyone else on
silklist who may not have been convinced (assuming it doesn't take too
long).
Anand
--- In [email protected], Venky TV <venky.tv@...> wrote:
>
> On 7 February 2011 22:22, Anand Manikutty manikuttyanand@... wrote:
> > Hi Venky :
> > I think there has been some confusion/miscommunication. The List (capital
> > "L") I am referring to is this one :
> > http://groups.yahoo.com/group/indo-euro-americo-asian_list/messages
> > Since it appears that you have not read the messages I have posted there
> > (just read the posts from 215 onwards on Technological Singularity), I am
> > going to assume that the last two comments of yours (which aim to counter
> > my arguments) arise out of this confusion/miscommunication. I use the
> > List to maintain the ongoing list of counter-arguments in one place
> > rather than have it scattered all over the place, and to save myself the
> > time and effort of repeating counter-arguments. It is more efficient for me.
>
> Well, I had a look.  Found a couple of emails in a thread which did
> not say anything new from what was posted here.  In any case,
> maintaining a debate over two lists, one which I check regularly for
> arguments and another where I respond, is just not terribly efficient
> for me.  If this debate is not going to be on silklist, count me as
> not interested any more.
>
> Venky (the Second).
>
