Max, Paul & all,

Thanks for all the thought-provoking links, everyone.

Sometimes there are shades of panic in the way I see AI art. It’s like the
machine is getting deep into my psyche, colonizing the culture as data and
spitting something out that barely resembles art or beauty or play. I think
that reflects the weaponized ideology of broader data practices today: this
is *exactly* what machine learning *is* doing, often with catastrophic
results. And much of that comes from how we imagine the links between *our*
imaginations and the machine’s “imagination.”

The machine’s "imagination" (whatever happens in "latent space," which
seems to be the term we're using) is reaching to find patterns and
relationships, even when such patterns and relationships may not exist. We
hope that the way we take art into our minds is something different. But I
don’t know for sure.
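
(For the technically inclined, one way to make that “reaching” concrete:
in a generative model, *every* point in latent space decodes to an
output, meaningful or not. Below is a toy sketch in Python; the
8-dimensional latent space and the random linear “decoder” are stand-ins
I made up for illustration, not any real system’s architecture.

    import numpy as np

    # Toy stand-in for a trained generative model's decoder: a fixed
    # random linear map from an 8-dim latent space to a 4x4 "image."
    rng = np.random.default_rng(0)
    decoder_weights = rng.normal(size=(16, 8))

    def decode(z):
        """Any latent vector z decodes to *some* 4x4 image."""
        return (decoder_weights @ z).reshape(4, 4)

    # Arbitrary points in latent space: none of them "mean" anything,
    # yet each one decodes to a structured-looking output all the same.
    for _ in range(3):
        z = rng.normal(size=8)
        print(np.round(decode(z), 2))

The point being: the machine never declines to produce; it finds
“something” everywhere it looks.)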

At the moment, I can only respond to this machine “imagination” in the same
way that we find meaning within a human-produced painting, or poem, or
film, or television advertisement. We imagine ourselves *within* those
worlds. We do this within our private mental spaces, but we hand over some
internal control to the artists, poets, or marketing agencies. When we do,
our story and their stories become temporarily intertwined with something
external. Whether we are being manipulated by poets or design houses, we
know it was *human, and trying to meet us*.

With few exceptions, even the most alienating and experimental of these
communication forms are shaped by that desire for human comprehension.
Machines, in *simulating* art, do so without any desire to connect or
reassure us. The machine is not concerned with being understood, because it
doesn’t, and cannot, *understand*. It’s the cold indifference of a machine.
In the distance between us and it, we project all that we fear from the
Other: infallible, all-knowing, all-aware — and so we imagine the very
things that make it so frightening. I am used to the sense that the
screen is always there to take something from me, package it up, and offer
it back through the recommendation of some distant system. So, I am also
bringing that to my interactions with the system, in how I interpret
(imagine) what it is doing. Generative art systems don’t “do this”; *I* do
it to *them*.

The uncanniness — that *close-but-not-quite-human* quality of
machine-generated text and images — is a different way of intermingling
imaginations because we imagine it to be different. The image quality is
not so clear, and so the limits of the machine imagination intertwine
with a human desire to be immersed. I can see my own imagination
reaching, and sometimes failing; unmasking that lie can be terrifying.
(The Lacanian “Real,” etc.)

-e.




On Wed, Jul 14, 2021 at 3:38 PM Paul Hertz via NetBehaviour <
netbehaviour@lists.netbehaviour.org> wrote:

> There's an essay, "Intelligence Without Representation," that Brooks wrote
> in 1987, http://people.csail.mit.edu/brooks/papers/representation.pdf,
> that offered what was then a new point of view on how to consider AI.
>
> // Paul
>
>
>
> On Wed, Jul 14, 2021 at 2:10 PM Paul Hertz <igno...@gmail.com> wrote:
>
>> Hi Max,
>>
>> The robotics researcher Rodney Brooks back in the late 1980s argued that
>> AI based on the construction of a "knowledge base" was bound to fail. He
>> made the case that a robot adapting to an environment was far more likely
>> to achieve intelligence of the sort that humans demonstrate precisely
>> because it was embodied. Some of his ideas are presented in the movie Fast,
>> Cheap, and Out of Control, directed ISTR by Errol Morris. If you haven't
>> seen it yet, I can recommend it.
>>
>> -- Paul
>>
>> On Wed, Jul 14, 2021, 1:38 PM Max Herman via NetBehaviour <
>> netbehaviour@lists.netbehaviour.org> wrote:
>>
>>>
>>> Hi all,
>>>
>>> I know virtually nothing about AI, beyond what the letters stand for,
>>> but noticed this new article in Quanta Magazine.  Does it pertain at all?
>>> Interestingly it concludes that in order for AI to be human-like it will
>>> need to understand analogy, the basis of abstraction, which may require it
>>> to have a body!  🙂
>>>
>>>
>>> https://www.quantamagazine.org/melanie-mitchell-trains-ai-to-think-with-analogies-20210714/?mc_cid=362710ae88&mc_eid=df8a5187d9
>>>
>>> I have been interested in the book *GEB* by Hofstadter for some time,
>>> and have been researching how it was referenced (specifically its Chapter
>>> IV "Consistency, Completeness, and Geometry" and its Introduction) by Italo
>>> Calvino in *Six Memos for the Next Millennium*, so Mitchell's
>>> connection to Hofstadter and *GEB* is interesting on a general level.
>>>
>>> Coincidentally I contacted her a year ago to ask about the Calvino
>>> connection but she replied she hadn't read any Calvino or the *Six
>>> Memos*.  However, his titles for the six memos -- Lightness, Quickness,
>>> Exactitude, Visibility, Multiplicity, and Consistency -- might be exactly
>>> the kinds of "bodily" senses AI will need to have!
>>>
>>> All best,
>>>
>>> Max
>>>
>>> https://www.etymonline.com/word/analogy
>>> https://www.etymonline.com/word/analogue
>>>
>>>
>>
>
> --
> -----   |(*,+,#,=)(#,=,*,+)(=,#,+,*)(+,*,=,#)|   ---
> http://paulhertz.net/