Au contraire, prose is simpler than poetry, mostly because poetry has more rules and constraints. Also, statistical analysis of prose to identify authorship has been a thing for a long time. Richard has a really cool example of a prose story that emulates Hunter Thompson that, I would bet, no one on this list could have detected as a deep fake had you not been forewarned.
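[The authorship analysis davew mentions is classic stylometry: authors use common function words habitually and unconsciously, so their relative frequencies make a usable fingerprint. A minimal sketch, not any particular published method; the word list, texts, and author names are invented for illustration:]

```python
# Toy authorship attribution via function-word frequency profiles.
# A deliberately simple sketch; serious stylometry (e.g., Burrows' Delta)
# uses many more features and proper normalization.
from collections import Counter
import math

# A tiny, illustrative set of English function words.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it", "is", "was"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def distance(p, q):
    """Euclidean distance between two stylistic profiles."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def attribute(unknown_text, candidates):
    """Return the candidate author whose corpus profile is nearest."""
    target = profile(unknown_text)
    return min(candidates, key=lambda name: distance(target, profile(candidates[name])))
```

With enough text per candidate, even this crude profile separates habitual styles; that is roughly why long-form list posts are easier to attribute than poems.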
davew

On Wed, Sep 1, 2021, at 11:50 AM, uǝlƃ ☤>$ wrote:
> Yeah, both social media posts *and* poetry are a low bar. Machine-generated prose is more difficult, I expect. There are good examples from GPT3. But I don't know of any other algorithm that does a decent job. So I doubt the same techniques Gabriel uses to generate poetry and social media posts would work to generate *some* of our postings, particularly the long-winded amongst us.
>
> My own play with MegaHAL generated obvious garbage.
>
> On 9/1/21 10:25 AM, Prof David West wrote:
> > Richard Gabriel has created software that can generate poetry in the style of any poet. It also generates poetry that passes the Turing test, in that experts are unable to distinguish between machine-generated poetry and human-generated poetry. He demoed this at an annual meeting of poets at Warren Wilson College (where Richard got his MFA).
> >
> > I am certain he could use his program to create FRIAM posts that could emulate any of us.
> >
> > He also, for IBM on a DoD contract, created a NL program that monitored social media posts, detected those deemed inimical to government interests (e.g., setting up a flash mob to protest the visit of a political personage), and generated counter-postings (e.g., moving the mob to a pig farm instead of the county courthouse because "inside sources" confirm the personage changed her itinerary).
> >
> > Of course social media postings create a pretty low bar for an AI to be convincing.
> >
> > davew
> >
> > On Wed, Sep 1, 2021, at 10:33 AM, Marcus Daniels wrote:
> >> If we collected years of FRIAM archives and trained it with a recycle GAN, I think it would probably be possible to generate plausible sentences of each other. To the extent we pay attention to what we say at all, it might not be that hard to fake, really.
> >> I think we could get the basic intent of all the regulars, if not the details of their writing (which the GAN would get). I've often wished for a ML avatar that could stand in for me on Zoom meetings, so I could go play with my dog or go running or whatever.
> >>
> >> *From:* Friam <[email protected]> *On Behalf Of* [email protected]
> >> *Sent:* Wednesday, September 1, 2021 9:21 AM
> >> *To:* 'The Friday Morning Applied Complexity Coffee Group' <[email protected]>
> >> *Subject:* Re: [FRIAM] aversive learning
> >>
> >> Would I pass the Turing test if I could, by my emails, convince you that I was Dave?
> >>
> >> Or is that just the Dave test? Would I pass the Turing test if I could convince you that I was Turing?
> >>
> >> Who knows what evil lurks in the hearts of men!
> >>
> >> n
> >>
> >> Nick Thompson
> >> [email protected]
> >> https://wordpress.clarku.edu/nthompson/
> >>
> >> *From:* Friam <[email protected]> *On Behalf Of* Marcus Daniels
> >> *Sent:* Wednesday, September 1, 2021 11:26 AM
> >> *To:* The Friday Morning Applied Complexity Coffee Group <[email protected]>
> >> *Subject:* Re: [FRIAM] aversive learning
> >>
> >> I'm already convinced Dave is a bot. I know I am.
> >>
> >> https://www.theatlantic.com/technology/archive/2021/08/dead-internet-theory-wrong-but-feels-true/619937/
> >>
> >> *From:* Friam <[email protected]> *On Behalf Of* Marcus Daniels
> >> *Sent:* Wednesday, September 1, 2021 8:23 AM
> >> *To:* The Friday Morning Applied Complexity Coffee Group <[email protected]>
> >> *Subject:* Re: [FRIAM] aversive learning
> >>
> >> Culture is online now, didn't you hear?
> >>
> >> *From:* Friam <[email protected]> *On Behalf Of* Prof David West
> >> *Sent:* Wednesday, September 1, 2021 8:12 AM
> >> *To:* [email protected]
> >> *Subject:* Re: [FRIAM] aversive learning
> >>
> >> Glen quoted BC Smith:
> >>
> >> /"What does all this mean in the case of AIs and computer systems generally? Perhaps at least this: that it is hard to see how synthetic systems could be trained in the ways of judgment except by gradually, incrementally, and systematically enmeshed in normative practices that engage with the world and that involve thick engagement with teachers ('elders'), who can steadily develop and inculcate not just 'moral sensibility' but also intellectual appreciation of intentional commitment to the world."/
> >>
> >> I read from (or into) this statement a position I have held vis-à-vis AI since I did my master's thesis in CS (AI): computers cannot be intelligent in any general sense until and unless they participate in human culture. We automatically and non-consciously "enculturate" (normative practices that engage the world and involve thick engagement) our children.
> >>
> >> This is NOT education.
> >> Education is nothing more than a pale shadow of enculturation. Not more than 10% of the 'knowledge' in your head (knowledge about what to do and why and when and variations according to circumstance and context ...) was learned via any kind of formal education or training; the rest is absolutely essential and is the foundation for comprehending and utilizing the 10% you did learn formally.
> >>
> >> Until we can enculturate our computers, we will never achieve general AI (or even any complete specialized AI).
> >>
> >> davew
> >>
> >> On Wed, Sep 1, 2021, at 8:28 AM, uǝlƃ ☤>$ wrote:
> >> > UK judge orders rightwing extremist to read classic literature or face prison
> >> > https://www.theguardian.com/politics/2021/sep/01/judge-orders-rightwing-extremist-to-read-classic-literature-or-face-prison
> >> >
> >> > I know several liberals who agree with the righties that vaccine and mask mandates are bad, though not for the same reasons. Righties yap about fascism and limits to their "freedom". But the liberals talk about how mandates just push the righties further into their foxholes, preventing collegial conversation.
> >> >
> >> > So the story above is an interesting situation in similar style. Renée, to this day, hates Shakespeare because she was forced to memorize Romeo and Juliet as a kid. Of course, she doesn't hate Shakespeare, because she hasn't read much Shakespeare. She just *thinks* she hates it because of this "mandate" she suffered under. This court-mandated "literature therapy" being imposed on this kid could work, if he can read it sympathetically.
> >> > But if he can't, if he simply reads it "syntactically", what will he learn?
> >> >
> >> > BC Smith, in his book "The Promise of AI", channels Steels & Brooks [ψ] in writing:
> >> >
> >> > "What does all this mean in the case of AIs and computer systems generally? Perhaps at least this: that it is hard to see how synthetic systems could be trained in the ways of judgment except by gradually, incrementally, and systematically enmeshed in normative practices that engage with the world and that involve thick engagement with teachers ('elders'), who can steadily develop and inculcate not just 'moral sensibility' but also intellectual appreciation of intentional commitment to the world."
> >> >
> >> > If we think of this kid, Ben John, as an AI, what will he learn by mandating he read Dickens? Similarly, what are the mandate protesters learning from our mandates? Stupidity should be painful. And the court's reaction to this kid's stupidity, the pain of reading Pride and Prejudice, should teach that kid something. But which is the more dangerous stupidity? Which stupidity runs the risk of a more catastrophic outcome? Avoiding the vaccine? Or mandating vaccination?
> >> >
> >> > [ψ] https://doi.org/10.4324/9781351001885
> >> >
> >> > --
> >> > ☤>$ uǝlƃ
>
> --
> ☤>$ uǝlƃ
>
> - .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
> FRIAM Applied Complexity Group listserv
> Zoom Fridays 9:30a-12p Mtn GMT-6 bit.ly/virtualfriam
> un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
> FRIAM-COMIC http://friam-comic.blogspot.com/
> archives: http://friam.471366.n2.nabble.com/
> - .... . -..-. . -. -.. -..-. .. ... -..-. .... . .-. .
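[Marcus's archive-training idea upthread could be crudely approximated, absent a real GAN or fine-tuned language model, with a word-level Markov chain; MegaHAL, which glen mentions, is itself Markov-based. A minimal sketch, with all function names and the training snippet invented for illustration:]

```python
# Toy word-level Markov chain text generator: a crude stand-in for
# "train a model on the archives and generate plausible sentences".
# Output quality on real corpora is exactly the "obvious garbage"
# glen reports from MegaHAL.
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word context to the list of words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        chain[context].append(words[i + order])
    return chain

def generate(chain, length=20, seed=0):
    """Walk the chain from a random context, stopping at a dead end."""
    rng = random.Random(seed)
    context = rng.choice(list(chain))
    out = list(context)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(context):]))
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Trained on years of list archive text, this produces locally plausible word sequences but no global intent, which is roughly the gap Marcus draws between faking "details of their writing" and capturing "basic intent".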
