The First AI-written Play Isn't Shakespeare - but It Has Its Moments (sciencemag.org) 56

Science magazine describes what happens when a robot writes a play: The 60-minute production, "AI: When a Robot Writes a Play," tells the journey of a character (this time a robot) who goes out into the world to learn about society, human emotions, and even death.

The script was created by a widely available artificial intelligence (AI) system called GPT-2. Created by Elon Musk's company OpenAI, this "robot" is a computer model designed to generate text by drawing from the enormous repository of information available on the internet. (You can test it here.) So far, the technology has been used to write fake news, short stories, and poems. The play is GPT-2's first theater production, the team behind it claims...

First, a human feeds the program a prompt. In this case, the researchers — at Charles University in Prague — began with two sentences of dialogue, where one or two characters chat about human feelings and experiences... The software then takes things from there, generating up to 1000 words of additional text.
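The prompt-then-continue loop described above can be sketched in a few lines. This toy uses a hand-made bigram table as its "model" — a hypothetical stand-in, nothing like GPT-2's 1.5 billion learned parameters — but the control flow is the same: extend the prompt one token at a time, each choice conditioned on what came before, until a length cap or a dead end.

```python
import random

# Minimal sketch of autoregressive generation. The "model" here is a
# hypothetical hand-made bigram table, not GPT-2; only the shape of the
# loop (prompt in, one token appended per step) mirrors the real thing.
BIGRAMS = {
    "the": ["robot", "world"],
    "robot": ["learns", "wonders"],
    "learns": ["about"],
    "wonders": ["about"],
    "about": ["death", "love", "the"],
}

def generate(prompt, max_words=12, seed=0):
    """Extend the prompt word by word until a dead end or the length cap."""
    random.seed(seed)
    words = prompt.lower().split()
    while len(words) < max_words:
        choices = BIGRAMS.get(words[-1])
        if not choices:   # no known continuation: stop, like hitting
            break         # the model's output-length limit
        words.append(random.choice(choices))
    return " ".join(words)

print(generate("the robot"))
```

Because each word depends only on the previous one, long continuations drift and contradict themselves — a miniature of the coherence problems the article describes.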

The result is far from William Shakespeare. After a few sentences, the program starts to write things that sometimes don't follow a logical storyline, or statements that contradict other passages of the text. For example, the AI sometimes forgot the main character was a robot, not a human. "Sometimes it would change a male to female in the middle of a dialogue," says Charles University computational linguist Rudolf Rosa, who started to work on the project 2 years ago... As it keeps going, there is more room for nonsense. To prevent that, the team didn't let GPT-2 write the entire play at once. Instead, the researchers broke the show down into eight scenes, each less than 5 minutes; each scene also only contained a dialogue between two characters at the same time. In addition, the scientists sometimes changed the text, for example altering the passages where the AI changed the character's gender from line to line or repeating their initial text prompt until the program spat out sensible prose.

Rosa estimates that 90% of the final script was left untouched, whereas 10% had human intervention.

It's a thought-provoking experience. (You can watch the whole play online -- with English subtitles.) The play's first lines?

"We both know that I'm dying."
"How do you know that you're dying?"
"I will die very soon."

And within seconds, the protagonist has asked the question: "How can you love someone who dies?"

Comments Filter:
  • Whoa (Score:5, Funny)

    by PeeAitchPee ( 712652 ) on Sunday February 28, 2021 @03:40PM (#61109002)

    "Sometimes it would change a male to female in the middle of a dialogue"

    That's not a bug, it's just a program that is "non-binary." :-/

    • *cries in Undefined/Bottom*

      (That's what heshe said!)

    • by AmiMoJo ( 196126 )

      It's what the recently fired AI ethics expert at Google was trying to warn about. This method of creating an AI does not lead to understanding. The AI doesn't know what makes sense, it has no real idea what the words mean or what the world is like, so stuff like this happens.

      • But we've known this exact thing for decades; why would we need to be warned about this in the third decade of the 21st century?
  • "We both know that I'm dying."
    "How do you know that you're dying?"
    "I will die very soon."
    "How can you love someone who dies?"

    Those have got to be the dumbest lines I have ever heard.

    And I've watched all Pauly Shore's movies.
    • here [youtube.com]
    • by Rei ( 128717 )

      To be fair, they're using GPT-2, not GPT-3, which does a much better job.

      I occasionally used to try to make GPT-2 write plays / movie scripts. The dialogue was always stilted and the plots inconsistent, but through its sheer randomness it occasionally came up with gems of ideas - like when I tried to get it to write a Christmas movie and it had Santa involved in a homicide in a dark alley.

      I never succeeded at getting GPT-2 to write jokes. It was however superb at writing anti-jokes, where it constantly so

      • they're using GPT-2, not GPT-3

        Just adding to your comment: This is really the most important bit of information here. Being 2 years old, GPT-2 isn't exactly ancient in the AI world, but it is also quite far from the state of the art.

        GPT-3 is two orders of magnitude bigger than GPT-2: "GPT-3 has a whopping 175 billion parameters. By comparison, the largest version of GPT-2 was 1.5 billion parameters, and the largest Transformer-based language model in the world — introduced by Microsoft earlier this month — is 17 billion para
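Quick arithmetic backs up the "two orders of magnitude" claim (parameter counts taken from the quote above, not independently verified):

```python
import math

# Scale gap between the models, using the figures quoted in the comment.
gpt2_params = 1.5e9    # largest GPT-2
gpt3_params = 175e9    # GPT-3
ratio = gpt3_params / gpt2_params
print(round(ratio))                  # ~117x larger
print(round(math.log10(ratio), 2))  # ~2.07, i.e. two orders of magnitude
```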

      • I don't think GPT-3 does a much better job. If you cherry pick examples, it might. But it still doesn't understand context or really any concept at all. It's just interpolating based on the previous words. ie, "Given the previous N words, what word is likely to come next?"

        • It's not "interpolating". it's true that it is predicting based on the previous words... and another word for previous words is "the context".

          If you think GPT-3 examples are cherry picked I encourage you to play around with AI Dungeon a bit.

          • Neural networks interpolate. They do not extrapolate well at all. Look it up, I am not your teacher.

            • That's right, you're not my teacher. You're a condescending person who thinks you're very smart and don't have to try to understand something new.

              I again encourage you to go play with AI Dungeon, to see what your "interpolation" means in practice.

              (You definitely aren't going to, because you're a condescending person who thinks they know all they need to know. But someone a little more open-minded might read this.)

              • You're a condescending person who thinks you're very smart and don't have to try to understand something new.

                I'm only condescending to people who refuse to learn. You.

                Stanford did a class on adversarial networks [youtube.com]. That's probably a simple way to learn about extrapolation vs interpolation. The reason GPT-3 seems so realistic is because it has a humongous corpus of text to copy from. But that's not how human brains work.
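The interpolation-vs-extrapolation distinction being argued here can be seen with a toy fit (hypothetical numbers, and a straight line rather than a neural network, so this only illustrates the general point): a model that looks reasonable inside its training range can be wildly wrong outside it.

```python
# Toy illustration: fit a straight line to samples of y = x^2, then
# compare prediction error inside vs. far outside the training range.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x * x for x in xs]              # true function: y = x^2

# Closed-form least-squares fit of y = slope*x + intercept.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x   # works out to y = 3x - 1

def predict(x):
    return slope * x + intercept

err_inside = abs(predict(1.5) - 1.5 ** 2)     # within training range
err_outside = abs(predict(10.0) - 10.0 ** 2)  # far outside it
print(err_inside, err_outside)                # 1.25 vs 71.0
```

The fit is tolerable between the training points and off by an order of magnitude beyond them — which is the "interpolates, doesn't extrapolate" claim in miniature, whether or not it settles the argument about GPT-3.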

              • Also I just checked out AI Dungeon. I have no idea what you think is impressive there.

    • It may have a long way to go, but this isn't even state of the art. GPT-2 is two years old, we've had GPT-3 for six months now, and it's enough to emulate an erratic, but occasionally brilliant, GM.


      > search my sister's body for vampire tooth marks
      You search your sister's body for any sign of vampire tooth marks.
      Your search takes you to the kitchen where you find a bottle of wine. You drink some of it and feel much better.

    • "Thanks for leaving me with a semi last night."

      Rip Pauly Shore all you want. Son-in-law is funny as hell.

  • by hey! ( 33014 ) on Sunday February 28, 2021 @03:52PM (#61109040) Homepage Journal

    Humans find significance in purely random things -- faces in gnarled tree trunks, Jesus on an unevenly toasted slice of bread. Human brains are significance-recognizing machines with a strong tendency toward false positives -- apophenia [wikipedia.org]. The evolutionary cost of a false positive is less than that of a false negative.

    An AI should produce significance recognition hits in the human brain far more frequently than random processes, because it's been designed and/or trained to ape human communication. But it's not communication between the AI and the audience unless the AI has some kind of internal experience to represent. When one human communicates with another, he transmits information in his brain to another brain by use of shared communication systems and referring to shared experiences. When an AI fabricates valid communication strings based purely on an analysis of some corpus of real communications, there is no internal idea or experience to relate; that's imagined by the recipient.

    I don't think this necessarily invalidates the Turing test; it just means short tests constrained to limited subject matter are too easy. A play or a novel is going to be tough, because eventually you are going to violate unspoken constraints that are part of humans' shared experience.

    • It has its moments, all right - apparently they’re “senior moments”.

      • by hey! ( 33014 )

        Well, the AI doesn't *know* that a person doesn't randomly switch between being male and female -- why would it know if you've trained it on a corpus of text where that's never mentioned because everybody knows it?

        • That's the "I" part of AI. And don't you think that should be part of the "training"?

          This one is just a large list processor.
          • by hey! ( 33014 )

            Well, it turns out practical AI applications aren't about recreating human intelligence; real-world apps seem to be doing classification tasks more cheaply than a human could, or at scales humans couldn't.

            But to do something like write a story that would pass the Turing test, you absolutely would have to get implicit shared knowledge into your AI. The problem is there's a heck of a lot of things people never mention because they all know them from experience. You have to identify *all* of this know

            • by Ambvai ( 1106941 )

              That doesn't sound like an implicit constraint in a narrative; a frog becomes a human, a puppet becomes a human, a fish becomes a human and becomes a bunch of bubbles, a human turns into a lump of meat, a human turns into a condiment. There's a long history of things transforming to or from humans in the history of the written word.

                It's completely a constraint.

                Magical world: Those things happen with great frequency, like you said.

                Real world: No one transforms into any of those except by ladling on even more constraints like evolution, cannibalism, etc., which then must be previously recognized. For the pedants: "Once upon a time" is shorthand for "Back when there was magic".

                If those things happen in a play about the real world, the "AI" shat itself.
            • Well, it turns out practical AI applications aren't about recreating human intelligence

              Fully understand. So do they, which is why they slap the word "intelligence" into the title of their list processing work. RRGLP (Really, Really Good List Processor) doesn't have that sexy ring to it.

              Kinda sounds more like something to put in a porno title.

        • Well, the AI doesn't *know* that a person doesn't randomly switch between being male and female -- why would it know if you've trained it on a corpus of text where that's never mentioned because everybody knows it?

          Or maybe the AI read the novels of Ursula K. Le Guin. And read about the many animal species (several fish, e.g.) that can and do change sex depending on local environment.

          • Here's what I was responding to:

            Well, the AI doesn't *know* that a person doesn't randomly switch between being male and female

            Ursula K. Le Guin wrote scifi and fantasy. So anyone reading those knows that going in (not random).

            And read about the many animal species

            Which would have told it that no mammals do. The fish (as you yourself indicated) do not do this randomly.

            Do you believe dogs can change sex even non-randomly? There are several species of fish that do, so why not dogs?

    • Finding significance in a gnarled tree trunk may sound stupid to you, but how is finding significance in anything any better?

      But it's not communication between the AI and the audience unless the AI has some kind of internal experience to represent.

      You're basically saying the AI has to "want" to communicate for it to be proper communication. And I agree, it's impractical to say the AI really "wants" anything, because it's just a product of our wants.

      But it is not wrong, as such. The decision to ascribe agenc

      • You're basically saying the AI has to "want" to communicate for it to be proper communication.

        Not at all what was said. That's a straw man for you to knock down.

        • How do you interpret "But it's not communication between the AI and the audience unless the AI has some kind of internal experience to represent" then?

          Sure the AI may be just a big matrix of numbers, but why shouldn't that constitute an "internal experience" if the state of your brain constitutes one?

          You have a teleology you don't acknowledge. The reason OP (and you?) regards the AI as merely "fabricating valid communication strings" as opposed to actually communicating, is that you refuse to assign it a "w

  • You need a monkey with a typewriter and infinite amounts of time to do that!

  • Requirement: (Score:4, Informative)

    by Gravis Zero ( 934156 ) on Sunday February 28, 2021 @04:04PM (#61109066)

    If it's learning about death then the play definitely better have someone who uses their dying breath to say, "delete my browser history."

  • Garbage (Score:4, Interesting)

    by fleeped ( 1945926 ) on Sunday February 28, 2021 @04:12PM (#61109088)

    10% editing to make it appear organic, as it's a pile of shit. Incredibly stupid introduction and continuation. Wake us up when the AI is aware of what it's writing about, rather than plastering things together based on frequency and using human editors to fix it up and make the nonsense appear artsy.

  • Remixes of remixes of remixes. And then it's automated with AI.

    • Remixes of remixes of remixes. And then it's automated with AI.

      Sounds an awful lot like large portions of music and movies today. Same themes/sound and throw in some autotune/cgi.

  • They sound like something ELIZA could have spit out a couple decades ago. And that’s the better part?

  • From prose to plays, this reminds me of a book written by an AI back in the early 1980s. The book was "The Policeman's Beard Is Half Constructed," created by "Racter."

    https://en.wikipedia.org/wiki/... [wikipedia.org]

    https://www.3ammagazine.com/3a... [3ammagazine.com]

    JoshK.

    People, real actual people, composed a set of plays that, if you send them through an algorithm, gives you exactly what they wanted to get. In other words, it was a designed function. It is just a more convoluted equivalent of saying: this function { g(x) = f(x,n), where n=1, and f(x,y) = x+y } always gave us a result that was 1 higher than x.

    You're only allowed to use AI, if it's actually able to come up with things on its own, without you defining its input or setting constraints, as those effectively *are* the algo

  • After a few sentences, the program starts to write things that sometimes don't follow a logical storyline, or statements that contradict other passages of the text.

    So, a "play" only if you consider a spewing of text a play.

    • by edis ( 266347 )

      Well, the flow of the text does constitute the body of the play, while it is not at all pointless, even if robotty-imperfect. I have watched some, and the intelligent organization by men, acting included, produces surprisingly consistent overall impression. Even limits of automation are set to be included in organic way, since represent machine. Very acceptable piece of work, and proud display of the mighty 10% contribution by human.

      • GP:

        the program starts to write things that sometimes don't follow a logical storyline

        P:

        Well, ... human.

        Very good representation of what I meant. Not one sentence made sense.

  • I bet the part where gender gets all mixed up is actually sorta interesting. Too bad people thought that needed fixing. I mean it's sorta like our era.

  • GPT-3 came up with some impressive text here:
    https://www.youtube.com/watch?... [youtube.com]

    This feature has some problems but it is amusing just to see what reply the A.I. comes up with:
    https://play.aidungeon.io/main... [aidungeon.io]
    https://www.youtube.com/watch?... [youtube.com]
  • The result is far from William Shakespeare. After a few sentences, the program starts to write things that sometimes don't follow a logical storyline, or statements that contradict other passages of the text. For example, the AI sometimes forgot the main character was a robot, not a human. "Sometimes it would change a male to female in the middle of a dialogue,"

    Amazing! We now have AI that can create a work indistinguishable from the average piece of fan fiction!
