Alter Everything

A podcast about data science and analytics culture.

Join us as we explore the incredible ways Artificial Intelligence (AI) is revolutionizing how we create, consume, and experience entertainment. We sit down with Dr. Erica Reuter, Sr. engineering manager, to address the concerns and challenges associated with the rise of AI in entertainment: privacy issues, ethical considerations, and the potential impact on traditional job roles are all on the table. Interested in sharing your feedback with the Alter Everything team? Take our feedback survey here!


Episode Transcription

[00:00:00] Megan: Hey Alter Everything listeners, we know that turning raw data into insights can be time-consuming and challenging, but we've got good news for you. Alteryx can transform your analytics. Alteryx drives positive business outcomes by enabling fast, data-driven decisions, and you can try out our products today.

Start your 30-day free trial of Alteryx Desktop or the Analytics Cloud Platform at alteryx.com/alter everything.

Welcome to Alter Everything, a podcast about data science and analytics culture. Do you ever see artificial intelligence in action in movies and TV and wonder, is this real? Is this possible? Could we see technology like this in the near future? Or do we already actually have it working behind the scenes?

Today, I'm joined by my colleague, Dr. Erica Reuter, for a fun episode to discuss data science, technology, and AI in entertainment. Drawing on her deep background in data science, she covers how the technology portrayed in film and TV either works or could work. Let's get started.

Well, hey, Erica. Thanks so much for joining us today on the podcast. Excited to chat with you. I'd love to get a little introduction from you: who you are, your background, and what brought you to Alteryx? Sure. 

[00:01:23] Erica: Hi, Megan. Good to talk to you again. My name is Erica Reuter. I manage our public sector solution engineering team.

So my team helps technically with our government clients and our education clients, both K through 12 and universities. I was a law enforcement analyst. I managed an intel unit and was asked to do some predictive analytics, which I knew nothing about at the time. So I sort of learned how to do that on the job, had some mild success, and got recruited into the tech world to teach other analysts and law enforcement agencies how to do that.

And that brought me here to Alteryx, which is the best place to be. 

[00:02:01] Megan: Awesome. Well, I'm excited to talk today about AI and entertainment. Super fun episode. AI is obviously a pretty hot topic now in industry, but it's been around for a while and it's made a lot of appearances in pop culture and entertainment.

So before we dive into some examples of that, I'd love to hear how you would define AI or artificial intelligence. 

[00:02:25] Erica: Thanks. Yeah, isn't it funny? Did you take stats in college or high school?

[00:02:30] Megan: I did, 

[00:02:30] Erica: yeah. I love stats. What'd you think? You liked it, really?

[00:02:32] Megan: I did, I know. One of the rare ones. Everybody else in my class complains so much.

[00:02:36] Erica: Oh gosh, yeah, like I hated stats. And if you told me that I would be doing this for a living, I would have been like, shut up. No way. Um, so it's all based on statistics and pattern recognition. You hear the term machine learning, which is where a computer Can, can determine an answer without being programmed with that answer.

And the example I like to give of machine learning is our credit scores. Back in the day, all of the banks needed to figure out who's going to pay their loans back on time. They needed a system for that because they kept extending credit to people that were not paying. So what they did was throw all of this data at the computer, of everyone who's ever applied for a loan, and some paid them back, some didn't pay them back on time, and the computer said, okay, I'm seeing a pattern.

I'm able to build a model around what someone who pays their loan back on time looks like. And they have a low income-to-debt ratio, and they only use 25 percent or less of their cards, and have X number of inquiries. So it builds this model. It learned from that pattern of people, and it compares each one of us to that model to say, okay, Erica, she's got a low income-to-debt ratio and her credit utilization is good, but maybe she has a few too many inquiries according to our model.

So she's not a hundred percent fit to this profile of someone who will pay their loan back on time, but maybe she's like a 90 percent fit. So we're going to say she's 90 percent likely to pay her loan back on time. So that's the machine learning part, right? And that's the statistics that we all love from school.

And then comes AI, which is where, okay, that's the brain. That's how I start to logically think about things and understand what's going to happen. Now, AI says, great, I'm going to impersonate a human and do something, apply some sort of action automatically. So in your car, you have, like, lane detection that says, oh, based on the way you're driving, you're about to have a crash.

And then AI says, I'm going to hit the brakes for you. I'm impersonating a human and I'm going to push the brakes. So that's what AI is, right? It's taking that information, that pattern recognition, figuring out what's going to happen next, and doing something about it on your behalf. 
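For anyone who wants to see the credit-scoring idea in code, here's a minimal sketch in Python using scikit-learn's logistic regression. The borrowers, field names, and numbers are all invented for illustration; this only sketches the pattern-learning idea Erica describes, not an actual lender's model.

```python
# A toy version of the credit-scoring idea: the model is never told the
# rules, it just sees past borrowers and whether they repaid on time,
# then scores a new applicant as a probability of repayment.
# All numbers below are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: debt-to-income ratio, credit utilization, number of inquiries
past_borrowers = np.array([
    [0.20, 0.25, 1],
    [0.55, 0.90, 6],
    [0.30, 0.40, 2],
    [0.70, 0.85, 8],
    [0.25, 0.30, 1],
    [0.60, 0.75, 5],
])
repaid_on_time = np.array([1, 0, 1, 0, 1, 0])  # 1 = repaid, 0 = did not

model = LogisticRegression()
model.fit(past_borrowers, repaid_on_time)  # "learn the pattern"

# Score a new applicant: low debt, decent utilization, a few inquiries
applicant = np.array([[0.28, 0.25, 4]])
print(f"Estimated chance of on-time repayment: "
      f"{model.predict_proba(applicant)[0, 1]:.0%}")
```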

[00:04:40] Megan: Yeah, that's a great example and helpful to frame all of that working together, machine learning and how that is encompassed in AI.

That's great. Thanks. I'd love to get into some examples of where you've seen AI in TV shows and movies and entertainment and I think it'd be fun to break down what's really possible, what we could build with tech we have today, and what that could look like. 

[00:05:06] Erica: I love television. I know a lot of people say, I don't like TV.

I'm an intellectual. Like I, I, if I could be a professional couch potato, I would absolutely do that. Do you, do you watch a lot of TV? 

[00:05:16] Megan: I do watch a decent amount. Yeah. Okay. 

[00:05:19] Erica: Have you seen a lot of AI and things, or is it kind of one of those things where you're not even really noticing anymore? 

[00:05:24] Megan: Yeah. I'll notice it sometimes I gravitate towards more like sitcom type stuff.

So it comes up a little bit less, but anytime I'm watching any kind of sci-fi or, um, like Westworld, when it's really heavily talking about robots or AI, I get really intrigued. 

[00:05:43] Erica: Yeah. Well, it's lurking in those sitcoms too, actually. So lately we had this whole reboot phase where all these old shows from like the 80s and 90s, they're doing like a new version of it, right?

Like Fuller House and Roseanne and the Conners. 

[00:05:58] Megan: I mean, I still see it going on and I see it with, with movies too. It's like, Oh yeah. Bringing back the new, the old concepts and revamping them. 

[00:06:05] Erica: So I was looking through Netflix the other day and I saw that there's a reboot of Punky Brewster. You probably, I don't know if you've ever saw Punky Brewster, but it was this like spunky little show in the nineties about, uh, a little girl who got adopted by an older guy and them living together and whatever fish out of water story.

So they rebooted it. And Punky Brewster has now grown up and has kids of her own and is going through a divorce. And I think that might be the saddest sentence I've ever said in my life, that Punky Brewster is going through a divorce. And she's starting to date. She's using dating apps, right? Dating apps to me are so interesting in terms of how they work and analytics.

Because if you can predict love and compatibility, what can't you predict? I feel like that's the most random thing in the universe, that would be so hard to build an algorithm around. 

[00:07:01] Megan: Right. It feels like the least scientific thing. 

[00:07:04] Erica: Right? Doesn't it? Yeah. So she was using Tinder, I think. Now, I'm, I'm older, obviously, because I've seen Punky Brewster, and I've been married for a while, so I've actually never been on Tinder.

And I had some preconceived notions as to what it was and how it worked. And then as I was reading about it, I'm like, Oh, I was off a little bit. Have you seen the app? Do you have friends who use it? 

[00:07:23] Megan: Yeah, like it shows you people that it thinks you're gonna match with, and then you say yes or no, and then can start a conversation, I think.

[00:07:30] Erica: Yeah, I thought it was just basically showing you a photo, and then matching based on whether or not you were physically attracted to that person and if they were close to you. I thought that's all it was. But I guess there's like a whole profile, and interests, and things like that going on as well. Oh yeah, they get your data.

Okay, I didn't know that. I thought it was purely just, like, a physical, oh, you look nice. It's not like that. As it turns out, it's using a couple of algorithms behind the scenes, some really interesting machine learning, and some really interesting analytics. Again, I thought it was more simplistic. And I was looking at eHarmony, the OG of dating apps.

eHarmony was 450 questions when it came out. I'm like, you've got to really be serious. That's a time investment, right? That's like the SATs. And I think they got it down to 100 questions, somewhere in that range. But Tinder, what they do is, you fill out your little profile, so it says Megan, and then it has like a vector.

So, Megan, seeking whatever gender, likes dogs, runs on the beach, whatever, right? So it's got some characteristics about you, after your name. So what happens is, when you indicate your interests, at first Tinder is just looking for similarity. So your initial matches are just going to be people who have similar interests to you.

Pretty, pretty basic stuff. And then as you start to swipe, it starts to learn. And this is the machine learning part and the feedback part, like, Oh, I'm learning. I'm learning what she actually likes and what she's really interested in. It starts to build a pattern on the things that you're swiping on, right?

Like, oh, she's swiping a lot on people that ride horses. That wasn't in her profile, but it seems like it might be important because it keeps coming up in her picks. They also analyze the photos and break them down, and they're like, oh, she's swiping a lot on people with brown hair, or whatever different characteristics seem to stand out.

And then as time goes on, your matches get more and more curated based on your swiping pattern. So as they're going through and refining your likes and your dislikes, one thing that I think is super interesting about dating apps specifically is this idea of cognitive dissonance. What research has found over and over is that people will say they want one thing, and then when you look at who they're actually picking and selecting, that is totally not what they're going for. They want to think they want something, but it's not really true. 
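As a rough illustration of the interest-vector matching described above, here's a small Python sketch that ranks invented profiles by cosine similarity. Tinder's real algorithm isn't public; the interests, names, and scoring here are assumptions made purely for illustration.

```python
# A stripped-down version of the "profile vector" idea: each person's
# interests become a vector, and early matches are simply the profiles
# most similar to yours. People and interests below are invented.
import numpy as np

interests = ["dogs", "beach runs", "horses", "hiking", "cats"]

def to_vector(profile_interests):
    """1 if the profile lists the interest, 0 otherwise."""
    return np.array([1.0 if i in profile_interests else 0.0 for i in interests])

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

me = to_vector({"dogs", "beach runs", "hiking"})
candidates = {
    "profile_a": to_vector({"dogs", "hiking"}),
    "profile_b": to_vector({"cats", "horses"}),
    "profile_c": to_vector({"beach runs", "dogs", "horses"}),
}

# Rank candidates by similarity; swipe feedback would later re-weight this.
for name, vec in sorted(candidates.items(),
                        key=lambda kv: cosine_similarity(me, kv[1]),
                        reverse=True):
    print(name, round(cosine_similarity(me, vec), 2))
```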

[00:09:52] Megan: The data shows something different, 

[00:09:54] Erica: yeah. Absolutely, like it happens over and over.

So they're doing that, and they're analyzing the swipes, and they're learning what your true preferences are, and using that along with your similarity to get you better matches as time goes on. So, as you start to make more matches or not make more matches, they basically use reinforcement learning. And reinforcement learning is how self-driving cars are taught.

What happens is it runs an experiment. It says, hey car, go drive. And the car drives, and it drives, like, through a lawn, right? Or over a person or something. And the computer says, oh, I didn't realize they would do that. I never expected that. And there's like a rewards-and-penalties system with every experiment.

It's like, yeah, let's penalize you for leaving the roadway. And the car's like, okay, I get that, next time I'm going to not do that. And let's reward you for actually stopping at the stop sign. Right. So it runs experiments, and through this series of penalties and rewards, it reinforces what the proper behavior is, or the best possible outcome.

Tinder does the same thing. And when Punky Brewster is going and looking for her match, at first when she's out on the site, it's going to use a random photo that she submitted. And then the next time a different photo, and the next time a different photo. And based on how people are reacting to that photo, it's going to be like, this is the one.

This is the popular one that is getting the best reactions. You're getting the most reward for using that photo. So through reinforcement learning, it's giving them better outcomes. And it's also looking at the people who are reviewing the photo and things they might be looking for and saying, okay, well, there's something in this photo that they might not like.

So for example, let's say the best photo is, like, you with your dog. It might see that someone wrote, I'm a cat person, and maybe swap that dog out for a cat, or take the dog out or something. So it goes through and makes the best of you, which I think is kind of amazing. I mean, it's also, like, terrifying.

Reinforcement learning for a self-driving car would have to be done in a lab, because people would die. But for something like Tinder, the experiment's not that big a deal to go ahead and do out in the wild, right? 
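The profile-photo experiment Erica describes can be sketched as a simple multi-armed bandit, one basic flavor of reinforcement learning. The photos and swipe rates below are invented and a real system would be far more involved; this just shows the explore, reward, and reinforce loop.

```python
# Toy photo experiment: show each photo, reward the ones that get swipes,
# and gradually favor the best performer. Swipe probabilities are made up.
import random

true_swipe_rate = {"photo_with_dog": 0.30, "beach_photo": 0.15, "selfie": 0.05}
estimates = {photo: 0.0 for photo in true_swipe_rate}
shows = {photo: 0 for photo in true_swipe_rate}
epsilon = 0.1  # how often we keep experimenting instead of exploiting

for _ in range(5000):
    if random.random() < epsilon:
        photo = random.choice(list(true_swipe_rate))   # explore: try any photo
    else:
        photo = max(estimates, key=estimates.get)      # exploit: best so far
    reward = 1 if random.random() < true_swipe_rate[photo] else 0  # swipe or not
    shows[photo] += 1
    # Incremental average: nudge the estimate toward the observed reward
    estimates[photo] += (reward - estimates[photo]) / shows[photo]

print("Learned swipe-rate estimates:",
      {p: round(v, 2) for p, v in estimates.items()})
print("Most-shown photo:", max(shows, key=shows.get))
```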

[00:11:55] Megan: Yeah, definitely. Another show we were talking about in preparation for this episode was The Good Place.

Yes. And you had talked about that on a previous podcast. So, I would love to hear, like, the breakdown that you had about some of the quote-unquote technology that was used in that. 

[00:12:11] Erica: Yeah, isn't that interesting, that there's AI in heaven? Or the opposite of heaven? Yeah, in what is perceived as heaven, right? 

[00:12:19] Megan: No, the whole show is trippy in some ways, for sure.

[00:12:22] Erica: So the Good Place is basically this idea that after these people have passed away, they go to either the good place or the bad place. And it follows these characters and their afterlife experience in the good place. In the good place, there are some assistants. There's the guy that builds the neighborhoods, the architect, Michael.

And he has this assistant named Janet. You say, Janet, and she just appears out of nowhere; you can beckon Janet just by saying her name. She knows everything about everything, but she also exhibits feelings or emotions a little bit, like she would pass the Turing test. There are episodes where she's doing different things.

A lot of times it's just answering a question, right? Cause she has been trained. She's been given a lot of content. So you ask her a question and she's able to generate a pretty appropriate response to you. So she was, like, human ChatGPT before ChatGPT was released. 

[00:13:13] Megan: Yeah, it reminds me of ChatGPT now that we have that, like, in our toolbox.

It's like having a little Janet, but she doesn't dress all cute. True. Or appear out of nowhere in human form, but, you know, similar concept. 

[00:13:26] Erica: Maybe we need to make Aiden a physical being. Alteryx Aiden. It could be like a little boy or something, like, Hi, I'm Aiden. So Janet, she knows everything. I remember an episode where they wanted her to be a therapist.

So she went and she said, one moment, and she stared off into space, and then she said, okay, I'm back. I've read every book about psychotherapy ever published and I'm ready to go. So she basically trained herself. She learned like a human and she was ready to carry out that job of a human. The way that I think Janet was working is she's impersonating a human, like a human brain.

I think it's a neural net of some sort, right? And now we're talking about these large language models, which are all roughly based on neural nets, convolutional neural networks, or something of that nature, RNNs, CNNs. And what it's doing is, you know, your brain, it's got all these little synapses and neurons that fire and communicate and process the information.

Neural nets do the same thing. They get an input and in this hidden layer, it's like running all these different combinations of these inputs to give you an output, a prediction or a decision or a reason. I was reading something else the other day about how every brain can predict the future. And I'm like, what's this all about?

Cause it was in a scientific journal and it said, yeah, if you drop a glass, what's going to happen? It's going to shatter. Oh wow, you're psychic. You just predicted the future: if you drop a glass, the glass will shatter. So they ran all these tests on people where, in the midst of an event,

they stopped the tape and they said, what's going to happen next? And humans were like 90 percent accurate on what's going to happen next. I think it was people building Legos and things of that nature. What are they going to do next? What block are they going to pick up next? And then they ran it again at a conversion point where they switched from one activity to another and said, what's going to happen next?

And people were 80 percent accurate in that. And it's the same idea with these neural networks: you learn by watching, by testing. The first time you run a model, typically you get not-great results. The neural net does that, and then it goes back and says, okay, I'm off by this much, because I'm learning based on past behaviors and past patterns.

And let me go back and figure out where that error came from, where that disconnect came from, and let me correct for that. And let me do that again, and again, and again, until a pretty decent model comes out of it. Because of that, they take a while to run and to train, but they're generally pretty accurate.

Super flexible, just like a brain. You can teach a person almost anything, good, bad, or indifferent. And the same thing with a neural net. 
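Here's a minimal NumPy sketch of the loop Erica describes: make a guess, measure how far off it was, push the correction back through the weights, and repeat. The tiny network and the XOR toy data are chosen only for illustration, not taken from the episode.

```python
# A minimal neural net: forward pass to guess, backward pass to figure out
# where the error came from and correct for it, repeated many times.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))  # input -> hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(5000):
    # Forward pass: inputs -> hidden layer -> prediction
    h = sigmoid(X @ W1 + b1)
    pred = sigmoid(h @ W2 + b2)
    # Backward pass: measure the error and push it back through the weights
    err_out = (pred - y) * pred * (1 - pred)
    err_hid = (err_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ err_out; b2 -= 0.5 * err_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ err_hid; b1 -= 0.5 * err_hid.sum(axis=0, keepdims=True)

print(np.round(pred, 2))  # should move toward [[0], [1], [1], [0]] with training
```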

[00:16:06] Megan: It's fun to see it personified, like in that show. Yes. It kind of brings it to life by making it look and seem like a human. The whole concept of neural networks is really interesting and really complex when it's in papers, but then when you see it play out, it's pretty cool.

[00:16:24] Erica: That whole idea of getting better with time, these feedback loops, these ways of learning, the way the neural nets work. And she starts to learn emotions. I remember she's been annoyed. She's been angry. She's been, like, happy. She found that guy, Derek, and she created her own husband. So, I mean, that's the whole thing with bias and with robots and with AI, is that there's humans teaching them.

So they start to learn things from humans, too, which can be good and which can be bad. And imperfect creators create imperfect objects. 

[00:16:56] Megan: It all depends on the data that's fed in, right? In your mind, how would some of the Alteryx products that we have and initiatives like Aiden fit in with this conversation, or what possibilities are there in terms of generating things? 

[00:17:11] Erica: That's where Aiden comes in, and we already have our Magic Documents type stuff out. So you do an analysis or you upload some data, and the computer says, oh cool, I see this. This is what I see in your data, your patterns, your trends.

I see this This is what I see in your data your patterns your trends Let me generate a PowerPoint message for you. Let me generate a text message. Let me generate an email. And it creates that first draft for you. And then you can go through and tweak it. For people like me, that's amazing. Cause I feel like starting that email, starting that text, that's the hardest part, right?

So it gets you past that blank canvas, where you can just go through and edit from there and make it look the way you want. So I think that's super cool. But they have AI Studio coming out pretty soon, which is also going to have more of that large language model capability in it. I haven't seen the beta yet.

I've only read about it, but I'm really excited because it sounds like it's going to have all of the advantages of OpenAI, all of that great technology, but in a private, controlled way. So enterprises can use it and really reap the benefits without having any privacy concerns.

[00:18:16] Megan: Yeah, I'm really excited for that one too.

I'm excited to see also just how customers end up using it to save time and everything. I do love the idea you said earlier of having a little guy named Aiden, a little assistant. 

[00:18:32] Erica: Well, like Clippy. Remember the clip art little guy, 

[00:18:34] Megan: Clippy? Yes. I think about Clippy quite often as I'm writing.

I really do. I think that whole thing was so funny, and it was the first little AI assistant in Microsoft Word. That's so great. 

[00:18:48] Erica: That's awesome. One of my customers is a medical school and they were saying they want to use it to go through and test procedures and make sure people were following SOPs and like they could type something in about I need to dispense this medication.

Is this correct? And it can give them some recommendations and just double-check everything, and that's very cool. That's a great idea. Yeah. 

[00:19:07] Megan: So I'm curious, kind of back to the AI and entertainment that we've been talking about. Curious if you've ever seen AI represented in ways that made you nervous about future possibilities?

[00:19:19] Erica: Oh my gosh. Hello, Black Mirror. Yeah. Did you, did you watch Black Mirror? 

[00:19:22] Megan: Yeah. Black Mirror is what came to my mind first. 

[00:19:25] Erica: Did you have a webcam cover for the longest time after watching Black Mirror? Because I went out and I bought, like, a microphone blocker and a webcam cover. That's right. 

There were a lot that kind of made me think about technology ethics.

There was one with military technology where they had this kind of augmented reality and saw things through a technology lens and didn't realize there was stuff being edited and it was being framed in a different way by the government. And that got really, really scary and confusing. So that definitely was like, okay, when you're working for an organization and they're using AI in ways that you don't realize.

I feel like that show made me think about some of the negative things that could happen in that case. 

Yeah, absolutely. Like, dehumanizing the subject, right? That's pretty terrible. And I remember the one that stuck with me was the grain, which appeared in a few episodes, the little memory chip that was in everyone's head, and you could rewind or view memories, replay things, which would be super handy, but it got abused in Black Mirror, from what I remember, and things were leaked.

It was a little too over the line, when you can't control and protect your own mind, and what people can see and what they can do. And I also remember someone bringing back their fiancé from the dead, almost cloning him and re-implanting the thoughts. All kinds of really interesting ethical issues were raised by that show.

[00:20:49] Megan: Data privacy makes you think about that too. 

[00:20:52] Erica: In the new series, in the new episodes, the first show is something called Joan Is Awful. And basically what they did is they took someone, and through all the cameras and everything in the world, your life is pretty much documented, right? Like, we all have a really good digital footprint.

They streamed her life every night. As the day closed, the events of her day were streamed as a television show, edited and only showing a certain side of her. And when she went to complain, they're like, no, you didn't read the fine print in your 6,000-page cable subscription or whatever; you gave us the rights. You sold the rights to your life.

You sold your rights to, to your life. And I can absolutely see stuff like that happening or terms and conditions are just like little things being popped in there and now we have the power to do crazy stuff, it's just our, our ethics that's stopping us from doing it. 

[00:21:38] Megan: Yeah, maybe we need AI assistants to comb through those terms and conditions for us.

Look for red flags. Hey, this is an anomaly right here. I compared this to all the other terms and conditions and I'm a little concerned. 

[00:21:53] Erica: Some sort of little AI lawyer that we'll call, I don't know, Lila or something like that, to go with Aiden. 

[00:21:58] Megan: Perfect. Well then, on the flip side, have you seen AI represented in entertainment that made you excited about future possibilities?

[00:22:08] Erica: I know there's been a lot of things where they talk about possibilities of medicine and prolonging life and doing crazy, amazing things there. Oh, what was that movie with Matt Damon in it? Elysium. I think it was called Elysium. And it would scan you and cure you of all of your ailments. There was a little girl with cancer or something like that who was terminal.

And then, all right, you're good now. Like, we took it all out. We scanned it. We fixed your body. We repaired your cells. That's amazing. The intersection between technology and DNA, and the steps that are being taken there, could, depending on your perspective, be life-saving for a lot of people.

[00:22:43] Megan: Yeah, and I think earlier we were talking about even entertainment content created by AI. Mm hmm. Oh yeah. You mentioned this screenplay called Sunscreen. 

[00:22:52] Erica: Oh my gosh, yeah, Sunspring. It was awkward. You got the point of what the scenes were, but it, like, didn't make any sense. It was like, Like you. I, like you, I do.

Like they were talking like that. 

[00:23:05] Megan: But it's, I mean, it's impressive that there even was a screenplay written by AI, I guess. It'll just continue to evolve and we've talked about that in some other episodes as well. Yeah. 

[00:23:15] Erica: There's some good music coming out with AI too. Girls Who Code teaches girls how to create songs and be a DJ with AI, and it's so neat.

[00:23:23] Megan: Oh, that's really fun. Yeah, I love that. Thanks for joining us for a fun episode, a little change-up from our normal format, and for providing your commentary on different ways that we see AI, and even broader technology, represented in entertainment, and what that can really look like when you drill down into the science of how those things work.

[00:23:43] Erica: Yeah. Thank you so much for having me. It's been fun. 

[00:23:47] Megan: Thanks for listening. To check out topics mentioned in this episode, head over to our show notes at community.alteryx.com/podcast. Also, we would love to hear from you in our short feedback survey. Click the link in the episode description wherever you're listening to let us know what you think and help us improve our podcast.

See you next time.


This episode was produced by Megan Dibble (@MeganDibble), Mike Cusic (@mikecusic), and Matt Rotundo (@AlteryxMatt). Special thanks to @andyuttley for the theme music track, and @mikecusic for our album artwork.