Getting More Women into A.I., Reducing Bias, and How to Use A.I. Responsibly with Juliette Powell: Podcast Ep. 359


Subscribe to the podcast!

Whatever your opinion is about AI, it’s being integrated into everything we do, from what we see on social media to how facial recognition systems work to how decisions are made about you by companies.

I personally enjoy using ChatGPT and find it incredibly valuable, but I also want to make sure I’m using these AI tools responsibly and ethically.

This is why I invited Juliette Powell, co-author of the recent book “The AI Dilemma: 7 Principles for Responsible Technology,” to come on the show.

Juliette and I examine the compelling reasons why AI needs the influence of women’s voices and how we can work towards a more inclusive AI landscape.

 
We talk about:

  • The inspiration behind her co-authored book and why it’s a wake-up call for responsible AI
  • What companies are doing to reduce bias in AI systems
  • The importance of having more women in AI, not only to diversify the field but to embed empathy and ethics into the technology that is increasingly governing our lives
  • Actionable steps that can be taken to encourage women’s participation in AI, from education and mentorship to policy changes and community support
  • Our takes on venture capitalist Marc Andreessen’s recent, over-the-top manifesto on AI
  • How we, as individuals, can use AI tools ethically and responsibly
  • Juliette’s speaking experiences and tips

Want to learn how to use AI tools to give your speaking a boost? Come to our hands-on AI Tools for Speakers workshop live on Zoom on Thursday, December 7. You’ll learn how to use tools like ChatGPT and others to write outlines and speaking proposals, create images for your slides, add audio and music, and more. Sign up today at https://www.speakingyourbrand.com/ai-tools-workshop/


About My Guest: Juliette Powell is the founder and managing partner of Kleiner Powell International (KPI), a New York City-based AI advisory firm. As a consultant at the intersection of responsible technology and business, she has advised large companies and governments on how to deal with the accelerating change underway due to AI-enabled technological innovation, coupled with shifting social dynamics and heightened global competition. Powell identifies the patterns and practices of successful business leaders who bank on ethical AI and data to win. Her co-authored book, “The AI Dilemma: 7 Principles for Responsible Technology” (August 2023), integrates the perspectives of engineering, business, government, and social justice to help make sense of generative AI and other automated systems. A passionate advocate for digital literacy, critical thinking, and collaboration, Powell speaks regularly on the future(s) of innovation, information, leadership, and education. She is on the faculty at New York University, where she teaches graduate students of the Interactive Telecommunications Program.

About Us: The Speaking Your Brand podcast is hosted by Carol Cox. At Speaking Your Brand, we help women entrepreneurs and professionals clarify their brand message and story, create their signature talks, and develop their thought leadership platforms. Our mission is to get more women in positions of influence and power because it’s through women’s stories, voices, and visibility that we challenge the status quo and change existing systems. Check out our coaching programs at https://www.speakingyourbrand.com

Links:

Show notes at https://www.speakingyourbrand.com/359/ 

Juliette’s website & book: https://www.juliettepowell.com/ 

Juliette’s TEDx talk: https://www.youtube.com/watch?v=thkmVv54e6M 

Sign up for AI Tools for Speakers workshop: https://www.speakingyourbrand.com/ai-tools-workshop/

Discover your Speaker Archetype by taking our free quiz at https://www.speakingyourbrand.com/quiz/

Enroll in our Thought Leader Academy: https://www.speakingyourbrand.com/academy/ 


359-SYB-Juliette-Powell.mp3: Audio automatically transcribed by Sonix. This transcript may contain errors.

Carol Cox:
I have another really important conversation for you today around AI. My guest, Juliette Powell, and I talk about getting more women into AI, reducing bias, and how we as individuals can use AI ethically, on this episode of the Speaking Your Brand podcast. More and more women are making an impact by starting businesses, running for office, and speaking up for what matters. With my background as a political analyst, entrepreneur, and speaker, I interview and coach purpose-driven women to shape their brands, grow their companies, and become recognized as influencers in their field. This is Speaking Your Brand, your place to learn how to persuasively communicate your message to your audience. Hi there and welcome to the Speaking Your Brand podcast. I’m your host, Carol Cox. Whatever your opinion is about AI, artificial intelligence, it’s being integrated into everything we do, from what we see on social media to how facial recognition systems work, to how decisions are made about you by companies. Now, I personally enjoy using ChatGPT and find it incredibly valuable, but I also want to make sure that I’m using these tools ethically. This is why I invited Juliette Powell, co-author of the recent book “The AI Dilemma: 7 Principles for Responsible Technology,” to come on the show.

Carol Cox:
Juliette is the founder and managing partner of Kleiner Powell International, a New York City-based AI advisory firm. She’s also on the faculty at New York University, where she teaches graduate students of the Interactive Telecommunications Program. If you’re new to Speaking Your Brand, welcome! I’m so glad that you’re here. We work with women entrepreneurs, executives, and leaders to clarify their brand message and story, create their signature talks, and develop their thought leadership platforms. You can learn more at speakingyourbrand.com. Now, if you want to learn how to use AI tools to give your public speaking a boost, come to our hands-on AI Tools for Speakers workshop that we’re holding live on Zoom on Thursday, December 7th. I don’t want you to have to stare at a blank page, and I want to save you time, and these AI tools can do that for you. You can get all of the details and sign up today at speakingyourbrand.com/ai-tools-workshop. That link is also in the show notes so you can click on it there. Now let’s get on with the show. Welcome to the Speaking Your Brand podcast, Juliette.

Juliette Powell:
Hi. Thank you.

Carol Cox:
Well, it is a pleasure to have you here. You recently co-authored a book, so I want to have us start by talking about the book, and then we have a lot of other things to cover related to the book and to AI more generally. The book is called “The AI Dilemma: 7 Principles for Responsible Technology.” You wrote it with Art Kleiner, who I know you have a consulting company with. So I want to know: why did you decide to write this book now? What conversations were you having that led you to this? Because I know you’ve been talking about AI for much longer than just the past year, when it seems, you know, the rest of the world kind of woke up to it.

Juliette Powell:
Yeah, I mean, I’ve been very fortunate in my life. I’ve had the pleasure of working with a bunch of engineers, and invariably they work in big tech, and in some cases they’re systems engineers, and they see the data that most of us never get a chance to see. And some of the things that they were seeing, things like pedophilia, things like human trafficking, just horrible, horrible things, are things that they were reporting back to their bosses, and their bosses were saying, no, we don’t want to hear it, because as soon as we know, we’re de facto liable. And that really disturbed me. I mean, if you can’t talk to your boss about it, what do you do, when you know that most tech companies, at least in Silicon Valley, are operating under a very similar business model and are really vying for the same contracts? So in 2017, when I decided to go back to school after my mom passed, it was very clear to me that I wanted to understand the impact that my work and my friends’ work were having on society, on the rest of the world, and what I discovered was just horrifying, especially for vulnerable populations. And so my dissertation was on the possibilities and limitations of self-regulation in artificial intelligence. It was dense. It was really not meant for anybody to read except for my advisors.

Juliette Powell:
And ultimately I sent it to one person, which was Art Kleiner. He and I had worked together in the past; at the time, he was the editor-in-chief of Strategy+Business magazine, which is published by PricewaterhouseCoopers, and I really trusted his judgment. We had the same mentor, who had introduced us back in 2007, and after working with him and getting to know him, I felt that if anybody could tell me what my dissertation could potentially bring to the world, he would be a great person. And he saw things that I didn’t see. And I so appreciated his perspective, because he’d been in corporate for so long. That’s when I kind of came up with the idea of the four logics, the four logics of power around artificial intelligence: corporate, engineering, government, and then social justice. And it’s in these back-and-forth conversations with him that I just asked him, would you consider being my co-author? Because I think that your logic, although very different from mine, really brings a lot to the conversation. I think it’s necessary that we all try to think about AI from the perspective of all of these different powers. In some cases, they’re really fighting against each other. And, you know, we’re stakeholders in this larger conversation.

Carol Cox:
Oh, absolutely. Every single one of us, whether we are using AI intentionally right now or not, is being affected by AI, and we have been for quite a while, most notably in the algorithms that social media companies use to choose what to show us, both organically and in advertising. And I think a lot of people don’t necessarily realize that AI is already embedded into so many of the systems that we use. So can you tell us a little bit more about the triple-A systems and how we’re coming across AI in our everyday life?

Juliette Powell:
So it’s really interesting, because I think a lot of people associate AI with what we’ve seen in science fiction, in the movies and books, and those have certainly given us a sense of what the future might hold, even what our present holds. But at the same time, we also have to understand that our entire modern civilization is essentially built on AI at this point, whether we’re aware of it or not. So it’s not just the algorithms in our social media, it’s the algorithms in everything that we use digitally. So the way that we find an apartment, the way that we find a school, the way that we find a job, the way that we find a mate, right? We’re allowing technology to dictate the most fundamental things of being human. And to me, that’s not scary. Again, it’s something that’s been around for a very long time, certainly for as long as you and I have been around. But now it’s in the hands of the everyday person. And I think that with great power comes great responsibility.

Carol Cox:
Well, let’s talk a little bit about bias in these systems. I know you did a TEDx talk back in 2016 around unconscious bias. It wasn’t explicitly about AI, but I highly recommend, for those of you listening, going to watch Juliette’s TEDx talk. She’s a fantastic speaker. I’ll include a link in the show notes. But you did talk about unconscious bias in it, and we’ve heard a lot about bias within AI systems. Obviously, tools like ChatGPT and even image generators like DALL-E and Midjourney have been trained on the internet, and we know humans are biased. Certainly everything that’s on the internet, if we just think about what we’ve seen, definitely skews in ways that, as a woman, I certainly don’t like, like all the sexualized images of women that tend to predominate on the internet. And obviously, Juliette, as a woman of color, I know you see not only the gender side but also the race side. So what are companies like Google and Microsoft and OpenAI doing to try to mitigate bias? I don’t know if it can be removed, but to try to at least mitigate it.

Juliette Powell:
Well, as we’ve seen, for example, at Google, there’s a whole department, right, that’s dedicated to responsible technology, responsible artificial intelligence. And I think that these groups are very, very important. Microsoft took a different model, where they broke up that team and essentially assigned people to every single team that they have working on AI. These are two different approaches, but from certainly two of the most popular tool makers that we have. But nonetheless, I just saw a study on Midjourney last week. I don’t know if you had a chance to look at this, but it was fascinating to see the stereotypes that these image generators create for us. So the study, for example, would ask for a simple prompt like, show us an image of an Indian man, and invariably, oh, sorry, not even an Indian man, an Indian person. And no matter what the culture or the context, invariably the person was male, right? And so, yes, these are our human biases. But I would love to poll your entire viewership and listenership to see how many think of a person as being strictly male, right? And it just gets worse and worse when we ask for images generated of the streets of certain African cities, you know, the homes were all destroyed. It just perpetuates these negative stereotypes. But there are many different kinds of biases, right? There are negative biases that we are trying to eliminate.

Juliette Powell:
And one of the ways that we as users of these technologies can help is when we’re asked for feedback, thumbs up or thumbs down. If we don’t give it and we keep using the tool, then we’re de facto accepting that there will be issues with these systems. If we do give feedback, hopefully it improves, and we have seen some improvements. But of course, you know, I don’t know to what extent we’ll ever really be able to eliminate all the biases, as you’ve said. I mean, it’s based on our human biases. All of these systems are fed by our personal data, and the more biased we are, the more likely our systems are not only to be biased, but to embed that bias. So you asked me about triple-A systems. When we talk about artificial intelligence, as you know, it’s essentially a marketing term for the everyday person; we’re really talking about machine learning, deep learning, different techniques. But ultimately we are talking about autonomous systems, so they don’t necessarily need us as humans to tell them what to do. They’re adaptive systems, and again, they learn as they go, and so they adapt to us. And in some cases they’re automated systems, right? And we’ve seen a lot of that, a lot of fear coming out of autonomous cars, but also robotics in manufacturing, for example, replacing humans. A lot of people in white-collar jobs that thought that, you know, their job was safe are now really nervous, because the white-collar jobs are essentially being challenged by AI.

Juliette Powell:
And it’s really interesting to see where some people view these technologies as co-pilots, where they work alongside us and just enable us to be even more creative. In some cases, they’re bosses, and we see that at Amazon, for example, where you’ve got something called time on task, where you’re an employee and essentially the system’s tracking, you know, how many times you go to the bathroom, how many times you pause while you’re doing your work. Even if you’re having a conversation about your job, if it’s not in a structured environment that the system recognizes, it counts as time off task, and you potentially, you know, not only do you lose points, but ultimately your boss, your actual human boss, comes and talks to you. And these are problems for many people, because when algorithms and systems make decisions about your life, for example, telling you that you shoplifted at a store when you didn’t take anything, how do you push back? You know, the law hasn’t caught up to where we are with our technologies yet, nor has regulation.

Carol Cox:
Well, and that’s something that you do mention in your book, about everything from facial recognition to criminal justice decisions that are being made. I know just recently the White House has been working on a suite of regulations around AI. And your dissertation was around self-regulation by tech companies, which I have a feeling you think they’re probably not very likely to do very well. So what is your feeling about self-regulation by tech companies and about what the White House is doing? And potentially, what can we expect Congress to do around data privacy and so on?

Juliette Powell:
So self-regulation has been around in technology for a very long time. There are a series of laws that passed, and so on and so forth, where the platforms are not responsible for the content that we users post on there, which, between that and deepfakes, leads to massive disinformation, and we’ve certainly seen a lot of that lately. Self-regulation, I think, came very much to the forefront after the tech revolt movement that happened in 2018, or actually 2017, 2018, some in 2019, where we saw people that worked in Big Tech come onto the streets and say, hold on a second, you know, we didn’t sign up for this. And so you’ve got this social justice logic coming out of engineers who are also embedded in, you know, the corporate logic, and they’re all vying for these multi-billion-dollar contracts that the government is putting out for defense. And so, again, there are so many different factions fighting within the same person, right? As a woman, I’m both a technologist and I strongly believe in social justice. I don’t think that, you know, some people should be treated better by technology than others. And I certainly believe that government, which, you know, funds most of the startups in artificial intelligence, definitely has a hand in it. And of course, again, it’s being used to find bad actors. So all of these things come into play when you think about regulations, which is why I think so far what I’ve seen from the Biden administration and from Big Tech kind of coming together is, you know, signing on to respect certain things, but they’re not necessarily enforceable.

Juliette Powell:
Right? So even if I say I’m going to do something, nobody’s stopping me from going ahead and doing otherwise. In other words, nobody’s going to throw me into jail. It’s not like a Sam Bankman-Fried situation. And at the same time, I think that it’s a very, very tricky thing for regulators in Congress in the United States, because on one hand, you’ve got the founders and the leaders of Big Tech coming in and saying, yes, we need to be regulated; let us show you how you should regulate so that we can continue to innovate. So balancing the innovation piece with the safety-to-humans piece is kind of fundamental, and I don’t think that anybody has gotten it right so far. I think the European Union is doing its best to come out as the leader with their Artificial Intelligence Act, but nothing’s done yet. So ultimately, I do think that it’s up to us as individuals to use our critical thinking, that all of us need to learn better data literacy, and we need to ask better questions of the responses that we get back from our technologies. Just as we question our teachers and we question, you know, our political leaders, we need to be able to question our technology, and really look for multiple sources that we trust, to make sure that what we’re spitting out and incorporating through ChatGPT and other tools actually reflects what we want to put out there, as opposed to what we think looks really good but really has no foundation.

Carol Cox:
Yeah, Juliette, I’ve been thinking all year long, with ChatGPT and again these image generators and even some of the video generators, you know, they’re getting better and better all of the time. For about a hundred years, we as humans could rely on photographs and then video as evidence that something was, quote unquote, true, that it was real, that it really happened. And I think that was a very specific 100-year period or so, 150 years for photos. And I feel like that period is gone, and I mean never to return, because how do we really know what is a real photo and what is not? I mean, I know they’re talking about watermarking and being able to tell if things are generated or not. I’m a little bit skeptical, and I feel like it’s a cat-and-mouse game. Like, they’re always going to get better at not being detected by the...

Juliette Powell:
Technology to counter.

Carol Cox:
Exactly. So things are always going to slip through, and there are always going to be bad actors. And so, back to your point about data literacy and our individual responsibilities: when you see things online, you can’t take it at face value anymore that what you’re seeing is true and real.

Juliette Powell:
So the truth is, Carol, that I never did, and I think that it’s very dangerous to think that a photo actually represents reality, no matter who generated the image. And I say this because I’m sure that you’re aware of framing, right? It’s a big, bad world out there, but when you’re taking a photo or you’re shooting a video, you’re shooting from a very specific point of view, and you’re the one who’s deciding what you’re capturing and what you’re not capturing. And you know, there are many, many different court cases that have arisen where photographs have been used as evidence, but ultimately, when you take a larger frame of the same situation, the situation is very, very different. So I think that we have to be careful with that one too. So critical thinking does not go away. It shouldn’t have gone away 100 years ago, and it sure as heck should not go away right now. I do think that it’s more challenging than ever to be able to recognize whether an image has been generated by a human or not. It was difficult for many people to recognize Photoshop. You know, there are certain tricks that you can use, at least for right this second; this might not be true tomorrow, right? Things like shadows, very important, right? The lighting, the shadows: do they fit? I think one of the few ways, and again, this is changing as I speak, I’m sure, but one of the ways that many people thought that they’d be able to recognize whether an image is true of an event, for example, is if the event is photographed in multiple images from multiple sources that are trusted. And again, by the time you air this, it probably won’t be true anymore.

Carol Cox:
Yes, right. So again, have your critical-thinking hat on at all times as you’re perusing things. So let me ask you this, Juliette: as individuals, you know, because there are certain things the big tech companies and the government hopefully will have some regulations about, but ultimately it’s up to us as individuals. How can we act responsibly and ethically when we’re using these tools? I like using ChatGPT, I like using the image generators. I know there are issues with copyright; I know there are class-action lawsuits by artists and by authors who are not happy that their work was used as training data. And so I understand that. But yet I still like to use these tools because they help me. So as individuals, how can we approach this?

Juliette Powell:
I think it’s all in the way that we use them. First and foremost, I think you should start by experimenting in a very, very limited fashion, as opposed to just posting anything everywhere because you’re so excited. The other thing that I strongly recommend, both for individuals as well as for organizations and governments, for everyone: we should all be doing this calculus of intentional risk. So I’ll give you an example of that. This is a corporate example, but then we can bring it down to the individual level. Google was not ready to launch Bard, right? Not because the technology wasn’t all that, but rather because they wanted to be able to do it responsibly. And as a result, they decided to hold back, because yes, it was biased, and yes, there were, you know, really negative connotations that were coming out. And we’ve seen this with other chatbots in the past, Tay being an example that came out a few years before that from another corporation. Anyway, long story short, they essentially felt that they had to launch because Microsoft and OpenAI had launched exactly one year ago, back in November 2022, and as a result, they accelerated their launch. Google launched in February, but they tripped, right? It was not a successful launch. Why? Because they were trying to do it responsibly. Now, when I talk about a calculus of intentional risk, obviously OpenAI had a very different calculus than Google. Why? Because Google goes out to billions and billions of users every single day, all over the world. Very few people even knew about OpenAI, let alone the, you know, the money that Microsoft had invested in it. And so for OpenAI to screw up, you know, was one thing.

Juliette Powell:
And that’s okay, because essentially nobody knows us, and we can work on our little startup, whereas Google is Google, and Google is Alphabet, and it’s, like, everything and all things searchable to so many people. It’s a verb, for heaven’s sake. So I think that that calculus had to be different for these two very different organizations. When it comes down to individuals, it all depends what you’re using it for. If you’re using it for your work and you’re inputting any kind of work data that is supposed to be, you know, the company’s secret sauce, the intellectual property of the organization, or even of your own mom-and-pop small business, you don’t want to put that into a third-party system that is essentially using your data to train it to do other things. You don’t want your secret sauce to be spat out to your competition, or to others that might just find that perfect nugget and build an entire empire while you’re still trying to figure out what the heck happened, let alone suing them. And I think it’s really interesting, too, you mentioned the lawsuits. Corporations like Adobe and Google, among other companies, have come out essentially saying, if you get sued, we will back you; our insurance covers you getting sued. In other words, don’t you worry about it, we’ll pay for all of your fees. That is their calculus of intentional risk, to make sure that they can still have clients as clients decide whether they want to use these tools. You know, I certainly would not want to get sued in any way, shape, or form.

Juliette Powell:
So I think that suing is the one that all of us think about, but there are serious harms that can actually happen to humans, like people getting dragged by robotaxis in San Francisco, because they think that the system is safe, and, you know, hundreds of thousands of people are using it, and next thing you know, we realize, oh, wait a minute, this is a system that does not necessarily recognize that I’m a human being dragged by it. And it’s horrifying, right? We put these moral decisions of how a system should act in a certain situation, and we code them as best we can, but we can’t even begin to imagine all of the negative consequences that can occur. And so I think one of the ways that we can practice this calculation of intentional risk, if you are deploying this for your small company, for example, for a specific targeted reason, is to bring in the populations these tools are aimed at, your customers, and ask them: what do they see as the potential benefits, but also what are the potential pitfalls, not just now, but medium and long term? And these are things that, for the most part, we do in business when we’re doing, you know, a risk-benefit analysis, but we don’t necessarily do before we go use ChatGPT, or even before we go Google something. And I think that from now on, we kind of have to, if we really want to be in control of what we’re doing, as opposed to this illusion of control.

Carol Cox:
Mhm. So, Juliette, we were talking about bias a little bit before, and one thing that I want to ask you about, because obviously, as a woman who’s been around the tech industry, you wrote this book around the AI dilemma: what can be done to get more women in prominent positions in these companies that are making important decisions around AI? So, for example, when Elon Musk announced his leadership team for xAI, this was earlier this summer, it was 12 people. Actually, it was 12 men, all men, not one single woman of the 12. And you know, I posted about this on LinkedIn, and of course, all of my network was as outraged at it as I was. But here’s the thing: I seriously doubt there was even one iota of thought among the 12 of them that there were no women amongst them. And obviously, as you know better than anyone, AI is going to reshape so much of our society. Yet I feel like so many voices, women’s voices, people of color, you know, other groups that have been underrepresented, are still remaining underrepresented. So what can we do about this?

Juliette Powell:
Start our own companies. That simple, that simple. I don’t think that you should try to bend over backwards to enter into a culture that doesn’t take you as you are, that doesn’t see the benefit in what you’re saying, whether it’s your job or not. I think that, you know, part of making a great product is actually taking into account everyone’s voice, and that’s just not the case. So rather than fighting the system, which you can do, you can absolutely do that if you have the wherewithal, I think that it’s really interesting when you look at it from a sociological perspective. Every time we go to school, right, from high school on to college or university, that system shapes us and then reshapes us, and then we go into the job market and our first job shapes us. And then as we grow in our career, we get shaped again, right? There are new things that we have to embody, inhabit, understand, and be able to communicate that we didn’t have to when we were just starting out. And it’s the same thing, I think, when you have a startup: you have the best of intentions. And really, most of the founders that I know, in fact, all of the founders that I know, are really trying to make a great product. And it’s not about, you know, male or female; it’s not about race. It’s just about, who are the best people that I can find right now to make a great product?

Juliette Powell:
But ultimately, I do think that the idea of creative friction, which is the last chapter in the book, is really, really important, because it reminds us what’s at stake. Essentially, the more complex the system and the more tightly coupled it is, the more likely there are going to be some negative unintended consequences. There’s nothing more tightly coupled that I can think of right now than our relationship to our phones, to our devices, to our communications. For years now, because of the pandemic, we’ve been communicating through digital means like this one. That’s all AI-enabled. And right now, the great thing with generative AI, because it’s only been in the hands of the everyday person for one year, is that every everyday person is an expert. We’re all experts, right? None of us knows this stuff more than others, unless you’re an AI researcher. And so I think now, more than ever before in our history, is the time for women to really take advantage of this moment. You’ve got AI funding. You’ve got people that actually want to invest in more women. You’ve got everybody that wants to invest in AI, and you’ve got consumers galore that are just eating up every possible AI tool. So how do women grow and start leading in this larger conversation? We have to step up and not just know how to use the tools, but also know how to lead teams and know how to scale.

Carol Cox:
I love that answer, Juliette. Thank you. Okay, so we’re going to talk a little bit about a manifesto that was making the rounds of the internet. But before we do that, I’ll preface it by saying that I consider myself, or I’ve been called, a realistic optimist or an optimistic realist, I forget which one, about technology. I’ve been in the tech field for over 20 years as a software developer; I created a SaaS back in 2005, 2006, a really long time ago, started Speaking Your Brand in 2015, but then have gotten back into dabbling in AI over the past year, creating some tools to use internally within the business for ourselves and with our clients. And I’m really excited about that. So I definitely consider myself an optimist regarding, you know, the possibilities that AI can bring to us, whether they’re advancements in medicine and health care, or advancements in helping people do work that they really love and having AI do the work that is monotonous and repetitive. But obviously, we know that there are also risks and potential downsides with AI. So let me ask you this before we talk about the manifesto: what are you most excited about regarding AI? What do you like about it?

Juliette Powell:
I like that it has the potential to democratize so many different things, from education to health care, as you mentioned. My mom passed away of cancer, and to have had AI be able to first do gene sequencing, but then really identify exactly what her problem was and hopefully find a cure before she passed away, that’s the dream, right? It’s the dream for all of us. So many people that are working in Big Tech are also looking at longevity, and so it’s not surprising that these two areas kind of come together. So I’ll stick with that. But, you know, just pick a vector: climate change, all of it.

Carol Cox:
Oh yeah, that’s another big one, right? Finding some actual, implementable solutions for climate change would be good. All right, so let’s get into this manifesto. This has been going around the internet. Marc Andreessen is a very well-known venture capitalist. He’s the one who created the Netscape Navigator web browser back in the mid-1990s, kind of the first graphical way to access the internet, and he ended up selling that and becoming a venture capitalist from the proceeds. So he’s been very influential in Silicon Valley, investing in a lot of companies, the companies that we know well and that have really shaped what the internet has become. So he published his manifesto, a polemic about basically doubling down on all the potential benefits of AI, not at all recognizing, or being willing to recognize, any of the potential pitfalls of AI, and really seeing anyone who is challenging the forward progress of AI, anyone who wants to say, hey, let’s regulate some things, or let’s pause for a moment and figure out how this is going to impact society, how this is going to impact people, he’s having none of that whatsoever. And he uses some very strong language about lies and enemies. I was rather surprised by it. So I would love to get your take, Juliette: number one, where is he coming from? Other than, obviously, he’s a capitalist, and the fewer regulations there are, the more money he makes, because the more money his companies make. But other than that, what do you see going on in the tech industry, and what are some of the things that you would challenge regarding what he said?

Juliette Powell:
I feel like he’s hand-holding, he’s giving positive reinforcement to all the founders out there, like the people that called me while I was doing my dissertation, kind of freaking out at the potential negative impacts of their creations, even though that’s not necessarily their intention at all. I think that, like it or not, we are all born with a conscience. Well, not all of us, but for the most part we are. And, you know, with the decisions that we make, our conscience gets quieter and quieter and quieter, depending on what we choose to do. And so on one hand, I think that that’s one really, really great way of placating people and saying, hey, you’re okay. Not only is my money behind you, but I’m going to write this manifesto and say to the world how okay you are, right? I think that, yes, obviously the investment has a lot to do with it. I was asked to do an interview a couple of weeks ago around the Rewind pendant, which is a system that allows you to record anything and everything, kind of like the Ray-Ban Meta glasses, and, you know, asking me what I thought about that just in terms of, you know, potential harms. And of course, the first thing you think of is things that have actually happened with the Ray-Ban Meta glasses, which is essentially, if you walk into a changing room, right, or you walk into any place where people are not aware that you’re recording, that is problematic for a lot of people.

Juliette Powell:
Privacy is dead to a certain extent. But at the same time, I think when you’re, you know, coming into a room, you don’t necessarily think that you’re being recorded, until you live in America and then you realize, oh yeah, that happens a lot, okay, I can jive with that. But are you necessarily thinking that every single person that you’re talking to is recording the conversation and will upload it? Not necessarily. And if we do live in that society, then that means that we’re always on. You always have to be conscious of what’s around you, of what you’re saying. And that doesn’t sound like living to me. So I think that pendants like the Rewind are something that Andreessen not only has invested in, but he is also on the product’s home page, right? He is endorsing this product. I can think of some really positive uses for it. For example, I have a friend in Montreal that was arrested years ago. He was accused of raping a woman, and years later, after serving his time in prison, not only was he found innocent, but the entire police department was found guilty of doing all kinds of really horrific things. And certainly we’ve seen police departments here in the United States doing really horrific things. And so, yeah, wearing a Rewind pendant is something that my friend really wants, because he wants to make sure that nobody can ever falsely accuse him of something.

Juliette Powell:
I also think of, you know, older people that get abused in elderly care. How horrible is that? Well, with something like that, okay, sure, I see a use case for it. But the negatives that could arise from all of these things, for example, police departments co-opting the images from these devices, we’ve seen that with the Ring cameras, we’ve seen that with a number of different devices. There are generally some really negative implications, especially for vulnerable populations. And I think that, you know, Marc Andreessen is many things, but he is not and has never been in the category of being in a vulnerable population. And as a result, I don’t think that he really can put himself in those shoes. I don’t necessarily think that he’s surrounded by people that can help him get into those shoes, and I don’t think that he necessarily wants to, because he doesn’t really have a reason to. I think it’s really important, as you did when you set up the question, to look at what people have to gain. And he’s got a lot to gain by putting that out there. Now, we as the people that receive the manifesto have a lot to gain in actually thinking about what the implications are. Do you consider yourself a techno-optimist? Yes, you do. Do you follow this particular philosophy? It doesn’t sound like you do. So again, just because something or someone spits something out doesn’t make it true.

Carol Cox:
Well, and this goes back to the point we were talking about just before, which is making sure more women have prominent positions, whether in their own companies or in companies that are making decisions around AI. Because, yes, for Marc Andreessen and most of the other founders of the tech companies that basically rule our lives on the internet right now: they’re white men, and they have never experienced what other populations have experienced. Yet I don’t give them a pass, because I still think the onus is upon them to consider these things, not just to be like, well, I’ve never had that issue, or I would never have that problem, so I don’t have to think about it. But again, that’s why we need other people in these positions as well, to make these kinds of decisions and to make funding decisions. All right, that is my soapbox.

Juliette Powell:
I also think another point to mention is that it’s not just about women and men, and it’s not just about race. It’s really about having different kinds of people weigh in on this conversation. So people that are neurodiverse; we want older people, we want younger people. We really want a cross-section of backgrounds and religions and ways of looking at the world. People that have gone to school and people that are self-taught, they bring a lot of value as well. And we’re all part of this.

Carol Cox:
Yes, absolutely, 100%, I cannot agree more. All right, Juliette, so let us wrap up with this: I would love to get any tips that you have as a public speaker. Again, your TEDx talk is great, and I know you do a lot of keynotes and public speaking, especially with the book having just come out. So tell me, for our listeners, who obviously listen to this podcast because they enjoy public speaking and want to get better at it: how do you approach the talks that you give? What are you hoping that your audience takes away from them? How do you think about your speaking, as a performance or not? Share with us.

Juliette Powell:
Um, these are really, really good questions. I don’t know that I’ve necessarily given as much thought to public speaking as you have, but I essentially grew up on live television in Canada. So my first adventures were, I think my first audience was 6 million people on live television, and I was terrified. And here I am, 20-some-odd years later, and I’m still terrified. I really am. That’s not something I ever got over, but I’ve used it as a challenge to myself: when I feel like there’s something that I can’t do, that I really feel that I should do, that it’s good for my own personal growth and that also adds value to whomever is asking me to come do it, then I try to push myself. And so it’s a constant push. The other thing is that value piece. When I’m approached to give a talk, I ask them, why me? Like, there are billions of people on this planet. Why are you coming to me? What is it that you think that I can possibly bring to your audience, or to this larger conversation, that you can’t find elsewhere? And when they give me an answer that I can relate to, it makes it a lot easier to go out there, because I’m not just going out there for my ego. I’m not going out there to sell books. I’m not going out there for any other reason than to try to satisfy that thing that’s already been identified, that value that people ask me to bring. One of the most surprising requests that I get is when people ask me to talk from a personal perspective; that is not something that comes naturally to me at all.

Juliette Powell:
But what I’ve noticed, and it’s the same thing in social media, I think one of the reasons why video is stickier than anything else is because we get to see a slice of life. We get to see what we think is real, what is intimate, and often the behind-the-scenes stuff that you don’t see on broadcast television, for example. And the more that I can share of myself while getting these larger points across, the more people seem to be able to relate to what I’m saying. Maybe not to me as a person, but to the stories, to the experiences, good and bad. You know, we’re all human, and so if you can connect with others on a human level, then you’ve got nothing to lose. And I also don’t see that if I’m on stage, I’m more important than the audience. To me, it’s all about that exchange, that give and take. So my favorite, favorite presentations are fireside chats, where, sure, I might get a few questions from a moderator, but really where I get to chat with the audience itself in a really organic way that’s not pre-planned. We don’t pre-seed questions. You just, you know, speak from the heart. That’s the key: speaking from the heart and speaking about something that you really care about, that the audience really cares about, in a way that maybe you’ve never done before and the audience has never experienced before. And to me, that’s the ultimate, because in that way, we all grow together.

Carol Cox:
Well, that’s a fantastic answer, Juliette. Thank you so much for it. And, you know, the listeners know how much we talk about the power of storytelling here. It reminds me, I was at the TEDx Women conference in October, and I did an episode about my takeaways from there. Of course, having speakers who share personal stories helps us as an audience understand why their message, their idea, matters to them in the first place, and then, by extension, why it matters to us as the audience. And Juliette, I think you should try to find a way to get in touch with the TEDx Women organizers, because I think you would be a fantastic speaker at their next conference.

Juliette Powell:
Wow, that is high praise. Thank you so much, Carol. Listen, if you want to make that connection, I will show up. But either way, that is a wonderful compliment. Thank you.

Carol Cox:
Well, I will do my best to try to find out how to get you there, and I’m sure you have people too. All right, Juliette, well, thank you so much for coming on the Speaking Your Brand podcast. I will make sure to include links in the show notes to your website, to where listeners can get your book, The AI Dilemma, and to your LinkedIn profile so they can connect with you there.

Juliette Powell:
Yes. And I’ve got a QR code for you, for your listeners. It’s 30% off from the publisher, and hopefully it allows you to get it for yourself and share it with your friends. I’m not saying go out and buy a bunch of books that you can’t afford. What I’m saying is, each one person, teach someone else. I think that that’s really, really important too.

Carol Cox:
Okay, great. We’ll make sure to include that as well. Juliette, thank you so much.

Juliette Powell:
Thank you.

Carol Cox:
Thanks again to Juliette for coming on the podcast. Be sure to connect with her on LinkedIn and check out her TEDx talk; those links are in the show notes. And be sure to sign up for our upcoming AI Tools for Speakers workshop that we’re holding live on Zoom on Thursday, December 7th. We’re going to spend three hours together, and you’re going to learn how to use tools like ChatGPT and image generators so you don’t have to stare at a blank screen anymore or try to figure out what to do with your speaking proposals: get ideas for your presentations, generate great images for your slides, and so on. You can get all the details at speakingyourbrand.com/ai-tools-workshop or click on the link in the show notes. Until next time, thanks for listening.







Other podcast episodes you may like...

Feeling Stuck or at a Plateau with Your Brand or Visibility? It’s Time to Break the Rules with Carol Cox: Podcast Ep. 423

How to Create a 10-Out-of-10 Keynote that Leaves Your Audience in Awe with Julia Korn: Podcast Ep. 422

100x Your Speaking in a Day: Insights from Our Recent In-Person Workshop with Carol Cox: Podcast Ep. 421

Finding and Developing Your Big I.D.E.A. with Carol Cox: Podcast Ep. 420 [Best Of]