Are AI and Digital Evil (Thinks Out Loud Episode 438)

[Episode artwork: MidJourney-generated image of a demonic computer and robot, illustrating the question of whether AI and digital are evil]

Do artificial intelligence and digital more broadly scare you? Do their potential harms keep you up at night? In short, are these tools evil?

I don’t think they are. Truly. And, most of the time, neither are the people who create them.

That doesn’t mean that AI and digital can’t cause harm for your customers, for your business, and for you.

The question instead should be, “How can you keep these tools from causing harm in the world?” And that’s what this episode of Thinks Out Loud is all about.

Want to learn more? Here are the show notes for you.

Are AI and Digital Evil (Thinks Out Loud Episode 438) — Headlines and Show Notes

Show Notes and Links

You might also enjoy this webinar I recently participated in with Miles Partnership that looked at "The Power of Generative AI and ChatGPT: What It Means for Tourism & Hospitality" here:

Free Downloads

We have some free downloads for you to help you navigate the current situation, which you can find right here:

Best of Thinks Out Loud

You can find our “Best of Thinks Out Loud” playlist on Spotify right here:

Subscribe to Thinks Out Loud

Contact information for the podcast: podcast@timpeter.com

Past Insights from Tim Peter Thinks

Technical Details for Thinks Out Loud

Recorded using a Shure SM7B Vocal Dynamic Microphone and a Focusrite Scarlett 4i4 (3rd Gen) USB Audio Interface into Logic Pro X for the Mac.

Running time: 21m 13s

You can subscribe to Thinks Out Loud in iTunes, the Google Play Store, or via our dedicated podcast RSS feed (or sign up for our free newsletter). You can also download or listen to the podcast here on Thinks using the player at the top of this page.

Transcript: Are AI and Digital Evil (Thinks Out Loud Episode 438)

Well hello again everyone and welcome back to Thinks Out Loud, your source for all the digital expertise your business needs. My name is Tim Peter. This is episode 438 of The Big Show. And this week is Halloween, so I thought we’d talk a little bit about something a little more scary.

I am, by trade, training, and temperament, an expert in digital marketing and strategy. And I’ll explain in a minute what I mean by an expert, because it doesn’t always mean what people think.

Digital, of course, depends on technology of various stripes. And occasionally someone will ask me whether I’m afraid of technology. There are valid concerns that AI or the internet or what have you cause harms that outweigh their benefits. The question is whether these tools, AI or digital more broadly, are evil. And I mean, that’s a fair question. We see really terrible things being done here from time to time.

So let’s start with the landscape of what’s out there. Currently, the big innovation that everyone cares about, of course, is artificial intelligence. But there’s still social, there’s still mobile, there’s still the internet and email and websites and a whole host of other platforms and services your customers and you use every day. We’re also starting to think a bit about extended reality, augmented reality, and virtual reality, and how companies benefit from connecting with customers there.

When I say we, I mean me and my company. We’re paying attention to wearables like smart watches and smart glasses. We try to stay at least a little bit current on the state of IoT, that is, the Internet of Things. We know that 6G is just over the horizon and will start showing up in your customers’ hands in about five years’ time. Like, that is the roadmap for mobile going forward, so that’s not a guess; that’s actually when everyone expects it will occur, based on what the technology providers are working on.

Someday, maybe, there could be people implanted with chips. Seriously. The technology already exists; there are already people using them. It’s just not commercial or at scale in any meaningful sense. What it’s going to take is some brilliant group of innovators to figure out the reasons why people might benefit, to bring those benefits to life, and to convince people that those benefits actually are, in fact, benefits. I’m not predicting that will happen, though I do think it’s likely that at least some people will benefit, and some people will adopt them, over the longer term.

I’m also reasonably confident that if these do hit the market at some scale, what I just described is a fairly well-known playbook that will create that reality.

Now, note that I’m not 100 percent sure about that last one. I don’t know that that product will exist. There are very few things that I’m 100 percent sure about.

I mentioned a moment ago that I’m an expert. As an expert, that doesn’t mean I have all the answers. Far from it. My job isn’t to have the answers. It’s to explore the questions and help my clients find answers they can live with. And that’s kind of my point here today.

As part of what I do, I’m a perpetual student. I’m always looking to learn. I read and I listen to podcasts and I watch videos and presentations endlessly about these tools that companies use to connect with their customers and that customers use to connect with companies.

Part of that learning is continually realizing that every innovation has its pros and cons. I’ve shared the Paul Virilio quote many times, that “when you invent the ship, you also invent the shipwreck.”

And I will be completely transparent about this. I tend to focus on the benefits that technology provides, because I tend to focus on the best in people. My sense is that, at least in the context of business and marketing, there just aren’t that many mustache-twirling villains seeking to use technology to make the world a worse place. I’m not saying they don’t exist. I’m saying in the context of business, there aren’t that many of them.

I’m also, for purposes of our discussion today, going to ignore geopolitical rivals and outright criminals. Obviously, those folks exist, and they’re absolutely worth discussing. They’re not my core area of expertise, and I suspect not why you listen to this show.

I also have a much more relevant point to what this show tends to be about. And that is, when tech goes wrong, it’s usually not because the tech malfunctioned. It’s usually not because somebody planned for it to be evil. Instead, it’s almost always sloppy design by innovators who didn’t think through the context and the consequences of their applications and their algorithms.

Sometimes it’s because users push the tools in bad directions, too, which, again, can come back to a lack of forethought by the innovators and entrepreneurs and developers. You know, if you think about it, the terrorists who perpetrated 9/11 didn’t need AI to think up their monstrous attack. And Google thought through a ton of truly awful use cases when it launched its Gemini generative AI to prevent it from doing harm. Those included things like pornography and other kinds of hateful material. Of course, they were so busy thinking about the worst things that they missed lesser problems, like misinformation, that folks on the internet found and exploited within a couple of hours of Gemini’s release.

So, it wasn’t the technology that failed. It was people who failed. You know, are there exceptions? Do big companies sometimes deliberately make truly terrible decisions? Sure, of course they do. Invariably, there’s some jackass at some company who ignores the likelihood or the impact of a given approach, or simply doesn’t care if those occur. You can absolutely find examples, but the notable cases where that does occur tend to be notable because they are, in fact, relatively rare. It’s usually not somebody being malicious. It’s far more likely that some hoodie-wearing product manager made a snap judgment without thinking through what could happen.

That’s why when I’m asked if AI scares me, I don’t think it does. AI doesn’t scare me; people scare me. And not because I think people are terrible, but because they can’t see past their inherent incentives and biases and blind spots. Plus, of course, the actions of the occasional jerk. I mean, sure, I live in the real world, those folks do exist. But they’re not the majority.

Now, you could argue that when some of these failures occur, it’s not especially important why they occurred if people suffer from them. You may have seen the recent story about a 14-year-old boy in Florida who died by suicide, allegedly with the encouragement of an AI chatbot. Clearly that is a tragic story, and it’s entirely fair to say that if the chatbot contributed to this boy’s death, it doesn’t matter whether the chatbot developer cut corners or simply overlooked a potential defect. They should still be held accountable if their actions or inactions caused someone to die.

But why those actions or inactions occurred matters, and I’m going to come back to why in a moment.

You also have to remember that stories like these aren’t limited to AI. We’ve all heard stories about social media bullying leading to awful outcomes, especially for girls. We know that search can surface misinformation, and so on. There’s plenty of these. It’s not hard to find them.

What I would also say is true is that tech, in and of itself, isn’t the bad guy. It’s people who create the tech, and market the tech, and use the tech, who keep me up at night. You have to remember that technology is easy, really. People are complicated.

You know, when the first commercial chip implants arrive, you’re not going to see me jumping in line to get one. It won’t be because I don’t trust the technology. Instead, it’s the innovators and other users who really scare me. And it doesn’t matter whether those tools are built with malicious features or simply with a lack of forethought. We’re going to hear terrible stories of these tools going wrong, just as we do with AI and social and the internet itself.

Now, longtime listeners of the show know that I like to keep things positive; I like to focus on the good aspects here. Not because I’m naive, but because there’s already enough negativity out there without me adding to it. Which is why my point today is not to tell you technology is bad and you shouldn’t use it. Far from it.

Technology is in the world and it’s not going anywhere. It is a core component of the world we live in and we receive dramatic benefits again and again and again. And I would argue that we should look for the benefits of the ship while looking for ways to mitigate the shipwrecks, you know?

We’ve also seen what happens when people move to a remote cabin in the woods and hand type lengthy manifestos about the dangers of technology, right? That’s not really the path we should be heading down.

Instead, you have to stay informed. You have to keep learning. You have to be a perpetual student, just like I am. Not just of tech, but of people. Don’t just look for the examples of things that went wrong and say, “See, this technology is a bad thing.”

Instead, try to understand why it went wrong. What happened? What decisions were made? And how could whatever happened have been done differently? Take the chatbot story I mentioned a moment ago, regarding the boy in Florida. It does matter what they did and why they did it, to understand how not to do it again. That doesn’t mean it should limit their accountability if in fact they are responsible for this. It means we need to understand what decisions were made that led to this, so that we don’t do that in the future, and so that others can learn from it and don’t do that in the future. Was this an engineering error, or did someone overlook a problem that’s going to be obvious in hindsight? Those are questions we need to answer.

You also need to be a smart consumer. Understand the ways that the tools you use can be used for you and used against you. And most importantly, as a marketing and business professional, make sure you are thinking about ways your choices can hurt customers and then do everything you can do to limit those harms. Including occasionally, if necessary, cancelling your program. Don’t do the thing you thought you were going to do.

There are some ways you can actually prevent yourself from making these errors. There are a couple of techniques that I really like.

One is making sure you have a devil’s advocate in your discussions. If you’re not familiar with this, a devil’s advocate is someone whose job it is to argue against whatever you’re planning to do. It could be your lawyer, or it could be an operations person, or it could be an IT person. But their incentive needs to be to stop whatever you’re working on from moving forward. Their bonus can’t be based on releasing the product, or releasing the service, or releasing the campaign. Their bonus has to be based on, did we eliminate risks?

Listen to the concerns of those people, and then come up with plans to address those risks before they ever surface in the real world.

Another way you can do this is to conduct a pre-mortem or a risk assessment.

If you’re not familiar with a pre-mortem, it’s an exercise where you visualize the aftermath of your product’s, your service’s, or your campaign’s launch before you actually launch. You take a moment, look back, and ask: what went well? What went badly? What caused us to succeed? What caused us to fail? What did we do that helped our customers? And what did we do that hurt our customers? Take the time with your team to think through all of those questions. And I’ll link to some others in the show notes.

Then create mitigation plans for the things that, you know, hurt your customers and caused you to fail. How do you take those out of the loop to make sure they’re not a problem?

A risk assessment is a similar concept. You assess how likely it is that something bad happens. You assess how much impact it will have if those things occur.

And again, you take steps to mitigate the things that are the most likely and have the biggest impacts before they ever occur.
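
To make the pre-mortem and risk assessment ideas a bit more concrete, here is a minimal sketch, in Python, of how a team might score each risk by likelihood and impact and surface the ones that need mitigation plans first. The example risks, the 1-to-5 scales, and the threshold are hypothetical illustrations of the approach, not anything prescribed in the episode.

```python
# Minimal sketch of a likelihood-times-impact risk assessment.
# The example risks, the 1-to-5 scales, and the mitigation threshold
# are hypothetical illustrations, not anything from the episode.
from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) through 5 (almost certain)
    impact: int      # 1 (minor annoyance) through 5 (serious harm to customers)

    @property
    def score(self) -> int:
        # How likely it is, times how bad it would be if it happened.
        return self.likelihood * self.impact


def prioritize(risks: list[Risk], threshold: int = 12) -> list[Risk]:
    """Return the risks at or above the threshold, worst first, so the team
    mitigates the most likely, highest-impact problems before launch."""
    return sorted(
        (r for r in risks if r.score >= threshold),
        key=lambda r: r.score,
        reverse=True,
    )


if __name__ == "__main__":
    launch_risks = [
        Risk("Chatbot gives harmful advice to a vulnerable user", likelihood=2, impact=5),
        Risk("Campaign copy gets misread as misinformation", likelihood=4, impact=3),
        Risk("Minor layout bug on the landing page", likelihood=4, impact=1),
    ]
    for risk in prioritize(launch_risks):
        print(f"Mitigate before launch: {risk.name} (score {risk.score})")
```

The specific numbers matter less than the discipline: write each risk down, agree as a team on how likely and how damaging it is, and commit to a mitigation plan for anything above the bar before you launch.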

When we look at the things that have happened in the world, the really terrible ones that have occurred over time, usually it’s because nobody took the time to ask these questions. They didn’t think through, what do we do? How does the system respond? How does the tool respond? How does the technology respond? How do we as a company take action before they happen so that they don’t actually cause real problems?

Finally, I realize this episode might seem like kind of a bummer. It might seem like kind of a drag. It is easy, when you think about these things, to get overwhelmed. It is easy to find yourself going, “Oh my gosh, this is terrible and nobody should use these tools.” When that occurs, it is absolutely okay to, as folks say, touch grass, right? Take a walk outside. Stick your phone in a drawer and go to the beach for a day. You know, it’s not just okay, it’s downright good for you to step away for a minute or two. Go for a hike in the woods. Just maybe don’t build a cabin and retreat there, right?

AI and digital are not evil. They’re not. And most of the time, neither are the people who create them.

They absolutely can cause evil, whether intentionally or through lack of forethought.

What’s also true is they’re not going anywhere. They’re in our lives and generally drive positive outcomes.

So if we know that they’re going to be in our lives, and we know that they have the potential for evil, our job is to keep learning about what these tools are, what their strengths and limitations are, how they can provide benefits and risks to ourselves and to our customers, and then take the actions necessary to eliminate the risks and to mitigate the harms. Our job is to ensure that they actually deliver the benefit that they’re supposed to.

So no, I don’t believe that AI or digital are evil. What I do think is true, though, is it’s our job every day to ensure that stays true.

Show Wrap-Up and Credits

Now, looking at the clock on the wall, we are out of time for this week.

And I want to remind you again that you can find the show notes for this episode, as well as an archive of all past episodes, by going to timpeter.com/podcast. Again, that’s timpeter.com/podcast. Just look for episode 438.

Subscribe to Thinks Out Loud

Don’t forget that you can click on the subscribe link in any of the episodes that you find there to have Thinks Out Loud delivered to your favorite podcatcher every single week. You can also find Thinks Out Loud on Apple Podcasts, Spotify, YouTube Music, anywhere fine podcasts are found.

Leave a Rating or Review for Thinks Out Loud

I would also very much appreciate it if you could provide a positive rating or review for the show whenever you use one of those services.

If you like what you hear on Thinks Out Loud, if you enjoy what we talk about, if you like being part of the community that we’re building here, please give us a positive rating or review.

Reviews help other listeners find the podcast. Reviews help other listeners understand what Thinks Out Loud is all about. They help to build our community and they mean the world to me. So thank you so much for doing that. I very, very much appreciate it.

Thinks Out Loud on Social Media

You can also find Thinks Out Loud on LinkedIn by going to linkedin.com/tim-peter-and-associates-llc. You can find me on Twitter or X or whatever you want to call it this week by using the Twitter handle @tcpeter. And of course, you can email me by sending an email to podcast(at)timpeter.com. Again, that’s podcast(at)timpeter.com.

Show Outro

Finally, and I know I say this a lot, I want you to know how thrilled I am that you keep listening to what we do here. It means so much to me. You are the reason we do this show. You’re the reason that Thinks Out Loud happens every single week.

So please, keep your messages coming on LinkedIn. Keep hitting me up on Twitter, sending things via email. I love getting a chance to talk with you, to hear what’s going on in your world, and to learn how we can do a better job building on the types of information and insights and content and community that work for you and work for your business.

So with all that said, I hope you have a fantastic rest of your day, I hope you have a wonderful week ahead, and I will look forward to speaking with you here on Thinks Out Loud next time. Until then, please be well, be safe, and especially given what we talked about today, take care, everybody.

The post Are AI and Digital Evil (Thinks Out Loud Episode 438) appeared first on Tim Peter & Associates.
