3 Reasons You Feel So Conflicted About AI
Hello, and welcome to Off The Grid, a podcast about leaving social media without losing all your clients. I'm your host, Amelia Hruby. And on this show, I share stories, strategies, and experiments for sharing your work and making money online without relying on big tech. Today, I am continuing our AI series with a quick solo episode. I should never say quick because I don't know how long it will be when I'm recording this intro.
Amelia Hruby:But I aspire for it to be quick with a concise solo episode about three reasons you might feel so conflicted about AI. This episode is coming in the second half of our AI series here on the show, and I've created it really in response to a lot of email replies and clubhouse comments and general responses that I've received from folks tuning in, sharing that they have a lot of big feelings about how AI is showing up in the world and even ways that they're using it in their creative work and life. And I've been receiving all these emails. I have been sitting with people and witnessing their feelings. And I'm not gonna get into all the reasons AI might feel bad, but I am gonna talk about three reasons that I think so many of us, especially creative, values-centered business owners, feel so conflicted about AI, and particularly about the AI tools that are being sold to us as business owners.
Amelia Hruby:This episode really feels like it goes back to the roots of the podcast because our first ever episode was called Four Myths and a Truth About Social Media for Small Businesses. And in that episode, I unpacked where the conflicts were in our relationship to social media, what the apps were telling us, what we were experiencing. I think this episode is gonna be quite similar. The reason we feel so conflicted about AI is that we're getting a lot of messaging about how these tools are revolutionizing technology. They're changing the world.
Amelia Hruby:AI is already here and it's not going away, we just have to accept it. Right? Like, whether you're using AI or not, you've probably heard that messaging. And I think that we know something is off with it. And it's that tension that's causing this conflict and making many of us feel pretty bad, honestly.
Amelia Hruby:That's what I heard over the past two weeks. We're feeling bad, friends. So in this episode, I'm gonna try to elucidate a few reasons why. And if you hear this and then you'd like to be part of our community conversation about AI, how we're feeling about it, if we're using it in our work or not, what's been helpful to think through or feel through throughout the series, you can find our community comment thread inside the clubhouse. I will link it below in the show notes.
Amelia Hruby:You do have to become a paid member of the clubhouse to join the conversation, but it only costs $5 a month. So for $5, you can get access to a space where I am facilitating an ongoing conversation about AI in creative work and life. And if these episodes spark anything for you, I would love to invite you to be a part of that conversation. So that's all linked in the show notes alongside the Leaving Social Media Toolkit, alongside the previous episodes of the AI series. You can find all of that anywhere you listen to your podcasts.
Amelia Hruby:But for now, let's dive into the three reasons you feel so conflicted about AI. Okay. I'm gonna start with a disclaimer because we all know I love a disclaimer. This episode is primarily for people who are using AI and feeling really conflicted about it. I wanted to create it because I know that the interviews I'm doing in this series are all with people who don't use AI.
Amelia Hruby:That said, I'm not here to judge what you, dear listener, are doing with AI or not with AI. That is not my goal here, and I heard from a lot of listeners who are using AI, and they're feeling super conflicted about it. So this episode goes out to all of you. I went with the catchy title, Three Reasons You Feel So Conflicted About AI, but there are certainly many more reasons than this, and you may not feel conflicted about these particular reasons that I'm going to talk about here. So, that said, let me tell you reason number one.
Amelia Hruby:The first reason that you might feel so conflicted about AI is because if you're listening to this podcast, you're probably having an uncanny feeling that your experience with AI might be like your experience with social media. The most popular generative AI tools and the most popular social media platforms are all owned by big tech companies that prioritize growth, that prioritize profits for shareholders, and that frankly steal content from people across the Internet, including their users. In the case of Anthropic, there's court documentation that they stole all of the books that they used to train the models. Right? So in that case, I think the language is appropriate.
Amelia Hruby:With the case of social media and many other generative AI tools, we sign away our rights to what we share there when we sign up for the platforms. Right? Like in the Instagram terms of service or the TikTok terms of service, you agree that they can use all your stuff in their marketing or their training. You have to agree to use the platform. So to me, the companies that own social media and AI feel like two of a kind.
Amelia Hruby:And because I am so critical of those companies on this podcast, because you, dear listener, have heard me be so critical of them, there's a real tension between, okay, I'm stepping back and away from social media because I see what's going on there, but maybe I'm not stepping back or away from AI. So I think that that is one site of this tension. It's like we have this lived experience of social media, and when we try to apply those frames and understandings to AI, it's kind of the same, but we're being told it's different. Then we rub up against this other piece that I think is really important, because so many people come to this podcast because social media felt bad to use. It may have felt bad to you tuning in, and it stopped working.
Amelia Hruby:Right? Like, I hear from so many listeners that they were pretty happy being on Instagram or TikTok until it just stopped sharing their posts with their followers, until the algorithm just stopped showing their stuff to other people. And so when you arrive at Off the Grid, you may be in this position of, like, social media feels bad and it doesn't work, and I'm so grateful that there is this podcast out there that's helping me find other ways to support my business. But on the flip side of that, what I'm hearing from a lot of folks is that where social media felt bad and didn't work, AI kinda feels good for some people, and it does work.
Amelia Hruby:When you ask AI to write the email, it writes the email. Now how good is that email? How much does it sound like you? Those are all questions that are up in the air, but I think that there is a real efficacy of AI right now that just doesn't exist for social media anymore. And so again, like, the conflict you might feel about AI could really be rooted in this tension of like, I clarified my values about social media.
Amelia Hruby:I changed my behavior because I saw that it was so problematic, and it wasn't working, and it made me feel bad. But now, there's this other thing, AI, that I see that it's problematic, but it kind of works, and it makes me feel okay. Because it basically affirms everything I say to it. It tells me that's a great idea. Again, and again, and again.
Amelia Hruby:Right? And so I think that that is one real site of conflict for many of us. And I'll be honest, this is basically where I sat for all of 2025. As I've shared in conversations during this series, I did use ChatGPT and Claude for some projects and some email writing here and there. And as I did that, I was just sitting in this tension of realizing that I was going back to a different technology that just replicated all of the same patterns I talk about with social media all the time.
Amelia Hruby:And so my first reason that you might feel so conflicted about AI is the way that it sort of makes you overlook your social media experience, or grapple with the fact that even though you know these companies are harmful, and I haven't even mentioned their environmental impact or their labor rights impact or all the stuff I've talked through with Mel and Casey so far. Even though you know those harms are happening, and you've seen how that all went on social media, maybe you're deciding to use it anyway. And that's a real conflict. I'm with you. I have no judgment of that decision.
Amelia Hruby:But I think it's worth just naming that that's part of what feels so hard to sit in. That is the first reason I think many people feel so conflicted. The second reason on my list is gonna feel pretty obvious once I say it. But the second reason I think many of us feel so conflicted about AI is capitalism. Especially the pressure to do more, work more, get more done, and be more efficient in the process.
Amelia Hruby:I think this is one of the biggest promises of AI, right? You can get more done without putting in any more effort, and you can outsource the stuff you don't even love doing anyway and just get AI to do it. So then you can do even more of the stuff you do like, but again, it's all oriented toward doing more. AI is sold to us as the ultimate productivity hack. And for those of us who are already feeling so overwhelmed and overworked, that is really tempting.
Amelia Hruby:I have heard from so many listeners in the past two weeks who are like, thanks to AI, I've finally gone through all the emails in my inbox. Or thanks to this chatbot, I finally did my first launch because it helped me write all the emails and I could actually get them done. This goes back to what I was saying before, right? Like AI does work. It will do these things for you.
Amelia Hruby:And yet, I still see over and over and over again that the only reward for getting more work done is having more work to do. This is truly one of the like karmic cycles that I am working through in my own lifetime and lineage. Like the value of hard work, the fruits of our labor, that is very much something that is being passed down generation to generation in my family, and that I am actively rooting out of myself so I don't keep passing it on. And when I look at AI, it just feels like a trap to me, I'll be honest, of wow, I can be more productive just so that I can be even more productive? Question mark.
Amelia Hruby:And there is early research that people are doing about folks who have fully integrated AI into their work, and what that research is finding is that they just have more work to do. It's not that they got more done, and then they had leisure time, or their boss was like, cool, you can have a day off now. Instead, more was piled on, and the expectation was that AI would increase overall productivity. So once the tools were applied, the standard was raised. And so I think that this is a tension that a lot of us are sitting in.
Amelia Hruby:I know it is because I heard from listeners who were like, I need this to be more efficient, and also I still have too much to do. Right? I think that AI is a productivity hack, but it's only that. It's just a hack. It doesn't actually resolve any of the systemic problems.
Amelia Hruby:In fact, it only exacerbates the systemic problems of capitalism. I feel like Mel spoke to this really well in our conversation about AI sobriety and really pointed to the ways that AI companies are actively eroding labor rights. There are also a few really wonderful pieces that I'll link to in the show notes that point to the ways that the workers who train AI are really mistreated and suffer severe mental health costs from all of the things they have to look at as they do data labeling and train models. So 404 Media has a great article called "AI is African Intelligence" about this and about the workers in Africa who are doing that work. They've also got a podcast episode on this where they talk to Michael Joffrey Asia about his experience doing this work in Kenya, and I'll link to his first-hand testimony about how it negatively impacted his life and relationships as well.
Amelia Hruby:So again, yes, the AI quote unquote does the thing for us, but those of us who are looking one layer deeper than that, we see the impacts that it has. We know that it's causing harm to other people. We know that it's harming the planet. I can also link to many of the statistics about the environmental costs of AI. I know that the numbers around how much water it uses are actively being discussed and refuted, but we can also think about the amount of mining that's had to be done to create the chips and the data centers that run these AI models and companies, as well as just the amount of space and the impact on utility prices for the communities that data centers are located near.
Amelia Hruby:So there are very real impacts. And I think that that's another tension we're sitting in. For so many of us who are based in the US and who work online and do this sort of service work or knowledge work or digital product work, we have been so often insulated, by the violence of colonialism, from the real impact of what it takes to run the Internet, to have our website online, to send our emails. Like, that all feels like something that just happens with no cost, but AI is really bringing those costs to us. And so we're feeling this pressure of capitalism, not only in our efficiency, but on our natural resources where we live.
Amelia Hruby:And to be clear, the US has been extracting and exploiting the natural resources and manual labor of people of the global majority since its inception. I just think that for many of us who've been so insulated by empire, the impacts that AI data centers are having on our communities, and the ways they're reshaping our daily work online, make this a moment where many of us are having to face the realities of colonialism and imperialism in a way that perhaps we haven't before. And I'll be explicit here: when I say we, I mean white folks like myself. For listeners who live outside the US or Europe, for BIPOC folks tuned in and listeners of the global majority, this likely will not be a new experience for you, or you'll have a totally different relationship to these oppressive powers.
Amelia Hruby:But because I am a highly educated white woman who's worked online for most of my life, I think that these systems are worth naming so that we can reckon with them, and then hopefully deconstruct and overthrow them. That's the ultimate dream. Right? And again, just because you feel that conflict doesn't mean I'm judging you if you use AI. I have no judgments here.
Amelia Hruby:I just think it's helpful to name, this is why it feels so hard. We're sitting in that tension. And we sit in that tension every day. Right? As the oft cited phrase goes, there is no ethical consumption under capitalism.
Amelia Hruby:However, I just think that AI has really distilled and crystallized that for many of us, and we're feeling it in our bodies and in our well-being. And then the third reason, honestly, wraps in these first two, and I hope will give you some new language to think about why you might feel so conflicted, or why people you know might feel conflicted, or why using AI or not using AI just feels hard and bad. And that is moral injury. So I've pulled a definition of this from the Syracuse University Moral Injury Project, because this is not at all a concept that I have invented; in fact, it has been theorized for decades. But here's how they define moral injury.
Amelia Hruby:Moral injury is the damage done to one's conscience or moral compass when that person perpetrates, witnesses, or fails to prevent acts that transgress one's own moral beliefs, values, or ethical codes of conduct. So a lot of the early literature on moral injury actually came out of psychological and sociological studies of people in the military who had to do things that like went against their personal beliefs or values or even like religious morals, and then had to grapple with that, and had to live with that. And like, how do we do that? And I think that in the case of AI, what I've just been hearing over and over and over again is that people are using it and then feeling bad about that. I literally have gotten lines and emails like, I feel so guilty about how I'm using AI.
Amelia Hruby:Or I'm concerned about natural resources, privacy, theft of my work, loss of human agency, concentration of power, effect on the economy. And yet, I use it because it makes me more efficient in my business, and then I can make more money to support my family. I cannot tell you how many emails I have gotten like this, and how much empathy I have for them. And those are all from folks who are self employed. It doesn't even get into the folks that I know who have jobs and are being told that they have to use AI.
Amelia Hruby:They're being commanded to use it. Right? And when we know that this tool is harmful, and we believe it to be bad, if we do, and then we use it anyway, because we have to. Whether that have to is being told to by a boss, or being told to by the imperative of capitalism, or the pressure to do more in our work. When that's the case, that dissonance results in moral injury.
Amelia Hruby:And we just experience it over and over again. Any of us who use AI and feel guilty about it, that guilt is coming from this place of moral injury. And I like this language because instead of putting it back on you, like, oh, I feel guilty and that's my fault too. And now in addition to figuring out if or how I wanna use AI, I also have to figure out how to not feel guilty. Like, that's so much.
Amelia Hruby:And instead of internalizing and putting that on our own shoulders, if we understand it as moral injury, we can see it as a product of the systems we live in. This is why I can truly come to this conversation without judgment. I can read these emails with people telling me, here's my litany of reasons I think AI is bad, and I use it anyway. The reason I don't judge that is because I get that it's moral injury. I get how much pressure we are all under.
Amelia Hruby:And in some ways, I think it's doubly hard as someone who's self employed, because there's a certain injury in your boss telling you you have to use AI. But when you're your boss, and you're telling yourself you have to use AI even though you don't want to, that's just a mind fuck. Right? Like, it's so hard. And so as I say all of that, like, I wish I had a solution for you.
Amelia Hruby:I wish I had a really clear get-out-of-jail-free card. Here's how you can use AI and feel good about it, or not use AI and feel like your business is going to be okay. Like, I don't have a clear answer for you at this point. However, as I wrap up this episode, I do have a few resources. So if you're someone who's really feeling conflicted and caught in all these tensions, my new friend Alexa at Cut the Spigot has written a really wonderful post about AI tools that are perhaps more ethical than others.
Amelia Hruby:I'm really grateful that Alexa is out there doing that research, because I am not. But I will link to that in the show notes in case it's helpful for you. This is what I would call a sort of harm reduction approach to AI and moral injury, right? It's like, if you have to use the tools, either because you say so or your job says so or just the pressure of capitalism says so, can you use tools that are owned by people that you respect? I think that this list of tools is a great place to start with that.
Amelia Hruby:And if you wanna hear more about how I think through harm reduction in relationship to choosing platforms, I did talk about that in my episode on the tension of platforms and why I haven't left Substack yet. So scroll back a few weeks toward the beginning of season nine, and that episode is there to walk through my different approaches to sort of mitigating moral injury by opting out of certain tools, limiting my use of other tools, and then practicing harm reduction with certain tools. So I walk through all of that in that tension of platforms episode. But I think that the Cut the Spigot list is a great way to practice harm reduction in your use of AI if you wanna continue using it while perhaps feeling less conflicted or bad about that. The other thing that I'll offer you if you're really feeling stuck in this tension is I think it can be really helpful to write an AI policy.
Amelia Hruby:So I did an episode last week on how to write a thoughtful AI policy. And in that episode, I emphasized how you might use AI in your business and how to think about what you're okay with your customers or clients or peers doing. But I think the way that this can help with our internal conflict is to be really clear about what you will or won't do. Because if you thoughtfully craft a policy, if you're like, I will use ChatGPT to help me organize my inbox, but I won't use it to write my newsletter, just one example. Once you decide that and you put it in your policy, you can stop feeling so bad about it.
Amelia Hruby:At least that's how policies work for me. Once I make it a policy, even if it's just a policy that I'm writing for myself, once it's a policy, I can let go of the baggage of trying to decide if I'm going to do it every single time. And so again, if you're feeling conflicted, I would also recommend writing a policy for yourself. What are the things that you will and won't do? What are the tools that you will and won't use?
Amelia Hruby:Once you make those decisions, can you let go of the guilt? Can you let go of the stress? Can you let go of the tension? Can you be okay with the decisions you've made and acting those out? I think that policies can give us a clear sense of alignment, and that can lessen our feelings of conflict and tension.
Amelia Hruby:And I think that's what I have to offer you today. So to do a quick recap, in this episode, I've talked about three reasons that you might feel so conflicted about AI, especially if you are currently using it and feeling guilty or weird or bad about it. The first reason was that if you're savvy about social media, you probably see that AI has a lot of the same problems, but maybe you're feeling drawn toward the way that it works and that it affirms you and makes you feel good. And so you're feeling like, oh, do I have to compromise my values that I figured out with social media now to use AI? That's a challenging question and tension.
Amelia Hruby:It can create conflict. The second conflict was the pressure of capitalism and the demand for productivity and efficiency in our work. This world asks so much of us just to earn a living. Even that phrase, right, that we have to earn a living. We can't just live.
Amelia Hruby:It's really hard. And AI has become this promise that we can, like, be more productive without putting in more work. But I think that most of us see through that promise, either as overtly false, because once we start doing more with AI, then there's just more and more and more to do, or as covering up the harm to the other people and the planet that AI is actively causing just so we can be more productive. So I think that's our second tension that we get caught in, is this bind between what capitalism demands of us and what we know is good for us or for the planet.
Amelia Hruby:And that's a hard place to be. That's another reason many of us feel so conflicted about AI. And then the third reason was really just sort of an umbrella to understand all of this within, and that is moral injury. And again, moral injury is the damage that's done to our conscience, our moral compass, even our sense of self when we transgress our own values or ethics. And so for many of us with AI, we have a sense that we know it's bad or wrong, but we're using it anyway, and that breeds this feeling of guilt or pain or frustration or conflict.
Amelia Hruby:And we have to find ways to grapple with that. And so the two ways that I offer to start to work through this are, one, to reconsider your approach to AI. Is it going to be opting out, only using certain things, or practicing harm reduction? And I've linked to the Cut the Spigot post with some quote unquote ethical AI tools in it if you want more resources on using different AI tools that feel more aligned with your values. And then my second suggestion is to write an AI policy so that you're super clear with yourself about what you will and won't do with AI.
Amelia Hruby:And then you can set down all the worrying and guilt and stressing because you've made a decision and you can be aligned with that decision. Even in the context of the broader misalignment of the world. Right? We're not gonna fix capitalism by using or not using AI. Like, even all the guests I've talked to, right, who've decided to stop using AI, that didn't just like fix their whole life or fix the whole world.
Amelia Hruby:Right? We're not trying to claim that. It's just the way that we've found alignment within these broader harmful systems. Thank you so much for tuning in to this episode of Off the Grid. As always, I'm so grateful that you're here.
Amelia Hruby:And again, if you wanna be a part of the conversation, come find us in the clubhouse. We're having an ongoing asynchronous supportive chat about AI, and I'd love to see you there. Until then, you can find me off the grid. "I kinda hate social media, feeds on my brain like..." Okay, that was an abridged version of Social Media by Surfer Boy and Wreck Tangle.
Amelia Hruby:To hear the entire song, find Surfer Boy on Spotify or head to the link in the show notes. Thanks so much to them for sharing the song with us, as well as to Melissa Kaitlyn Carter, who sings our theme song that you hear at the start of every show. I'm your host, Amelia Hruby. And if you enjoyed this episode, I hope you will download the free Leaving Social Media Toolkit at offthegrid.fun/toolkit. Until next time, I will see you off the grid.