🤖 How to Write a Thoughtful AI Policy
Hello, and welcome to Off The Grid, a podcast that's normally about sharing your work and making money online without relying on social media. But this week and next, late March, twenty twenty six, we are doing a series all about AI. I'm your host, Amelia Hruby. And we kicked off that series with a conversation with my friend, Mel Mitchell Jackson, about AI sobriety. And one of the things I love about Mel's work is that they have a very clear AI policy on their website.
Amelia Hruby:In that policy, Mel is incredibly clear about why they don't use AI tools, and then exactly the tools that they do use that may in some way engage generative AI or machine learning. I'm gonna link Mel's policy in the show notes so that you can check it out for yourself. But in today's episode, I wanted to share a few thoughts on how to write your own AI policy. So this episode is meant to be a starting place for you to start thinking about how you're using AI in your work and what types of policies you might want to make around that. As with any conversation on this show, it will not include every possible idea or instance of how you may interact with AI in your business.
Amelia Hruby:My goal for off the grid episodes is not to be like an airtight comprehensive training on anything. It's meant to help you think through these ideas for yourself, make your own decisions, go far beyond anything I say here on the show, and root into your values and your integrity in the process. So with all of that shared, thank you so much for tuning into the podcast. Let's dive in, shall we, to this mini episode on how to write an AI policy. Let's begin with different types of AI policies that you might want to have in your business.
Amelia Hruby:The first one is a policy for how you use AI in your work. What are the things or the tools or the practices that you will do or won't do? So for example, if you are a lawyer, maybe you'll use AI to sort emails in your inbox, but you won't use AI to write legal briefings. Right? We can make some distinctions here.
Amelia Hruby:The things that you yourself will or will not do in your business. Another type of AI policy that you might wanna have in place is a policy for AI use with your collaborators or particularly with team members and contractors that you might hire. So what do you expect the people that you work with to do or not do? Are you totally cool with any and all AI use? Or would you feel pretty weird if you realized that you hired a copywriter to work on your website and they just use ChatDPT and send it back to you?
Amelia Hruby:I might have some feelings if I paid for a copywriter and then realized I could have just asked Claude myself. So it might be helpful for you to think through and have a policy around your AI use with contractors or collaborators. And then you also might want a policy for AI use with your clients or in the online communities that you facilitate. So what do you expect the people who work with you to do? Where are you open to them using AI or not in your work together?
Amelia Hruby:Now, of course, the purpose of these policies is not to control or demand that people have specific relationships to AI, but to set your own boundaries. So for instance, I wrote an AI policy for the interweb this year that I will actually share at the end of this episode. And that really helped me think about how are we going to use AI in this community and not. And I'll tell you, like, my personal policy around AI use is quite different than the one that I wrote for the interweb. But it was really helpful to think about, you know, what is this space about?
Amelia Hruby:What are the values that we gathered around? And how does AI interact with that? When we had some community conversations with this, I also started hearing more and more from Interweb members about how they're thinking about AI policies with their clients. So for instance, if you run a program and people are uploading work for feedback, are you okay if that work was written by AI or created by AI? Do you wanna read the emails or web pages that AI has written and give feedback on them?
Amelia Hruby:Similarly, if you are a working artist and part of your work is to give critique or to run artist programs, how would you feel about AI generated art by people in those programs? I think these things are going to come up more and more to the point where they may feel unavoidable, and it can be helpful to have thought through it for yourself before it happens. That's really the purpose of an AI policy to me. To think through what we might come up against, where we might have friction around AI in our work, both personally and with others, and then to try to feel into where our boundaries might lie. Sometimes we can think about it in advance.
Amelia Hruby:Sometimes it will just be responsive to things happening. In fact, the entire reason that I wrote the Interweb AI policy was because we had a few calls where a number of like AI assistants were trying to come into the rooms. And I know that this is just a setting that people turn on and, you know, I didn't have any judgment around it, but it did make me pause and think about, do I want AI assistance in our interweb spaces? What does it mean if they're there? And what are the implications of that?
Amelia Hruby:So I'll tell you at the end of this episode where I landed on that. But these are just the three types of AI policies that I think might be helpful to reflect on and even write up for your business. Again, that's a policy for how you use AI in your work, a policy for your boundaries around using AI with collaborators and team members, and then a policy for AI use with your clients and or your online communities. Three types of AI policies that you might want to think about. Now, the next piece of this is getting more specific about what do we even mean when we say AI.
Amelia Hruby:I talked about this in my conversation with Mel earlier this week. When we're talking about AI, artificial intelligence is a huge field that includes so many different types of tools. And honestly, at this point, a lot of tools are being rebranded as AI that are just the like same old same old algorithms we've had for quite a while. So it can be hard to tell, like, what are we even talking about when we talk about AI? For me personally, I have started sorting AI tools into three buckets, and I'm gonna describe those here.
Amelia Hruby:So three categories of AI tools, three types of AI. The first is assistive AI. This is when AI provides review and support for original work. So an example of this would be a tool like Grammarly. You know, there's a browser extension for Grammarly.
Amelia Hruby:You can be hanging out in your email inbox, writing your emails, and it will review them and suggest changes. So there so that is assistive AI. The next type is generative AI. This is when AI creates the work. It's not that you've made original work and it's doing some polishing.
Amelia Hruby:It is creating that work either alone or conversationally. So I think even if you are really working with prompts and you're in there doing revising in an AI tool like this, if the AI is primarily creating the work alone or conversationally, it's generative AI. Examples of this would be many of the AI tools you're probably familiar with. ChatGPT, Claude, Sora, Nano Banana, or whatever that other one's called. These are AI text and image generators.
Amelia Hruby:They are generative AI. And there is some overlap here between assistive and generative. Right? Like, many of these chatbot tools will do assistive things and generative things. So some of it depends on how you're using them.
Amelia Hruby:The third type or category of AI that I've been thinking about is agentic AI. So this is when AI independently executes workflows for you. So examples of this would be the new notion agents or AI schedulers. So if you have an AI tool that is looking at your calendar and scheduling via email with people who wanna book time with you, that is agentic AI. Typically, requires hooking up access to different tools that you already have and then linking between them and communicating for you.
Amelia Hruby:So again, the three types of AI that I think about when I'm writing an AI policy are assistive AI, generative AI, and agentic AI. And sometimes there are specific tools that are obviously one of these things like Grammarly is assistive AI. But often it has more to do with how you're using the tool. So you could probably argue that ChatGPT is assistive generative and agentic AI. It depends on how you're using it.
Amelia Hruby:And I find that thinking of it that way can be quite helpful when you're writing an AI policy because it can be challenging to just blanket say that people can or can't use a tool. Because if you set your boundary around a specific tool, then your boundary will always be changing because the tool will always be changing. Right? ChatGPT can do things now that it couldn't do six months ago, like send you Shopify products. Right?
Amelia Hruby:So I think that it's more helpful to write our policies based on types of AI and boundaries that we can root into over time than to write them based on the tools themselves because those tools will always be changing. So those are the first two things I wanted to talk about. Types of AI policies and types of AI. Now we get to writing your AI policy. So what I wanna say here is again that I think a good policy is simply a document of boundaries.
Amelia Hruby:It is not about what other people are doing. It is about what you, the leader of a business or the facilitator of a community are saying yes or no to. That's what boundaries are. So when I'm writing an AI policy, I consider a few different things. The first is the spectrum of use.
Amelia Hruby:So if we have a spectrum of how we're using AI, there's no use on one end, absolutely abstaining to unrestricted use on the other end. Do whatever you want. Use all the AI in the world. Right? And in the world of online business, pretty much all of us are somewhere in between those two ends of the spectrum.
Amelia Hruby:We have some type of conditional use of AI. I guess maybe I shouldn't say that. There are plenty of online businesses that are in unrestricted use. But the folks I'm working with typically fall into conditional use, especially as creative business owners and artists. Right?
Amelia Hruby:You probably have a craft that you are not trying to simply outsource to AI. Right? If you're a animator, you're probably not just being like, well, guess I'll just use AI all the time and my skills are useless. But you're probably thinking about like, how will I use AI and how will I not? So in our policy, we wanna consider that spectrum of use from no use to conditional use to unrestricted use.
Amelia Hruby:What are you comfortable with? As I've already said, most of us are gonna fall in some type of conditional use. So from there, we have a few more considerations. The next thing I consider is the nature of the tool. So not the type of AI, but like who owns this?
Amelia Hruby:Where does it live? How does it work? So the first might be a large language model embedded in a chatbot that some other company owns. So what I'm thinking here would be a chat GPT, Claude, Gemini, Perplexity, any of these major tools and apps that we're using with AI. These are owned by private companies, some of whom are nonprofits, but that's a whole other discourse in my opinion.
Amelia Hruby:And when you use them, they are collecting your data. They're training their model unless you opt out, which you have to be very careful about. That's one type of tool. Another type of AI tool would be the AI features in common tools you may already be using. Right?
Amelia Hruby:So Gmail may be summarizing your emails with AI. Notion has AI embedded in it in many ways. Canva has AI to help you design or even create images. So the next thing you might wanna consider in your policy is how are you interacting with AI features in those types of tools? And then in some cases, this is probably not likely for our micro businesses out there, but some larger businesses may also have internally developed AI tools.
Amelia Hruby:And so maybe you work for a tech company that has built its own large or small language model that they're working with internally. So I think for most off the grid listeners, what's helpful here is to think about your boundaries and the difference between large language models like ChatGPT versus the AI features and the tools you're already using like Gmail or Notion or Canva. So that's the second thing I consider for an AI policy. The third thing that I consider, which will not surprise you if you're an off the grid listener, is privacy, security, and consent. So I think that this is really under discussed in so many AI policies, but I am very concerned with questions like when you're using an AI tool, where and how is your data stored?
Amelia Hruby:Who has access to that data? Is it used to train a for profit model? So are all of your prompts and information helping a major AI company make money? Quite likely they are, just so you know. And then also have you gathered the consent of all parties whose work will be input into the tool?
Amelia Hruby:I think this is really important and it really helped me shape my interweb AI policy because I think we're gonna get into some pretty slippery moments around like, you know, say I give a presentation in the interweb and someone wasn't able to attend, but they're like, cool. I'll go grab the video and the slides and put them into chat GPT and ask it to give me a summary. That's a totally coherent thought. It's saving you time if it's a tool you're already using. But could we perhaps slow down and consider if that's a violation of my intellectual property if I taught that workshop and made those slides?
Amelia Hruby:Right? I'm assuming that no one in the interweb is intentionally trying to infringe on my ideas or sell them to open AI. But if you take the work that I've created and you put it into chat GPT, that is actually what has happened. I didn't give that work nor did I consent to sharing that with OpenAI or ChatGPT. So I think that there's gonna be a lot more like trickiness.
Amelia Hruby:And so I think these are things we really need to consider when we think about AI and consent. Similarly, if you use an AI assistant, are you getting the consent of the other people in the room to record with AI and to send that recording to whatever company owns the AI you're working with? And a lot of these companies have white labeled OpenAI or ChatGPT LLMs. So it sometimes it feels to me like it's just all going uphill to OpenAI even if you're using a tool that doesn't explicitly say that. Now there are different models and different companies who own them, and I'm not gonna get into that here.
Amelia Hruby:But at the end of the day, I think this is really the crux of so many AI policies. The things we really wanna consider are privacy, security, and consent for your information, for your knowledge, and for the people that you work with or are on calls with or collaborate with in any way. And then the fourth consideration that I wanted to include for our AI policies is transparency. When or how will you acknowledge the use of AI publicly? This is where I started the episode because I love Mel's AI policy.
Amelia Hruby:It is fantastic. It's so transparent. And I would love to see more creative small business owners write AI policies, whatever your policy might be. Again, I do try to have a sort of nonjudgmental approach here even if I myself am not using AI. If you are, I would just love to know how.
Amelia Hruby:I would love to know that you've thought about it. Even if your use is always changing or in flux, you know, I think this is something I will really be looking for when I am hiring contractors, working with collaborators, even having podcast guests on. It's not that there's any specific use or nonuse of AI or social media, but an intentionality around it. That's what I think a policy can give us. Clear intentionality in alignment with our values.
Amelia Hruby:And when we offer that transparently to the public or to our community, even if it's not totally public, I think that that really builds trust. And again, that is the theme of 2026, trust building. And AI policies, tech use policies are one way that we can do that. So where have I landed? You might be wondering after you've listened to this episode, what is my AI policy?
Amelia Hruby:Well, I have a few types that I will briefly share with you here. So as you heard in my conversation with Mel, my personal policy for AI use in my work this year is that I am taking 2026 to practice what I'm calling GPT sobriety. So I have stepped off or closed my accounts on the popular generative AI tools like ChatGPT and Claude. I am not using them in my personal life or my work in any way in 2026. That's my personal policy.
Amelia Hruby:And you can hear a lot more about that in my conversation with Mel and the other episodes in our AI series. So I'm not gonna give all the reasons why again here, those are in other conversations. My policy for AI use with collaborators is simply that I would like them to have an AI policy that I can view and reference before we begin work together. So I don't have a specific boundary there, but before I hire anyone or pay folks money at this point, I want to know that they're actively thinking about how they're using AI in their work. And I will say that I probably won't be working with folks who are on the unrestricted use end of the spectrum.
Amelia Hruby:I am not interested in centering AI in my business or the world of off the grid. I mean, obviously, we're having these conversations, but I wanna work with people who are, what do they say, powered by human intelligence, who are pulling from their own creative spirit, not from the chatbots. So when I'm working with collaborators, I want to know that they have an AI policy and that they are not purely just like turning to AI tools for everything. And then there's the policy for AI use in my online community, the interweb. So this is what really sparked much of this conversation for me was writing this policy earlier this year.
Amelia Hruby:And as I've already shared, I felt like I needed to write it after a number of AI assistants tried to come into our interweb calls. So I will include the full text of the policy in the show notes, but I'm going to try to read it or briefly summarize it here for you. So let me read you the intro and then I'll summarize the three main points. Here's the interweb AI policy. As we move toward an Internet and online business world that's more powered by AI, I want to be very thoughtful in how we do and don't engage with AI inside the interweb.
Amelia Hruby:Some interwebbers are AI advocates. Others are AI adverse. With that diversity of approach in mind, the interweb and off the grid will remain agnostic about which path you take with AI. I think how you use AI or social media or any tech tool is up to you, and I empower you to claim full agency in those decisions. That said, there are three guidelines that I'm putting in place and committing to for the interweb specifically.
Amelia Hruby:Number one, the interweb will not be hosting or promoting workshops or events about using AI. Since the mission of our community is to do business without relying on social media and big tech, I don't think it's aligned to center AI in our programming. Number two, I will not be allowing AI assistance into our calls. I do understand that these tools help with accessibility, but I don't feel comfortable allowing individual member tools to record our group conversations and input them into large language models that other attendees cannot interface with. To balance accessibility and privacy, I will continue to record calls and use Zoom's closed caption feature to upload captions with our recordings.
Amelia Hruby:And then number three, I am explicitly banning any use of generative AI tools to capture, scrape, or generate assets from the interweb Slack live call conversations or portal resources. Please do not copy and paste messages, slides, or transcripts into your own GPT tools as references or sources. This violates other members and my intellectual property and emotional vulnerability. And then to wrap up the policy I say, I want to be very clear that nothing in this policy is a critique of any member's choices or actions. It's simply my attempt to balance agency, accessibility, affordability, and community in our shared space.
Amelia Hruby:And from there, I have a few links to the difference between AI and GPTs and violations and other things like that, but that's neither here nor there for this episode. What I hope you're able to sort of hear as I share all of this is that my personal choices around AI in my business are not the same as the policy I wrote for my community. And that's because a big part of the off the grid and interweb ethos is agency and is intentionality for each of us. The decisions that I make around AI and social media in my business may not work for other people's businesses. And I don't think that means we can't be in community together.
Amelia Hruby:That said, if someone is seeking out an AI centric space, I I wanted to be clear to them that the interweb is not it. And there are plenty of other online business communities that have pivoted to AI over the past two years. So lots of other places you can go. Inside the interweb, people talk about AI in our Slack. They share how they're using it, and I don't police that.
Amelia Hruby:I'm not interested in telling people, you can talk about this, but not this. That's the okay amount of AI, but this is the too much AI. Like, I don't do that. We have like a blanket ban of all hateful and discriminatory speech inside the interweb. But beyond that, I trust the members to bring topical and respectful conversations there.
Amelia Hruby:I'm not policing if or how they talk about AI. But I'm very clear here that we won't have programming around it, that personal tools can't be brought into our shared spaces, and that you cannot use generative AI to build assets from the things that I create or members share inside the interweb. And again, none of that is GPT sobriety, which is what I'm doing this here. So if you listen to this whole episode, I hope what you can take away is that your AI policy is an attempt to set some boundaries with yourself, your collaborators, and your clients or community around AI and your shared spaces. I do think that you may have more than one AI policy for different reasons.
Amelia Hruby:Right? The things that you will or won't do, the things you expect the people you work with to do or not do, and the things that you will allow or not in spaces that you facilitate or in client interactions. And you may have three totally different policies for those different contexts. When you write your policy, I think it's important to consider the different types of AI, as well as the spectrum of use that you are comfortable with, and privacy, security, and consent concerns. I also think the purpose of an AI policy is to be transparent about when or how you will use AI.
Amelia Hruby:And again, you might notice that nothing in this episode was actually prescriptive. Nothing in this episode was about figuring out if you want to use AI or not. It was simply about how to think through crafting your own policy. The conversations that I'm having on the podcast in this series, those are about how different creative and online business owners that I've worked with are coming to their own conclusions about AI. Those conversations are where we're working out if or how we're using this.
Amelia Hruby:But I think when you sit down to write an AI policy, what you're actually doing is articulating what you figured out. So listen to the conversations if you're still not sure how you do or don't wanna use AI. But when you're ready to write the policy, these are some things that you can consider. And I have included all of this in like a helpful note form with more resources linked in the show notes. So go there if you want to get, a quick bullet point overview of everything I talked about, as well as a bunch more links and the full text of my Interweb AI policy.
Amelia Hruby:I am not giving you permission to put that into your own AI chatbot and do with it what you like, but I also can't stop you. Because again, my AI policy can't actually control what other people do. I request that you don't do that, but I can't stop you. That's simply a request and perhaps a social contract between podcaster and listener. All of that said, thank you so much for tuning in to this episode of Off The Grid and for joining us in our AI series.
Amelia Hruby:I am really grateful to have this space to consider and connect around the topic of AI, and these are ongoing conversations that are happening inside of my paid community spaces as well. So if you want to be more intimately in the conversation, I invite you into the clubhouse. I invite you into the interweb. I invite you to learn more about Mel's work and the other folks in the series. They're all doing great work around this, and I hope these episodes can just be the beginning of your considerations of how you do or don't use AI in your creative business.
Amelia Hruby:There's lots more to come for our spring AI series, but until then, you can find me off the grid. I kinda hate social media. Post a picture of myself, sacrifice my mental health. Use your Okay. That was an abridged version of Social Media by Surfer Boy and Wreck Tangle.
Amelia Hruby:To hear the entire song, find Surfer Boy on Spotify or head to the link in the show notes. Thanks so much to them for sharing the song with us, as well as to Melissa Kaitlyn Carter, who sings our theme song that you hear at the start of every show. I'm your host, Amelia Hruby. And if you enjoyed this episode, I hope you will download the free Leading Social Media Toolkit at offthegrid.fun/toolkit. Until next time, I will see you off the grid.