Hello everyone, and welcome to The Upstream Leader. My name is Jeremy Clopton. We are back with another exciting episode for you today, and this one is on a topic that seems to come up in every meeting, every conference, and every webinar I've been to for at least the last couple of years: Artificial Intelligence. But we're going to take a slightly different approach to it. We're going to talk about Artificial Intelligence through the lens of ethical use, and through the lens of the skills it is starting to take away from us, skills we need to make sure we're still developing so we keep exercising the right muscles when it comes to, say, critical thinking. And really, just how artificial intelligence plays a role in the things we do on an everyday basis. So it's not going to be about how it automates your audit or your tax return; it's going to be more about how it helps you solve problems and do research. Maybe it's some marketing, maybe it's emails, maybe it's storytelling. We're going to talk about it from a number of directions.
And with me for this conversation, I have someone that I used to interact with and work with a lot back when I was a fraud investigator, so it’s probably fitting that ethical use of AI came into play a little bit here. I’ve got with me today the writing manager for Dragonfly Editorial, Emily Primeaux. Emily, welcome to the show.
Thank you so much. I’m happy to be here and excited to have this conversation.
Yeah! I am really looking forward to it. As with many guests, it was a lot of what you've shared education-wise on LinkedIn, and your thoughts there, that got me thinking about this path and got my wheels turning. But before we jump in, everybody knows I'm going to ask you the same question that I ask every guest we have on the show: How did you become the leader that you are today?
That is such a loaded question, and I appreciate it. I'd say I became the leader I am today thanks to everyone around me: the experiences I've had with other professionals in the space, and the opportunities I've had over time to interview really impactful executives, both in my industry and in the other industries I write for. All the knowledge I've gained over time, and the way I've almost gotten rid of my imposter syndrome enough to feel like a leader, has come through these experiences: doing the work, putting my head down, doing the research, learning from other people, being open to conversations like this one, and truly listening to what's going on around me. I think that's led the way.
Yeah. Listening is such an important skill. In leadership development, I don’t know that we talk about that one as much. We talk a lot about communicating and focusing on the speaking side of it, but listening is such a superpower, isn’t it?
It really is, and it's one that, as primarily a ghostwriter now, and even back when I wrote long narrative profiles at Fraud Magazine, as you know, you have to hone over time. Active listening is a skill you develop to make sure you're getting everything from around you and catching nuances. It's kind of funny that this is how I'm leading into the conversation we're having today, considering we're going to be talking about AI, which only listens one way, you know what I mean? So yeah, it's a reminder of how important listening, and being aware of what's going on around you and in the industry, really is.
Yeah. Before we jump into the skills that AI may replace, and the ethical side of it: I know there's a lot of worry in the accounting profession, people saying, "Oh, AI is going to take it all over." You do ghostwriting for a living. You are an author, an editor, a writer. I would have to imagine the angst, or the fear, from some in your industry has to be much greater than it is in the accounting profession, because so much of what you do is, in a way, exactly what we're going to talk about: how people are using AI today. How do you navigate that as a professional in that industry, but also as a leader in your organization? How do you help calm the fear, not minimize the anxiety but address the anxiety, that the AI evolution could cause for your people?
Yeah. It's interesting. It was, what, 2023 when large language models really came onto the scene as writing assistants, and I feel like at every conference I went to, or in every conversation I had, someone would inevitably ask me, "Are you scared that AI is going to replace you, or that you're going to lose your job to AI?" And perhaps it was naive at the time; well, no, I can't even say that, because I think my answer would still be the same today. At the time I said, no, I'm not scared of it. Maybe your audience can relate to this analogy: I see it almost like the stock market. When something new comes onto the market, people are really bullish about it, perhaps invest in it heavily, and see all of the benefits. Then there's a tapering-off period where you start to notice the things that don't work, or how it's maybe not returning on your investment like you thought it would, and it tapers off, and then maybe it goes the other way. That was my perspective then, and it still is; I'm actually seeing it play out now.
At the time I said, I think this is going to be a great assistive tool, but the more people and companies use it, the more we're going to see, as my six-year-old would say, brain rot on the internet. We're going to see repetitive messaging. We're going to see companies that want to stand out sounding the same as their competitors if they use it too much without human intervention. This is probably one of my LinkedIn posts you saw recently: I watched it happen in real time, getting three different marketing communications in one day, all of them leading with the same sentence structure that was so clearly AI. So it's a twofold answer. I ease people's concerns, and I approach this with healthy skepticism, but also knowing that it's not going to go away. So how do we reframe the narrative to say, "I can use this as my assistant, it can make me more efficient"? We've hit the point where people are starting to realize its flaws and are saying, "I don't want to sound like my competitor, so I need to go back to the drawing board, and maybe that means using AI a little bit less."
Yeah. I appreciate that perspective. As we think about using it as a tool: AI has been around a lot longer than we actually want to admit. It's interesting, I just had one of those full-circle moments in my mind. I believe you were the editor, or the assistant editor, at Fraud Magazine for quite some time, back in the 2010s. Is that correct?
I joined Fraud Magazine as Assistant Editor in early 2014, and then was Associate Editor starting around 2015, 2016.
Okay, that's what I was thinking. And my first foray out there on the AI front was an article series in Fraud Magazine in 2013 and 2014 that spanned a full year of issues, though not January to December; it ran on a slightly different cycle. We were talking about AI and its use in fraud work, and now we've come full circle, talking about AI in all of these areas. So I am with you on helping people understand it's a tool. There can be a lot of fear, or we can embrace it. My concern when we embrace it, and this is one of the posts you mentioned, you articulated it so well: "I worry that people have replaced their critical thinking and research skills with AI and then take the provided information as immediately accurate and trustworthy." You said that on LinkedIn just a little while ago, and then you went where a lot of people don't want to go, which is, okay, let's play that out a little bit. It's scary enough in and of itself that we lose critical thinking skills. But then we take the output, we publish it, and that feeds back into the model that gave us the wrong answer to start with. The model learns from it, and we're actually using inaccurate results to train future results, which makes them even more inaccurate.
How do you teach, or maybe guard against is a better way to put it, how do you address the reduction in critical thinking in a professional environment where people are becoming too reliant on the tool, using it not so much as a supplement but as a replacement for critical thinking?
Ooh. You know, it's hard, because where I sit, and I'm not saying other people don't have highly intelligent teams, I'm very lucky to have a really highly intelligent team that has spent decades writing and knowing that, even before AI, you don't just Google something, click the first link, and trust that the information you get from it is correct and accurate. I think it goes back through the history of journalism, too: citing your sources, doing first-person interviews, and being skeptical of the information you receive, even when it comes directly from the source. I once did an interview for Fraud Magazine where my source was telling me all of these wildly outlandish stories, and sure, this was his experience, but I had to verify it against actual data and information before I published it.
Yeah.
My writers all know this, my editors know this, but I do fear that people who don't use AI on a day-to-day basis like we do (and we do use it, because we want to stay ahead of it) are using it like Google. And it's not that. What I would tell anybody listening to this podcast, or anybody who has a conversation with me about this, is that AI was built to make you happy. You feed it a prompt and it wants to fulfill that request. It wants to give you what you've asked for, and it will find any way to do that. Even if it's as simple as asking, "What are the details of the latest executive order out of the White House?" it's going to pick and choose and pull together a narrative for you, and you then have to fact-check that against WhiteHouse.gov. Make sure you ask the tool, "Give me the source you pulled this information from." Because that prompt I just gave you off the cuff was very vague, you know? "The latest executive order." AI has to assume what that is. It has to find the information, and it's going to put it together in a way that answers your question and makes you happy. We have to take it a step further. We need to not feel appeased by these tools. We need to be consistently skeptical of them, because of that very strong desire of the tool to give you what you've asked for, regardless of accuracy or veracity.
In a way, what you're describing sounds a lot like a new hire: new to the organization, new to the policies, but they really want to keep the job. You ask them something, and maybe they give you the fastest answer rather than the most accurate one. You've got to go back, provide feedback, train, and check: no, this is not accurate; you've got to fact-check; you've got to do this. So given that analogy, and given that we also have to teach people new to the profession how to use AI, how do you balance that? I would imagine that you and some of your more experienced writers can likely spot the inaccuracies, or the styles that immediately flag for you as, "Man, that's just not right." How do you do that for someone who's brand new and doesn't have the lived experience? In a way, AI is functioning at a similar level to them, and if they're new and unsure, and its goal is to appease them rather than help them be successful, how do you train that critical thinking skill set early on?
I think we are going to have to integrate AI continuing education and training into our processes at our companies, no matter the industry, from the start. We have to assume that even experienced people, like myself or my senior copywriters, may have been resistant to AI and don't want to use it, whether out of ethical concerns or because, in all fairness, they don't want it to erode their skills. Practice makes perfect in everything; if I use AI too much, am I going to lose my skill as a writer? So I try not to use AI in that way, so that I can continue to hone my craft. I think that if I were to hire someone into my department tomorrow, I'd be asking what AI trainings they've taken to this point. And in their first three months, six months, and year, I'm planning to put them into some sort of classes where they're learning, specific to my industry, how to use the tool: where there are places they shouldn't go, what they should be skeptical of, how to use it as an assistant rather than a replacement, all of that.
We do that a bit at Dragonfly, too. We're a small business, so at our full-staff meetings we've had guest speakers who have taught us different methods and uses of AI, just so we can stay ahead of the curve and know that we are educated in what we're doing at all times. So just like you would have a new hire take some sort of internal training on your company, your business, the history, and who reports to whom, you're probably going to want to throw in an AI training from here on out, because it's becoming really fundamental to our processes.
We've really got to choose education rather than, and this word is going to sound more crass than it's intended to, ignorance. I don't mean ignorance in the sense that we can't comprehend it, but more the mindset that if we don't address it and just set it off to the side, we won't have to worry about it. What you're saying is, "Let's pull it in intentionally and ensure people know what it is and what it can do: pros, cons, risks." And when you're thinking about risks, I've seen you post a lot about the ethical use of AI, because you're in a creative space; what you do is creative by default. When I think about leaders in our profession, I think we are more creators than anybody would ever admit, but we're not going to debate that on this episode; that's probably a much longer debate. But marketing, blog posts, LinkedIn posts: there are things that accountants and leaders in accounting firms may create where they say, "Hey, this isn't my jam, but I could use AI to do that." What are your thoughts on the ethical side of things, especially as it relates to creating original pieces and attribution? Where does the line fall for ethical use of AI in that regard?
Yeah. I recently gave a webinar where my audience was mainly creatives in my space, editors and writers, and I think my take on ethical use left some of them aghast. The way I look at it is: if you're using me to ghostwrite, how is that so different from using a tool? And that's without getting into the sustainability issues and everything that comes with them. In my mind, there's not a huge difference, except that when you're using the tool and it's pulling from already available information, we start to infringe on other creatives' work, right? I don't want to ever read a novel, a piece of fiction, that's been completely written by AI, because to me that feels like it's stolen bits and pieces from all the creative writers who came before the tool. So from that standpoint, there is an ethical gray line that we're straddling here.
I look at B2B communications a little differently than creative writing, in the sense that a lot of the writing I do for a company is going to come with a subject matter expert interview and a lot of research, and that research pulls readily available information from the internet. So the way I see the ethical use of it is: I'm formulating my content around insights that I'm gleaning on my own, and maybe my AI tool can speed up that research process, and I can marry the two together for an ethical approach to creation.
I also think that if you're going to be using AI in any way, you should have an ethical AI use policy somewhere on your website that details how you're using it and where. We have one at Dragonfly, and it very plainly says: we will use AI to do these things; we will never use AI to do these things unless our clients ask us to. And it's a living document. In fact, our current iteration has a link to an old version of our policies, so people can see how it's changed within just one year. I'm no legal expert here, of course, but I think people should also look into the copyright issues that come with using too much AI in your content. If you want to copyright your magazine, your article, or your marketing collateral for any reason, you may want to be a little more hesitant to use these tools, because you might not be able to do that from a legal perspective. I'm not sure of the actual laws and rules surrounding that, but it's something people should look into.
And we can say now: whatever the necessary disclaimers are here, we're invoking them. Take nothing we say as legal advice; we are not lawyers, and we don't play lawyers on TV. As the old commercial says, "We didn't stay at a Holiday Inn Express last night," so we have no legal authority here; please don't act like we do. One of the things I'm hearing, though, as you describe the ethical use: I don't know why, but I hadn't really thought about the fact that ghostwriting and the use of AI could arguably be a little similar. When I think about it, and I think I understand ghostwriting well enough to try to draw this comparison, ghostwriting and AI can be used in a similar fashion if you, as the subject matter expert, are providing the content, the information, the ideation, the original thought to either the ghostwriter or the AI assistant. Then they are essentially organizing those thoughts, and I know you're doing way more than just organizing; it's the insights, too. But it's based in what's been shared, rather than, "Hey, Emily, can you create a brand-new article on leadership and throw my name on it?"
Right.
Which wouldn't be ghostwriting; that would just be original content creation. So it sounds like maybe it's a bit easier on the ethical side to have the AI assistant organize your thoughts: maybe have it interview you, or upload an audio file of you talking about something and say, "Hey, take my ideas and synthesize them into an article or a post." That's much more ethical than, "Hey, can you just write this based on whatever existing content is already out there?" which then runs the copyright risk. Am I summarizing that effectively?
Yeah, I think you're summarizing that well, and not just from a copyright perspective but from a repetition perspective. Because, as we talked about earlier, what you're getting from me, a human being, when I ghostwrite for you (and this is separate from the ethical side of things) is me talking to you, getting your insights, and writing them up in a very original way. Uploading that audio file and asking AI to synthesize it could be a great way to do it, but it's going to sound very formulaic, shaped by what AI understands as the correct approach to narrative. It's like one of my LinkedIn posts I shared recently: right now, it loves to make a declarative statement, put an em dash, and then answer that declarative statement with some revelatory idea. That is the formula for content right now, and the more people put that in their newsletters and out on the internet, the more AI is going to cling to that as "the correct way to write."
So, if we're putting ethics aside, that's a bit of where the human direction and the robot direction aren't the same. But from that ethical lens, yes: if you're doing what you described, and you're really focusing on good prompting with your tool, and these are your original insights, and you're not asking it to build on other people's insights (and you do have to be very clear with it; you have to say, "Do not do this"), then I could see it being a little more ethically sound. Because at the end of the day, how many articles exist on the internet under a byline that isn't my name but were written by me? There are a lot out there, and there are many me's in the world. So think about all the content online: a lot of it was not written by the people in the bylines. So yeah, AI kind of can replace that, but just in a very formulaic way right now.
I want to explore the formulaic side in a slightly different context. One of the things I've heard you talk a lot about is storytelling, and that's something I'm a huge fan of. As a speaker, I learned long ago to keep working at how to tell a compelling story, because in my view, everything is easier to learn if there's a compelling story to anchor it with. You've talked about how AI tends to be very formulaic, and what keeps coming to mind for me is that, in a way, storytelling is also formulaic; the hero's journey is probably the most common formula people use. You recently posted, I say recently, I think it was about a month ago, about authentic storytelling, I think through the lens of your son: how everything's a story, how he always wants to tell the story. It's not fact-based reporting; it's storytelling. So storytelling follows a bit of a formula with the hero's journey. There's a reason most movies are somewhat predictable, especially the blockbuster successes: they follow a similar formula. How do you bring authenticity into storytelling and creative works when you know there's a formula that works really well? How do you bring authenticity without breaking the thing that works?
Ooh, that's such a good question. My immediate reaction is: through lived experiences. In that post, I think I listed some of the most famous starts to a story, and they are very formulaic. You know, "In a galaxy far, far away," "Once upon a time." These stories do follow a formulaic model. But the way we tell them is imbued with and influenced by our lived experiences and the people we've been surrounded by along the way. I may be able to follow the hero's journey process to write a fantasy novel, because that's what I read most of the time; I love sci-fi and fantasy. I'm going to struggle a lot more to write, oh, I don't know, a romance novel, because I don't read those as often. It's not part of my lived experience, so I can't draw on those elements, those tools, that way of world-building as easily.
Similarly, even when I would interview people at Fraud Magazine, we followed a very formulaic structure for our narrative profiles: in the way we wrote them, in the way we interviewed, in the way we tried to have the story unfold. But I don't think anybody ever saw them as formulaic, because they were all driven by the lived experiences of the people we were interviewing. In one profile, about Daphne Caruana Galizia, I started with a very empathetic lead: who is the person behind this story? Whereas with Bastian Obermayer, at the start of that profile, I dropped people right into the moment he received the Panama Papers. Each one started from a different angle, a different way of storytelling. Even though the story unfolds in a formulaic way, it's being influenced by so many different factors that make it relatable, interesting, and true to the person at the heart of that hero's journey.
It sounds to me like that's the key to making sure that when you use AI to become more efficient, you don't lose the effectiveness of the communication. I'm a big believer that AI doesn't replace; if we can augment intelligence, the best of the tech plus the best of the people, that's the long-term play. That's also the differentiator for anyone in professional services who wants long-term success: how do you keep the people in it in a way that makes your clients want to work with your people? Because if clients only need the efficient output, they're not going to hire professional services. And when I think about the communications leaders produce, you can follow the formula, but what I'm hearing is the difference between formulaic on the AI side and formulaic on the people side: with people, the framework is hidden under all of the lived experience and becomes much more artistic, in a way. I don't know what the right word is; there's so much more there that you can't see the framework. Whereas with AI, it feels formulaic, and the framework is a bit more exposed. Is that fair?
Yes, and I have great examples of this in my day-to-day work. We have our ethical AI use policy; we tell people how we will use AI and how we won't. I've had clients say, "This is great, this is going to speed up your efficiency, which means this is going to be cheaper for me." And I say, "Tell me how you want me to use it, and sure." Some of them do a first draft with an AI tool and send it to me, and then I add the voice and the tone and the perspective, and I rewrite it. I was just telling my project manager the other day about one project in particular where the ask was a 6,000-word report entirely generated by an AI tool. And I said to her, "This would have been faster if I had just been able to spend 30 minutes on the phone with the people who pulled the data, been given a PowerPoint of the data, and then written it myself."
Because so much of this comes down to the prompting. If people are going to use AI, then, like we were talking about earlier with training people, whether they're new or higher up in the organization, prompting AI tools correctly is the only way to use them. Otherwise you're going to miss the mark. I'm seeing that with some of these pieces of content that come my way, where the prompt was obviously, "Here's the data, write me an article." And I'm thinking, just give me the data and let me write the article, because this is actually becoming more cumbersome to work through. Even then, I will still probably use AI to help me think of words I can't quite grasp, to rework sentences that aren't coming out well, or to suggest subheads. So yeah, it's very much hand in hand. It has to augment what we do, and we humans provide that storytelling value through prompting and through all of the iterations, whether that's my creative writing or your audience using the tools for their purposes.
Yeah. And it's key not to confuse the tool with the solution, which is a little of what it sounds like you're describing: somebody goes in and says, "Hey, here's the data, write the article," and they think the tool is now the actual solution. It's not the solution, and it's not the entirety of the process. It's a tool in the process, and you've got to understand the prompting side, right? You've got to understand how to use the tool most effectively. If you don't, then, like with any tool, software or not, you're going to end up less efficient and less effective in the long run. The idea is that it makes us better; the risk is that it hinders us and sets us back. Very good.
It makes me go back to those days with Fraud Magazine. I used to be a certified fraud examiner, but again, with all the disclaimers, I don't fight fraud. But I do think about how you probably don't have any certified fraud examiner grabbing a tool, saying, "Find me all the red flags of fraud in this system," getting all of those red flags back, and concluding, "Oh my gosh, we have so many red flags of fraud." You then have to work through the data and find context, and some of the flags may be false positives, because AI is going to try to give you what you've asked for, what you want. I see this very similarly with the generative AI and LLMs we have now.
Yeah, we think of it as an "easy button" instead of something that makes the work a little bit easier. And that's such a challenge. When I was a fraud examiner (I'm also not a certified fraud examiner anymore; it's been a while since I've done that work), you're right, people always wanted the "find fraud" button, and I always joked that if I could have created that, I could have retired at a very young age. But it's not a thing. Even the best technology was not going to do all of the things that the people had to do. We're still not there on the creative side, either.
So, Emily, I have really enjoyed this conversation. We've gone in a variety of directions, and I appreciate you letting me lead you down some of those different paths and questions. For people looking to improve, whether it's their writing or their use of AI in writing and the creative process, what resources, books, articles, TED Talks, would you recommend?
Yeah, I would definitely recommend Erin Servais and her AI for Editors and AI for Writers courses. They say "for editors" and "for writers," but she really gets into the fundamental structure of these LLMs, how they work, and how to prompt them correctly, with some really great examples. I've taken some of her classes. Right at the very start, when these large language models dropped, she saw a business opportunity and jumped on it; she said, "I'm going to learn everything I can about this and shift my career," and she's really done a great job with it.
It sounds like you might have marketers in the accounting space listening to this, too. I would always point to the Content Marketing Institute, which has really great classes and programs on all sorts of topics, and they will certainly be on top of AI. What else? The American Marketing Association has been coming out with some really fantastic AI courses. There's also the AI for Writers Summit, which happens, I want to say, twice a year. It's about a half-day conference, all virtual, with fantastic insights, and, just like the ethical AI use policy on our website, they continue to update the information. They get some really great technicians at that conference, too, who can show you tools and how they work. And I would be remiss if I didn't point people to the resources page of DragonflyEditorial.com, because we publish field guides, which are free resources. I've helped write a few of them, and one of our most recent field guides was on how to prompt AI. We're constantly putting out field guides, and we have a webinar coming up in August: Dragonfly's president and founder, Sam Enslen, is going to be talking about AI in 2025 for our industries.
Wonderful. We will link to all of those in the show notes. You mentioned AI for Writers a few times, and I'm sure some of the accountants listening are thinking, "I'm not a writer, I'm an accountant." So I would be remiss if I didn't mention one of my favorites, Everybody Writes by Ann Handley. It's a great book, and I love following Ann's content. Everybody is a writer, whether you think you are or not; we write constantly, and communication is a critical skill, so those resources would be incredibly valuable for our audience. Emily, thank you so much for making the time; I know you're quite busy. I appreciate you joining me on the podcast, and I look forward to maybe talking again soon.
Yes! Thanks again for having me. I enjoyed this and congrats on so many great podcast episodes to date.
Thank you so much.