Welcome everyone to The Upstream Leader. My name is Jeremy Clopton. Excited to be with you here today to talk about one of my favorite topics, and that is technology, and what on earth does it mean for accounting leaders? What do we need to be thinking about? What do we not need to be thinking about? What’s the noise, and what’s actually going to be helpful to us? For that conversation today, I have with us Danielle Supkis-Cheek. She is the vice president and head of analytics and AI at Caseware. Danielle, great to have you on the show.
Thank you for having me, Jeremy.
Alright. So before we get into the tech topic, which we frequently talk about, I’m going to start things off with the same way that I start off every conversation here on the podcast, and that is, how did you become the leader that you are today?
I don’t think there’s just one thing. I wasn’t able to, like, find that one item. I think the piece that probably starts it all is that my mother is a very strong woman. We saw at a very early age her leading teams and what it meant to be a leader, how hard it can be to be a leader, and how much extra work it takes. You know, I think a lot of people believe that the leader is the one that doesn’t have to work as hard, or something like that, because they get to lead everybody and delegate everything else. And I very much believe in the concept of what’s called servant leadership, where the leader is actually making sure [to enable] all those around them and making sure that they can do their job to the best of their ability. So it’s actually quite a lot of effort and hard work to put in. So I don’t have a great response on root cause, but I think somewhere between having good role models while growing up and then hard work is probably why I’m here, but I’m sure others could dissect that even more so.
I love the fact that you went root cause. That is so fitting based on the conversations that we’ve had. We will get into the details and we get into the tech and everything. And it’s interesting, Danielle and I actually met at an analytics conference we were both speaking at a number of years ago. I was still in public accounting. You had your own firm at the time and, fast forward several years and we’re both in completely different spots. So now that you are really focusing on what tech means for the profession, there’s a lot of doomsday scenarios with technology, how it’s going to take all of our jobs and accountants will be useless. There are others that are maybe a touch too pollyannaish that are, oh no, technology is wonderful. And it’s just going to make everything better and more profitable, and we’ll all sing Kumbaya. My guess is we’re probably somewhere in the middle. So help us understand, as a leader in accounting, what should we even be thinking about or paying attention to right now?
I completely agree. We’re going to be somewhere in the middle, the kind of pragmatic view. I think anytime you have an extreme opinion on anything, it’s never going to work out too well for yourself. So to me, I think where we’re going to end up is in an enablement situation, where in some cases you’re going to have technology be able to create massive efficiency, but then it adds, particularly in some cases, a slightly new twist on the risk profile that’s going to change how you have to supervise the work that is being done.
The hard part, or what I’ve perceived to be the hard part, is going to be how to think about change, and almost this metacognition-about-change concept in a very, like, esoteric way: how do we practice changing? Because as a profession, we’re not used to having to change as much, because we are highly regulated and we’re used to changing regulations. We’re very good at dealing with changing regulations, but we’re not necessarily always great with changing our process around them. The process already assumes a changing-regulation concept, and this is potentially a changing process.
And so for me, I think where people are going to struggle, and where the technology lands, is trying to figure out: okay, let’s test out a few things, let’s see what actually creates efficiencies. If it adds a potentially new, slightly different risk profile, is that an appropriate additional risk, and how can we mitigate it? And it’s going to be a little bit more iterative. Most accountants have had processes that are fairly linear: you know, do X, then Y, then Z. And in some future state, we may be in a very iterative process. I’m in software, so I’ll liken it to agile: do a little mini test, do some work, realize, okay, the proof of concept works, let’s do a little bit more. Let’s then refine it. Let’s institutionalize it. And let’s keep improving and iterating.
So I think that’s going to be the biggest change, but each organization is going to be a little bit different in what technology creates the efficiency for them. So I don’t think there’s one silver bullet that’s going to change everything for everybody. I think each person is going to have something, depending on the nature of what they do, that creates massive efficiency, but then they have to think about how they respond to that efficiency and the potentially new risk profile it could create. And then how do they supervise that risk so they don’t have to undo, let’s call it, the risk profile from the change.
So it’s not that everybody needs to go out and adopt the exact same tech. What you’re suggesting is every firm, depending on where they’re at, what they do, and what their risk appetite is, some may still be going for, we’ll call it the traditional analytics, whereas others may go for generative AI that they’ve purchased, and others yet may build their own platforms that do the things for them. There’s not really a right or a wrong. It’s figuring out, okay, what’s best for us. Is that fair?
Yeah, I think so. I’ll caveat it with one thing. So yes, I think every firm is going to sit there and look and figure out what’s best for them. I think there’s going to be similarities. I think you’re going to have lessons learned from others, and from other industries, even. I don’t think anybody’s going to go build their own LLM and generative AI model. They may create and use one that they set up their own platform where they like, let’s say, rent somebody else’s LLM, but the cost to train an LLM from the ground up is astronomically high. I doubt a CPA firm would do that.
Sure.
Like even the Big Four would struggle to finance something like that.
And, like, ChatGPT would be an example of an LLM.
But building on those models inside other tools, oh, for sure, there are ways to build around them. And I know a lot of firms are playing with those kinds of aspects. So I completely agree with you. I think every firm or every organization is probably on a slightly different journey: where they are on that journey, what risk appetite they’re willing to have. But you’ll start to see similarities in what foundations people put in place. I think where you’re going to start seeing the differences is how people use those foundational tools at their disposal. I mean, the tech stack within accounting has been fairly similar across different organizations. It’s just that each item is used slightly differently, or with a different level of reliance on a particular technology, depending on what your use case is.
So do you think it’s going to be more driven off of, I know we talked about risk profile, is that going to be more service line driven? You know, tax compliance is going to be adopting something different than maybe an audit or even different than an advisory heavy firm. Do you think it’s going to be size based? What do you think will be the biggest driver behind what people need?
I think it’s actually a three-dimensional axis.
Okay.
I think it’s what you just talked about, the subspecialty within accounting. Then I think it’s going to be either the size of your firm or the size of your clients, depending on the nature of your—so some kind of scale concept. But then it’s also going to be industry: what industry are you primarily servicing? If you start to think about the structure of data behind the scenes, data can be structured very differently depending on what industry you’re in. So let’s go with a retail store: you’re having high-frequency, lower-dollar transactions. That data is inherently incredibly different than, let’s say, large-scale construction that’s building a skyscraper. They may have just as many transactions, but the relationships between the transactions and the structure of the data are going to be so fundamentally different that I think it’s going to change your problem statements, enough that you’re going to look at tools slightly differently.
It’s a slightly different problem in every industry that you deal with. And then when you start to think about how much data is coming into the general ledger products and ERP systems and GL systems, the data being piped in from those ancillary third-party systems is structured so differently, in what it can be and how it’s batched. Again, to go back to your original question, industry is going to have a shocking relevance on how you size up your organization and what technologies you choose, because your problems are going to be slightly different.
So, again, to recap: that was the service line that you’re in, tax, audit, whatever, maybe the nuanced case; the size of the organizations you deal with; and then the actual industry of those organizations. That, I think, is the three-dimensional plane you’re going to have to be thinking about when you’re picking technologies.
And that really speaks a lot to the complexity, right? The more complex and inference-based the service line is, you pair that with a super complex data set, and obviously you’re going to have to have a much more robust set of technology. But if you’re doing something that’s more traditional and compliance-based, where it’s rules, testing, yes or no, there’s not a lot of inference per se, and, like you said, a fairly straightforward data set. Though, I feel like in 2024, I don’t know that there’s a lot of straightforward data sets. Even the simple ones seem hard. But in that case, you may be able to get by with a simpler product, assuming the scale and the size of the client base is okay. So that’s very helpful.
So, you mentioned you don’t see firms of any size, whether it’s Big Four or otherwise, just going out and full scale building their own, you know, LLMs, ChatGPT equivalents from scratch, but rather leveraging something that already exists and then perhaps customizing it. Do you think that actually creates a bit more of a level playing field for firms of all sizes, because that’s always been one of the things that, you know, over the years I’ve heard is, oh, we’re not big enough for the tech. The fact that the tech is so accessible today, and it’s so—I don’t want to oversimplify it—but easily customizable compared to, say, a decade ago, does that level the playing field for firms of all sizes?
I think it actually does. It’s a very insightful comment that you made, and I think it started with cloud. You know, everybody always talks about the security benefits of cloud, and that’s all great, but you’re right. It has a democratization effect to it, because you get to pay relative to your consumption, usually, and you can scale up more efficiently. Back in, let’s call it, the day, when you had, like, servers in your offices, and I realize many of us still have servers in our offices, let’s go with the more traditional viewpoint of software and how it was built. Before we had centralized data processing or centralized data centers and things like that, in order to really build something at scale, you had to have an entire series of servers and all the ancillary equipment. By the way, I don’t know if everybody knows this, but you have to have special HVAC to cool that room, because it gets so hot from the equipment. You have all the wiring. Then if you’re in a hurricane area like me, you have to deal with your backup power, and then you have to deal with backing the data up. You know, the old tape backups, you can’t leave those in the car. They’ll melt. All those different things meant you had to buy so much hardware and put it all in up front. And then, only then, could you start to build whatever software you had in mind.
By going into this more shared but still safe environment, where you can scale up and scale down with just a request to your system, and even things like dynamic provisioning, you face very few front-end cost barriers. Yes, it can get expensive as you go further and further down your journey and build in more and more technology and more and more efficiency, but you don’t have that upfront initial investment where you’re having to, like, take pretty much a flyer on, is this going to work out? And put in that large sunk cost at the onset to hopefully have it work out. I completely agree, you’re going to see more democratization of analytics and access to generative AI capabilities, because now they’re turning into services, like software-as-a-service concepts. And now I think you’re going to start to see more IP being generated around how to get the best prompts out of systems, all that prompt engineering. I know people kind of, like, don’t quite fully understand what prompt engineering is right now, and think it’s just somebody that makes an incredible amount of money for typing a couple of words into a model, when it’s actually far more complex than that. I think that’s where you’re going to start to see firms differ: in what they’re able to do once they get access to systems and then put their IP on top of them. And that’s what’s going to create the efficiencies that are relevant to them.
And again, kind of back to our original concepts, it’s not going to be the, you know, overnight fix that solves every problem. But for a lot of firms, depending on the problems they’re working on, there’s going to be a significant benefit, and the investment will now be in thought leadership, and thinking about how to get the best outcome, rather than sometimes millions and millions of dollars of upfront hardware costs.
Is that going to require that thought leadership? Is it going to require firms to, okay, maybe we’re not spending millions of dollars in upfront costs, but are we going to be spending half a million dollars in IT salaries to hire those professionals? Or is it a bit of a service as well, where it’s more of a bring-your-industry-and-service-line-knowledge and then find, you know, a vendor to work with? What do you see that looking like? It seems like that would be another potential barrier for some firms. It’s like, yeah, it’s great, I don’t have to spend a million dollars to create a server room and a backup and all the things. But I also don’t have a clue what an LLM is beyond being able to spell it. So do I have to go hire programmers and IT people? Like, what do I need to do to actually take advantage of this democratization?
So I think those are really great questions. Are we transferring our hardware spend to professional spend? I don’t think it’s going to be nearly the same amount. I do think you’re going to have to rethink, and I know you’ve talked a lot about this, the concept of what you bill for and what your value is. A lot of the work that we’re going to be doing to create efficiencies is creating efficiencies at scale: you do it one time, get it organized, and then it reaps benefits and there’s an ROI. If your organization very much bills by the hour, where the hourly rate of the professional is the value of those professionals, then you are going to perceive a high administrative cost that you have to allocate in salaries, because you have to create some of this content.
And you may not have to have IT professionals. It may be that you’re taking one of your domain experts in a particular industry. Let’s say we want to build out something like a construction bot that teaches us some construction accounting issues under revenue recognition, because there is, you know, a ton of complexity there, and we want to make sure staff are ready for that. So we say, okay, construction expert within our firm, we’re going to now have you give some of your expertise into this particular application, whatever may be the case of needing to transfer that knowledge, so that that person’s knowledge can actually extend a little bit further. And getting access to that knowledge, at most firms, would now be considered, like, non-billable time.
But that can create massive efficiencies for all the other engagements. And so how do you think about allocating that, or value billing, or whatever may be the case from an organizational standpoint, for the things you’re finding valuable? So I completely understand the question. I agree that there is going to have to be some professional time associated with configuring a tool, and, if you get access to, like, prompt engineering screens or menus, being able to add more customization. I think you’re going to need some IT professionals. I don’t think most firms are going to need to spend $500,000 on, you know, a whole series of IT hires. But look at what firms are already spending on IT teams to handle just Office 365 administration, cybersecurity, the standard routine penetration testing, and their various, you know, compliance audits, even just for merchant services and PCI compliance. You’re seeing firms already carry a decent number of IT professionals. You’re also seeing those firms absorb those IT professionals into some of their service lines, if they’re looking at SOC engagements. And I think, even if you look at what they’ve done with the new CPA exam, the line is blurring somewhat between what is a pure accountant and what is a pure technologist, because of the interplay of how the data is going into organizations, and how you’re going to use that for whatever service you may be offering, whether it’s, you know, tax, doing some compliance or planning, or doing assurance over it, whatever may be the case.
So that’s a huge takeaway that I want to call out from that response: in order to really see the benefit of this and the investment in technology, we’ve got to adopt worth-based pricing. We just had Michelle Golden on the podcast talking about not value billing but worth-based: are you getting what you’re worth? And your efficiency and your worth are not related. So we’ve got to get away from the whole hourly billing model, so that we are actually willing to increase efficiencies, and therefore increase profitability, because we’re able to do the same work in less time. It’s still worth the same. So we’ve got to overcome that hurdle.
So there’s a few different directions I want to go. You mentioned revenue recognition. Before anybody freaks out, no, I’m not going deep into revenue recognition. That is not my area of interest in this conversation. So standards: are they a barrier to technology implementation? Do they create opportunity for technology implementation? Or is it, yes, both? Where are we at as it relates to the standards? I’ve always heard them talked about as a barrier. Where really are they?
I believe most standards are very technology neutral. There are a couple spots where a couple rules are like, well, that’s a little bit dated, and there are going to need to be some workarounds. But overall, it’s mostly the application guidance and the historic “we’ve always done it this way” that creates the perception that standards are adverse to technology or prohibit technology. Some of us are so used to the way it’s been done, and to our application guidance or interpretive guidance, firm-specific or, you know, third-party-provider specific, that when you go back and read some of the standards that you’ve known for what feels like your entire career, the ones that haven’t changed much, you’re like, man, that’s what it says? I mean, I see how what I’ve been doing fits within that. But if I take a brand new viewpoint, I can still navigate that.
And so I’ve been using the concept of: treat most technology like it’s a first-year staff, maybe second, maybe third-year staff or senior, doing the work. We already have a lot of things in place that would make the supervision of that make sense. I’m not saying that the technology would replace a third-year staff, but if you look at it as, okay, I know how to review a third-year staff’s work, and now I have a third-year competency level, potentially, with a slightly different risk profile, then I have to think through that. The what-could-go-wrongs. But as accountants, particularly those that are in audit, you’re used to thinking about that from all your internal controls or SOX. By the way, I always spell out S-O-X or S-O-C, because SOC and SOX are so similar that I worry I confuse people.
And so it’s something we’re already used to, thinking about what could go wrong. And the standards themselves, you know, I’m not aware of a standard that puts in, like, thou shalt not use this technology. I’ve actually done this, where I take a staff member that has never seen the actual standards themselves, or doesn’t spend time dealing with the standards the way some of the national office teams can. And when they take a read, they have a completely different viewpoint on what the standard says, because they don’t have as much bias from the history, the bias of, okay, this is the way I’m supposed to read this concept. I find that most standards really aren’t that prohibitive of technology. It’s our viewpoint that perceives an inability to navigate the standards. And actually, recently, the IAASB publicly said in their most recent board meeting that they are reconsidering their technology posture, as they’re now calling it: historically their posture has been technology neutral, and they are considering being pro-technology in their standards.
Okay.
That’s what’s just starting to open up as they open up the technology project. I’m not saying it’s going to be without having to think through the risk profile. So on the risk profile, let’s say somebody puts in RPA, Robotic Process Automation, to automate a routine, mundane task, something like: this invoice is from the major electric company in my area, therefore it should be coded as an electric bill. That’s something that’s easily automated. Many times there is behind-the-scenes AI or process automation that is helping you do that. Okay?
Now I’m probably pretty confident that when I have the electric bill, it gets coded to electricity. The problem is that I’m no longer worried about a miscoding from a human; I’m now worried about a mis-setup by the original human that set it up, because humans are going to always be valuable. So the area of what could go wrong will slightly shift on us, and we have to rethink those what-could-go-wrongs in a more meaningful way. That’s how we’re going to comply with standards. But the standards never outright said, like, you can’t—
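The electric-bill rule Danielle describes can be sketched as a simple lookup. The vendor names and GL account codes below are made up for illustration; the point is how the risk shifts: a wrong entry in the rule table would miscode every matching bill, consistently, rather than one human miscoding one bill.

```python
# Hypothetical RPA-style coding rule: "it's from the electric company,
# therefore code it to electricity." Vendors and accounts are invented.
CODING_RULES = {
    "Metro Electric Co": "6100 - Utilities: Electricity",
    "City Water Dept": "6110 - Utilities: Water",
}

def auto_code(invoice):
    """Code an invoice to a GL account by vendor; route unknowns to review."""
    # Falling back to a suspense account keeps a human in the loop
    # instead of silently guessing.
    return CODING_RULES.get(invoice["vendor"], "9999 - Suspense (manual review)")

bill = {"vendor": "Metro Electric Co", "amount": 412.17}
print(auto_code(bill))  # every Metro Electric bill is now coded the same way
```

If "Metro Electric Co" had been mapped to the wrong account when the rule was first set up, the error would repeat on every bill, which is exactly the supervision shift discussed here.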
Has to be a person.
Right, that it has to be a person doing your process. It’s just the automation risk and the like that changes.
So how do we get people comfortable with reviewing something from technology, rather than something from a human? Because that seems to kind of be the gap there, right? It’s like, oh, well, if a staff member did it, I feel okay reviewing it. But if a computer did it, I don’t know how it did it. And in my view, we don’t know how the staff did it either. If it’s wrong, it’s wrong, right? And then we go figure that out. So how do you help people, and maybe that’s not your role, but if it isn’t, what are your thoughts? How do we help people get comfortable reviewing something that was done through automation rather than by a staff member per se?
So to me, it’s very much based on reliance on output. How can we rely on the output? You’re exactly right. We’ve never drilled into staff brains to watch exactly what’s happening, or put them in an MRI machine and watched what their brain is doing while they work. That’s not something we do. We train the staff. We then supervise their work. We put them in conditions where we hope they will survive and thrive. And it’s the same thing if you think about technology: okay, we need to make sure we’re doing proper maintenance. We need to make sure we’re setting it up correctly. We need to not neglect our technology. And then we need to take responsibility for the output and make sure we can rely on that output.
So depending on the nature of your technology, that’s what’s going to drive the how. And there are different hows. Even when you have, like, a black box, which is perceived as, I have no understanding of how the process works, you can still test it. It may be more work than you’d like to do, and you may need somebody to help you, but it can be tested. So let’s go with an extreme example, just because it’s one that’s in the news: resume screening tools. And I realize some firms may be using these, some may not. The EEOC came out recently and said you have to take responsibility for what your AI does. So it’s probably an area where there’s even more regulation outside of the profession on a technology like that. Because the risk is taking all of our human biases, particularly our unconscious biases that may be embedded in data where we don’t even fully understand the bias, and exacerbating those across larger populations, automating the bias. That’s really the theoretical risk there.
So if we have an attribute that should have no bearing on the output, maybe we shouldn’t put it in in the first place, but let’s just pretend it’s there. And let’s say it’s race, because that’s probably the most politically charged one. If we take the inputs and flip around all the different races, for just one or two people or the whole population, whatever may be the case, and then run it through the system, it should have absolutely no bearing on the results. It’s called a flip test. If you see any difference in the results, and that’s the only thing you changed, something is wrong. I mean, that’s a pretty simplistic test. And if you do it enough, you can get confidence that, okay, race has no dependency in that model. Okay, good. Now, how often do we have to do this? How often does the model change? But you start to build up from there. Even a layperson that is not a technologist, not a programmer, can start to test models and test technology. Take even a simplistic case, let’s say a basic analytic. Take an analytic that you can actually recalculate yourselves fairly manually, maybe on a smaller data set, put it through the system, and see if it gets the same result you did: a reperformance test. You can also hire people to do more advanced testing, where they’ll do this gradient feature auditing kind of concept, and there are companies out there creating third-party assurance over algorithms and generative AI models. There are ways to do this cost-effectively, without you having to get a Ph.D. in data science yourself.
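The flip test Danielle describes can be sketched in a few lines. Everything here, the scoring function, the field names, the values, is hypothetical for illustration, not from any real screening product:

```python
def score_applicant(applicant):
    """Toy screening model: scores only experience and CPA status."""
    score = applicant["years_experience"] * 10
    if applicant["cpa"]:
        score += 25
    return score

def flip_test(model, applicant, attribute, values):
    """Flip one attribute across several values; the score should not move.

    Returns True if the model's output is identical for every value,
    i.e. the attribute has no bearing on the result.
    """
    results = set()
    for v in values:
        candidate = dict(applicant, **{attribute: v})  # copy with one field flipped
        results.add(model(candidate))
    return len(results) == 1

applicant = {"years_experience": 4, "cpa": True, "race": "A"}
# Flipping race across values leaves the score unchanged, so the test passes.
assert flip_test(score_applicant, applicant, "race", ["A", "B", "C"])
```

Run against a real black-box model, the same loop works with the model call swapped in; repeating it over enough records is what builds the confidence she describes, and rerunning a small data set you computed by hand is the reperformance version of the same idea.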
But it goes back to, you have to think through what the risk profile is. We’re going from the risk of a one-off mistake to the risk of automating a mistake, so that we’re consistently wrong. You know, from your forensics days, you know that consistently wrong is one of the hardest things to catch, because it’s not anomalous. It’s consistently wrong!
Right! It’s always wrong.
And so it’s thinking through those concepts, and it goes back to my earlier statements on thinking about how to change, and that very meta, esoteric concept. And none of the stuff we’re talking about has required a technology background. It’s all just pure basic risk, what could go wrong, and thinking through just the broad strokes of the technology.
So I’m going to ask the question that I’m sure somebody is thinking as they’re listening to this. It’s like, yeah, I love the idea of: test something, see if it works, implement it, test it again, see if it works, implement it a little bit more, and that approach to change, like you talked about, which is more common, right, on the programming side of the world. We’re really busy, Danielle. How are we going to do that? Somebody’s thinking it!
Yeah. Oh, that’s so true. So I get it. I’m also very busy myself. I think this goes back to, you make time for what’s important at the end of the day. And I know I’ve heard you talk about working in your business versus working on your business, and to me, if you’re in the weeds on really mundane and routine tasks, there’s gotta be an approach, and a piece of technology, that you can implement in a minor way to start freeing up a little bit of capacity. I’m not saying change everything within your firm or your organization overnight. There’s actually something called MAYA, “most advanced, yet acceptable.” It’s a design principle that pretty much says people resist wide-scale, large change, but if you start incrementally changing things, there’s not an aversion to it, because the change is: okay, I understand what’s happening, it’s being communicated well, I can figure out what I need to do. Okay.
And yes, you can run the risk of change fatigue with all the little iterative changes. But if you’re creating improvements that drive capacity and efficiency, and you’re really focused on how you spend your time, I get we’re all busy. I am incredibly busy as well. And I’m outside of public now. I actually have a year round busy season. I don’t lament my days of busy season, but also busy season shouldn’t be the badge of courage that it seems to still be in my generation. That’s not where we want to be.
So I would say find areas where you can create small efficiencies. If you don’t have any good ideas, I would recommend asking those closest to the front lines, and some of the youngest staff that you may have. They’re usually full of good ideas, because they haven’t been biased by our institutional, historic ways of doing things. And my guess is you can start to find small, incremental pieces that start to free up time to make time for what’s really important: putting together a sustainable technology program for your organization.
That’s really helpful, Danielle. I appreciate it. And two comments on generations there. Your generation having hours as the badge of honor: spoken like a true millennial, which all the Baby Boomers and Gen X will say, wait a second, millennials aren’t allowed to say that. They don’t believe it. But we do. There are plenty of us out there that still have that. And you’re right, the next generation doesn’t have all of the way-we’ve-always-done-it baggage that we do. And it’s so important to get those perspectives, especially when it comes to change. I go back to a project that I had early in my career. I was working with a company, and they did an innovation challenge in internal audit. They asked everybody, if you could redesign the process, how would you do it? And the person that had only been out of school two months was the only one that didn’t approach it through the lens of, well, here’s what we’re doing and how to change it. They didn’t even know how we did it yet. They were that new to it. So they came in unbiased and said, well, this would be the best way. Management looked at it and they’re like, oh, wow, yeah, it actually really would be. It was that fresh perspective that was so important.
And when that person’s that new, they may not even have a full schedule yet. There’s your time right there.
Exactly. Now they’ve got time to do something with it. That’s right. That’s right.
So as we’re working to conclude, I’m going to ask a question I’ve never asked anybody before, but I want to get your input on it. What are you excited about right now in this space? What has you like, super jazzed?
I want to go super nerdy, but I’m going to try to not go too nerdy. So there are two things that are super exciting to me, and they’re going to seem so nerdy. One is called chain-of-thought reasoning. It’s in generative AI, and what it does is it helps you understand how a generative AI model thought through a process to get you an answer, as well as improve the answer, because the model takes little baby steps through the logical stages of reasoning. So if you ever think, well, that was a surprisingly insightful answer from a large language model that had to pull together five different things, it’s probably using some kind of chain-of-thought reasoning behind the scenes to help improve that outcome, as well as provide the transparency of how it works. So that’s really exciting to me: getting improved outcomes out of generative AI models.
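As an illustrative aside (not part of the conversation): the intuition behind chain-of-thought is that a prompt asking for intermediate steps gives the model smaller, easier sub-problems, and exposes the reasoning for review. The prompt strings and the toy solver below are assumptions for illustration only, not a real model call:

```python
# Direct prompting asks only for the final answer; a chain-of-thought
# prompt instead invites the model to show its intermediate steps.
direct_prompt = "Q: A firm bills 3 clients 40 hours each at $150/hr. Total fees? A:"
cot_prompt = (
    "Q: A firm bills 3 clients 40 hours each at $150/hr. Total fees?\n"
    "A: Let's think step by step."
)

def solve_step_by_step(clients: int, hours: int, rate: int) -> tuple[list[str], int]:
    """Mimic the intermediate steps a chain-of-thought answer would expose."""
    steps = []
    total_hours = clients * hours
    steps.append(f"Step 1: {clients} clients x {hours} hours = {total_hours} hours.")
    total_fees = total_hours * rate
    steps.append(f"Step 2: {total_hours} hours x ${rate}/hr = ${total_fees}.")
    return steps, total_fees

steps, answer = solve_step_by_step(3, 40, 150)
print("\n".join(steps))
print(f"Answer: ${answer}")  # Answer: $18000
```

Each step is individually checkable, which is the “transparency” benefit Danielle describes: a reviewer can audit where the reasoning went wrong, not just the final number.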
Yeah, that makes total sense because it gives that input, that insight into the why, which so many people are like, I don’t get it, it’s scary. But if that’s out there, like you said, the transparency, it’s like, well, no, here’s why. And you can understand it. I love it. Keep going.
There’s a lot of research and a lot of cool things, and if I say anything more, that’s beyond the scope of extra nerdy. So I’ll stop there. The other one is something called small language models. I know this again sounds boring and basic, but it matters for being able to deploy generative AI to more and more use cases in a cost-effective and efficient way. It’s the concept that once you’ve trained a really large model, you can distill it down and have this smaller language model that still performs pretty darn well, without as much behind-the-scenes infrastructure cost, and with better speed. So once we’re at a stage of good enough, we can provision out enough of the content so that we have a really good interface and ability to converse with a model, but without all that behind-the-scenes cost and heaviness.
And so I think that’s going to be really exciting for the future of deploying the benefits of generative AI in use cases that may otherwise be cost prohibitive. I think it’s also going to create a whole new risk profile: it may become so efficient to create generative AI in so many places that understanding where you have AI, and keeping an inventory of it, gets harder, especially as some of us start looking at where our organization lands under the EU AI Act. That could be a little bit scary for people. But I think you’re going to start to see more and more transparency as the regulations mature. So even though the technology is really exciting, I think the risk will be mitigated by all the different regulations and the need for transparency.
Yeah. So the small language models, I’ve not heard of that, and admittedly, I don’t keep up with tech in as much detail as I used to. But it sounds like it’s saying, we don’t need access to all of ChatGPT. We’re only using this little subset of it, so let’s pull that out and use that. Sure, we don’t get all the other benefits, all the other stuff, but really we weren’t using it. So it’s allowing organizations to pull the components they’re using the most and apply them to a straightforward, simple application. Is that fair?
Yeah. Like, you know how in calculus you derive something from something else, and you have this other thing that’s a smaller component unit of the main one? That’s the kind of concept: you derive it from the main model, and then you can run it more efficiently.
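As an illustrative aside (not part of the conversation, and not any specific vendor’s method): the usual technique for deriving a small model from a large one is knowledge distillation, where the large “teacher” model’s output scores are softened with a temperature and a small “student” model is trained to match that softer distribution. A minimal sketch of the math:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw model scores to probabilities; higher temperature = softer."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened targets.
    Minimized when the student's distribution matches the teacher's."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))

teacher = [4.0, 1.0, 0.5]  # a confident large model's scores for 3 options
hard = softmax(teacher, temperature=1.0)
soft = softmax(teacher, temperature=4.0)
print(max(hard), max(soft))  # softening spreads probability mass across options
```

The softened targets carry more information than a single hard label (how the teacher ranks the wrong answers, not just the right one), which is part of why a much smaller student can still perform “pretty darn well,” as Danielle puts it.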
That’s awesome, Danielle. I appreciate that. As we wind up, any resources, books, podcasts, or information that you’d recommend for people trying to figure out tech in the accounting space, or tech as a leader of an organization?
I’d say if you haven’t seen AICPA’s Generative AI toolkit, that’s probably one of the first places you should stop. It has a really good section toward the end that goes through a lot of the theoretical risks and the safeguards against them, and helps you think through the new risk profiles you could be adopting. It’s one of the best tools I’ve seen on that.
I also think keeping up with the news, especially the accounting news in your regular and favorite accounting publications, is becoming more and more important for seeing what’s happening. That’s how we’ve handled a lot of change management from the regulation standpoint; when the tax laws change, a lot of us stay up to date that way. With technology moving so fast, mainstream news is important, but accounting news will help synthesize and filter it down, and I think it becomes a really good resource for staying on top of what’s new and seeing what other risks are emerging.
I’d also generally look at what’s happening in the legal profession. The legal profession has historically been slightly ahead of accounting. They also have a slightly different risk profile, but there’s a fair amount of Venn diagram overlap. So I’d say watch what’s happening in some of the news there and see how it applies to accounting, if somebody wants to go a little bit extra.
Okay, very good. And if somebody wants to reach out to you and connect, what’s the best place?
LinkedIn is probably the best way to catch me. Danielle Supkis-Cheek is the name. S-U-P-K-I-S is the hard part.
Very good. Danielle, thank you so much for joining me on The Upstream Leader. I have thoroughly enjoyed our conversation today.
Thank you for having me. It’s always a great time.