Hi, and welcome to Try AI for Growth, a podcast out of Make Space for Growth.
Here, I share short—and sometimes surprising—stories of how I’ve used AI to tackle real challenges at work and at home.
I’m your host, Sara Vicente Barreto, and today I want to tell you about how AI helped me automate something we had been doing manually every single month… and how it completely failed the first time. And, for a change, I wanted to follow the hype and use Claude!
The Problem
So here’s the situation. Every month, after all donor receipts are registered in our database (we use Salesforce at the charity), they need to be handed over to the accountant for receipt issuance and for booking into the correct accounting fields. All of the project classifications had been done manually before, so this is literally a process of mapping our internal classifications to accounting codes (the POC chart of accounts in Portugal). It’s one of those tasks that is:
- Repetitive
- Rule-based
- Detail-heavy
It is “easy” but absolutely unnecessary to do manually. That means it never sits at the front of the queue for automation, because it kind of works and it is not too painful: you apply a few filters in Excel, follow a few steps and checklists of what to look for, and you get through it in 30 minutes.
In fact, just this morning, my teammate who is responsible for it told me:
“Don’t worry about it, I am used to it and I can get through it fast!”
That just made me want to automate it even more.
Establishing the Rules
I knew a few basics of what I had to get right. I needed to define the rules (noting that we have 3 different types of payments) and I needed to have a clean correspondence between our internal classifications and the accounting ones. Meaning – clean data.
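To make the idea of a “clean correspondence” concrete, here is a minimal sketch in Python of what the mapping rules boil down to. All the payment types, project names, and POC codes below are made up for illustration; they are not the charity’s real data.

```python
# Illustrative lookup table: (payment type, project) -> POC accounting code.
# Every name and code here is invented for the example.
POC_MAP = {
    ("Donation", "Project A"): "72.1.1",
    ("Donation", "Project B"): "72.1.2",
    ("Membership fee", None): "72.2.0",
}

def classify(payment_type, project):
    """Return the POC code for a payment, or flag it for review."""
    # Try the most specific match first, then a payment-type default.
    for key in ((payment_type, project), (payment_type, None)):
        if key in POC_MAP:
            return POC_MAP[key]
    return "NO MATCH"  # surfaces rows the rules do not cover
```

The point of writing the rules this explicitly is that anything not covered comes back as “NO MATCH” instead of being silently misclassified.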
I knew I wanted to use Claude. After all, everyone is talking about Claude Code, and I had barely experimented with it. But more importantly, I wanted to start with the best possible basis. I didn’t want Claude to just give me the code, I wanted it to have a full understanding of the problem, my rules, the exceptions and what great would look like.
I took my time writing my prompt, clearly stating all the rules embedded in the file. And then I did something different. I asked Claude to:
- Analyse the structure of the spreadsheet
- Knowing my rules, identify anything that would make them hard to apply
- With the spreadsheet from January already mapped, identify where my rules were not applicable or had not been followed
- Recommend improvements to the file
Before writing a single line of code, it audited my entire file and found multiple issues:
- inconsistencies in mapping
- missing accounts
- text mismatches
- structural problems that would break automation
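To give a feel for what that audit looks like in practice, here is a small Python sketch of the kind of checks involved. The column name and the mismatch heuristics are my own illustrative assumptions, not what Claude actually ran.

```python
# Sketch of a pre-automation audit: verify that every classification
# used in the transactions sheet exists in the mapping table, and
# flag near-misses (case and whitespace are common culprits).
def audit(transactions, mapping):
    issues = []
    known = set(mapping)
    lowered = {k.lower() for k in known}
    for i, row in enumerate(transactions, start=2):  # row 1 = header
        label = row["classification"]
        if label in known:
            continue
        if label.strip() in known or label.strip().lower() in lowered:
            issues.append(f"Row {i}: text mismatch for '{label}'")
        else:
            issues.append(f"Row {i}: no account mapped for '{label}'")
    return issues
```

Running something like this before writing any macro is what surfaced the missing accounts and text mismatches.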
And this part is important. To be fair, it did something very impressive, because the real work wasn’t writing the macro: it was fixing the system so the macro could work.
As a bonus, it immediately helped me identify small errors in prior months. I was winning already.
The Macro Step
Once Claude helped me improve my file (and checked it 3 times), we were ready to develop the VBA code to work on the automation.
Claude saved the code into a file ready to import into Excel and, alongside it, explained how to import it, how to make it work, and what the code was doing.
I was feeling pumped, so I tried to do it immediately. I was so sure it was going to work that I did not even read carefully through all the steps.
And then I hit “RUN”.
The Failure
You probably guessed it already. It did not work. Every single line came back as: “No match.”
Zero classifications.
Completely failed.
I sent the results back to Claude, and I must admit, we went in circles for a bit. Claude claimed I had changed the file, and I just said: “No, I did not”. Three times I uploaded the file, insisting I had done nothing to it, and three times it came back saying “Yes, you did”.
That is when it hit me. In my excitement, I had not registered that the first run had thrown an “overflow” error message. As soon as I mentioned this, we were onto something.
Debugging With AI
I had to go back to the original file and run the macro again, so that I could get that same error message. With it, Claude knew exactly what was wrong. Indeed, my columns had shifted, but it was a result of the macro; it had nothing to do with me! So really, we were both right.
Then we started debugging. Part of the problem was that the numbers had too many digits; as soon as Claude changed the code to treat those values as text, the error did not happen again. A small detail—but everything depended on it.
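Why does number vs text matter so much? In VBA, an “Overflow” error is raised when a value exceeds the range of its numeric type, and Excel itself only keeps about 15 significant digits for numbers, so long identifiers get silently corrupted unless they are stored as text. A tiny Python sketch (with a made-up reference number) shows the same effect:

```python
# A long reference number survives as text, but loses its last
# digits the moment it passes through floating-point arithmetic,
# which is effectively what a numeric spreadsheet cell does.
ref = "1234567890123456789"      # made-up 19-digit donor reference
as_number = int(float(ref))      # round-trip through a float
assert str(as_number) != ref     # the tail digits were lost
assert ref.strip() == ref        # as text, it matches exactly
```

This is why the fix was to keep identifiers as text throughout the macro rather than letting them be coerced into numeric types.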
I was excited, but more doubtful this second time round. You have to understand, my teammate who does this monthly is VERY sceptical of changes. I had to get this to 100% accuracy. There was no other way.
And then it worked!
What was my result the second time round, you may ask?
- 363 lines processed
- 363 correctly classified
- zero errors
Everything worked. Now we’re talking!
With this, I added a few more verification steps. I put the manual results and the macro results side by side and looked for inconsistencies. I applied it to January and February, as both these months were done manually. And I asked Claude to run the checks. Guess what?
There were inconsistencies in 2 circumstances:
- In a few exception cases, which Claude already expected because I had flagged them in my rules
- In places where we had made human errors manually
Yes, I found mistakes in our prior submission and immediately sent them in for correction.
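The side-by-side check above can be sketched in a few lines. This is an illustrative Python version, not the actual check Claude ran, and the codes are invented:

```python
# Verification step: compare the manual classifications against the
# macro's output and list every row where they disagree.
def compare(manual, macro):
    """Return (row, manual_code, macro_code) for every mismatch."""
    return [
        (i, m, a)
        for i, (m, a) in enumerate(zip(manual, macro), start=2)  # row 1 = header
        if m != a
    ]
```

An empty result means the macro reproduced the manual work exactly; anything else is either a known exception or, as it turned out, a human error worth correcting.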
So this didn’t just automate the task; it improved the quality of the process.
Lessons – What This Actually Teaches About AI
Now, there are a few lessons here that go beyond Excel or accounting.
1. AI doesn’t just automate—it structures and audits
Before solving the problem, it helped me determine exactly what the steps needed to be, and to find issues I didn’t know existed.
2. The first result is rarely the final result
The first version completely failed. The value came from iteration. Because I had made my initial rules very clear and had training data with completed outcomes, Claude could identify the problem very quickly.
3. Small technical details matter
Something as simple as number vs text formatting can break everything. Don’t be discouraged; use AI to help you debug.
4. AI works best as a collaborator, not a one-shot tool
This only worked because I stayed in the loop—testing, questioning, refining. I knew exactly what my rules had to be, what small problem I was trying to solve and what could go wrong.
Game-Changer
This experience promises to be a game-changer for some of our operational processes at the charity. It is not uncommon for SMEs to run manual, spreadsheet-based processes that just take a lot of time. Moreover, these processes rely on knowledge that is repeated month after month and can be captured.
Because I sought to build it together with Claude, rather than outsourcing the coding to an expert, I had to be clear about my goals and steps, and I maintained full transparency in the process.
Whilst the automation itself will be very valuable, there was also huge value in cleaning up the data and writing down clear rules. You get more clarity on how your systems work, and you learn to recognise when you need exceptions.
If you have a process you repeat every month, this might be your sign to try automating it.
Not perfectly. Just start. Because you might not just save time—you might uncover things you didn’t even realise were broken.
As soon as I was done with this one (and I showed off to my teammate), I immediately jumped into a more complex automation. I am ready to test the limits of the system. Are you inspired to try something new? Reach out if you are not sure!
Thanks for joining me on this episode of Try AI for Growth. If you try something similar, I’d genuinely love to hear how it goes. If you can, share this episode with someone who may be struggling with operational processes. The more you share, the more we will all learn!
Until next time—keep experimenting and keep having fun.
Photo by RDNE Stock on Pexels
