
Combining AI Tools: How I Used Perplexity to Strengthen My ChatGPT Briefing

Apple Podcasts ¦ Amazon Music ¦ Spotify

Hi, and welcome back to Try AI for Growth, a podcast where I share short, sometimes surprising stories about how I use AI to tackle everyday challenges — at work, in organisations, and in daily life.

I’m Sara Vicente Barreto, and today I want to talk about something slightly different — not just using AI, but using multiple AI tools together, on purpose.

One of the things people often ask me about is the difference between AI models. And as much as I have dipped my toes into different models, I do have a bias towards ChatGPT. I can converse with it, and it has a solid memory of the work I do across areas. In fact, and I quote it:

One reason ChatGPT feels different in your work is continuity. Most assistants don’t maintain that level of contextual continuity yet. That’s what makes ChatGPT useful for portfolio careers, founders, advisors, creators, operators, which is basically your profile.

Talk about self-serving answers!

And I have to admit — it does fit the multiple “personalities” of my work.

Spot the difference

I have been wanting to dig a bit deeper into the differences between AI models and how I can and should use them. The reality is, they are good at different things. Some are better at thinking and writing. Others are better at searching and verifying. Others are increasingly embedded in documents and productivity tools.

For example:

  • Tools like ChatGPT or Claude are strong thinking partners — great for structuring ideas, writing, brainstorming, and planning. Claude, in particular, is often praised for working through large documents, structured data, spreadsheets, and even code explanations, where careful reasoning really matters.
  • Tools like Perplexity are closer to AI-powered research engines — focused on retrieving information, showing sources, and validating claims.
  • And tools like Gemini or Copilot are increasingly integrated into productivity environments like Google Workspace or Microsoft Office, helping directly inside documents, presentations, and spreadsheets.

The differences are subtle, but they matter, especially when you start combining tools intentionally. So instead of debating which tool is “best”, I decided to test how they might work together.

And that’s exactly what I experimented with this week.

Starting with ChatGPT

I was preparing a briefing on a company ahead of a meeting. My first step was what I often do: I worked with ChatGPT to structure the analysis.

We covered company overview, management, team, products, clients, competitors, market analysis, business model, projects, strengths and weaknesses. ChatGPT helped me think through the structure of the briefing and generate a first version.

That part felt natural — AI as a thinking partner. But then I wanted to test something different.

As is often the case with ChatGPT, the output felt slightly long, but more importantly, I was unsure about the sources behind it.

Now, I wanted verification.

Getting started with Perplexity

My first conversation with Perplexity was actually about what it could do differently. As it described its strengths versus other LLMs, and where others were more helpful, I knew I was onto something.

So I uploaded my briefing document (all content generated by ChatGPT) and asked it to do two things: first, verify the information included, and second, provide corrections or additions.

The result was very clear: Perplexity behaves less like a conversational assistant and more like a research engine with AI summaries. It’s not so much a chat.

Reviewing the Research

Perplexity then gave me a section-by-section assessment of each statement — confirming the sources it could find, flagging where I needed to be cautious, and suggesting ways to validate further. It was very clear where it had found sources and where it had not, which made me realise how many things had sounded plausible just a minute before but had no real validation after all.

This verification of claims is here to stay for me.

Improving the research

After this assessment, Perplexity laid out a clear, structured distinction between:

  • What I should keep
  • What I should tighten or where statements need to be qualified
  • What I should drop or soften

As a final touch, it suggested what other blocks my briefing could benefit from. There were no dramatic changes, but the small additions were useful.

With this super-powered analyst verification, I felt much better about the document I was using. It was still only for my internal purposes and learning, but it just felt stronger.

A first use case developed

This experiment reinforced something simple. Different AI tools can play distinct roles within the same task.

ChatGPT helped me structure the briefing and adapt it to my needs, as it understands my line of work and how I usually structure things. But Perplexity helped me move beyond the form and first level of content and verify claims, find sources, challenge assumptions and add factual grounding.

One tool helped me think and structure, another verify and enhance.

By the way, one thing Perplexity has that can really add to research is a dedicated news and finance section, which can be helpful for preparing meetings, tracking sectors and scanning for the latest on a company. Not something I have tested deeply yet, but something to explore in your own experiments.

Lessons Learned

As ever, this experiment gave me a few lessons to take forward:

  1. AI research and AI writing are not the same thing.
  2. With all the dangers of hallucinations, having an AI tool for verification can be super powerful.
  3. These tools are meant to be combined. And the combination works better than relying on just one.

No doubt, I had an initial resistance to working across different tools. It was a prompt from a friend the day before that alerted me to my blind spot, and an emerging use case that led me to apply my curiosity. It was a minor adjustment to the workflow, but one where I could clearly see the results.

That’s it for today’s experiment. If you’re curious, try this yourself.

If you’re preparing for a client meeting or researching a new market, Perplexity is a strong place to start — and so far I have only used the free version. You can start in your usual AI tool of choice, then hand the output to Perplexity for verification and challenge.

You might be surprised by what holds up… and what doesn’t. For me, it changed how I think about “done”.

Let me know how your experiments go, and subscribe to keep getting more ideas. Thanks for listening to Try AI for Growth.

Until next time — keep experimenting and keep having fun.

