Understanding Opportunities And Risks That Come With Using AI Tools

As artificial intelligence becomes a growing part of conversations with clients, wealth advisors need to understand the opportunities and risks that come with tools like ChatGPT, Google AI Mode, Claude, and Perplexity. Whether clients turn to them out of curiosity, as a research aid, or as a substitute for professional advice, these platforms are already influencing how clients think about financial and estate planning. Advisors must be prepared to address client questions, highlight where AI can be useful, and clarify where its limitations pose risks.

Recent research underscores why this distinction matters. A study released on September 23 by EncorEstate Plans evaluated how well four leading AI platforms—ChatGPT, Claude, Perplexity, and Google AI Mode—handled 46 common estate planning questions. These were not abstract hypotheticals but practical questions that the firm’s planning team has fielded from clients over more than four decades. The results demonstrate just how inconsistent and unreliable AI can be in this high-stakes domain.

Estate planning is a space where accuracy, nuance, and context are non-negotiable. A small error in a trust, power of attorney, or beneficiary designation can have significant financial and legal consequences for a client and their family. Advisors know this from experience, but the temptation for clients to test AI tools is growing. As a result, understanding how these platforms perform is essential for protecting client interests and reinforcing the advisor’s role.

The study graded AI responses on a traditional A–F scale. Among the four platforms tested, Claude performed best, with 69% of its answers rated an A or B. Perplexity followed with 63%. By comparison, ChatGPT—a tool that clients are most likely to be familiar with—fared far worse, with nearly half its answers earning a D or F. Google AI Mode scored the worst of all, failing 61% of the time before abruptly stopping after question 19, leaving 28 questions unanswered entirely.

Taken at face value, these results are concerning. For a client who believes AI can provide estate planning guidance, the consequences of relying on incorrect or incomplete information could be profound. As EncorEstate CEO Matt Morris notes, his team has already seen clients bring in AI-generated documents that contain “nonsense POA powers” or clauses that would never hold up in practice. The problem isn’t just technical errors—it’s the false sense of confidence clients may feel when an AI response appears well-written or authoritative.

Even when AI delivers a complete answer, it typically misses the client-specific context that shapes estate planning. For example, two families might ask about the use of revocable trusts, but the correct advice will differ dramatically depending on family dynamics, state laws, tax considerations, and long-term goals. AI cannot replicate the judgment, experience, and personal knowledge that an advisor brings to these conversations.

For advisors, the findings offer both a warning and an opportunity. The warning is clear: AI tools should not be relied upon to generate legally binding estate plans. They are prone to errors, omissions, and a lack of nuance. But the opportunity is just as important. Advisors can use these developments to reinforce their value proposition by showing clients what AI misses and why professional oversight is indispensable.

One practical approach is to acknowledge where AI has utility while setting clear boundaries. For instance, platforms like Claude and Perplexity may serve as a reasonable starting point for basic education. They can help clients familiarize themselves with estate planning concepts or generate questions to bring into a planning meeting. This can enhance engagement and make conversations more productive. However, these tools cannot replace professional review, customized strategies, or the human guidance necessary to navigate sensitive family and financial issues.

By framing AI in this way, advisors can both validate a client’s curiosity and emphasize their role in protecting outcomes. A client might come into a meeting with AI-generated notes on the differences between wills and trusts. That’s an opportunity for the advisor to say, “This is a good overview, but here’s where the nuances really matter for your family.” This shifts the focus from information-gathering to professional interpretation, where advisors provide irreplaceable value.

The study also suggests that advisors need to be proactive. Clients may not always disclose when they’ve turned to AI for estate planning advice, particularly if they feel embarrassed about the results. Advisors should consider asking open-ended questions such as, “Have you looked at any online tools or resources on this?” Creating a judgment-free environment allows clients to share their experiences with AI and opens the door for advisors to provide clarity and correction where needed.

Looking ahead, AI will likely continue to evolve, and its role in wealth management will expand. But estate planning highlights the limits of relying on technology without human expertise. Advisors can position themselves as both knowledgeable about the technology and clear about its boundaries. Doing so not only protects clients but also strengthens the advisor-client relationship.

In practice, this means integrating AI awareness into the advisory process without allowing it to undermine professional judgment. Advisors can:

  • Educate clients on AI’s role: Explain that while AI can summarize concepts, it cannot capture individual goals, family dynamics, or state-specific laws.

  • Set clear expectations: Position AI as a tool for learning, not for executing estate planning strategies.

  • Highlight real-world risks: Share anonymized examples of AI-generated errors—such as flawed powers of attorney—to illustrate the dangers of bypassing professional review.

  • Use AI as a conversation starter: Treat client interest in AI-generated output as an opportunity to deepen discussions about goals and values.

  • Reinforce advisor value: Emphasize that estate planning is not just about documents—it’s about judgment, context, and guiding families through complex choices.

The EncorEstate study makes one point abundantly clear: technology cannot replace the human advisor in estate planning. Advisors should not shy away from these conversations. Instead, they should embrace the chance to demonstrate why their expertise matters more than ever in an era of automated answers.

In the end, AI’s rise is not a threat to advisory practices but a chance to reframe them. Clients are curious and increasingly engaged with new tools. Advisors who lean into that curiosity, validate it, and then provide the depth that AI cannot will stand out as trusted guides. As this study reminds us, estate planning requires precision, foresight, and an understanding of family complexity—qualities that no chatbot can replicate.

For wealth advisors and RIAs, the message is simple: AI may provide a starting point, but it will never be the final word in estate planning. The responsibility—and the opportunity—to get it right remains firmly in human hands.
