Author: Charles Jorgensen, CFP, JBL Wealth Management

Artificial intelligence tools like ChatGPT are improving at a remarkable pace. Many clients are already using them to ask questions about investing, retirement, tax, and portfolio construction. That trend will only accelerate.

AI is not the enemy. Used correctly, it can be genuinely helpful. Used blindly, it can be dangerous — particularly in financial planning, where small errors can have large long-term consequences.

As a financial advisor, I occasionally test these tools by asking questions from a client’s point of view. More than once, I’ve found answers that sound confident and logical, but contain material errors that only a trained professional would spot. A layperson would have no reason to question them.

That’s the real risk.

The Core Problem: Confident Answers That Can Be Wrong

Large Language Models (LLMs) don’t “know” things in the way a human expert does. They predict likely answers based on patterns in data. When they’re right, they’re impressive. When they’re wrong, they’re often convincingly wrong.

In financial planning, this matters because:

  • Incorrect assumptions (tax rates, withdrawal rules, product structures) can materially change outcomes
  • Context is often missing (jurisdiction, residency, age, income source, regulatory constraints)
  • Rules change, and AI may rely on outdated or overly general information
  • Edge cases (situations that fall outside the typical scenario, exceptions rather than the rule), often the ones that matter most, are frequently handled poorly

The danger isn’t that AI gives obviously bad advice. It’s that it gives plausible advice that feels complete, but isn’t.

An Often Overlooked Risk: Sharing Too Much Personal Information

Another issue that deserves attention is how much personal and financial information people are willing to share with AI tools.

To get more “accurate” answers, users often provide detailed data: income, assets, account balances, family circumstances, residency status, even future plans. While this may feel harmless, it introduces real risks:

  • Loss of privacy and control over sensitive financial information
  • Uncertainty about how data is stored, used, or retained
  • Potential misuse or exposure if information is later linked, breached, or misunderstood

Unlike a regulated financial advisor, an AI tool has no fiduciary duty, no confidentiality agreement, and no accountability to you as a client.

A sensible rule: never treat an AI platform as a secure vault for detailed personal financial information.

Where AI Is Most Risky in Financial Advice

Extra caution is warranted when using AI for:

  • Tax planning, especially cross-border or multi-jurisdictional situations
  • Retirement drawdown strategies
  • Product selection or replacement advice
  • Estate planning structures
  • Regulatory or compliance-related decisions
  • “Optimal” investment strategies that ignore risk tolerance and real-world constraints

These areas require judgment, accountability, and an understanding of how rules interact — something AI does not reliably provide.

Where AI Can Be Useful (Lower-Risk Use Cases)

Used appropriately, AI can be a valuable support tool. Lower-risk applications include:

  • Education and explanations

Plain-English explanations of concepts like diversification, inflation, compound interest, or market cycles.

  • Scenario thinking

Helping clients frame better questions to ask their advisor, rather than replacing the advisor.

  • Administrative assistance

Summarising documents, drafting questions, organising information, or preparing meeting notes.

  • Behavioural coaching support

Reinforcing good habits such as long-term thinking, discipline during market volatility, and avoiding emotional decisions.

  • General financial literacy

Understanding terminology, product categories, and how different parts of a financial plan fit together.

In short: AI is useful for learning and organising, not for deciding.

A Practical Rule of Thumb

If a financial decision:

  • Is irreversible,
  • Has tax consequences,
  • Affects retirement income, or
  • Could materially impact your long-term financial security,

then AI should be treated as a starting point — never the final authority.

The Role of a Professional Advisor in an AI World

AI doesn’t understand your full financial picture. It doesn’t carry legal responsibility. It doesn’t sit across the table from you when markets fall or when life changes.

A professional financial advisor:

  • Identifies risks you didn’t know to ask about
  • Tests assumptions
  • Applies judgment built on experience
  • Takes responsibility for advice given
  • Adapts strategies as rules, markets, and your life evolve

AI can support this process. It should not replace it.

Final Thought

Technology will continue to reshape financial advice — and that’s not a bad thing. But wisdom lies in knowing where tools end and responsibility begins.

Use AI to become better informed.

Use a professional to make better decisions.

That combination is where real value lies.

The information contained in this document is for information purposes only and should not be construed as financial, legal, tax, investment or other advice as defined and contemplated in the Financial Advisory and Intermediary Services Act, Act 37 of 2002. It does not constitute an offer to sell, or the solicitation of an offer to buy any product (the “Information”).