Is Your AI Use an Asset or a Risk?

AI is already in your business. But is your use of it smart, strategic—and safe?

In this recent LinkedIn Live, I peeled back another layer of the AI conversation: the ethics.

Not the legal stuff (we’ve covered copyright and contracts elsewhere). This time, I focused on the less talked about—but equally critical—questions of bias, attribution, and something called agentic AI.

Here’s what every expert-driven business needs to know:


Attribution isn’t always required—but it still matters.

When we use AI to draft content, are we ethically obligated to disclose it? Many think it’s plagiarism not to.

But here’s the thing: AI isn’t a person. It’s not a creator. It’s a tool. So from a legal and ethical standpoint, using unmodified AI output without attribution may not be plagiarism, but that doesn’t mean there’s no responsibility.

That said, if you follow Erin’s #1 AI Usage Rule: “Never use raw AI output for deliverables or money-making IP,” then most of your attribution concerns go away.

You don’t need to attribute AI if you’re adding your own expertise, voice, and judgment to the final product.

If you're using AI ethically, you're using it as a starting point—not the finished product.


Bias is baked in—even if you don’t see it.

AI systems are trained on massive (and murky) data sets. That means bias is inevitable, and it can impact everything from search results to hiring decisions to client-facing copy.

Case in point: One tool produced a white male 95% of the time when asked to generate a stock image of a "CEO." That’s not a bug; that’s the bias baked into the training data.

Erin’s #2 AI Usage Rule: “Always question the source.”


Agentic AI is here (and it’s a bit terrifying).

“Agentic AI” refers to AI tools that take autonomous action—not just producing content, but making decisions.

Many makers of everyday productivity tools, such as Adobe, are adding agentic capabilities to their software. That means AI can now act without human input.

When we let tools make decisions without fully understanding how they’re trained or what they’re prioritizing, we’re injecting risk into our processes.

Erin’s #3 AI Usage Rule: “Without clear boundaries, AI becomes a liability.”


This isn’t about fear. It’s about awareness.

Ask yourself: Where is AI saving you time but costing you credibility, clarity, or control?


P.S. Missed the LinkedIn Live? Catch the full replay here. And feel free to forward this to a colleague or share it on LinkedIn. Let’s help more experts navigate the legal, ethical, and operational implications of AI use in expert businesses.


Productivity Without Paranoia - Using AI Legally and Ethically

Yes, you can move fast and not break things (like trust, contracts, or copyrights).

An optimistic guide for professionals who want to work smarter with AI—without tripping legal landmines or compromising client relationships.

Book Erin to teach your community of expert business owners or consultants.
