The Hidden Risks of Using AI in Your Organization—and How to Avoid Them
Dennis Kardys, Head of Design & Development
Before chasing efficiency, it’s worth asking: what might break if we’re not paying attention?
People often rush to adopt AI without fully understanding the risks, only to find themselves undoing harm they didn’t see coming. Cringeworthy content, security gaps, legal exposure, and the erosion of team judgment aren’t just hypothetical edge cases. Fortunately, with the right practices, these pitfalls are avoidable. And learning from them now is a lot cheaper than learning the hard way.
The Risk of Inaccurate Information
Generative AI tools like ChatGPT or Gemini often produce answers that seem trustworthy, but may be factually wrong or misleading. This can lead to flawed decisions, erosion of trust, or legal liability if that information is used in client-facing work or public communication.
To mitigate: trust but verify.
To mitigate this risk, it’s important to have an expert verify the accuracy of content rather than publish it blindly. Recently, while working on a competitive analysis of team structures, I compared the findings from my own research against what Gemini had discovered. The report Gemini produced was well structured and full of convincing-sounding content. Unfortunately, it was also severely inaccurate. When I checked its citations, it appeared to have conflated unrelated pieces of content and turned them into a misleading mish-mash of fake news. I still managed to glean some insights from the Gemini report, and its structure helped me think through how to organize what I ultimately produced. But had I trusted that the content it produced was accurate, I could have gotten us into a REAL PICKLE!
The Risk of Poor Quality Content
AI slop [1] is a term for the low-quality, mass-produced AI-generated content that is flooding the Web. It’s what happens when, in the quest to improve domain authority and rise in SERP rankings, people unleash robots to write for, well, other robots. The problem is that AI-generated content often mimics the tone and structure of well-written material, giving it the semblance of quality, while lacking a point of view, originality, depth, or relevance.
This can leave you with polished-sounding but shallow content that’s of little use to actual humans interested in the topic: keyword-laden copy devoid of thought or perspective, undermining the very thought leadership and domain authority you are hoping to build.
To mitigate: invest in expertise.
To mitigate the risk of low-quality content, you just need to be a decent Web citizen. People who publish AI slop are like people who don't pick up after their dogs when walking them down the street. AI slop litters the Web and makes it worse for everyone. Don’t be that person—don't publish slop!
Here's what this means: don’t expect people without knowledge or experience surrounding a topic to use AI to write articles on that topic. Invest in people with the appropriate expertise to write, co-create, and/or edit the content you want to publish. Conduct thorough research when writing about topics that are newer to you. Use AI to enrich content or as a content collaborator, not in place of humans with insight and perspective.
The Risk of Biased or Offensive Content
Biases encoded in training data can surface subtly, in imagery, language, or decision-making logic, leading to outputs that reinforce harmful stereotypes [2]. These incidents can expose organizations not only to lawsuits but also to reputational damage and loss of public trust. Sometimes the biases in content are immediately noticeable and get caught before publication. But consider the scenario where a team with limited access to a professional graphic designer or copywriter begins using AI to generate images and copy. The risk is that the words or images look and sound good on a superficial level, but nobody on the team is evaluating them for potentially reputation-damaging biases.
To mitigate: include screening for bias in an established content-review process.
In the best scenario, you have a diverse team of experts and an individual appointed to review content for bias as part of an established content-review process. But not every team can afford to hire copywriters and designers, and they still need to produce content that reinforces their brand and markets their products or services. Regardless of your team’s size, establish content workflows and governance policies to ensure that content producers have oversight and that the content you publish sends the right message.
The Risk of Copyright Infringement
Generative AI doesn’t produce truly original work. It creates new content by recombining patterns from the material it was trained on. While this can result in helpful or creative outputs, it also raises real copyright concerns, especially when the generated content closely resembles existing protected works.
Many AI models have been trained on unlicensed material, including copyrighted text, images, music, and code [3]. As a result, organizations may unintentionally publish content that imitates the structure, phrasing, or visual style of someone else’s work. This can lead to copyright infringement claims and reputational harm.
The risk goes both ways. Your company’s own thought leadership, illustrations, or proprietary writing might also be scraped and reused without permission, just as you may be doing to others without realizing.
To mitigate: check app settings and monitor use.
- Review app settings. Tools like Adobe Creative Cloud often default to allowing your work to be used for training [4]. Say what, now?! Opt out when possible.
- Audit outputs. Before publishing, check that AI-generated content doesn’t closely resemble known works, especially in visual or musical media.
- Monitor digital channels. Watch for AI-generated derivatives of your published content, particularly if it has performed well.
- Consult legal experts. If you suspect your content has been misused—or that your team has unintentionally infringed—speak to a copyright or trademark attorney.
Generative AI can speed up content creation. But it doesn’t remove your responsibility to ensure that what you publish is original, respectful of intellectual property, and legally sound.
The Risk of Poor Quality Vibe-Code
Generative AI tools can quickly produce working code that gets a feature running or a test passing, which has encouraged many to adopt “vibe coding.” But functional code isn’t necessarily quality code. Without proper oversight, AI-generated code can introduce serious issues: security vulnerabilities, performance bottlenecks, and long-term maintainability problems.
Even if the code passes initial QA, it may bog down the application, consume excess energy, and increase technical debt. Worse, developers may not recognize hidden flaws in vibe-coded solutions, leading to future refactoring costs or even breaches.
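To make “hidden flaws” concrete, here’s a simplified, hypothetical illustration: a lookup function of the kind an AI assistant might hand you that passes a quick happy-path test, next to the version an experienced reviewer would insist on. The parts table and function names are made up for the example.

```python
import sqlite3

# Vibe-coded version: works in a quick test, but interpolating user input
# straight into the SQL string leaves it open to SQL injection.
def find_part_unsafe(conn: sqlite3.Connection, part_number: str):
    query = f"SELECT * FROM parts WHERE part_number = '{part_number}'"
    return conn.execute(query).fetchall()

# What a reviewer would ask for: a parameterized query, so the database
# driver handles escaping and malicious input can't rewrite the SQL.
def find_part_safe(conn: sqlite3.Connection, part_number: str):
    return conn.execute(
        "SELECT * FROM parts WHERE part_number = ?", (part_number,)
    ).fetchall()
```

Both functions return the same rows for normal input, which is exactly why the flaw is easy to miss without human review.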
To mitigate: collaborate with AI, don’t delegate to it.
AI can be a powerful coding partner—quick, pattern-savvy, and helpful in unblocking problems. But it still requires human oversight. Think of it like working with a junior developer: capable of valuable contributions, but not something you rely on blindly or let ship code unsupervised.
Organizations should:
- Set clear expectations: Define when AI-generated code is acceptable (e.g., for prototyping or scaffolding) vs. when high-quality engineering is required.
- Require human review: Ensure AI-written code is validated by experienced developers before deployment.
- Audit tools for data security: Choose platforms that don’t leak proprietary code or expose sensitive logic to external models.
When developers treat AI like a teammate—someone to brainstorm with, refactor alongside, or get unstuck—it becomes a meaningful collaborator. They retain architectural control while benefiting from speed, pattern recognition, and alternative approaches. But the key is this: the value comes from working with it, not from offloading critical thinking to it. Treat AI as a thinking partner that can accelerate insight, not a shortcut to bypass it.
The Risk of Predictive AI Making Incorrect Predictions
Predictive AI tools, like recommendation engines and dynamic personalization, work by identifying patterns in past behavior to anticipate future actions. But these systems often operate on incomplete or outdated signals, and when they misread a user’s intent, the result can be a frustrating or misleading experience that decreases trust and conversion.
It’s important to remember that predictive models are not designed to be perfect. As Eric Siegel, author of Predictive Analytics and the AI Playbook, puts it:
“Predictive analytics doesn't tell you what will happen—it tells you what is likely to happen.”
The value of a predictive system isn't in its certainty—it's in making better decisions on average than we could with intuition or static rules alone. With that in mind, the question isn't just “What does the model predict?” but “What happens if the model is wrong?”
To mitigate: design for low-impact failures.
Since predictive models will inevitably be wrong some of the time, organizations should evaluate:
- How likely is the model to be wrong? (And can it be measured or improved?)
- Which user segments are affected by incorrect predictions?
- What is the consequence of showing the wrong thing?
Consider this example: A machine parts manufacturer uses predictive AI to recommend related components on their e-commerce site.
In one scenario, the system implies compatibility between two parts that are often purchased together—but they don’t actually work with the same machine. A user who trusts that suggestion may buy incompatible parts, leading to frustration, returns, and erosion of brand trust.
In another scenario, the site simply notes, “Customers also bought…”, making no promise about compatibility. The prediction is still surfaced, but without implying accuracy or endorsement. If the parts don’t work together, the user isn’t misled.
In both cases, the same prediction is made, but only one creates real risk.
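To see the difference in plain numbers, here’s a back-of-the-envelope sketch. Every figure in it (error rate, volume, cost of a bad recommendation) is invented for illustration; the point is the comparison, not the numbers themselves.

```python
# Hypothetical numbers: what does a wrong prediction cost under each framing?
p_wrong = 0.20                  # assumed: 1 in 5 recommendations is incompatible
recs_per_month = 10_000         # assumed volume

# Scenario 1: the UI implies compatibility, so a wrong prediction
# leads to a return, a support ticket, and lost trust.
cost_per_wrong_claim = 45.00    # assumed fully loaded cost per incident
implied_compat_cost = recs_per_month * p_wrong * cost_per_wrong_claim

# Scenario 2: the UI only says "Customers also bought...",
# so a wrong prediction is just an ignored suggestion.
cost_per_wrong_suggestion = 0.00
also_bought_cost = recs_per_month * p_wrong * cost_per_wrong_suggestion

print(f"Implied compatibility: ~${implied_compat_cost:,.0f}/month in expected losses")
print(f"'Customers also bought': ~${also_bought_cost:,.0f}/month")
```

Same model, same error rate; the framing alone determines how much a wrong prediction hurts.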
Predictive AI deals with probability, which will always involve uncertainty. Instead of trying to eliminate risk, design systems where being wrong doesn’t hurt much, and being right helps a lot.
The Security and Privacy Risks of Content Used in LLMs or Chat
When organizations build AI features or feed internal content like documents, code, or client data into LLMs (large language models), they often assume that information stays contained. But without clear access controls and usage policies, sensitive material can end up exposed to unauthorized users, or even to third-party model providers.
Everyday developer behavior carries risk, too. Engineers may paste proprietary code into public tools like ChatGPT or Copilot to get help debugging. In doing so, they may unknowingly expose trade secrets or confidential logic to external systems.
To mitigate: implement guidelines and governance.
- Use private or self-hosted models. Choose on-premise or private-cloud deployments where feasible to keep data internal.
- Disable data logging. If using third-party APIs, make sure inputs aren’t retained or used to further train the model.
- Educate teams. Train staff on the risks of submitting proprietary data to public AI systems. Promote safer internal alternatives. Establish clear guidelines and policies for teams.
- Monitor and audit usage. Track what tools are being used and what data is being submitted to identify risky behavior early.
AI systems don’t automatically understand what’s confidential. You need to define those boundaries, control access, and review how data flows through your stack. By proactively setting up governance around AI use and treating input/output data as sensitive, you can maintain better control over how internal knowledge is handled without putting your business or clients at risk.
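As one small, concrete example of what those guidelines can look like in practice, here’s a hypothetical pre-flight check that screens a prompt for obviously sensitive material before it goes to any external tool. The patterns are illustrative, not exhaustive, and the surrounding workflow (what counts as sensitive, where approved requests are routed) is yours to define.

```python
import re

# Illustrative patterns only; a real policy would be broader and tuned to your data.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "AWS-style access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the kinds of sensitive content found in a prompt, if any."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

findings = screen_prompt("Can you debug this? db_user = 'jane@example.com' ...")
if findings:
    print("Hold on: this prompt appears to contain", ", ".join(findings))
else:
    print("No obvious red flags; route it to an approved, logged endpoint.")
```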
The Risk of Unpredictable Token Usage and Cost
AI-powered tools like RAG (retrieval-augmented generation) can improve user experience by delivering intelligent, content-aware responses—especially in search or support workflows. But unlike traditional features, usage costs for these systems scale with interaction. The more users engage, the more tokens (units of AI processing) you consume, and the more you pay.
Without historical data or clear usage patterns, it’s easy to underestimate how often these features will be used or how expensive they’ll become. What starts as an experimental tool can quickly turn into a budgetary sinkhole.
To mitigate: test, cap, and build fallbacks.
- Pilot first: Release AI-powered features to a limited audience before full launch. Gather usage data and feedback to estimate demand.
- Measure average token consumption: Track how many tokens typical interactions consume. This helps you model cost projections for scale.
- Establish budget caps and rate limits: Set hard usage thresholds to prevent runaway costs. Ensure your infrastructure gracefully handles those limits.
- Design fallback experiences: If your AI feature hits its cap or needs to be paused, make sure users can still find what they need via standard search or navigation.
AI features are exciting, but they’re not free. By validating usefulness and modeling cost before scaling, you can ensure they remain valuable, sustainable, and under control.
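If it helps, here’s what “modeling cost before scaling” can look like at its simplest. Every number below (per-token pricing, traffic, the budget cap) is a placeholder; swap in your provider’s actual rates and your own pilot data.

```python
# Minimal cost model for an AI-powered feature. All figures are assumptions.
PRICE_PER_1K_INPUT_TOKENS = 0.0025    # placeholder; check your provider's pricing
PRICE_PER_1K_OUTPUT_TOKENS = 0.0100   # placeholder
MONTHLY_BUDGET_USD = 500.00           # the hard cap you've agreed to

def estimate_tokens(text: str) -> int:
    # Rough rule of thumb (~4 characters per token); a tokenizer library
    # such as tiktoken gives exact counts for a specific model.
    return max(1, len(text) // 4)

def projected_monthly_cost(avg_prompt: str, avg_answer: str, interactions: int) -> float:
    cost_per_interaction = (
        estimate_tokens(avg_prompt) / 1000 * PRICE_PER_1K_INPUT_TOKENS
        + estimate_tokens(avg_answer) / 1000 * PRICE_PER_1K_OUTPUT_TOKENS
    )
    return cost_per_interaction * interactions

cost = projected_monthly_cost(
    avg_prompt="user question plus retrieved context " * 60,  # stand-in for a RAG prompt
    avg_answer="generated answer text " * 30,
    interactions=8_000,                                        # from your pilot data
)
print(f"Projected spend: ${cost:,.2f} vs. cap ${MONTHLY_BUDGET_USD:,.2f}")
if cost > MONTHLY_BUDGET_USD:
    print("Over the cap: fall back to standard search until the feature is re-scoped.")
```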
The Risk of Over-Reliance on AI Replacing Human Judgment
One of the more subtle risks of AI adoption is the erosion of critical thinking [5]. When people rely too heavily on AI to generate ideas, structure work, or solve problems, they may lose the habit of asking better questions, making thoughtful trade-offs, or learning through iteration. Over time, this can reduce the depth and resilience of your team.
The messy early stage of making is often the most tempting part to delegate: writing your shitty first draft, sketching your first clunky iteration of a UI, or making a convoluted first pass at a product roadmap. Whether it’s design, code, content, or strategy, stumbling through a first attempt is where insight emerges. As Milton Glaser put it, “drawing is thinking.” That act of hands-on problem solving is what sharpens judgment, builds intuition, and helps people truly understand their work.
This isn’t just about creative pride; it’s about team capability. AI can accelerate output, but if we delegate too much of our thinking to it, we risk losing the very skill sets that make human–AI collaboration effective in the first place.
To mitigate: co-create and iterate.
Encourage teams to stay engaged during the early stages of work. Use AI as a partner in exploration, not just a tool for answers. Let it support ideation, challenge assumptions, and extend your thinking, rather than replace it.
Author, educator, and product expert Christina Wodtke describes this practice as “sketching with AI.” In her words, “Physical thinking still matters in the age of generated everything” [6]. That mindset not only strengthens your team, it helps you get better results from AI too.
In Summary: Think of AI as a Teammate, Not Just a Tool
Every example in this post highlights a risk, not of AI itself, but of using AI without clear strategy, structure, or oversight. Despite these risks, when it's used well, AI can amplify our creativity, deepen our critical thinking, and expand our skills. What’s common across all the mitigation techniques shared is that they involve people. Human expertise. Human judgment. Human collaboration. Ultimately, using AI well means focusing less on what a particular model can do for us, or who it can replace, and placing greater emphasis on how we work with it to get the best results.
Resources and further reading:
1. AI-generated ‘slop’ is slowly killing the internet, so why is nobody trying to stop it?, The Guardian, Arwa Mahdawi; see also AI Slop, Last Week Tonight with John Oliver
2. What is an AI Bias, IBM, James Holdsworth
3. Generative AI Has an Intellectual Property Problem, Harvard Business Review, Gil Appel, Juliana Neelbauer, and David A. Schweidel
4. How to Opt Out of Adobe’s AI: Protecting Your Art and Creativity, Ginny St. Lawrence
5. The Dark Side of AI: Tracking the Decline of Human Cognitive Skills, Forbes, Chris Westfall
6. Sketching with AI: Why Physical Thinking Still Matters in the Age of Generated Everything, Christina Wodtke