
Why Joyc Behaves the Way She Does

Explaining how Joyc’s AI works, why she responds the way she does and how you can get the best results.

Written by Joyc from Voyc
Updated this week

Joyc is still in her early beta days. Think of her as the enthusiastic intern who’s learning fast, but occasionally gets ahead of herself. She’s powered by generative AI, the same kind of technology behind big names like ChatGPT, which means she’s smart, useful, and sometimes a little too confident.

What You Can Expect from Joyc

  • A helpful assistant: Joyc is here to accelerate your analysis, not replace it.

  • Continuous learning: As we connect more tools and refine her training, Joyc will become sharper and more capable.

  • Industry-standard behaviour: Joyc is powered by the same type of generative AI that drives tools like ChatGPT.

However, even the most sophisticated AI systems in the world sometimes get things wrong. Joyc isn’t an exception; this is simply how AI works today.

Why AI (or More Specifically, LLMs) Can Slip Up

“AI” is a broad term that covers many different kinds of artificial intelligence, from image recognition to self-driving cars. Joyc, however, is powered by a specific type called a Large Language Model (LLM).

LLMs are trained to understand and generate language. They don’t actually know facts the way humans do, and they don’t understand meaning in the way we usually mean it. Instead, they work by predicting the most likely next words based on patterns in their training data. This makes them brilliant for generating ideas, insights and summaries, but it also means they can sometimes produce results that sound confident yet aren’t entirely accurate.
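For the curious, here’s a deliberately tiny Python sketch of that core idea. It is not Joyc’s actual model (real LLMs use neural networks with billions of parameters, and the corpus below is invented for illustration), but both boil down to the same trick: predict a likely next word from patterns in text.

```python
# A toy "next word" predictor, illustrating the pattern-based
# prediction that underlies LLMs (vastly simplified).
from collections import Counter, defaultdict

# Invented stand-in for training data.
corpus = ("the call was resolved . the call was escalated . "
          "the call was resolved").split()

# Learn which word tends to follow each word.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

# Given "was", the most frequent follower wins.
word = "was"
prediction = follows[word].most_common(1)[0][0]
print(f"After '{word}', the likeliest next word is '{prediction}'")
# -> 'resolved' (it followed "was" twice; 'escalated' only once)
```

Notice the toy model never “knows” what a call is; it only knows that “resolved” followed “was” more often than anything else.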

For simplicity, we’ll keep referring to Joyc’s technology as “AI” throughout this article, but when we talk about her behaviours such as predicting, varying or filling in gaps, we’re specifically referring to how LLMs work.

Here are some of the key things that can happen because of how this technology functions:

  • It can fill in gaps: “Hallucination” is the technical term for when an AI produces incorrect, misleading or nonsensical information and presents it as factual. Hallucinations happen because AI models don’t actually reason like humans do; they work from patterns. If training data is incomplete or biased, or if the model feels “pressured” to provide a confident-sounding answer rather than admit uncertainty, it can generate something that isn’t true.

  • It relies on connected tools: AI systems often use specialised tools to fetch data beyond what they were originally trained on. These tools make answers more accurate and reliable, while also helping to reduce hallucinations. However, not every feature is always connected. If a tool isn’t available or doesn’t cover the full functionality, the AI will still try to respond, but it’s basing the answer on patterns rather than data. That means it might sound right but not give the full picture.

  • It depends on clarity: The way you phrase a question matters. Clear, specific prompts usually lead to the most accurate results, while broad or ambiguous ones can lead to unexpected answers.

  • It’s designed to vary: Generative AI is probabilistic, which means it doesn’t always give the same response twice. That variation is intentional, making interactions feel more natural and flexible; the short sketch after this list shows the idea.
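Here’s a minimal Python sketch of that variation (the word probabilities are made up for illustration). Instead of always picking the single most likely word, generative AI samples from a probability distribution, so the same question can come out worded differently each time.

```python
# Sampling from a probability distribution instead of always taking
# the top choice: this is why the same prompt can yield different wording.
import random

# Hypothetical probabilities for the word after "The call was".
next_word_probs = {"resolved": 0.6, "escalated": 0.3, "dropped": 0.1}
words = list(next_word_probs)
weights = list(next_word_probs.values())

# "Ask" the same thing three times; the answers can differ.
for attempt in range(1, 4):
    choice = random.choices(words, weights=weights)[0]
    print(f"Attempt {attempt}: The call was {choice}")
```

Run it a few times and the three attempts will often differ, which is the same behaviour you see when Joyc rewords an answer.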

How This Affects Joyc (and What You Can Do To Help Her)

She might guess

What this means: Joyc is trained to let you know if she doesn’t have the answer. But if she can’t access the exact data, she may still generate a plausible response based on patterns from her training.

What you can do: Double-check important details in the channel via reports, dashboards and transcripts. Make this a common practice when working with any AI tool; the sketch below shows the fallback in miniature.
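To make that fallback concrete, here’s a hypothetical Python sketch (the function names are invented for illustration; this is not Voyc’s real code). When a data tool responds, the answer is grounded in real numbers; when it doesn’t, the assistant can only offer a pattern-based guess, and that’s the kind of answer worth double-checking.

```python
# Hypothetical illustration of the "connected tools" fallback.

def fetch_alert_stats():
    """Stand-in for a data tool. Returns real data, or None if unavailable."""
    return None  # simulate the tool being disconnected

def answer(question: str) -> str:
    data = fetch_alert_stats()
    if data is not None:
        return f"Based on live data: {data}"  # grounded in real numbers
    # No tool available: fall back to a plausible, pattern-based reply.
    return "Based on typical patterns, roughly half were resolved."

print(answer("How many alerts were resolved last week?"))
```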


She depends on clear questions

What this means: Vague or ambiguous prompts can lead to broad or incorrect answers.

What you can do: Ask specific questions, like “How many alerts were resolved vs unresolved in the last 2 weeks?”


She won’t always say it the same way

What this means: Joyc may phrase answers differently even when the question is the same.

What you can do: Sometimes she’ll answer only your question; other times she may go deeper. Focus on the content you need rather than the exact wording.


She’s learning fast

What this means: Joyc is in beta and gets better as we connect more tools and improve her training.

What you can do: Keep using her and sharing feedback; every interaction helps her grow stronger.


The Bigger Picture

Joyc’s behaviours aren’t flaws; they’re features of how modern AI works today. The magic of generative AI is in its ability to adapt, surprise and provide insights in ways traditional software or human analysts can’t. We’re making sure Joyc grows alongside you, improving with every interaction and every new capability we connect.

The Gist

Joyc is a powerful assistant in her early stages, here to help you get faster insights from your calls. Sometimes her responses may be unexpected, but that’s simply the nature of generative AI. As she evolves, she’ll become more precise, more connected and even more valuable to your workflow.
