TIOZ Howest

The Human-Centered Evolution of AI

Artificial intelligence has made remarkable progress, with models achieving high accuracy in tasks like image recognition and language processing. Advances in generative AI, such as the launch of ChatGPT, have opened up new opportunities for a wide audience. Today, nearly every company and organization is exploring how AI can enhance their workflows. However, the rapid increase in demand for powerful AI systems has presented major challenges for developers. The prevailing approach has been to scale up computational power and expand data centers—a brute-force strategy that is ultimately unsustainable.

Instead of focusing solely on technological challenges, we argue that AI's evolution must be human-centered. Humans are at the core of AI in three fundamental ways: they develop it, they use it, and human intelligence provides the blueprint for building more efficient, adaptable, and explainable systems.

Quick facts

  • Humans are at the core of AI: as developers, users, and as a blueprint for intelligent systems.

  • Transparency is essential for trust, usability, and ethical deployment.

  • While AI should not simply mimic human intelligence, it can be inspired by its underlying principles.

  • The future of AI lies in embracing its connection to human intelligence.

The Need for Explainability and Trust

AI should not only be powerful but also understandable. As users, humans rely on AI for critical decision-making in domains like healthcare, finance, and autonomous systems. However, many AI models operate as black boxes—accurate but difficult to understand. Transparency is essential for trust, usability, and ethical deployment.

Several techniques enhance AI explainability for users:

  • Dimensionality Reduction: AI models process information in high-dimensional spaces with an enormous number of parameters. However, humans are generally limited to interpreting visualizations in 3D or 4D (if considering time), making it challenging to grasp structures in data with more than four dimensions. Dimensionality reduction helps bridge this gap by projecting complex data into a more comprehensible form while preserving its essential structure.
  • Confidence Scores: AI models can indicate how certain they are about a prediction. For example, when classifying an image as a cat, a model might assign a 95% confidence score, signaling high certainty. Lower confidence scores can alert users to potential errors or uncertain predictions.
  • Confusion Metrics: These metrics assess a model's performance by showing how frequently it misclassifies one category as another. In self-driving cars, for example, it is less concerning if a model confuses a red traffic light with a stop sign than if it mistakes a red light for a green one. While both cases might have the same accuracy level, the latter poses a much greater risk in daily use.
  • SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-Agnostic Explanations): These methods help identify which features had the strongest influence on a model’s decision. They also enable counterfactual explanations, which clarify what would have needed to change for a different outcome. For example, in a healthcare setting, a diagnostic AI might explain its decision by stating: “If the patient's cholesterol level were lower, the risk assessment would have changed.” Such insights make AI decision-making more actionable for medical professionals and patients.
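Two of these ideas, confidence scores and confusion metrics, can be sketched in a few lines with scikit-learn. The snippet below is a minimal illustration on a made-up two-class dataset (all data and parameters are hypothetical, not from any system described above):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Toy two-class dataset: two Gaussian clusters (purely illustrative)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

model = LogisticRegression().fit(X, y)

# Confidence score: the predicted class probabilities for one sample
probs = model.predict_proba(X[:1])[0]
print(f"Confidence in class {probs.argmax()}: {probs.max():.2f}")

# Confusion matrix: rows are true labels, columns are predictions,
# revealing *which* categories the model mixes up, not just how often
preds = model.predict(X)
print(confusion_matrix(y, preds))
```

The confusion matrix is what lets a practitioner weigh errors differently, as in the traffic-light example: two models with identical accuracy can have very different off-diagonal entries, and hence very different real-world risk.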

Humans as the Blueprint: Cognitive Science in AI Design

In our quest to develop intelligent systems, an important question arises: what is intelligence? While AI should not simply mimic human intelligence, it can be inspired by its underlying principles. Concepts from cognitive science—such as attentional mechanisms, modular learning, and hierarchical processing—have directly shaped modern AI architectures.

We know, for instance, that humans often decompose complex tasks into smaller components. When learning to play the piano, for example, we don’t start by mastering an entire composition. Instead, we first learn to read musical notes, coordinate hand movements, and understand rhythm. Later, these skills can be repurposed for other musical instruments, for composition, or even for understanding mathematical patterns in music.

This ability to recombine knowledge across different domains is a hallmark of human cognition and was central to DeepSeek’s recent breakthrough, which builds on the Mixture of Experts (MoE) architecture. These models distribute tasks among multiple specialized networks (experts); a gating network then routes each input to the most relevant expert(s) for efficient processing.
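The routing idea can be sketched in NumPy. This is a deliberately tiny toy, not DeepSeek’s implementation: the experts here are plain linear maps, all names and sizes are made up, and real MoE layers add learned routing, load balancing, and training machinery.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class TinyMoE:
    """Toy Mixture-of-Experts layer: each 'expert' is a linear map,
    and a gating network decides which experts handle each input."""
    def __init__(self, dim_in, dim_out, n_experts, seed=0):
        rng = np.random.default_rng(seed)
        self.experts = rng.normal(0, 0.1, (n_experts, dim_in, dim_out))
        self.gate = rng.normal(0, 0.1, (dim_in, n_experts))

    def __call__(self, x, top_k=2):
        # The gating network scores every expert for this input...
        scores = softmax(x @ self.gate)
        # ...but only the top-k experts actually run (sparse routing),
        # which is what makes MoE models computationally efficient
        top = np.argsort(scores)[-top_k:]
        weights = scores[top] / scores[top].sum()
        outs = np.stack([x @ self.experts[i] for i in top])
        return (weights[:, None] * outs).sum(axis=0)

moe = TinyMoE(dim_in=8, dim_out=4, n_experts=4)
out = moe(np.ones(8))
print(out.shape)  # (4,)
```

The key design point mirrors the piano analogy: each expert can specialize in one sub-skill, and the gate composes only the relevant ones for a given input instead of activating the whole network.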

The Future of AI

AI research must move beyond maximizing accuracy alone. The next generation of AI must be efficient, interpretable, and deeply aligned with human needs.

This requires:

  1. Human-Inspired Design: Leveraging cognitive principles to enhance AI efficiency and adaptability.
  2. Transparent and Fair Systems: Ensuring AI decisions are explainable, trustworthy, and free from harmful biases.
  3. User-Centric Development: Building AI that serves and empowers its human users.

By recognizing that humans are not only AI’s developers and users but also its foundational blueprint, we can build intelligent systems that are both powerful and comprehensible. The responsibility falls on AI researchers, policymakers, and organizations to ensure that AI aligns with human values. The future of AI lies in embracing its deep connection to human intelligence—creating systems that enhance, rather than obscure, our understanding of the world.

Authors

  • Pieter Verbeke, AI Researcher

Want to know more about our team?

Visit the team page