
Design · 13 min read

AI Design-Led Research: Reclaiming Agency in the Age of LLMs


Why designers must stop being consultants to Silicon Valley’s imperial ambitions and start leading the inquiry into AI’s human futures

We are living through the most profound technological transformation in human history, yet the people who will live with artificial intelligence systems have almost no say in how they’re built. While engineers optimize for efficiency and ethicists debate principles, AI systems are already reshaping work, relationships, and public life in ways that diminish rather than enhance human agency. An AI-driven social media algorithm can distort political discourse in unpredictable ways; a hiring algorithm can reinforce existing biases.

The question is whether design will finally step up to lead this transformation, or continue playing facilitator, making Silicon Valley’s imperial project appear more human-centered.

I argue for a methodological approach that positions designers as primary investigators who frame the questions, lead interdisciplinary teams, and generate knowledge about AI’s human impacts through the distinctive ways designers think, make, and know.

This approach, which I call AI Design-Led Research, represents an emergent but not yet formalized methodological framework.

It synthesizes two foundational ideas: the “research through design” (RtD) approach, which contends that design artifacts can serve as a conduit for research findings and “transform the world from its current state to a preferred state,” and “critical design,” which uses design as a tool to pose questions and speculate on “how things could be” to open debate about alternative futures.

This is not about making AI interfaces prettier. It is about fundamentally reorienting how we investigate and develop artificial intelligence systems by centering design’s unique epistemological contributions: synthesis thinking, embodied prototyping, and the capacity to generate alternative futures through speculative making. Design-led research brings crucial questions about data governance, algorithmic accountability, and distributed AI systems that engineers often aren’t equipped to investigate.

The Poverty of Current Approaches

Before we can understand what AI Design-Led Research offers, we need to acknowledge the limitations of existing approaches to AI development and research.

Engineering-Led AI Development typically optimizes technical performance metrics, which do not always align with broader human flourishing. When engineers ask, “How can we make this algorithm more accurate?” they’re addressing important technical questions, but missing equally crucial ones: “How might this system reshape human relationships, and is that reshaping desirable?” The field has made significant strides, for instance, in developing methods for explainable AI (XAI) to improve trust in AI models, and researchers have proposed guidelines for human-AI interaction validated by practitioners against popular AI-infused products. But while engineers and computer scientists actively work on problems like interpretability and usability, their training emphasizes technical solutions to them. The goal isn’t to replace technical expertise but to ensure it serves human flourishing by complementing it with design-led inquiry into the deeper social and cultural implications of these systems.

Human-Centered AI Research represents a significant improvement, bringing social scientists and psychologists into conversation with computer scientists. This work has produced valuable insights about trust, explainability, and bias. However, it tends to study AI’s human impacts from the outside, analyzing systems after they are built rather than intervening in their design.

Researchers document problems without prototyping alternatives. They analyze what is without imagining what could be.

This is precisely where a “research through design” approach fills a critical gap by providing a method for HCI research to move from analytical observation to creative synthesis and the creation of tangible prototypes that generate new knowledge.

AI Ethics has emerged as a necessary response to algorithmic harm, but it operates primarily through principles and policies rather than embedded practice. Ethicists can tell us that AI systems should be fair, transparent, and accountable, but they rarely prototype what fairness actually looks like in interaction design, or how transparency might manifest in everyday AI encounters. For instance, a design-led research project could prototype a dashboard that allows human recruiters to review and interrogate the algorithmic recommendations of an AI hiring system, thereby making the invisible assumptions of the system visible.

Participatory AI brings affected communities into AI development processes. But participation often means consultation rather than co-investigation: communities are asked to respond to AI systems rather than lead inquiries into what kinds of AI futures they actually want. The evolution of design research has moved from a “user-centered” approach, in which users are passive subjects, to “co-designing,” defined as “collective creativity.” AI Design-Led Research should be framed as the next step in this evolution, adopting the principles of Design Justice to empower marginalized communities as true co-investigators who lead the entire process.

Each of these approaches contributes something vital, but none positions design thinking as the primary methodology for investigating AI’s human implications.

This is a profound oversight, because design uniquely offers the capacity to think through making, to generate knowledge by prototyping possible futures, and to synthesize insights across technical, social, and ethical domains through embodied practice.

What Makes Design-Led Research Distinctive

AI Design-Led Research is not simply the application of design methods to AI research.

It’s a different way of knowing that emerges from design’s distinctive relationship to both making and meaning.

Design drives the inquiry. Design-led research starts with designerly questions: How might we prototype more democratic AI futures? What would AI systems look like if they were designed to enhance rather than replace human agency? How can we make visible the hidden assumptions embedded in algorithmic systems? These are questions that emerge from design’s orientation toward possibility rather than optimization.

Prototyping as a research instrument.

In design-led research, prototypes are research instruments that generate new knowledge about how AI systems behave in social contexts.

When we prototype an AI interaction that reveals hidden biases, or create a speculative AI service that imagines different data relationships, we’re conducting inquiry through making that addresses AI’s fundamental dependence on data. Our prototypes serve as a means to test how different approaches to data sovereignty, algorithmic transparency, and human-AI collaboration might work in practice. They help us understand not just what AI systems do, but how they reshape the social contexts in which they operate.

Synthesis as core methodology. Design-led research leverages design’s unique capacity for synthesis—the ability to hold technical constraints, human needs, ethical considerations, and speculative possibilities in productive tension. While other disciplines tend toward reductive analysis, design-led research embraces the complexity of AI’s sociotechnical entanglements. Synthesis does not replace disciplinary analysis; it complements it, positioning design as a connector of disciplines. For instance, while a legal scholar might analyze a policy document and an engineer might analyze lines of code, the designer synthesizes these disparate analyses into a physical or experiential prototype that reveals insights not visible through analysis alone.

Futures orientation. Unlike research that studies existing AI systems, design-led research is oriented toward what could be. This is a commitment to expanding the space of possibility through rigorous speculation and critical making. This approach, grounded in the concept of “critical design,” does not seek to predict the future but to pose “what if” questions that are intended to “open debate and discussion about the kind of future people want (and do not want)”.

What This Looks Like in Practice

AI Design-Led Research manifests through specific methodological innovations that distinguish it from both traditional design practice and conventional AI research.

Algorithmic empathy mapping goes beyond user journey mapping to document the emotional and cognitive labor that AI systems demand from humans. Instead of asking “How do users experience this AI interface?” we ask “How does this algorithmic system reshape human subjectivity, and what kinds of selves does it assume and produce?”

Participatory AI prototyping brings communities into the design of AI futures through co-creation workshops that use rapid prototyping, speculative scenarios, and role-playing to explore different possibilities for human-AI collaboration. Rather than consulting communities about predetermined AI systems, we facilitate processes where communities lead the imagining of AI futures that serve their own definitions of wellbeing.

Critical AI probes deploy design’s cultural probe methodology to reveal the hidden assumptions and unintended consequences of AI systems. These are often artifacts given to and left with people to elicit reflection about experiences in settings where direct observation is impossible. For example, a provocative AI probe could be a simple, non-functional AI device given to people in their homes that, when activated, asks them a pointed question about their data privacy or emotional labor, revealing hidden assumptions and power dynamics. These methods are particularly important for addressing AI’s tendency toward hallucination (generating plausible but false information). While engineers approach this as a technical problem to solve through better training data or retrieval systems, design-led research investigates why people trust AI-generated information and how we might design interactions that help humans develop appropriate skepticism. Our probes reveal how people actually verify AI outputs, what makes them suspicious of automated responses, and how different interface designs either encourage or discourage critical thinking about AI-generated content.

Speculative AI scenarios use design fiction and critical design methods to prototype AI futures that center human agency over technological capability. These scenarios serve as research instruments that generate insights into the types of AI development we should pursue and those we should avoid.

Power-sensitive design research adapts design research methods to account for the power dynamics inherent in AI development, including the global infrastructure that enables AI systems. We ask not just “How do people use AI?” but “Who benefits from particular AI configurations, who is harmed, and how might we design research processes that challenge rather than reproduce existing inequalities?” This means investigating the full “anatomy of AI”—from the mineral extraction in the Global South that enables semiconductor production, to the data labor that trains large language models, to the energy consumption of massive data centers.

Design-led research reveals the hidden costs and dependencies that sustain AI systems, which often extract value from communities that derive little benefit from the resulting technologies.

Our research methods must account for these power asymmetries. When we conduct participatory AI workshops, we’re not just involving affected communities in design decisions, but investigating how AI development could be restructured to serve those who currently bear its costs while receiving few of its benefits. This includes exploring models of data sovereignty, community ownership of AI infrastructure, and equitable benefit-sharing from AI development.

Addressing the Skeptics

Some argue that designers lack the technical expertise to lead AI research. We are not proposing that designers become engineers, but that they gain enough AI literacy to ask meaningful questions. For example, when considering supervised versus unsupervised learning, the issue is not algorithmic optimization but how training paradigms affect human agency, bias, and data sovereignty.
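The kind of AI literacy meant here can be made concrete in a few lines of code. The sketch below is illustrative only, with hypothetical data: a supervised classifier inherits whatever human judgment produced its labels, while an unsupervised clustering invents its own groupings, which is exactly why the designerly question differs for each paradigm.

```python
# Illustrative sketch only: the same hypothetical 1-D data handled two ways.

data = [0.1, 0.2, 0.15, 0.8, 0.9, 0.85]

# Supervised: the labels encode a human judgment (e.g. "qualified" vs. not),
# so the designerly question is who chose those labels, and on what grounds.
labels = [0, 0, 0, 1, 1, 1]
centroids = {c: sum(x for x, y in zip(data, labels) if y == c) / labels.count(c)
             for c in set(labels)}

def classify(x):
    """Assign x to the label whose human-labeled centroid is nearest."""
    return min(centroids, key=lambda c: abs(x - centroids[c]))

# Unsupervised: no labels at all. A tiny 2-means loop invents its own groups,
# so the question shifts to what structure the algorithm imposes on people.
a, b = min(data), max(data)
for _ in range(10):
    group_a = [x for x in data if abs(x - a) <= abs(x - b)]
    group_b = [x for x in data if abs(x - a) > abs(x - b)]
    a, b = sum(group_a) / len(group_a), sum(group_b) / len(group_b)
```

Both snippets produce similar-looking groupings, yet only the first can be interrogated by asking who wrote the labels; in the second, accountability shifts to whoever chose the algorithm and its notion of similarity.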

Design-led research is uniquely positioned to probe AI’s uncertainty and complexity, raising questions about governance, accountability, and social impact that technical approaches alone cannot answer. The most important issues in AI are not purely technical but social, cultural, and political—precisely the territory where design inquiry excels.

Others worry that design-led research is too speculative. But as Dunne & Raby have shown, speculation can be a rigorous and crucial practice: posing “what if” questions that expand debate about the futures people want. What we need is not less speculation, but speculation grounded in research rather than corporate fantasy.

The most serious critique is that design-led research might become another form of ethics washing—a way for technology companies to appear concerned about human impacts without fundamentally changing their practices. AI Design-Led Research must maintain its critical edge, must refuse to be co-opted into narrow product development processes, and must insist on generating knowledge that serves public rather than private interests.

The Institutional Challenge

Establishing AI Design-Led Research as a legitimate field requires institutional transformation as well as methodological innovation.

Design schools need to develop curricula that prepare designers to lead rather than merely participate in AI research. We need new PhD programs that combine design research methods with AI literacy. For instance, a new PhD program could be structured around the “Research Through Design” framework, where the final dissertation is not a written thesis but a series of designed artifacts and accompanying reflections that constitute a direct research contribution. We also need new funding mechanisms that support design-led investigation of AI futures, and new publishing venues that legitimate design-led knowledge production.

This is not just about adding AI courses to design curricula, but about fundamentally rethinking design education for an era when the most significant design challenges are not about objects or interfaces, but about algorithmic systems that reshape human experience in invisible yet profound ways.

We also need new forms of collaboration between design schools and computer science departments—not the typical pattern where designers are brought in to “humanize” technologies developed elsewhere, but genuine co-investigation where design thinking shapes the questions being asked and the methods being used to explore them. This call for genuine co-investigation connects to Design Justice principles, which provide an ethical and political mandate for these new institutional arrangements to be led by marginalized communities.

Reclaiming Agency

AI Design-Led Research stakes a claim about agency in the age of LLMs.

Against the narrative that AI development is inevitable and inexorable, design-led research insists that we can investigate, critique, and prototype alternative AI futures through rigorous creative practice.

I am not advocating for stopping AI development, which would be neither possible nor desirable. I want to create a design discipline that ensures AI development serves human flourishing over technical optimization or corporate profit. It’s about creating spaces for the people who will live with AI systems to lead the investigation of what kinds of AI futures we actually want.

The window for influencing AI’s trajectory is closing rapidly. Every day that passes, AI systems become more embedded in social infrastructure, more accepted as natural rather than designed. If design is going to contribute to more democratic AI futures, we need to move beyond consulting and start leading.

The question facing design education and practice is whether we’re willing to claim this leadership role—to position design as a form of public knowledge production oriented toward collective wellbeing. AI Design-Led Research is one way to answer that question affirmatively and insist that those who will live with AI systems should lead the inquiry into what those systems might become.

The algorithms are already among us. The question is whether we’ll investigate their implications through design thinking, or simply accept whatever futures the current configuration of power produces. I vote for investigation, for speculation, for the kind of rigorous creative practice that design-led research makes possible.

The time for polite consultation is over. The time for design-led investigation has begun.

September 24, 2025 · Written by Fas Lebbie