AI Foresight Pre-Conference Session
On September 24th, 2025, we held our second reading group session, organized in conjunction with the AI Foresight Conference. We wanted to host a reading group prior to the conference so that participants could jump right into the deep end while our invited speakers were on campus. We believe this approach enabled thoughtful engagement with the speakers on what they have been reading, thinking, writing, and musing upon, and we would recommend it to other conference organizers.
Here are the readings suggested by speakers on the “Ethics of Care in the Age of AI” panel:
- Dibyadyuti Roy: Lindgren (2023), “Introducing Critical Studies of AI”
- Vasundhra Dahiya: Khan (2020), “Meet AI” & Khan (2021), “All About That Bias”
- Felicia Liu: Liu et al. (2025), “Decarbonising Digital Infrastructure”
- Jennifer Ang: Schabacher (2021), “Time and Technology: The Temporality of Care”
You can find PDF copies here: https://bit.ly/AIForesight2025.

Ethics of Care in the Age of AI Panel
A bit more about the panel in question: it was organized in response to growing public awareness of algorithmic biases, extractive data-collection and labor practices, and the environmental costs of AI. Equally important is that engineers and designers (including aspiring and practicing ones at SUTD) are increasingly expected to be cognizant of the sociopolitical implications of their work and to practice accountability. The roundtable brought together experts whose critical AI scholarship shifts the focal point of engineering and design from technical specifications to improving relations—between people, as well as with other species and the planet—from vantage points informed by lived experiences in Asia. Here are summaries of each speaker’s “sharing,” à la Singlish, for those who were unable to join us:
AI Need Not Stratify Inequity in India
Dibya began by situating the current moment within a longer history of AI hype cycles, referencing the so-called AI winter of the 1970s and the “mini AI Ice Age” of the 1990s. He noted that today’s resurgence of interest in AI is driven largely by global competition for technological supremacy between the United States and China, and cautioned against being swayed by technopositivist marketing narratives. Drawing from his cultural studies research, he highlighted how AI systems are enabled, refined, and sustained by data-annotation labor in South and Southeast Asia. Noting how such work is often carried out by poorly compensated workers such as Muslim women in Kolkata, where he is from, he problematized how their essential contributions remain systematically obscured.
To address this uneven distribution of the socioeconomic benefits of emerging technologies, he urged conference participants to rethink empowerment beyond the superficial improvement of AI-enabled products and services under the banner of “ethical design.” Instead, he advocated for a rights-based approach that cross-examines AI’s ideological foundations and challenges the geopolitical conditions that continue to subject the Global South to resource extraction. Otherwise, Dibya concluded, AI as it currently stands merely extends the colonial dynamics of the previous century.

Vasundhra Dahiya sharing her community-based research methodologies.
Vasundhra discussed her work as a Dalit feminist scholar in the AI ethics space and the reading group she co-founded with Dibya, Lavanya, and Tushar—“Critical Lens on AI in/from the Majority World(s)” (CLAIM)—as an interpretive community attentive to the majority world’s sociocultural contexts. Drawing from her research on AI therapy chatbots, she underscored how situated thinking shapes algorithmic visions of emotional care and mental well-being. Vasundhra noted how discussions of caste are often ignored, even in responsible AI discourse, resulting in the omission of crucial design considerations for historically marginalized communities. As a consequence, she observed, AI applications and policies end up reproducing and amplifying caste biases both socially and algorithmically.
Grounding her scholarship and advocacy work in the tradition of B. R. Ambedkar—jurist, economist, social reformer, and author of Annihilation of Caste (1936)—Vasundhra emphasized the need to localize AI ethics discourse in the lived realities of her community. For her, AI systems need to be not only technologically sound but also socially just.
Care Principles, Kampung Spirits, and Accessibility in Singapore

Jennifer Ang offered four care principles we could adopt.
Jennifer presented her research on the “Southeast Asia Neighborhoods (SEANNET) Collective,” where she examines the effects of gentrification and technologization in Singapore communities, including Bukit Merah and Punggol. Speaking from her expertise in philosophy, she introduced four care principles—abandonment, repurposing, repair, and maintenance—as conceptual tools for reimagining our relationship with AI. Together, noted Jennifer, these principles offer a critical lens for assessing Smart Nation initiatives. She observed that technological “repairs” often miss the mark when they assume that technology can care for people. Instead, she emphasized the importance of care practices that go beyond efficiency and convenience: caring for older residents amid redevelopment pressures, caring for one another through kampung spirit, and caring for the broader community as exemplified by cleaners and other essential workers. Jennifer called for an approach to AI that supports and strengthens human relationships rather than replacing them.
Lastly, yours truly presented a literary history of speech-to-text care labor as a way to rethink current speech-AI designs that are predominantly modeled on business transactions. Drawing from my research on American poet Robert Frost—particularly his efforts to challenge bigoted characterizations of dialect through his art of versification—I demonstrated that speech-to-text conversion has long involved forms of editorial care. This history, I argued, complicates contemporary portrayals of data-annotation work as menial labor. I also discussed how I plan to translate these humanities-driven insights into collaboration with speech-to-text interpreters in Singapore. As access-service providers, interpreters at organizations such as Equal Dreams and the Singapore Association for the Deaf have expressed concerns about being replaced by AI. By foregrounding the interpretive, relational, and community-sustaining dimensions of their work, I hope to repurpose AI to support—and not supersede—the tightly knit and vibrant disability community in Singapore.

Alastair Gornall, our panel's co-chair, moderating the Q&A session. Dibya is in the background, chiming in via Zoom.
AI Foresight?
Within the larger context of AI initiatives and incentives in Singapore, the insights shared on the “Ethics of Care” panel built upon work undertaken by government agencies such as IMDA. For those who might not be familiar, IMDA’s February 2025 report, “Singapore AI Safety Red Teaming Challenge,” documents Singapore’s efforts to test large language models for cultural biases specific to the Asia-Pacific region, enabling our regional partners to develop more localized safeguards. The initiative was motivated by the recognition that existing AI-fairness evaluations remain predominantly Western-centric, attending largely to vulnerabilities and cultural assumptions grounded in North America and Western Europe. In a similar spirit, the “Ethics of Care” panel foregrounded perspectives from South and Southeast Asia in a collective inquiry into how AI might shape, and potentially enhance, social relations. Adopting a fundamentally humanities-driven approach, the panel shifted the focus away from improving AI systems as such and toward improving the lives of the people most affected by them. Through this reorientation, the panel emphasized the need for AI design and governance to respond to situated forms of care, community, and social responsibility.
Post by Setsuko