<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://syokoyams.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://syokoyams.github.io/" rel="alternate" type="text/html" /><updated>2026-04-25T11:19:31+00:00</updated><id>https://syokoyams.github.io/feed.xml</id><title type="html">Critical AI Reading Group</title><subtitle>This website chronicles the Critical AI Reading Group activities at Singapore University of Technology and Design.</subtitle><entry><title type="html">AI Meets Humanities Event Report</title><link href="https://syokoyams.github.io/reading-group/2025/11/19/ai-meets-humanities-event-report.html" rel="alternate" type="text/html" title="AI Meets Humanities Event Report" /><published>2025-11-19T16:00:00+00:00</published><updated>2025-11-19T16:00:00+00:00</updated><id>https://syokoyams.github.io/reading-group/2025/11/19/ai-meets-humanities-event-report</id><content type="html" xml:base="https://syokoyams.github.io/reading-group/2025/11/19/ai-meets-humanities-event-report.html"><![CDATA[<p>This past week, I was invited to chair a keynote session at the <a href="https://blogs.ntu.edu.sg/aimeetshumanities/speakers/">“AI Meets Humanities” workshop at Nanyang Technological University (NTU)</a>. I am sharing the program here in the hope that it serves as a quick reference to scholars who are pushing boundaries by developing new vocabularies and framings for thinking about and with AI, grounded in the humanities’ collective knowledge base and in rejection of the premises and assumptions through which Silicon Valley wants us to understand, and participate in, the age of AI.</p>

<p>This was one of the most intellectually stimulating and gratifying meetings of the minds I have attended, and I hope we can cultivate similar ground at SUTD as well.</p>

<p>The workshop brought together scholars from NTU, King’s College London, and Australian National University. NTU’s new dean <a href="https://www.ntu.edu.sg/soh/news-events/news/detail/welcome-to-the-cohass-family--professor-jon-wilson-appointed-as-new-dean">Jon Wilson</a> participated in discussions throughout.</p>

<p>Here are my rough notes from the two-day event, highlighting some of the scholars and writings I would love to come back to for closer reading. I am sharing them in case they are of interest to you as well:</p>

<ul>
  <li><strong><a href="https://blogs.ntu.edu.sg/aimeetshumanities/public-lecture/">Thao Phan</a></strong> — feminist STS approach; on lost opportunities of playfulness in the original Turing test; how AI is making us work more, not less</li>
  <li><strong><a href="https://katherinebode.wordpress.com/">Katherine Bode</a></strong> — dialectics in humanities and computing; see her <a href="https://katherinebode.wordpress.com/books/">monograph</a> and the special issue she edited for Duke University’s journal <em>Critical AI</em>, <a href="https://read.dukeupress.edu/critical-ai/article/doi/10.1215/2834703X-10734026/382463/Data-Worlds-An-Introduction">“Data Worlds”</a></li>
  <li><strong><a href="https://www.lisawinstanley.com/">Lisa Winstanley</a></strong> — emerging discourse on image generation, copyrights, and design literacy education</li>
  <li><strong><a href="https://blogs.ntu.edu.sg/aimeetshumanities/speakers/bernard-dionysius-geoghegan-2/">Bernard Dionysius Geoghegan</a></strong> — the commons, structuralism, latency, and creativity to think beyond the human/AI dichotomy</li>
  <li><strong><a href="https://www.ntu.edu.sg/hass/news-events/news/detail/shape-x-stem-talk-series--featuring-dr-tobias-rees">Tobias Rees</a></strong> — AI redefining the historical characterization of machines; creative and speculative potentials of small models</li>
</ul>

<p><img src="/critical-ai-reading-group/assets/2025-11-20-ai-meets-humanities.jpg" alt="AI Meets Humanities Workshop" /></p>
<p style="text-align: center; color: gray;"><em>A group picture before we headed over to dinner in the city center. Especially grateful to the main organizers Li Nguyen and Thao Phan (seen here in matching outfits in the center), as well as a team of graduate students, for nourishing our bodyminds. Image shared here courtesy of NTU.</em></p>

<h2 id="ps">p.s.</h2>

<p>The occasion coincided with the launch of a new <a href="https://www.ntu.edu.sg/hass/master-of-arts-in-digital-humanities">Master’s Program in Digital Humanities</a> at NTU. Their DH program adopts a holistic definition of what DH has to offer: combining computational humanities with humanistic inquiry into computational methods and other digital phenomena. This opens up a great opportunity for our DH Minor undergraduate students at SUTD to continue pursuing their intellectual curiosity locally. Congratulations!</p>

<p style="text-align: right;"><em>Post by Setsuko</em></p>]]></content><author><name></name></author><category term="reading-group" /><summary type="html"><![CDATA[This past week, I was invited to chair a keynote session at the “AI Meets Humanities” workshop at Nanyang Technological University (NTU). I am sharing the program here in hopes of it serving as a quick reference of scholars who are pushing the boundaries by developing new vocabularies and framings to think about and with AI, grounded in the collective knowledge base in the humanities and in rejection of the premises and assumptions of how Silicon Valley wants us to understand and participate in the age of AI.]]></summary></entry><entry><title type="html">AI Foresight Pre-Conference Session</title><link href="https://syokoyams.github.io/reading-group/2025/09/24/ai-foresight-pre-conference.html" rel="alternate" type="text/html" title="AI Foresight Pre-Conference Session" /><published>2025-09-24T06:00:00+00:00</published><updated>2025-09-24T06:00:00+00:00</updated><id>https://syokoyams.github.io/reading-group/2025/09/24/ai-foresight-pre-conference</id><content type="html" xml:base="https://syokoyams.github.io/reading-group/2025/09/24/ai-foresight-pre-conference.html"><![CDATA[<p>On September 24th, 2025, we held our second reading group session, organized in conjunction with the <a href="https://www.sutd.edu.sg/stories-listing/futures-by-design-human-centred-ai-and-transformative-possibilities-international-foresight-conference-2025">AI Foresight Conference</a>. We wanted to host a reading group prior to the conference so that participants could jump right into the deep end while our invited speakers are on campus. We trust this approach enabled a thoughtful engagement with the speakers on what they have been reading, thinking, writing, and musing upon &amp; would recommend it for other conference organizers.</p>

<p>Here are the readings suggested by speakers on the “Ethics of Care in the Age of AI” panel:</p>

<ul>
  <li><strong>Dibyadyuti Roy</strong>: Lindgren (2023), “Introducing Critical Studies of AI”</li>
  <li><strong>Vasundhra Dahiya</strong>: Khan (2020), “Meet AI” &amp; Khan (2021), “All About That Bias”</li>
  <li><strong>Felicia Liu</strong>: Liu et al. (2025), “Decarbonising Digital Infrastructure”</li>
  <li><strong>Jennifer Ang</strong>: Schabacher (2021), “Time and Technology: The Temporality of Care”</li>
</ul>

<p>You can find PDF copies here: <a href="https://bit.ly/AIForesight2025">https://bit.ly/AIForesight2025</a>.</p>

<p><img src="/critical-ai-reading-group/assets/2025-09-24-flyer.png" alt="AI Foresight Pre-Conference Session flyer" /></p>

<h2 id="ethics-of-care-in-the-age-of-ai-panel">Ethics of Care in the Age of AI Panel</h2>

<p>A bit more about the panel in question: it was organized in response to growing public awareness of algorithmic biases, extractive data-collection and labor practices, and the environmental costs of AI. Also of import was how engineers and designers (including aspiring and practicing ones at SUTD) are increasingly expected to be cognizant of the sociopolitical implications of their work and to practice accountability. This roundtable brought together experts whose critical AI work shifts the focal point of engineering and design from technical specifications to improving relations—between people, as well as with other species and the planet—from vantage points informed by lived experiences in Asia. Here are summaries of each speaker’s “sharing”—to borrow the Singlish usage—for those who were not able to join us:</p>

<h3 id="ai-need-not-stratify-inequity-in-india">AI Need Not Stratify Inequity in India</h3>

<p><strong><a href="https://ahc.leeds.ac.uk/fine-art/staff/4366/dr-dibyadyuti-roy-">Dibya</a></strong> began by situating the current moment within a longer history of AI hype cycles, referencing the so-called AI winter of the 1970s and the “mini AI Ice Age” of the 1990s. He noted that today’s resurgence of interest in AI is driven largely by global competition for technological supremacy between the United States and China, and cautioned against being swayed by technopositivist marketing narratives. Drawing from his cultural studies research, he highlighted how AI systems are enabled, refined, and sustained by data-annotation labor in South and Southeast Asia. Noting how such work is often carried out by poorly compensated workers such as Muslim women in Kolkata, where he is from, he problematized how their essential contributions remain systematically obscured.</p>

<p>To address this uneven distribution of the socioeconomic benefits of emerging technologies, he urged conference participants to rethink empowerment beyond the superficial improvement of AI-enabled products and services under the banner of “ethical design.” Instead, he advocated for a rights-based approach that cross-examines AI’s ideological foundations and challenges the geopolitical conditions that continue to subject the Global South to resource extraction. Otherwise, Dibya concluded, AI as it currently stands simply extends the colonial dynamics of the previous century.</p>

<p><img src="/critical-ai-reading-group/assets/2025-09-24-conference-vasundhra.jpg" alt="Vasundhra Dahiya" /></p>
<p style="text-align: center; color: gray;"><em>Vasundhra Dahiya sharing her community-based research methodologies.</em></p>

<p><strong><a href="https://scholar.google.com/citations?user=cFmUc3YAAAAJ&amp;hl=en">Vasundhra</a></strong> discussed her work as a Dalit feminist scholar in the AI ethics space and the reading group she co-founded with Dibya, Lavanya, and Tushar—called <a href="https://ai-criticality.net">“Critical Lens on AI in/from the Majority World(s) (CLAIM)”</a>—as an interpretive community attentive to the majority world’s sociocultural contexts. Drawing from her research on AI therapy chatbots, she underscored how situated thinking shapes algorithmic visions of emotional care and mental well-being. Vasundhra noted how discussions of caste are often ignored, even in responsible AI discourse, resulting in the omission of crucial design considerations for historically marginalized communities. As a consequence, AI applications and policies end up reproducing and amplifying caste biases both socially and algorithmically.</p>

<p>Grounding her scholarship and advocacy work in the tradition of <a href="https://en.wikipedia.org/wiki/B._R._Ambedkar">B. R. Ambedkar</a>—jurist, economist, social reformer, and author of <em>Annihilation of Caste</em> (1936)—Vasundhra emphasized the need to localize AI ethics discourse in the lived realities of her community. For her, AI systems need to be not only technologically sound but also socially just.</p>

<h3 id="care-principles-kampung-spirits-and-acessibility-in-singapore">Care Principles, Kampung Spirits, and Accessibility in Singapore</h3>
<p><img src="/critical-ai-reading-group/assets/2025-09-24-conference-jennifer.jpg" alt="Jennifer Ang" /></p>
<p style="text-align: center; color: gray;"><em>Jennifer Ang offered four types of care principles we could adopt.</em></p>

<p><strong><a href="https://www.suss.edu.sg/academics/schools-college/faculty-listing/detail/jennifer-ang">Jennifer</a></strong> presented her research on the <a href="https://seannet.org/">“Southeast Asia Neighborhoods (SEANNET) Collective,”</a> where she examines the effects of gentrification and technologization in communities in Singapore, including Bukit Merah and Punggol. Speaking from her expertise in philosophy, she introduced four care principles—abandonment, repurposing, repair, and maintenance—as conceptual tools for reimagining our relationship with AI. Together, noted Jennifer, these principles offer a critical lens for assessing Smart Nation initiatives. She observed that technological “repairs” often miss the mark when they assume that technology can care for people. Instead, she emphasized the importance of care practices that go beyond efficiency and convenience: caring for older residents amid redevelopment pressures, caring for one another through kampung spirit, and caring for the broader community as exemplified by cleaners and other essential workers. Jennifer called for an approach to AI that supports and strengthens human relationships rather than replacing them.</p>

<p>Lastly, yours truly presented a literary history of speech-to-text care labor as a way to rethink current speech-AI designs, which are predominantly modeled on business transactions. Drawing from my research on the American poet Robert Frost—particularly his efforts to challenge bigoted characterizations of dialect through his art of versification—I demonstrated that speech-to-text conversion has long involved forms of editorial care. This history, I argued, complicates contemporary portrayals of data-annotation work as menial labor. I also discussed how I am planning to translate these humanities-driven insights into collaboration with speech-to-text interpreters in Singapore. As access-service providers, interpreters at organizations such as Equal Dreams and the Singapore Association for the Deaf have expressed concerns about being replaced by AI. By foregrounding the interpretive, relational, and community-sustaining dimensions of their work, I hope to repurpose AI to support—and not supersede—the tightly knit and vibrant disability community in Singapore.</p>

<p><img src="/critical-ai-reading-group/assets/2025-09-24-conference-alastair.jpg" alt="Alastair Gornall" /></p>
<p style="text-align: center; color: gray;"><em>Alastair Gornall, our panel's co-chair, moderating the Q&amp;A session. Dibya is in the background, chiming in via Zoom.</em></p>

<h2 id="ai-foresight">AI Foresight?</h2>

<p>Within the larger context of AI initiatives and incentives in Singapore, the insights shared on the “Ethics of Care” panel built upon work undertaken by government agencies such as IMDA, whose February 2025 report, “Singapore AI Safety Red Teaming Challenge,” documents Singapore’s efforts to test cultural biases in large language models specific to the Asia-Pacific region, enabling regional partners to develop more localized safeguards. That initiative was motivated by the recognition that existing AI-fairness evaluations remain predominantly Western-centric, attending largely to vulnerabilities and cultural assumptions grounded in North America and Western Europe. In a similar spirit, the “Ethics of Care” panel foregrounded perspectives from South and Southeast Asia in a collective inquiry into how AI might shape, and potentially enhance, social relations. Adopting a fundamentally humanities-driven approach, the panel shifted the focus away from improving AI systems as such and toward improving the lives of the people most affected by them. Through this reorientation, it emphasized the need for AI design and governance to respond to situated forms of care, community, and social responsibility.</p>

<p style="text-align: right;"><em>Post by Setsuko</em></p>]]></content><author><name></name></author><category term="reading-group" /><summary type="html"><![CDATA[On September 24th, 2025, we held our second reading group session, organized in conjunction with the AI Foresight Conference. We wanted to host a reading group prior to the conference so that participants could jump right into the deep end while our invited speakers are on campus. We trust this approach enabled a thoughtful engagement with the speakers on what they have been reading, thinking, writing, and musing upon &amp; would recommend it for other conference organizers.]]></summary></entry><entry><title type="html">Pasquinelli (2023), The Eye of the Master</title><link href="https://syokoyams.github.io/reading-group/2025/02/06/pasquinelli-eye-of-the-master.html" rel="alternate" type="text/html" title="Pasquinelli (2023), The Eye of the Master" /><published>2025-02-06T04:30:00+00:00</published><updated>2025-02-06T04:30:00+00:00</updated><id>https://syokoyams.github.io/reading-group/2025/02/06/pasquinelli-eye-of-the-master</id><content type="html" xml:base="https://syokoyams.github.io/reading-group/2025/02/06/pasquinelli-eye-of-the-master.html"><![CDATA[<p>On February 6th, 2025, we held our inaugural reading group session to discuss the conclusion chapter of Matteo Pasquinelli’s <em>The Eye of the Master</em> (2023), exploring the relationship between artificial intelligence, labour, and capitalism.</p>

<p><img src="/critical-ai-reading-group/assets/2025-02-06-flyer.png" alt="Reading group session on February 6th, 2025" /></p>

<p><img src="/critical-ai-reading-group/assets/2025-02-06-group-photo.jpeg" alt="Reading group group photo on February 6th, 2025" /></p>

<p>Thank you, everyone, for joining our first gathering! The discussion offered keen observational analyses, through and through, of how AI is likely to affect white-collar jobs, not to mention its known impact on the global working class (what Mary Gray and Siddharth Suri call “<a href="https://en.wikipedia.org/wiki/Ghost_work">ghost work</a>”). Looking forward to many more sessions to come, so that AI initiatives on our campus remain cognizant of the longer history of labor relations.</p>

<p style="text-align: right;"><em>Post by Setsuko</em></p>]]></content><author><name></name></author><category term="reading-group" /><summary type="html"><![CDATA[On February 6th, 2025, we held our inaugural reading group session to discuss the conclusion chapter of Matteo Pasquinelli’s The Eye of the Master (2023), exploring the relationship between artificial intelligence, labour, and capitalism.]]></summary></entry><entry><title type="html">Naming the Series</title><link href="https://syokoyams.github.io/reading-group/2024/12/11/naming-the-series.html" rel="alternate" type="text/html" title="Naming the Series" /><published>2024-12-11T04:00:00+00:00</published><updated>2024-12-11T04:00:00+00:00</updated><id>https://syokoyams.github.io/reading-group/2024/12/11/naming-the-series</id><content type="html" xml:base="https://syokoyams.github.io/reading-group/2024/12/11/naming-the-series.html"><![CDATA[<p>The discrepancy between our website’s title and the title of our first reading group series is intentional and by design. Many of us who are trained in the humanities, arts, and social sciences embrace criticality. But we are also aware of how criticality can be intimidating or off-putting for participants from other disciplines. So, while we are aware of problematic tendencies in the ways HASS subject expertise in relation to artificial intelligence is often reduced to “ethics,” we have decided to dwell with this very shorthand as an inviting hook and an opening to transform it into something a bit more capacious. In other words, this limitation in our existing vocabulary for discussing emerging technologies in a truly interdisciplinary manner is precisely where the work is needed. It is our hope that our reading group series will help invent a generative language for everyone, whatever their disciplinary backgrounds and methodologies.
Here’s to no discipline being instrumentalized in the service of another, and to bearing witness to what fusions of diverse expertise and perspectives can bring forth.</p>

<p><img src="/critical-ai-reading-group/assets/2024-12-11-naming.png" alt="Naming the Series" /></p>]]></content><author><name></name></author><category term="reading-group" /><summary type="html"><![CDATA[The discrepancy between our website’s title and the title of our first reading group series is intentional and by design. Many of us who are trained in the humanities, arts, and social sciences embrace criticality. But we are also aware how criticality could be intimidating or off-putting for our participants from other disciplines. So, while we are aware of problematic tendencies in ways HASS subject expertise in relation to artificial intelligence is often reduced to “ethics,” we have decided to dwell with this very shorthand as an inviting hook and opening for us to transform into something a bit more capacious. In other words, this limitation in our existing vocabulary for discussing emerging technologies in a truly interdisciplinary manner is where the work is called for. It is our hope that our reading group series will help invent a generative language for everyone with different disciplinary backgrounds and methodologies. Here’s to no discipline being instrumentalized in service of the other, and for bearing witness to what fusions of diverse expertise and perspectives can bring forth.]]></summary></entry></feed>