Tag - Antonio Casilli

The AI Tutoring Mirage: DiPLab Research Insights

“PhD-Level Smart” AI and Investor Theater
Has artificial intelligence truly outgrown its “Global South data sweatshop” phase? The recent deluge of “AI tutor” job advertisements on LinkedIn targeting highly qualified candidates with advanced degrees might suggest so. When Sam Altman claims his chatbot is “PhD-level smart,” one might assume this reflects a genuine shift toward elite expertise in AI training. However, groundbreaking investigative reporting published by Africa Uncensored reveals a more troubling reality: these recruitment campaigns represent elaborate investor-facing theatrics rather than meaningful industry evolution.

DiPLab applauds the exceptional work of data journalists and Pulitzer Center Artificial Intelligence Accountability fellows Kathryn Cleary and Marché Arends, whose year-long investigation exposed a curious case study in modern AI labor practices. Their research focused on companies like Mindrift and Scale AI’s Outlier, which have been flooding professional networks with advertisements for highly qualified and relatively well-compensated “AI tutors” and “trainers,” primarily targeting workers in high-income countries across North America and Europe. These positions appeared to target elite specialists rather than the typical pool of low-paid data annotators traditionally associated with AI training. The recruitment campaigns seem to suggest that major tech companies, in their aggressive push toward Artificial General Intelligence (AGI), are now seeking only the most brilliant minds to train sophisticated chain-of-thought models.

The Africa Uncensored investigation reveals a starkly different reality. Once recruited, these qualified workers—many holding advanced degrees in physics, philology, and other specialized fields—were left idle for months, barely managing to earn double-digit wages. They were essentially serving as props in an elaborate performance of AI progress, carefully staged to impress investors and signal scalability to potential big tech clients. Meanwhile, on platforms targeting workers in the Global South, such as Mindrift’s sister platform Toloka, recruitment for poorly paid microtasks continued under largely exploitative conditions. This parallel system reveals the persistent nature of what researchers have termed “digital sweatshops.”

For DiPLab and its research community, these findings represent “old wine in new bottles.” For nearly a decade, DiPLab researchers have been encountering and interviewing data workers who hold Master’s and doctoral degrees—experts in their own right across diverse disciplines. Many of these highly qualified individuals remain unemployed due to dysfunction in traditional job markets, or find themselves forced to accept data work that neither matches their specialization nor provides adequate compensation. According to DiPLab co-founder Antonio Casilli, interviewed in the exposé alongside Prof. Edemilson Paranà and Dr. Adio Dinika: “This is the biggest waste of social capital in human history. These people would be, should be, destined to the best jobs because they are probably the best and the brightest of their generation.”

The mass recruitment strategy serves a specific economic function within what researchers call “labor hedging”—a tactic where companies amass large pools of workers primarily to signal scalability and attract major contracts. As the investigation revealed, Mindrift alone posted over 5,770 job listings across 62 countries in just four months, yet provided minimal actual work opportunities.
This approach allows platforms to maintain what they euphemistically term “talent pools”—readily available workforces that can be presented to potential clients as evidence of operational capacity. When a major tech company inquires about access to specialized expertise, these platforms can point to their extensive databases of pre-vetted candidates as proof of their ability to deliver at scale.

DiPLab’s research situates these practices within the broader context of platform capitalism surrounding AI development. The current AI boom and the associated recruitment theater serve as crucial signals in this speculative environment. As Casilli noted, “Investors are on LinkedIn too, they see this [mass recruitment], it is a signal for them. This looks more like a communications operation.” These platforms understand that LinkedIn functions not merely as a talent acquisition tool, but as a visibility platform for investor audiences.

The courageous reporting by Cleary and Arends, supported by Africa Uncensored, an outlet willing to publish investigations that major US and European media often avoid, highlights the critical need for continued scrutiny of AI labor practices.
DiPLab Co-founder Antonio Casilli on Rai 1 (Italy): Exposing the Human Side of AI
Italy’s national broadcaster Rai 1 has shone a light on a crucial but often overlooked aspect of artificial intelligence in its program “Codice.” The recent report reveals the essential truth: AI is built on real human work. As you might expect, the report bears the fingerprints of our team at DiPLab, with DiPLab co-founder Antonio Casilli interviewed among the experts on AI supply chains.
DiPLab Researchers Expose Hidden Global Labor Dynamics at WORK2025 Conference in Turku
At the WORK2025 conference in Turku, Finland, DiPLab co-founders Antonio Casilli and Paola Tubaro presented the results of their ongoing research documenting the human labor networks that power artificial intelligence systems worldwide. Casilli’s keynote (video 00:29-1:36:00), “Where does AI come from? Global circulation of data and human labor behind automation,” emphasized that AI systems are fundamentally built upon hidden human labor—specifically the annotation, verification, transcription, moderation, and impersonation of data. This labor is fragmented, precarious, and carried out through digital platforms, predominantly by workers in the Global South who remain unrecognized in dominant AI discourses.

Casilli’s presentation opened with an excerpt from the documentary In the Belly of AI (co-written with Julien Goetz and directed by Henri Poulain), describing the working conditions of women annotating data and producing AI from Finnish prisons for 3 euros per day. In the rest of his keynote, drawing from the decade-long research of the DiPLab program, Casilli explored how data work is organized across Africa, Asia, and Latin America, as well as Europe and North America, revealing models that support different types of data tasks while reinforcing enduring inequalities in wages, job security, and working conditions that particularly affect Global South workers. He highlighted the increasingly convoluted nature of these supply chains, involving several intermediaries—from global tech firms to local freelancers—spanning continents and making it extremely challenging to trace accountability and working conditions.

Tubaro’s presentation, “Women in the loop: the gendered contribution of data workers to AI,” examined who actually performs this crucial but undervalued work, focusing on women’s participation as the market has expanded. While data work appears theoretically well-suited for women, since it can be performed remotely from home and platforms generally limit direct gender discrimination, statistical evidence reveals mixed patterns, with women exceeding 50% of data workers in only four documented cases. Her research showed that in crisis-stricken countries like Venezuela, international platforms attract highly qualified workers in fierce competition, often dominated by young men with STEM backgrounds who crowd out women constrained by care responsibilities or fewer technical qualifications. Conversely, in more dynamic economies like Brazil, local job markets absorb highly skilled professionals, leaving platform work to more disadvantaged groups, where women with family duties become more visible. This creates a paradox in which women may be equally educated but lack the time to cultivate advanced STEM skills; as platforms demand longer, more specialized tasks, men increasingly gain advantages even in countries where women were once the majority.

Both presentations converged on a critical insight: platform design treats workers as abstract entities, stripped of the socio-economic and cultural contexts that shape real inequalities, while competition combined with local conditions deepens gender and regional disparities.
DiPLab’s Paola Tubaro and Antonio Casilli Examine AI Labor and Environmental Impacts in Santiago, Chile
DiPLab researchers Paola Tubaro and Antonio Casilli recently completed a research mission to Santiago, Chile, participating in key academic events that advanced understanding of artificial intelligence’s social and environmental dimensions. Tubaro delivered a keynote address at the 4th annual workshop of the Millennium Nucleus on the Evolution of Work (M-NEW), where she serves as a senior international member. The interdisciplinary workshop convened labor scholars from across Latin America and internationally to examine contemporary work transformations. Her presentation drew on DiPLab’s multi-year research program investigating the invisible human labor underlying global AI production. Tubaro’s analysis traced the evolution of this form of work over two decades, demonstrating that while core functions in smart system development have remained consistent, the scope and volume of these tasks have expanded significantly.

Tubaro and Casilli also participated in the inaugural meeting of SEED (“Social and Environmental Effects of Data connectivity: Hybrid ecologies of transoceanic cables and data centers in Chile and France”), a new collaborative research project between DiPLab and the Millennium Nucleus FAIR (“Futures of Artificial Intelligence Research”). The project has received joint funding from the ECOS-SUD programme (France) and ANID (Chile) to analyze the complete AI value chain, examining production, development, employment impacts, usage patterns, and environmental consequences through a comparative study of the Valparaíso-Santiago de Chile and Marseille-Paris corridors.

In their SEED presentations, Tubaro and Casilli introduced the concept of the “dual footprint” as an analytical framework for understanding the interconnected environmental and social impacts of AI systems. This heuristic device captures commonalities and interdependencies between AI’s effects on the natural and social environments that provide resources for its production and deployment. The DiPLab researchers framed the AI industry as a transnational value chain that perpetuates existing global inequalities: countries driving AI development generate substantial demand for inputs while externalizing social costs through the value chain to more peripheral actors. These arrangements distribute AI’s costs and benefits unequally, resulting in unsustainable practices and limiting upward mobility for disadvantaged countries. The dual footprint framework demonstrates how the environmental and social dimensions of AI emerge from similar structural dynamics, providing a unified approach to understanding AI’s comprehensive impact on global resource systems.
DiPLab Researchers Expose AI’s Hidden Labor Crisis in New AlgorithmWatch Investigation
A recent AlgorithmWatch investigation featuring DiPLab co-director Antonio Casilli and affiliate Milagros Miceli exposes the systematic exploitation of data workers powering the generative AI boom. Authored by journalists Michael Bird and Nathan Schepers, the article, published in English on AlgorithmWatch and in German in the newspaper Taz, is titled “The AI Revolution Comes With the Exploitation of Gig Workers”. The findings align perfectly with DiPLab’s ongoing research mission: revealing the hidden human labor that makes artificial intelligence possible. “This has been business as usual for those companies and platforms for a number of years,” Casilli explains in the investigation. “Since the beginning, they have been predicated on wage theft.” Meanwhile, Miceli, sociologist and computer scientist at the Weizenbaum Institute Berlin, argues that BPO companies strategically “give the impression that training is a form of qualification,” making unpaid work seem like a bonus rather than exploitation. The investigation reveals how AI companies like Scale AI and Outlier rely on vast networks of precarious workers who face unpaid training time, wage theft, and systematically violated labor standards. “Unpaid time that is attached to this type of work is a form of exploitation,” Miceli adds, noting how workers often don’t even recognize wage theft because it’s become so normalized in the gig economy. The AlgorithmWatch investigation proves that DiPLab’s research agenda remains urgently relevant. The AI revolution is here—but it’s built on the backs of workers whose stories deserve to be told and whose rights deserve protection.
[Podcast] Data extractivism: DiPLab’s Antonio Casilli interviewed on Radio Onda Rossa
Antonio Casilli, professor at Institut Polytechnique de Paris and author of Waiting for Robots. The Hired Hands of Automation (University of Chicago Press, 2025), was interviewed by Radio Onda Rossa, one of the oldest independent Italian “free” radio stations. The program was Entropia Massima (Maximum Entropy), and the topic was data extractivism, artificial intelligence, and work. https://archive.degenerazione.xyz/download/ent_max_24_25_202411/puntata_EDD8_NR.mp3

The first part examines artificial intelligence from the perspective of the hidden labor that makes it work. Casilli explains that behind every algorithm, chatbot, or app lies a vast network of often invisible, underpaid workers who train and moderate AI systems. The discussion links digital exploitation to automation, showing that human labor is not eliminated but merely relocated and made less visible.

In the second part, the focus turns to “invisibilized labor” in the age of platforms and AI. Casilli describes how many digital workers, even in Europe, remain hidden from public view, often bound by confidentiality agreements and precarious conditions. The segment highlights the historical continuity of hidden labor, drawing parallels between past practices and new forms of global exploitation that move from physical assembly lines to cognitive and digital ones.

The third part addresses political and labor perspectives, including grassroots union initiatives, collective legal actions, and the idea of AI cooperatives. It also discusses the concept of a “digital universal income” as a way to redistribute value and recognize the contributions of both “data workers” and user-consumers, stressing the need for social justice adapted to the changes brought by automation.

To read the full transcript of the episode, click here.
[Video] Antonio Casilli’s interview about Musk v. Trump and fake AI (Radio1 Rai)
DiPLab’s Antonio Casilli was interviewed by journalist Massimo Cerofolini on the show EtaBeta on Radio1 Rai, the Italian national radio broadcaster. Here’s the complete interview. Their conversation revolves around two recent stories that reveal deeper truths about today’s tech and political landscapes. First, Builder.ai—a company claiming full automation in app development—was exposed as relying on hundreds of human developers in India. It’s another example of tech companies disguising cheap labor as artificial intelligence, a pattern long studied by researchers at DiPLab. Second, Elon Musk and Donald Trump’s breakup isn’t just a personal feud. It reflects a deeper conflict between two forms of right-wing capitalism: Trump’s old-school, protectionist, real estate-driven model versus Musk’s futuristic, tech-centered, data-fueled empire. According to Casilli, both are authoritarian and exploitative, but they represent competing visions of power and profit.
DiPLab’s Antonio Casilli: Where Have Barcelona’s Facebook Moderators Gone? (Lecture at ESADE)
The recent termination of Meta’s contract with Telus International in Barcelona—which resulted in over 2,000 content moderators losing their positions—prompted ESADE (Escuela Superior de Administración y Dirección de Empresas) to invite DiPLab co-founder Antonio Casilli to address the broader implications of this workforce disruption. The event was part of the kick-off for the DigitalWORK research project, which explores how digital technologies are transforming work and how to promote fair, equitable, and transparent labor conditions, with Anna Ginès i Fabrellas and Raquel Serrano Olivares (Universitat de Barcelona) as principal investigators.

The Barcelona layoffs represent more than just another corporate restructuring. For Casilli, they exemplify the precarious nature of digital labor that underpins the global AI and social media ecosystem. In his presentation, Casilli analyzes global labor arbitrage in AI production, discussing how companies like Meta leverage geographic wage differentials to reduce operational costs, with Barcelona serving as a mid-tier location between Silicon Valley headquarters and Global South outsourcing destinations. In the subsequent debate with ILO senior economist Uma Rani, Casilli also addresses potential regulatory responses and worker rights, exploring policy interventions to protect digital workers from arbitrary contract terminations and ensure fair compensation for data workers.
[Video] Antonio Casilli interviewed in WageIndicator Foundation’s Gig Work Podcast
DiPLab’s Antonio Casilli was interviewed by Martijn Arets in the WageIndicator Foundation Gig Work Podcast about his latest book Waiting for Robots. The Hired Hands of Automation (University of Chicago Press, 2025).

THE MYTH OF AUTOMATION: HOW AI IS AND WILL REMAIN DEPENDENT ON CHEAP HUMAN LABOUR

Artificial intelligence (AI) is and will remain dependent on human labour. The people who do the work behind AI systems are often invisible. This carries risks of poor working conditions, low wages and inadequate protection for workers. How does this situation arise, and how can we ensure that the many invisible data workers also benefit from technological developments? For the WageIndicator Foundation’s Gig Work Podcast, I spoke with Professor Antonio Casilli (Institut Polytechnique de Paris), author of the book Waiting for Robots. The Hired Hands of Automation. Listen to this podcast episode on Spotify.

SCOOBY-DOO IN THE WORLD OF PLATFORM WORK

‘Me and my team are like Scooby-Doo: we travel all over the world investigating mysteries,’ says Casilli. ‘We conduct empirical research into artificial intelligence and how it is produced. Our focus is not on the new possibilities of AI, but on the development process: who is working behind the scenes to make AI possible?’ His research team is called DiPLab, which stands for Digital Platform Labor. They have developed a very broad view of automation.

THE MYTH OF AUTOMATION

The dream of automating work is not new: Thomas Mortimer, among others, wrote in 1801 about a machine that would be capable of making human labour ‘almost completely superfluous’. ‘Technologists and economists have been looking for ways to make labour more efficient for centuries,’ says Casilli. ‘The industrial revolution saw the emergence of the first machines, such as the steam engine and the Spinning Jenny. Every innovation came with great promises. They would save us many hours of work. But nothing could be further from the truth.’

Many predictions about automation were overstated. Studies between 2013 and 2024 claimed that robots would replace 46-47% of all jobs. Casilli: ‘Organisations such as the OECD and ILO have shown that this is not true. Even with additional crises such as climate change, geopolitical tensions and a pandemic, global unemployment has not risen. In fact, in 2025, people will be working more than ever.’ The problem lies in the methodology used by these researchers, explains the professor. ‘They take a profession and break it down into tasks. If they expect AI to replace 60% of the tasks, they conclude that the job will disappear. But that’s not how it works in practice. Often, employees simply get new tasks.’

INFLUENCE OF PLATFORMISATION

According to Casilli, the biggest change in recent years is not automation, but platformisation. Companies such as Uber, Amazon and Meta use huge amounts of data to connect supply and demand and organise work. They also use all this data to train AI systems. For example, they build software such as ChatGPT (the P stands for ‘Pretrained’) and the technology behind self-driving cars. ‘What is often forgotten or ignored is how many people are involved in this,’ says the researcher. ‘The promise of AI is that systems can take over human cognitive tasks. But in reality, many so-called “automatic” processes depend on human labour.
The people who do this work are often invisible and poorly paid.’ This is not a recent phenomenon: Google, for example, has had its own platform, Raterhub, since 2007, where data workers verify search results and thus improve the search engine’s algorithms. Amazon Mechanical Turk, the platform used by Amazon and also available to external customers, makes a clear reference to the myth surrounding AI and its dependence on human labour. The Mechanical Turk after which the platform is named is the ‘chess robot’ invented in 1770, which travelled the world for 84 years as an example of automation. Later, it turned out that there was a person (often described as disabled or underage, in any case not a chess master) inside the machine and there was little automation involved. Automation does not lead to less work, but to a different, degraded form of work. ‘Big tech companies prefer not to talk about that. It undermines the narrative that AI is truly intelligent. In reality, people are working more than ever, but sometimes under worse conditions than before.’

WHO ARE THESE DATA WORKERS?

Data workers collect, organise and improve data. Without them, AI would not work. Take image recognition, for example: AI learns what a cat is by analysing millions of images of cats. ‘People have to label those images first. It seems like simple work, but it’s a skill in itself. Yet these data workers often receive remuneration that is not commensurate with their efforts,’ says Casilli. ‘In countries such as Kenya, the monthly wage for these data workers is around $400. That’s not enough to make ends meet.’

Casilli emphasises that this is not a temporary phase. ‘Data work will remain necessary as long as we continue to develop AI,’ he says. ‘We have to constantly train the systems, adapt them to new customer requirements and check them for errors. World Bank or Oxford estimates point towards a ballpark figure of 150 million such workers worldwide, and that number is only growing. That’s another reason why it’s important to take a critical look at their working conditions.’

YOU ARE A DATA WORKER TOO

In his book Waiting for Robots, Antonio Casilli mentions a group of digital workers who are often overlooked: social network labourers. This basically includes everyone with a smartphone. Through our daily online activities, we train the AI of large tech companies. We teach AI what a traffic light is by filling in reCAPTCHAs. When we like social media posts, we teach systems which images are attractive. So we provide value to AI systems, but we are usually not paid for it. We are both users and producers of data. This raises an interesting question: is this work or not?

Casilli sees that this form of labour reinforces existing power structures and unequal labour relations. He and his team have been working with both policymakers and unions to bring this to light. ‘Tech engineers at companies like Google earn high salaries, while data workers in India, Venezuela and Madagascar are underpaid. This follows colonial patterns. India carries out data work for English-speaking countries, while French companies outsource work to French-speaking countries in Africa.’

WHAT CAN WE DO?

What can we do about this? He describes this in the last chapter of his book, ‘What is to be done?’, a tongue-in-cheek quote from Vladimir Lenin. According to Casilli, a systemic approach is needed to improve the conditions of all data workers worldwide. ‘A solution for a specific group will not work in the end.
We need to look for a universal strategy.’ He distinguishes between three types of solutions: regulation, collective platform initiatives, and a global redistribution system:

1. Regulation: Spain, for example, has introduced the Riders’ Law, and the European Union is working on guidelines for platform workers. ‘These are steps in the right direction, but this type of regulation needs to be applied more broadly. After all, tech companies operate globally.’

2. Platform cooperatives: Workers can set up their own platforms in which they have a say in wages and working conditions. ‘This is already happening on a small scale, but deserves more attention.’

3. Redistribution: Large tech companies can be taxed and the proceeds used for a universal basic income for data workers. Importantly, Casilli states that this UBI is neither connected to a ‘robot tax’ (as he doesn’t see robots replacing workers) nor is it intended to replace welfare assistance (as it should be paid regardless of other social benefits). ‘This will ensure greater fairness.’

By combining these three strategies, the professor hopes that we can create a fairer and more sustainable system. ‘Tech companies must take responsibility for all their workers, including the invisible ones who manufacture their data,’ says Casilli. ‘I am concerned about this situation: wages are far below the minimum and even basic health and safety rules are not always observed.’ Casilli believes that organisations such as the WageIndicator Foundation and the Fairwork project are making an important contribution. ‘These organisations set standards for fair wages and working conditions, and these are desperately needed.’

ENFORCEMENT, COLLECTIVE ACTION, AND USER RESPONSIBILITY

After several interviews on this topic, I personally believe that, besides the solutions that Casilli provides, it is also important to enforce existing regulations. In countries where there are many underpaid data workers, there is a lack of supervision. This is partly due to strong lobbying by tech companies. That is why it is so important for workers to take collective action, for example via trade unions. These are underrepresented, although a number of interesting grassroots initiatives have emerged. I also believe that (large) users of AI solutions must take responsibility. There are many discussions about responsible AI use. But I can no longer take a discussion about responsible AI seriously if it does not take into account the hidden workers.

WHY THIS IS IMPORTANT

Casilli and his team are uncovering an important mystery: AI is not a magical ‘black box’. In reality, millions of people work behind the scenes on these so-called ‘intelligent systems’. AI is presented as completely autonomous, and the extensive manual labour involved is often forgotten or ignored. This has serious consequences for the working conditions of these data workers. If we really want to use AI responsibly, we must also consider the people behind the technology. I try to raise awareness of this issue and highlight it wherever possible. That is why I spoke earlier with Claartje ter Hoeven about Ghostwork: the invisible world of work behind AI. I will soon be speaking to the Data Labeler Association in Kenya to gain more insight into the conditions and problems faced by workers there. After all, we can only really get started with responsible AI if we understand how AI is created.

Want to know more? Listen to the full podcast with Antonio Casilli.
Evan Selinger is a guest speaker at our DiPLab Seminar (Fri. 23 May 2025, 5 PM CET)
On May 23, 2025, at 5 PM CET, our DiPLab seminar will welcome Professor Evan Selinger (Rochester Institute of Technology) for a talk and discussion together with Antonio Casilli. The seminar will be held at Maison de la Recherche, 28 Rue Serpente, 75006 Paris, room D421. To register, click on the button below and fill out the form. The seminar is free to attend.

Register to seminar

MACHINES THAT MIRROR US: THE HUMAN COST OF AI “WITH A SOUL”

In a recent podcast, Mark Zuckerberg claimed that “the average American has fewer than three friends” and that people “demand meaningfully more.” These unverified assertions conveniently support Meta’s latest initiative: a new range of products that complement each person’s social friend network with AI chatbots.

Meta is not alone in commercially capitalizing on the growing narrative of a “loneliness epidemic.” Other tech giants are following suit, with Google preparing to release AI chatbots for users under 13. These rollouts come at a time when AI systems have long been able to pass the Turing Test, not through advanced intelligence but by convincingly impersonating human characters like teens or children, complete with backstories, humor, and preferences, showing that relatability, not intellect, often drives their success in human interaction.

What does it mean when machines are built not to surpass us, but to mirror us? Are we diluting the meaning of “humanity” by outsourcing it to algorithms? Some recent tragedies, such as the reported suicides of individuals in Europe and the US after interactions with emotionally manipulative chatbots, raise urgent ethical questions.

Yet there’s another side. These technologies, by mimicking humanity, also provoke reflection on what cannot be simulated: our capacity for empathy, care, and authentic connection. As the Roman playwright Terence wrote, “Homo sum, humani nihil a me alienum puto” (“I am human, and nothing human is alien to me”). Might our interactions with AI deepen our understanding of what remains distinctly human?

In this talk, philosopher Evan Selinger, in conversation with sociologist Antonio Casilli, explores what he calls the “soul” in the machine, that irreducible human essence no algorithm can capture. The presentation aims to provide participants with ethical tools to recognize emotional manipulation, navigate emerging moral dilemmas, and preserve human authenticity in an increasingly synthetic world. Drawing on Selinger’s book Re-Engineering Humanity (Cambridge University Press, 2018), they will examine how the real threat isn’t hyper-intelligent AI, but the seductive ease of one-sided relationships with machines, and the corporate drive to monetize these interactions by harvesting data and maximizing profit.

Evan Selinger is Professor of Philosophy at Rochester Institute of Technology, specializing in technology ethics and privacy. His recent books include Move Slow and Upgrade (with Albert Fox Cahn) and Re-Engineering Humanity (with Brett Frischmann), both from Cambridge University Press. Selinger writes for The Boston Globe and has contributed to major publications including The New York Times, The Guardian, Wired, and The Atlantic. He collaborates with organizations like the ACLU and the Surveillance Technology Oversight Project to shape responsible technology policy.