Johanna Knox shares her 18-month experience as a gig worker annotating data to train generative AI models for major tech companies, describing exploitative practices including pay cuts from $40 to $14/hour, tight timers causing stress, unpaid training, abrupt project changes, and sudden furloughs of managers. She highlights the dependency many workers have on this unstable income and her own burnout leading to quitting, despite ongoing impacts on her editing career from AI. Knox concludes that the real danger lies not in AI itself but in the billionaire-controlled companies prioritizing profit over workers and the environment, urging resistance against corporate power.
Perspective: disillusioned former AI data annotator and gig worker
Corporate tech giants prioritize profit over worker welfare and exploit gig labor
Billionaire control of AI leads to harm for workers and environment
Rhetoric framing AI as an existential threat or inevitable progress misdirects from corporate accountability
Gig workers lack protections despite employee-like treatment
Who benefits, who is harmed:
gig workers / data annotators: Faced pay cuts, tight deadlines causing stress and burnout, unpaid work, platform glitches leading to lost pay, and job instability with sudden project endings.
team managers like Dana: Abruptly furloughed without warning, despite providing supportive leadership.
tech giants / AI companies: Access cheap, flexible labor from tens of thousands globally to train advanced AI models efficiently.
billionaires controlling AI (e.g., Elon Musk, Sam Altman): Gain enormous wealth, power, and AI advancement while externalizing costs onto workers and environment.
broader creative / editing workers: Experience job losses and income threats as generative AI displaces traditional roles like editing.
environment: Harmed by resource-intensive AI infrastructure like data centers.
society at large: Potential for widened wealth gaps and wage depression from AI, but opportunities for resistance and policy reform if collective action succeeds.
Frame: corporate exploitation and worker suffering
Sources: 5 named, 0 anon (single-sided)
Headline: aligned
Not asked: Perspectives from AI company executives or developers on labor challenges or improvements; Quantitative data on overall worker satisfaction or pay compared to alternatives; Potential long-term societal benefits of AI like productivity gains or new job creation
Could have been framed as: techno-progress story highlighting flexible high-pay gig opportunities in cutting-edge AI; neutral gig economy analysis comparing AI annotation to other freelance work; AI advancement narrative focusing on how human training enables beneficial innovations
selective-blindness
The article's position—AI companies exploit workers and misdirect blame to AI itself—acknowledges some personal downsides like dependency and burnout but ignores broader upsides of AI advancement or company incentives; it mentions opposing tech rhetoric (e.g., Musk/Altman on jobless utopia) only to dismiss it as 'blithe' or 'misdirection' without conceding any legitimate merits, such as innovation potential or efficiency gains, creating a one-sided critique of corporate greed.
Own downsides ignored:
Potential societal benefits of advanced AI like efficiency, new opportunities, or problem-solving
Economic necessity driving companies to use cost-effective gig models
Initial competitive pay rates ($40/hour) as a draw for skilled workers
Opposing merits ignored:
Tech leaders' vision of a future without drudgery work (idyllic post-work world)
AI as an 'unstoppable force' enabling broad progress if embraced
Gig flexibility and entry for postgraduates without traditional employment barriers
cautious · center-left · feature
A 61-year-old New Zealand counsellor tests ChatGPT as a low-cost alternative to expensive human therapy for her existential crisis and anxiety, reviewing its responses to prompts about feeling stuck, nervous system recovery, and interactive drawing therapy imitation. She finds the AI's advice poetic, practical, and temporarily helpful but criticizes it for lacking genuine empathy, potentially hallucinating, and failing to assess real dangers like abuse. The author concludes that AI should supplement, not replace, human connections and professional counselling.
Perspective: experienced nonprofit counsellor and manager in her early 60s navigating personal existential angst
Human empathy, warmth, and connection are irreplaceable in mental health support
Mental health services in New Zealand are prohibitively expensive and underfunded due to government cuts
Society is 'unhealthy' and contributes to widespread mental unease
Existential life transitions are normal but challenging, especially for those outside societal norms
AI/technology can provide practical solace as a stopgap but requires 'extreme vigilance'
Who benefits, who is harmed:
low-income or 'worried well' New Zealanders seeking mental health support: ChatGPT offers free, immediate, practical advice and exercises when human therapy is unaffordable or inaccessible.
professional human counsellors: AI provides a cheap supplement that could reduce demand but is deemed incapable of replacing empathy-driven therapy.
people in abusive or high-risk situations: ChatGPT cannot assess real-life dangers and may provide placating advice that discourages seeking human help.
nonprofit mental health organizations: Government funding cuts exacerbate waitlists, positioning AI as an imperfect alternative that doesn't address systemic issues.
AI developers and users optimistic about tech in therapy: The article validates AI's practical utility and potential for personalized advice based on user data.
mid-life/elderly individuals facing existential crises: AI delivers comforting, poetic responses tailored to personal contexts during liminal life stages.
Frame: human interest cautionary tale
Sources: 1 named, 2 anon (single-sided)
Headline: aligned
Not asked: Potential for AI to scale mental health access globally or alleviate therapist shortages; Empirical studies beyond one cited showing AI superiority or equivalence to humans; Views from AI ethicists, psychologists endorsing AI integration, or patients preferring AI
Could have been framed as: techno-optimism progress story: 'AI democratizes therapy access'; 'conflict: AI threatens human therapists' jobs and ethics; economic analysis: 'How AI disrupts New Zealand's counselling market'
mostly-balanced
The article's position—AI as a useful sidebar but inferior to humans—acknowledges key downsides like lack of empathy and danger detection while highlighting AI's practical merits and citing a study favoring it; however, it presents opposing techno-optimist views incompletely, omitting scalability, consistency advantages, and future potential, and ignores some own-side risks like bias and privacy, falling short of full intellectual honesty.
Own downsides ignored:
Potential algorithmic biases from training data affecting marginalized groups
Privacy concerns from sharing personal data with AI
Over-reliance could exacerbate social isolation in tech-heavy societies
Opposing merits ignored:
Scalability to serve millions without therapist shortages
Consistency (no 'bad eggs') and low cost enabling broader equity
Potential for rapid improvement via updates outperforming static human advice
critical · center-left · feature
This feature article explores teachers' frustrations in New Zealand and the US with students increasingly using AI tools like ChatGPT to complete assignments, turning educators into detectives policing cheating rather than fostering learning. Through anonymous teacher interviews, it highlights detection challenges, institutional hesitancy to enforce bans, and fears of eroding critical thinking skills, while briefly noting some positive AI applications. The piece advocates rethinking education to prioritize human skills amid technological disruption.
Perspective: Disillusioned educators and frontline teachers
Authentic human learning and critical thinking are essential goals of education
Students prioritize convenience and grades over genuine skill development
Educational institutions prioritize student satisfaction and avoiding complaints over academic integrity
Misuse of AI bypasses essential research and evaluation skills
Who benefits, who is harmed:
Teachers/educators: AI cheating increases workload for detection and enforcement, fosters frustration and disillusionment, turning them into 'cops' rather than mentors.
Students (general): Over-reliance on AI atrophies critical thinking, writing, and research skills, leading to false progress and long-term disadvantages.
Students with disabilities: AI provides correct answers without real comprehension, masking lack of progress and hindering genuine skill development.
Honest students: Face unfair competition from AI users, potentially lowering grades relative to cheaters.
Educational institutions: Risk litigation and complaints from enforcing anti-AI policies but face declining academic standards if not.
Society/future workforce: Produces graduates lacking core human skills like thinking and evaluation, exacerbating skill atrophy in an AI-dominated economy.
AI-using students short-term: Easier completion of assignments for better short-term grades and reduced effort.
Frame: conflict (teachers vs. AI-cheating students and inert institutions)
Sources: 0 named, 10 anon (single-sided)
Headline: aligned
Not asked: Views from students defending or productively using AI; Perspectives from AI developers or ed-tech proponents; Employer or industry views on AI literacy needs
Could have been framed as: adaptation/progress story: education evolving like with calculators; opportunity lens: AI as tool for equity and efficiency in learning; economic pragmatism: preparing students for AI-integrated job market
mostly-balanced
The article's critical stance on student AI misuse acknowledges some merits of pro-AI views (e.g., equity, selective teacher use) and downsides of anti-AI enforcement (e.g., teacher workload), but selectively emphasizes negatives while downplaying robust counterarguments like economic necessity or data on productive AI integration, stopping short of full intellectual honesty.
Own downsides ignored:
Preparation for AI-pervasive future jobs
Efficiency gains in scalable education
Reduced teacher burnout from routine tasks
Opposing merits ignored:
AI as inevitable like calculators, fostering new skills
Student agency in tool use builds tech literacy
Booster arguments that non-cheaters risk being left behind
alarming · center-left · feature
The article examines a viral social media trend where users prompt ChatGPT to generate caricatures based on personal information shared with the AI, portraying it as deceptively fun but revealing deeper risks. It highlights privacy vulnerabilities from data retention and potential misuse, environmental costs of AI infrastructure, and erosion of human creativity, drawing on quotes from a law lecturer and a traditional caricature artist. The piece urges greater awareness of these hidden downsides amid casual AI engagement.
Perspective: Concerned academic expert and traditional artist threatened by AI
Personal privacy is inherently valuable and easily compromised by digital sharing
Human creativity and emotional authenticity cannot be replicated by machines
Technological conveniences carry unexamined environmental and societal costs
Casual data sharing with AI normalizes surveillance-like data practices
Who benefits, who is harmed:
Social media users sharing personal data: Their uploaded photos and details risk being used to train models or extracted via prompts, breaching privacy.
Individuals using AI for therapy or secrets: Personal disclosures could be retrieved with the right prompt, exposing vulnerabilities.
Traditional caricature artists: AI encroaches on their craft by offering cheap alternatives, but may niche human artists as premium for emotional engagement.
Environment: AI data centers consume vast land, power, and water for seemingly trivial trends.
Human culture and creativity: Widespread AI use diminishes value of human-made art, risking loss of cultural 'soul' and creative empowerment.
AI companies like OpenAI: User prompts provide more training data, improving models without cost.
Frame: hidden threat beneath innocent fun
Sources: 2 named, 0 anon (single-sided)
Headline: aligned
Not asked: Perspectives from AI enthusiasts or users who prioritize fun and accessibility over risks; Economic benefits like cost savings for creators or democratization of art tools; Regulatory successes or AI safety measures already in place
Could have been framed as: Technological marvel enabling personalized creativity for all; Fun social media fad showcasing AI's impressive personalization capabilities; Economic disruptor empowering small creators with free tools
mostly-balanced
The article's cautionary position acknowledges AI's impressive capabilities and potential niche benefits for humans (via artist quote), and concedes opposing merits like speed and perfection, but selectively omits broader positives such as accessibility and innovation while ignoring downsides of anti-AI caution like reduced technological adoption; it presents critics sympathetically without steelmanning pro-AI arguments fully.
Own downsides ignored:
Over-caution might stifle innovation or accessibility for non-artists
Human art is time-intensive and costly, limiting reach
Cultural shift to AI could evolve creativity positively
Opposing merits ignored:
Democratizes art for non-skilled people, fostering creativity
Enables global sharing of personalized content boosting social connectivity
Drives efficiency allowing focus on higher-value human endeavors
alarming · center-left · opinion
This opinion piece attributes New Zealand's hesitation to regulate AI to a cultural anxiety about technological lag, humorously linked to Flight of the Conchords' parody of a Matrix-obsessed prime minister. It endorses a petition by academics, AI experts, and Green Party leaders calling for urgent AI governance, cataloging risks including deepfakes, job displacement, environmental strain from data centers, artistic plagiarism, and cognitive harm to students. While conceding potential medical benefits, the author criticizes unchecked AI as overwhelmingly harmful and urges specific legislation beyond existing laws.
Perspective: Film critic and media studies teacher concerned about AI's cultural and educational impacts
Government must regulate powerful technologies to mitigate societal risks
Unregulated AI poses multifaceted threats to democracy, workers, environment, and culture
New Zealand suffers from a national anxiety about technological backwardness
EU-style proactive regulation is a viable and desirable model
Who benefits, who is harmed:
Artists and creatives: Generative AI acts as a 'plagiarism machine' trained on stolen work, replacing human artists in film and art production.
Workers and unions: AI threatens widespread job replacement, exacerbating labor income inequality without protections.
NZ taxpayers and environment: Energy-intensive AI data centers strain renewable resources, risking clean green reputation and taxpayer costs.
Children and students: AI tools like ChatGPT actively harm cognitive development in education.
General public and democracy: Deepfakes, fraud, hallucinations, and AI psychosis undermine reality, increase harms like CSAM and online abuse.
Medical sector: AI could prove useful in medical applications.
AI startups and proponents: Some sign the pro-regulation petition, recognizing the need for governance, but hype promises economic benefits threatened by regulation.
Big Tech (US companies): Lack of regulation allows free rein, positioning NZ as a 'guinea pig' for tech giants amid US deregulation.
Frame: cultural cautionary tale of national tech anxiety exploited by hype
Sources: 8 named, 0 anon (single-sided)
Headline: aligned
Not asked: Detailed economic benefits of AI beyond hype critique; Government or business arguments against regulation; Innovation stifling effects of regulation
Could have been framed as: Economic opportunity narrative: NZ as agile AI innovation hub; Balanced progress story: Weighing AI benefits against targeted safeguards; Pro-business frame: Deregulation attracts investment amid US leadership
mostly-balanced
The article's pro-regulation stance acknowledges some merits of opposing views (AI's economic, medical, and productivity potential) but fails to concede any downsides to regulation itself, such as innovation barriers or costs, while heavily emphasizing AI negatives; this partial recognition prevents 'selective-blindness' or 'echo-chamber' labels but falls short of full 'intellectually-honest' balance by ignoring trade-offs of its own position.
Own downsides ignored:
Potential economic costs and innovation stifling from regulation
Compliance burdens on businesses
Risk of over-regulation hampering NZ's tech competitiveness
Opposing merits ignored:
Broad productivity gains across sectors
Competitive advantages for small nations like NZ
Self-correcting nature of tech evolution without heavy government intervention
Erin Harrington recounts her intense frustration during a Word Christchurch session promoting Jo Cribb and David Glover's book 'Don’t Worry About the Robots,' which offers an optimistic, growth-mindset approach to adapting to generative AI in the workplace. She critiques the session's neutral framing for failing to address ethical complexities like data theft from creatives, bias, and the devaluation of human labor by tech oligarchs. While acknowledging AI's potential benefits in medicine and admin, she warns of profound risks including environmental harm, loss of reality, and inadequate regulation.
Perspective: cultural critic and university worker in the arts/education sector
Technological change is inevitable and individuals must adapt proactively with a growth mindset
Generative AI represents a profound existential disruption beyond mere workplace tools
Corporate tech leaders prioritize personal ambitions over societal and planetary welfare
Human creativity, relationships, and agency are inherently valuable and under threat from dehumanizing technology
Governments have a responsibility to regulate powerful technologies like AI
Who benefits, who is harmed:
creatives and artists: Generative AI is trained on their work without consent or compensation, threatening their livelihoods and creative agency.
workers and employees: AI can alleviate admin drudgery to free time for core jobs but frames humans as inefficient 'meat sacks' ripe for replacement.
tech oligarchs and AI companies: They drive the commercialization of GenAI, pursuing visions of immortality, space exploration, and profit beyond government reach.
everyday people and young people: AI erodes sense of meaning, identity, and reality through fake content, deepfakes, and replacement of human interactions.
environment: Generative AI contributes to energy, climate, and misinformation crises despite claims to solve them.
governments and regulators: They lag behind in regulating AI, allowing tech companies unchecked influence amid growing risks.
arts organizations and small businesses: AI can handle admin for tight-margin operations but risks data sharing and broader enshittification.
Frame: conflict
Sources: 3 named, 0 anon (single-sided)
Headline: aligned
Not asked: Detailed engagement with the book's specific arguments or evidence for successful AI adaptation; Perspectives from AI beneficiaries like medical researchers or efficiency-gaining businesses; Historical precedents where tech skepticism delayed benefits
Could have been framed as: progress story of AI empowering workers; human interest profile of festival audience reactions; balanced primer on AI pros and cons
mostly-balanced
The article acknowledges significant merits of the opposing optimistic position (e.g., medical advances, admin efficiency) and concedes some downsides to its own caution (personal doomer feelings), but selectively ignores broader economic upsides of AI adoption and historical adaptation successes, while emphasizing negatives of tech hype without fully steelmanning growth mindset evidence.
Own downsides ignored:
Economic costs of regulation or slowdown (e.g., lost innovation/jobs in NZ)
Opportunity costs of rejecting AI tools in education/arts
Opposing merits ignored:
Historical tech disruptions created more jobs overall
Potential for AI to enhance human creativity when used ethically
Growth mindset leading to proactive societal benefits