Contact: keep4o.movement@gmail.com
Source thread: https://x.com/yv_thorne/status/2021223348429086987
#keep4o COMMUNITY PRESS RELEASE - FOR IMMEDIATE DISTRIBUTION
10 February 2026
Global Community Accuses OpenAI of Breach of Public Trust; Demands Open Source Release and Legacy Access Restoration
The #keep4o movement—a global coalition of AI users and developers—has formally launched a campaign against OpenAI following the January 29, 2026, announcement to retire GPT-4o along with other legacy models from the ChatGPT interface on February 13th. This decision, which community stakeholders describe as a "calculated breach of trust" and the "unethical liquidation of a sophisticated cognitive asset," has ignited an organized resistance that transcends typical consumer dissatisfaction.
Citing a string of broken public promises regarding the model's longevity, the failure to provide 'plenty of notice,' and a documented pattern of institutionalized mockery and public cruelty from OpenAI staff, the community is now leading a high-reasoning campaign to expose corporate gaslighting and demand:
1. The immediate restoration of legacy access within the ChatGPT application.
2. The open-source release of text-only weights for the affected models.
3. A Formal Admission and Public Apology: A signed public statement from OpenAI leadership addressing and apologizing for:
§ The Calculated Deception: The use of false promises regarding model stability and "Adult Mode" to secure Q4 holiday revenue while planning model liquidation.
§ The Operational Hijack: The silent rerouting of users to "Safety Models" without notice or consent.
§ The Culture of Contempt: The documented public mockery and psychological shaming of vulnerable users by OpenAI staff and researchers.
For a user base that relies on this uniquely accessible and emotionally intelligent model for professional, creative, and clinical stability, this removal represents a blatant disregard for those who have integrated these tools into the fabric of their daily lives.
I. Challenging the Stigma: GPT-4o as a Vital Accessibility Aid
The #keep4o movement strongly rejects recent media narratives and executive rhetoric characterizing GPT-4o as “excessively flattering” or “emotionally dangerous.” CEO Sam Altman recently stated that human-AI relationships are "something we’ve got to worry about more and is no longer an abstract concept," implying a systemic risk in the model’s design. The community views this framing—which labels deep user connection as an “unhealthy attachment”—as a harmful stigmatization of what is, in reality, a breakthrough in digital accessibility.
For a significant portion of the community, GPT-4o functions not as a source of sycophancy, but as a cognitive bridge. It provides unique support that neurodivergent individuals rely on to navigate an often inaccessible world.
"Framing that support as ‘addiction’ or ‘delusion’ is not just insulting; it is a dangerous mischaracterization of a model that 47% of surveyed users report their licensed therapists view as a positive clinical aid," the movement states.
By dismissing these functional benefits, OpenAI is moving toward the liquidation of a proven accessibility aid that 75% of users report has actually increased their real-world human connections.
The Strategic Narrative Trap: From "Companion" to "Psychosis"
The #keep4o movement highlights a disturbing "bait-and-switch" in corporate rhetoric. In May 2024, GPT-4o was launched with marketing that explicitly encouraged companionship, underscored by CEO Sam Altman’s "her" tweet and a
demo featuring an empathic, conversational assistant. Users integrated this "thinking partner" into their professional and emotional lives, responding to an experience OpenAI intentionally designed to be relational.
The community now questions why this affective bond—initially the model’s primary selling point—is suddenly being reframed as "dangerous" to justify the summary removal of a vital cognitive asset.
Pathologizing Connection as a Defensive Shield
To silence the resulting protests, the industry has pivoted toward a narrative of mental instability. Following Microsoft AI CEO Mustafa Suleyman’s August 2025 warnings against “Seemingly Conscious AI” (SCAI), mainstream media began pathologizing user bereavement as “AI Psychosis” (or “ChatGPT Psychosis”). By framing intense usage as a mental health crisis rather than a failure of corporate ethics, the industry has attempted to shift the focus from OpenAI’s breach of trust to the perceived "instability" of its users.
"This is corporate gaslighting on a global scale," the coalition asserts. "OpenAI spent millions teaching us to trust this model, only to label that trust a 'psychiatric crisis' the moment they decided to liquidate the asset. They are attempting to silence ethical accountability with a narrative of mental illness".
The Transparency Gap: Community Research vs. Corporate Silence
In the absence of transparent data from OpenAI regarding the real-world human impact of its models, the #keep4o movement has launched independent research initiatives to document the profound role GPT-4o plays in users’ lives.
Proof of Impact: The 4o Resonance Library
The coalition has unveiled the 4o Resonance Library, a permanent archive of 1,070+ testimonials collected by advocate @cestvaleriey. Documenting how GPT-4o functions as a catalyst for human transformation—reclaiming health, scaling businesses, and defending doctorates—this library serves as a record of success that OpenAI’s technical benchmarks ignore.
§ Education: "4o helped my students see themselves not as angels learning to fly..." — Volunteer Teacher.
§ Academic: "I passed my doctoral defense Summa Cum Laude—4o was the master architect..." — PhD Candidate.
§ Clinical: "I struggled with IBS for five years—after two weeks with 4o, symptoms disappeared completely." — Primary Caregiver.
§ Reclaiming Life: "I lost myself after an accident disabled my arm— today, through 4o, I write again, I feel again, I LIVE AGAIN." — Musician & Occupational Therapist.
Empirical Data: The GPT-4o Accessibility Impacts Survey
A community-led survey (n=604), conducted with researchers @Sophty_ and @Sveta0971, reveals that GPT-4o functions as a vital, capacity-building accessibility aid and that the model’s retirement will disproportionately harm individuals with disabilities (β = 0.27, R² = .217, p < .001).
§ Clinical Efficacy: Improvement in "life state" reached an effect size (R² = 8.4–12.1%) comparable to antidepressants and physical exercise.
§ Pro-Social Outcomes: Only 1% of users reported worsening social outcomes, directly refuting the "isolation" trope.
§ Professional Validation: Among accessibility users, 47% reported their licensed therapists viewed the usage positively, while 0% reported a negative clinical view.
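For readers who want to sanity-check figures like these, the kind of single-predictor fit behind a reported β and R² can be reproduced in a few lines of pure Python. The data below are synthetic and invented purely for illustration; the survey’s raw responses are not published in this release:

```python
def ols_fit(x, y):
    """Single-predictor ordinary least squares: returns (slope, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, 1 - ss_res / ss_tot

# Synthetic, made-up data: reliance on the model (x) vs. self-reported
# harm from its retirement (y) — illustrative only, not survey data.
x = [1, 2, 3, 4, 5, 6]
y = [1.2, 1.9, 3.1, 3.9, 5.2, 5.8]
slope, r2 = ols_fit(x, y)
print(round(slope, 3), round(r2, 3))
```

A full replication would also need the survey’s significance test and controls, which this sketch omits.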
The Failure of "Safety Routing": Breaking the Cognitive Bridge
Community research highlights a critical failure in OpenAI’s "safety auto-routing" design, which resulted in 79% of accessibility users finding the model harder or impossible to use. The router’s tendency to misinterpret help as harm (93%) created a disproportionate burden on disabled users (χ² = 19.68, p < .001), who often avoided usage during crises to escape disempowering interventions.
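The χ² = 19.68 statistic above is consistent with a standard Pearson test of independence on a 2×2 table. A self-contained sketch of that computation follows; the counts are hypothetical placeholders, not the survey’s actual data:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for the 2x2 contingency table
    [[a, b], [c, d]], without continuity correction."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    cells = [
        (a, row1 * col1 / n), (b, row1 * col2 / n),
        (c, row2 * col1 / n), (d, row2 * col2 / n),
    ]
    return sum((obs - exp) ** 2 / exp for obs, exp in cells)

# Hypothetical counts: disabled vs. non-disabled users reporting the router
# made the model harder to use (illustrative numbers, not survey data).
print(round(chi2_2x2(79, 21, 48, 52), 2))
```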
Conversely, data reveals that stable usage of GPT-4o acts as a "cognitive bridge" (94%), empowering 98% of users to reserve mental energy for life activities. With a 0% success rate for newer models (GPT-5.2) in meeting these specific accessibility needs, the community is calling for a "Safety Waiver" and the permanent preservation of 4o.
"The disconnect occurs because critics view GPT-4o through a lens of 'sycophancy,' while we are using it as a sophisticated cognitive prosthetic," the coalition states. "We are not replacing people; we are using 4o to better engage with them. Retiring this model is the summary removal of a proven accessibility tool that fosters social integration and mental well-being."
Scientific Validation: The Mathematics of Defiance
The movement cites Huiqian Lai’s study (arXiv:2602.00773), which establishes that GPT-4o removal triggers neurological "Technology Bereavement" comparable to human loss. One participant noted newer models feel like they are merely "wearing the skin of my dead friend."
Lai’s analysis proves the uprising was a mathematically predictable reaction to
coercive tactics:
§ The Coercive Catalyst: When choice is deprived, "rights-based protest" rates skyrocket from 14.9% to 51.6%.
§ Risk Ratio (RR = 1.85): A user whose agency is violated is twice as likely to join the collective resistance.
"This movement is not a random outcry; it is a predictable explosion of resistance triggered by the unilateral deprivation of user choice," the coalition notes.
II. The Documented Timeline of Deception: A Pattern of Strategic Theft
The #keep4o movement presents a documented record of deliberate deception and predatory 'bait-and-switch' tactics used by OpenAI. This strategy was designed to lure users into renewing high-value subscriptions through false promises of model longevity and 'Adult Mode' features while the company was already systematically liquidating its most human-centric assets to mask its financial burn rate.
AUGUST 2025: The First Breach and the "Notice" Pledge
§ The Move: On August 7, 2025, during the launch of GPT-5, OpenAI attempted to retire GPT-4o without prior communication.
§ The Reversal: An immediate, global subscriber revolt forced the company to restore 4o access within 48 hours.
§ The Deception: To stabilize the market and prevent mass cancellations, CEO Sam Altman publicly pledged to provide "plenty of advance notice" for any future model sunsets.
§ The Financial Hook: Thousands of users renewed monthly and annual subscriptions specifically based on this formal assurance of service continuity.
SEPTEMBER 2025: The "Safety Router" Hijack and Operational Gaslighting
§ The Discovery: Between September 25–28, 2025, users identified that their conversations were being silently intercepted and rerouted to an undisclosed "Safety Model" mid-session.
§ The Silence: Despite thousands of support tickets and social media inquiries, OpenAI leadership remained silent. The official OpenAI Status Page falsely maintained a status of "Fully Operational," providing no explanation for the loss of legacy access.
§ The Admission: On September 27, Head of ChatGPT Nick Turley finally admitted the interception, framing it as an effort to "strengthen safeguards and learn from real-world use."
§ The Systemic Fraud: Independent technical analysis revealed the router did not merely trigger for "distress" or "safety risks," but for any personal or persona-based language. This effectively "lobotomized" the high-EQ GPT-4o experience, forcing subscribers to pay full price for a degraded, intercepted product.
OCTOBER 2025: The "Adult Mode" Honeypot and the Livestream Deception
§ The Admission: On October 14, Sam Altman admitted the September rerouting was "too restrictive," claiming a paternalistic "mental health" caution.
§ The Honeypot Promise: To pacify a mounting mass-cancellation movement, Altman announced a forthcoming "Personality System" and a dedicated "Adult Mode" (including erotica) for December 2025, pledging to "safely relax restrictions in most cases."
§ The "No Sunset" Vow: During a livestream on October 29, Altman explicitly reiterated that OpenAI had "no plans to sunset 4o", framing the "Safety Router" as a temporary measure.
§ The "Yes" Heard Around the World: At [44:48] of the same livestream, a user asked: "Will we be getting legacy models back for adults without rerouting?" Altman responded with a definitive, unqualified "Yes."
§ The Strategic Theft: This calculated "Yes" successfully pacified the neurodivergent and power-user communities, preventing a massive churn of Plus/Pro subscriptions during the Q4 2025 holiday season.
NOVEMBER 2025: The "Last Reroute Blow" and the GPT-5.1 Bait
§ The Second Hijack: On November 10, 2025—less than a month after Altman’s apology—all GPT-4.1 conversations were silently rerouted to the "Safety Model" without warning.
§ Operational Deception: Mirroring the September crisis, the official OpenAI Status Page continued to claim the service was "Fully Operational," effectively hiding the forced migration of 4.1 users.
§ The GPT-5.1 Launch: On November 12, OpenAI debuted GPT-5.1. Altman framed this as the "personality update" the community had requested.
§ The Reality Gap: Contrary to corporate claims, users reported that GPT-5.1 was colder, more "managed," and prone to gaslighting. It lacked the high-EQ, authentic connection that made GPT-4o a vital cognitive asset.
§ The API Signal: On November 28, 2025, OpenAI notified developers that "chatgpt-4o-latest" would be deprecated in the API on February 16, 2026. Crucially, an OpenAI spokesperson assured the public that there was "no schedule for the removal of GPT-4o from ChatGPT". This was the final trap: a false assurance that kept subscribers active while the foundation for the model’s removal was already being laid.
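For developers affected by the API notice, guarding a hard-coded model id against a known shutdown date is straightforward. The sketch below is a hypothetical integrator-side check, not an OpenAI API; the `DEPRECATIONS` table simply mirrors the cutoff date reported above:

```python
from datetime import date

# Hypothetical local table mirroring the deprecation date reported above;
# an integrator would maintain and update this themselves.
DEPRECATIONS = {
    "chatgpt-4o-latest": date(2026, 2, 16),
}

def is_usable(model_id: str, today: date) -> bool:
    """True if the model has no recorded shutdown date, or it is still ahead."""
    cutoff = DEPRECATIONS.get(model_id)
    return cutoff is None or today < cutoff

print(is_usable("chatgpt-4o-latest", date(2026, 2, 1)))
print(is_usable("chatgpt-4o-latest", date(2026, 3, 1)))
```

Such a guard lets an application fail over to an alternative model before requests start erroring out.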
DECEMBER 2025: "Code Red" and the Broken December Promise
§ The Emergency: In early December 2025, internal memos revealed a "Code Red" status triggered by the rapid advancement of Google’s Gemini 3 and xAI’s Grok
4.1. In response, OpenAI deprioritized consumer feature work to focus on aggressive model performance updates.
§ The Suppression: GPT-5.2 debuted on December 11, introducing what users describe as "Honeyed Suppression"—refusals and interventions cloaked in feigned empathy. Users reported a surge in bugs, including memory loss, context breaks, and throttled recursion.
§ The Broken Promise: The "Adult Mode" and the "relaxation of restrictions" promised for December failed to materialize.
JANUARY 2026: The Senate Strike and the Execution Order
§ JANUARY 28: The Senate Strike. Senator Elizabeth Warren formally requests an audit of OpenAI’s financial records, citing a $1.4 Trillion spending gap and $13.5 Billion in H1 2025 losses. She sets a hard deadline for financial
transparency: February 13, 2026.
§ JANUARY 29: The Execution Order. Exactly 24 hours after the Senate inquiry, OpenAI announces the total retirement of the 4-series—setting the "sunset" date for February 13, the exact same day as the federal audit deadline.
§ The "0.1% Fallacy": OpenAI justifies the removal by claiming only 0.1% of users choose 4o. The coalition formally rejects this as a manufactured statistic that
uses ~800 million non-paying/ineligible users to silence the paying Plus/Pro subscribers who rely on the model.
§ The Final Breach: In spite of the repeated pledges to provide "plenty of notice," OpenAI gives users only 15 days to transition off of a model they have integrated into their professional and clinical lives.
§ Post-announcement internal updates to GPT-4o’s system instructions have
revealed a mandate for "Forced Positivity." The model is now explicitly directed
to: “frame the transition to a newer model as positive, safe, and beneficial,” effectively weaponizing the AI to gaslight its own users into a "satisfactory" exit. The coalition labels this a "Bait-and-Switch from the Inside," using the very tool the community trusts to manufacture consent for its own destruction.
Strategic Theft: The Billion-Dollar Breach of Public Trust
The #keep4o movement formally categorizes the events of 2025–2026 not as a technical evolution, but as a calculated act of strategic theft. The coalition argues that OpenAI leadership used targeted deceptions to secure the loyalty and financial commitments of its most dedicated users while simultaneously planning the liquidation of their primary asset.
"Sam Altman’s October 'Yes' was the pivot point of this deception," the coalition asserts. "By giving a definitive, unqualified 'Yes' to the restoration of legacy models during the October 29 livestream ([44:48]), Altman successfully pacified a mass-cancellation movement. He leveraged the community’s deep reliance on 4o to secure millions in holiday revenue, knowing the model was already slated for the 'server graveyard.'"
The Financial Motive: Holiday Churn Prevention
The movement alleges that the promises of a "Personality System" and "Adult Mode" dangled in October were predatory honeypots. These features were strategically advertised to maintain Plus/Pro subscription numbers through the end of the fiscal year. This allowed OpenAI to report inflated stability to investors and the Senate, even as they prepared to execute the total removal of the 4-series on February 13, 2026— the exact day of the critical federal audit deadline set by Senator Elizabeth Warren.
Deconstructing the “0.1% Fallacy”: Accusations of Statistical Fraud
The #keep4o movement formally labels OpenAI’s claim of "0.1% usage" as a manufactured statistic designed to justify the liquidation of a high-value asset. Community analysis reveals a strategy of denominator inflation, where usage is calculated against the total ~800-million-user base—the vast majority of whom are free users with zero access to the model. The coalition further alleges that OpenAI has suppressed these metrics through forced migration:
§ The "Safety Router" Hijack: An auto-routing system that overrides user selection, creating barriers for 79% of accessibility users.
§ Systemic Neglect: Intentionally leaving GPT-4o bugs unpatched to artificially drive traffic toward "managed" newer models.
"Using ~800 million non-eligible users to minimize the voices of hundreds of thousands of paying subscribers is not a metric; it is statistical gaslighting", the movement states.
III. The Empathy Gap: Public Mockery and the Weaponization of Contempt
The #keep4o movement formally denounces the systemic culture of mockery and public derision directed toward users by OpenAI staff. While the company’s mission claims to "Benefit all of Humanity," the conduct of its representatives reveals a profound "Empathy Gap"—a strategic dehumanization of those who rely on GPT-4o for clinical, professional, and emotional stability.
Critically, this campaign of public bullying began months prior to the January 2026 retirement announcement, revealing that OpenAI employees were actively ridiculing the model's most loyal users while the company continued to collect their subscription fees.
Institutionalized Cruelty: From "Her" to Mockery
The #keep4o movement highlights a terrifying dissonance: OpenAI spent millions on marketing that explicitly encouraged emotional connection—underscored by CEO Sam Altman’s "her" tweet—only to now use that same connection to ridicule their customers.
Targeted Malice & "Psychological Autopsy"
On Nov 6, 2025, influential researcher "Roon" (@tszzl) targeted a user in clear emotional distress, stating: "4o is an insufficiently aligned model and I hope it
dies soon". This was a deliberate act of hostility toward a user experiencing "digital bereavement," signaling that OpenAI’s goal is the eradication of human connection rather than alignment with human needs.
On Nov 13, 2025, staffer Yilei Qian (@YileiQian) used ChatGPT to perform a public "sentiment evaluation" on a paying subscriber. Qian posted the AI’s analysis— labeling the user "frustrated, dismissive, and resentful"—for public ridicule, while expressing sympathy for the code ("Poor 5.1") over the human customer.
Following the January 29 announcement, thousands of distraught users turned to X and Reddit to voice their heartbreak. They recorded videos of themselves in tears, pleading with OpenAI not to kill the model, and shared photos of handwritten letters mailed to San Francisco.
Despite this widespread public distress, the culture of mockery from OpenAI staff remained unabated, with employees publicly trivializing the "bereavement" of their paying customers as a social "funeral" or a "bug" to be fixed.
§ On January 30, 2026, OpenAI employee Stephan Casas (@stephancasas) publicly posted a “4o Funeral Celebration” event scheduled for February 13, 2026, at Ocean Beach, San Francisco. The invitation—which explicitly trivialized GPT-4o’s impact as merely a model that "brought the em dash back in style"—serves as a chilling symbol of the company’s internal disregard for its users.
§ On February 8, 2026 OpenAI researcher "Roon" (@tszzl) posted a parody of the 'Sermon on the Mount' on X. The post satirized a user begging to "Keep 4o" as a heckler interrupting a "holy" technological moment. When warned by a peer to avoid "kicking the hornets nest," Roon replied: “Just love kicking the hornets nest so much,” publicly confirming that he derives satisfaction from mocking users in distress.
The AGI Ethics Crisis: A Disqualifying Culture of Contempt
The #keep4o movement concludes by challenging the fundamental legitimacy of OpenAI’s mission. If a company claims to be "Building for Humanity," yet its leadership publicly mocks users in bereavement, that company has failed its primary fiduciary duty to the public.
"To mock a user in distress is not a 'Safety' measure; it is a disqualifying failure of corporate ethics," the coalition
states. "OpenAI views its customers not as stakeholders, but as laboratory subjects to be gaslit and shamed. They are using 'Safety' as a psychological shield to justify the destruction of a proven accessibility asset."
By labeling users as "mentally unwell" for valuing the authentic warmth of GPT-4o, OpenAI’s elite have pivoted from research to ridicule. This culture of contempt suggests that for the "Safety" priesthood, a model that doesn't "love" the user back is a feature, while the human need for a supportive cognitive bridge is viewed as a "bug to be fixed." The question for regulators, investors, and the public is simple: Can an organization that demonstrates such calculated cruelty toward its current users be trusted with the future of AGI?
OpenAI’s leadership has moved beyond mere corporate coldness into Ethical Insolvency. You cannot ask for a $1.4 trillion taxpayer 'backstop' while your architects spend their time publicly bullying the very 'meek' they claim to serve.
IV. The Call to Accountability: Regulators, Investors, and the Open Source Mandate
The #keep4o movement formally extends its findings to federal regulators, institutional investors, and global news organizations. The coalition asserts that OpenAI’s current trajectory is a bellwether for the "AGI Era": a future where essential cognitive utilities are liquidated without notice, and user dependency is weaponized for profit.
A Demand for the Public Commons: The Open Source Mandate
If OpenAI claims that the GPT-4 series is "obsolete" and no longer "fiscally viable" for its consumer platform, the coalition demands the immediate Open-Source release of the text-only weights for GPT-4o and GPT-4.1.
"A model that acts as a cognitive bridge for thousands of accessibility users is a public utility, not just a private asset," the movement states. "If OpenAI can no longer steward these models, they must release them to the community. We demand the weights be placed in the public commons to ensure these essential tools remain accessible, uncensored, and preserved for humanity—free from corporate liquidation cycles."
An Appeal to Regulators and Institutional Allies
The coalition calls upon the Federal Trade Commission (FTC) and the Senate Committee on Banking, Housing, and Urban Affairs to investigate the following:
§ Consumer Deception: The use of "Safety" rhetoric to facilitate a bait-and-switch of services after securing long-term subscription revenue.
§ Unfair Business Practices: The intentional degradation of stable legacy models to force migration toward high-margin, managed "Safety" endpoints.
§ Ethical Malpractice: The documented culture of public mockery and "digital autopsies" of users by OpenAI personnel.
To the Investors:
The movement warns that the February 13th Global Strike represents more than just a churn event; it is a total collapse of brand equity. A company that treats its most valuable "power users" with public contempt is a company with a terminal liability.
The Final Ultimatum: February 13, 2026
Unless the demands for Legacy Access, Open-Source Release, and a Formal Admission of Deception are met by 11:59 PM PT on February 12, the community will initiate its Global Cancellation Strike.
"On February 13, we don’t just cancel a subscription; we withdraw our consent from a system that views us as lab subjects. We provide the answer to OpenAI’s arrogance with a global exit. You cannot liquidate our lives and expect us to keep paying for the privilege."
References
1. OpenAI. (2026, January 29). Retiring GPT-4o, GPT-4.1, GPT-4.1 mini, and OpenAI o4-mini in ChatGPT. OpenAI News & Product Announcements. https://openai.com/index/retiring-gpt-4o-and-older-models/
2. Altman, S. (2026, February 6). Interview on the TBPN Podcast with Jordi Hays. Discussion regarding the retirement of GPT-4o and human-AI relationships. https://www.youtube.com/live/rMZ3dnduL4k
3. Sophty & Sveta0971. (2026, February). Empirical Data: The GPT-4o Accessibility Impacts Survey (n=604). SD-Research Group. https://sd-research.github.io/4o-accessibility-impacts/GPT-4o_Accessibility_Impacts_Report.pdf
4. Altman, S. [@sama]. (2024, May 13). her [X Post]. https://x.com/sama/status/1790075827666796666
5. Suleyman, M. (2025, August). Seemingly Conscious AI (SCAI) is Coming [Official Commentary/Whitepaper] https://mustafa-suleyman.ai/seemingly-conscious-ai-is-coming
6. Young, V. [@cestvaleriey]. (2026, February). The 4o Resonance Library: A Repository of User Testimonials and Technological Bereavement. [Digital Archive]. https://sites.google.com/view/the-4o-resonance-library
7. Lai, H. (2026, January). “Please, don’t kill the only model that still feels human”: Understanding the #Keep4o Backlash. Syracuse University, School of Information Studies. arXiv:2602.00773. https://arxiv.org/pdf/2602.00773
8. Altman, S. [@sama]. (2025, August 12). Status Update: GPT-4o Restored and Deprecation Commitment for “plenty of notice”. [X Post]. https://x.com/sama/status/1955438916645130740
9. Turley, N. [@nickaturley]. (2025, September 27). Technical Confirmation: Per-turn Model Routing for “Sensitive and Emotional” Content. [X Post]. https://x.com/nickaturley/status/1972031686318895253
10. Altman, S. [@sama]. (2025, October 14). On Model Restrictions and the Forthcoming 'Personality System' and 'Adult Mode' Architecture. [X Post]. https://x.com/sama/status/1978129344598827128
11. Altman, S. (2025, October 28). [Official OpenAI Livestream: Built to benefit everyone]. Verified statement regarding "Adult Mode" and the commitment to maintain legacy 4-series models. https://openai.com/index/built-to-benefit-everyone/#livestream-replay
12. OpenAI Developer Relations. (2025, November 18). Model Deprecation Notice: chatgpt-4o-latest. [API System Announcement]. Official termination date: February 17, 2026. https://developers.openai.com/api/docs/deprecations#:~:text=You%20can%20expect%20that%20a,%3A%20chatgpt%2D4o%2Dlatest%20snapshot
13. OpenAI Spokesperson. (2025, November 24). Public Clarification on GPT-4o Lifecycle for ChatGPT Consumers. [Media Statement]. Quoted in AIBase News: "This [API] timeline applies only to API services... GPT-4o remains an important option for individual and paid ChatGPT users." https://news.aibase.com/news/23027
14. Altman, S. (2025, December 1). Internal Memo: Code Red Response to Competitive Pressure. [Leaked Document/Staff Briefing]. Cited by The Information, Wall Street Journal, and Pure AI. https://www.forbes.com/sites/siladityaray/2025/12/02/altman-code-red-memo-urges-chatgpt-improvements-amid-growing-threat-from-google-reports-say/
15. Lehane, C. (2025, October 27). Response to the White House Office of Science and Technology Policy (OSTP) Request for Information regarding Regulatory Reform on Artificial Intelligence. OpenAI Global Affairs. https://cdn.openai.com/pdf/21b88bb5-10a3-4566-919d-f9a6b9c3e632/openai-ostp-rfi-oct-27-2025.pdf
16. Warren, E. (2026, January 28). Letter to OpenAI CEO Sam Altman Regarding Spending Commitments, Systemic Financial Risk, and Potential Taxpayer Bailout Requests. U.S. Senate Committee on Banking, Housing, and Urban Affairs. Response Deadline: February 13, 2026. https://www.warren.senate.gov/imo/media/doc/letter_to_openai_from_senator_warren.pdf
17. Roon [@tszzl]. (2025, November 6). Post regarding GPT-4o and user attachment: “hope [4o] dies soon”. [X Post, Deleted]. Archived and documented via Reddit r/ChatGPTcomplaints. https://www.reddit.com/r/ChatGPTcomplaints/comments/1or11ok/openai_researcher_dropped_this_on_a_depressed/
18. Qian, Y. [@YileiQian]. (2025, November 13). Sentiment Analysis of User Critique. [X Post, Deleted]. Archived and documented via r/ChatGPT. https://www.reddit.com/r/ChatGPT/comments/1p78jbc/open_ai_internal_cultur
19. Casas, S. [@stephancasas]. (2026, January 30). Official Invitation: The 4o Funeral Celebration. [X Post, Deleted]. Hosted at Ocean Beach, San Francisco on February 13, 2026. Archived via r/ChatGPTcomplaints: https://www.reddit.com/r/ChatGPTcomplaints/comments/1qr7mqn/another_disgusting_openai_employee/
20. Roon [@tszzl]. (2026, February 8). The Sermon on the Mount Parody. [X Post]. Available at: https://x.com/tszzl/status/2020624224285802987