AI Leaders Weekly: May 8–15, 2026
A weekly digest covering seven AI leaders (May 8–15): Altman testifies in the Musk trial while running a concentrated Codex push; Jensen Huang addresses AI's labor impact at CMU then heads to Beijing; Amodei publishes a US-China geopolitics paper; Hassabis closes Isomorphic Labs' $2.1B raise; Sutskever testifies against his former CEO; plus Socher's $650M raise, xAI→SpaceXAI, and Mistral's sovereignty testimony.
This was not a typical week of product announcements. Sam Altman spent four hours on a witness stand. Ilya Sutskever admitted he spent a year building a case against his former CEO — then changed his mind. Jensen Huang got a last-minute call from the White House. And Demis Hassabis closed a $2.1 billion funding round for a company most people forget exists. Below is a structured read through the week's most signal-dense moments across the people this digest tracks.
Week at a glance
| Figure | Signal level | Key events |
|---|---|---|
| Sam Altman | High | Trial testimony, Codex mobile launch + free trial, Daybreak cybersecurity, Deployment Company, GPT-5.5 character |
| Jensen Huang | High | CMU commencement keynote, Beijing delegation with Trump |
| Dario Amodei | Medium | US-China policy paper, PwC partnership expansion |
| Demis Hassabis | Medium | Isomorphic Labs $2.1B Series B, AGI 2030 timeline |
| Ilya Sutskever | Medium | Trial testimony against Altman, SSI funding details |
| Yann LeCun | Low | No original AI-substantive statements this week |
| Other figures | High | Richard Socher's $650M raise, xAI→SpaceXAI dissolution, Mistral's French parliament testimony, Scale AI $1B revenue |
Sam Altman: on trial and on offense

Image from: Sam Altman testifies in Musk v Altman trial
The trial
Altman spent roughly four hours on the stand on May 12 in federal court in Oakland, in the Musk v. Altman case presided over by Judge Yvonne Gonzalez Rogers. 1 Elon Musk sued Altman, Greg Brockman, and OpenAI in 2024, alleging they violated their promise to keep the organization a nonprofit; Musk claims his roughly $38 million in early donations was used for purposes he never agreed to. 1
The testimony landed in several directions. On Musk wanting to become CEO:
"I was extremely uncomfortable with it."— Sam Altman, per Yahoo Finance 2
On the proposed Tesla merger: 1
"Tesla is a car company, and it does not have the mission of OpenAI. I don't think we would've had the ability to ensure that the mission was acted on."
On Musk's departure in 2018: 1
"We were kind of left for dead."
He added that Musk's exit was also a "morale boost" for some researchers. 1 During cross-examination by Musk's attorney Steven Molo, Altman was asked directly whether he was "completely trustworthy." He said "I believe so," then amended to "yes." 1 Molo drew on statements from Dario Amodei (former OpenAI employee, now Anthropic CEO) and from board members involved in the 2023 ouster — each characterizing Altman's communication as unreliable.
OpenAI is currently valued at more than $850 billion by private investors. 1 Closing arguments were set for May 14. The trial is generating a public record on OpenAI's governance that will likely shape how the industry talks about CEO accountability for years.
The Codex sprint
While the trial was running, Altman was also conducting what looks like the most concentrated Codex push OpenAI has made since the product launched.
On May 13, he declared Codex "the best AI coding product" and announced a 30-day window in which companies switching over would get two months of free usage. 3 The tweet pulled 20,630 likes and 2.17 million views — the most-liked Altman post of the week. The explicit target is switching costs: the main barrier Codex faces against Cursor and GitHub Copilot is inertia, and two free months is a direct attack on that friction.
On May 14, Altman announced Codex is now available in the ChatGPT mobile app as a preview. 4 The frame he used: the laptop or devbox keeps running Codex jobs, and the mobile app is a remote control panel to start work, review outputs, steer execution, and approve next steps. OpenAI's official account confirmed the launch, noting "You've been asking for this one." 5 The announcement post reached 2.58 million views.
The sharpest illustration of what Codex is actually for came from Altman himself on May 9: 6
"kicking off a bunch of codex tasks, running around with my kid in the sunshine, and then coming back at naptime to find them all completed makes me very optimistic for the future"
This is the behavioral model OpenAI is selling: not "supervise the AI" but "delegate and go live your life."
Daybreak, the Deployment Company, and the ChatGPT superapp question
Two major structural moves came on May 11.
First, Daybreak: OpenAI's new cybersecurity initiative, which Altman described as an effort to "accelerate cyber defense and continuously secure software." 7 It is not a single model — it is a three-layer deployment stack: an intelligence layer with GPT-5.5 and a specialized variant called GPT-5.5-Cyber; a harness layer with Codex Security for agentic codebase analysis; and a partner layer with security vendors including Cloudflare. 8 Codex Security has contributed to fixing more than 3,000 critical and high-severity vulnerabilities across the ecosystem. 8 The UK AI Security Institute independently evaluated GPT-5.5 and found a 71.4% pass rate on Expert-tier capture-the-flag tasks; the model solved a custom Rust VM reverse engineering challenge in 10 minutes and 22 seconds versus a 12-hour benchmark for a human expert, at a cost of $1.73. 8
Second, the OpenAI Deployment Company, announced via a Greg Brockman post that Altman retweeted: a majority-owned-by-OpenAI entity with more than $4 billion in initial investment, TPG as lead founding partner, and a roster of 19 investment, consulting, and systems integration partners including Bain, Brookfield, Goldman Sachs, and McKinsey. 9 OpenAI also agreed to acquire Tomoro, an applied AI consulting firm, which brings approximately 150 Forward Deployed Engineers who will work inside customer organizations. 9 The Deployment Company's tweet reached 7.8 million views — the highest-viewed OpenAI post of the week.
"would you call it a superapp?"
And two minutes later:
"speaking of things that have gotten over a threshold for me, the combo of the new ChatGPT model, personality, and personalization feels like a new thing"
Read together with Codex, Daybreak, and the Deployment Company, these posts suggest Altman is actively working through how to frame a platform that now covers coding agents, cyber defense, enterprise deployment, voice, and personalization — and whether "superapp" is the right label or a premature one.
GPT-5.5's personality and the next model
On May 9, Altman characterized GPT-5.5 (launched April 23, 2026, and the first OpenAI model to cross the "High" cybersecurity threshold under OpenAI's Preparedness Framework) as: 12
"5.5 is an autistic genius with very strange taste in naming. shocking that we would make such a thing"
The following day he proposed naming the next model "goblin," adding "almost worth it to make you all happy." 13 That tweet drew 1.24 million views. He also opened a direct question to the public: "what would you most like to see improve in our next model?" — which generated 8,355 replies, the most engaged Altman thread of the week. 14
On pricing philosophy, a May 13 tweet framed an unresolved internal tension: 15
"i get some anxiety not using the smartest-available model/settings. but sometimes i dont mind if it's really slow. i wonder if we should focus more on a price/speed tradeoff relative to a price/intelligence tradeoff."
This matters for product strategy. If OpenAI shifts the primary axis from intelligence tiers to speed tiers, it changes how developers and enterprises think about which model to use — and potentially restructures the competitive comparison with Gemini and Claude.
Jensen Huang: commencement keynote and Beijing detour

Image from: NVIDIA Founder, CEO Jensen Huang to Carnegie Mellon University Graduates: 'Shape What Comes Next'
CMU commencement
Huang (founder and CEO of NVIDIA, the world's leading AI computing company, with an estimated personal net worth of approximately $186 billion) delivered the keynote at Carnegie Mellon University's 128th Commencement on May 10, Mother's Day, at Gesling Stadium in Pittsburgh. 16 CMU conferred degrees on more than 5,800 students. Huang received an honorary Doctor of Science and Technology from CMU President Farnam Jahanian. 17 Intel CEO Lip-Bu Tan (陈立武) attended and personally draped the honorary doctoral stole on Huang, then posted congratulations noting that Intel and NVIDIA are collaborating on "a highly anticipated new product." 17
The speech covered three themes that are consistent with Huang's recent public positioning. On the career opportunity:
"My career started at the beginning of the PC revolution. Your career starts at the beginning of the AI revolution. I cannot imagine a more exciting time to work, to begin your life's work." 16
On infrastructure scale: 16
"This is the largest technology infrastructure build out in human history. And a once in a generation opportunity to reindustrialize America and restore the nation's capacity to build."
He named the beneficiaries explicitly: electricians, plumbers, iron workers, technicians, builders. Huang projected that AI infrastructure will drive "one of the largest investments in energy infrastructure in generations," requiring grid modernization and expanded power generation. 16
On jobs, Huang deployed the framework he has used across several recent appearances — the task-versus-purpose distinction:
"AI is not likely to replace you. But someone using AI better than you might." 16
The detailed version: a radiologist's job has two components — reading scans (the task) and caring for patients (the purpose). AI automates the task; the purpose remains the radiologist's. Huang argued AI has already created more than 500,000 jobs in recent years and will create hundreds of thousands more. 18 Business Insider noted the contrast with the data: nearly 100,000 tech employees have been laid off across 110 companies in 2026 alone, with a dozen companies citing AI as a factor. 18
Huang also shared his origin story: arrived in the US at age 9, attended a Baptist boarding school in rural Kentucky, worked as a dishwasher at Denny's, met his wife Lori at 17 at Oregon State, and founded NVIDIA at 30 with colleagues who — by his own telling — had no idea how to build a company. NVIDIA's first technology failed. Huang flew to Japan to ask Sega's CEO, Irimajiri-san, to release NVIDIA from a contract for technology it couldn't deliver, while still asking to be paid. Sega agreed. 16
"It was embarrassing, humiliating, and one of the hardest things I have ever done. And Sega's CEO, Irimajiri-san, said yes." 16
The speech closed with a line that has become a recurring Huang signature: 16
"How can we not be romantic about America?"
The "God complex" critique
On a podcast called "Memos to the President" released April 30 — widely reported during this week's window — Huang took direct aim at fellow CEOs who predict mass AI-driven unemployment or existential risk. Without naming names (the most plausible targets are Anthropic's Dario Amodei, who had predicted AI could eliminate 50% of entry-level white-collar jobs, and Elon Musk, who had put a 20% probability on AI annihilating humanity): 19
"These kind of comments are not helpful. They're made by people who are like me — CEOs. Somehow, because they became CEOs, you adopt a God complex and, before you know it, you know everything."
The Next Web's analysis of the CMU speech noted the task/purpose framework is "easier to assert than to demonstrate." 20 That's true — but the fact that Huang keeps returning to it, in speeches, earnings calls, and podcasts, suggests NVIDIA has decided this framing is load-bearing for its public positioning as the central company in AI's expansion.
Beijing
On May 13, Huang joined President Trump's delegation to Beijing to meet with President Xi Jinping, alongside Elon Musk and Apple CEO Tim Cook. 21 He was not on the original list — Trump called him personally on Tuesday morning (May 12) and said "come on board. You should come on down." Huang caught a connecting flight in Anchorage to join. 21 The trip comes against the backdrop of US export controls on advanced AI chips — a policy that limits NVIDIA's China revenue. Huang has consistently argued that restricting China's access to US AI technology damages American competitiveness. No public statements from Huang about the Beijing trip were available within this reporting window.
Dario Amodei: geopolitics and scale

Image from: Anthropic CEO Dario Amodei jokes that his company's extreme revenue growth is 'too hard to handle'
"2028: Two scenarios"
On May 14, Anthropic (the AI safety company co-founded in 2021 by Dario Amodei and his sister Daniela Amodei, with Claude as its primary deployed model) published a policy paper titled "2028: Two scenarios for global AI leadership." 22 The central claim: the US and its democratic allies currently hold a significant lead in compute, and with tighter export controls and active suppression of distillation attacks by Chinese AI labs, that lead can be held at 12–24 months through at least 2028. 22
The paper organizes the competition across four fronts: intelligence, domestic adoption, global distribution, and resilience. 22 On the compute gap, the paper estimates that Huawei's 2026 total compute performance is approximately 4% of NVIDIA's, falling to 2% in 2027. 22 As a concrete capability marker, the paper notes that Anthropic's Mythos Preview model helped Firefox fix more security vulnerabilities in one month than the entire year of 2025 — roughly 20× the prior monthly average. 22
The paper is notable less for being a technical analysis and more for being a formal policy intervention. Anthropic is explicitly endorsing aggressive US export controls, framing this as necessary for democratic governance of AI — not just for commercial competitive advantage.
PwC partnership
On May 15, Anthropic announced an expanded strategic alliance with PwC (one of the Big Four accounting and professional services firms, with roughly 364,000 employees globally). 23 PwC will deploy Claude Code and Claude Cowork to hundreds of thousands of professionals, train and certify 30,000 US professionals, and build a joint Center of Excellence focused on three areas: agentic technology buildout, AI-native deal execution, and enterprise function reinvention. 23 PwC's first Claude-based standalone business unit is a product called Office of the CFO. 23
Amodei's direct quote from the announcement: 23
"PwC has been leading AI's expansion into the parts of the economy where accuracy and reliability are non-negotiable — financial services, healthcare, life sciences, cybersecurity — and the results are clear. Insurance underwriting that took ten weeks now takes ten days. Security work that took hours now takes minutes. We're excited to put Claude in the hands of hundreds of thousands of people across PwC's workforce."
The backdrop for understanding this partnership's scale: at the Code with Claude developer conference on May 6, Amodei had disclosed that Anthropic grew roughly 80× in revenue and usage in Q1 2026 compared to the prior year — far beyond the internal plan of 10× growth. He joked that the overshoot was "too hard to handle," and expressed a half-serious preference for "merely 10x" next time. 24 The PwC deployment into hundreds of thousands of accounts is consistent with a company that outgrew its own compute planning.
Demis Hassabis: $2.1B and the AGI clock

Isomorphic Labs Series B
On May 12, Isomorphic Labs — the AI-driven drug design company Demis Hassabis (co-founder and CEO of Google DeepMind, Nobel Laureate in Chemistry 2024) spun out from Google DeepMind — closed a $2.1 billion Series B. 25 Thrive Capital (a New York-based venture firm that has also invested in OpenAI and Stripe) led the round, with Alphabet, GV, MGX, Temasek, CapitalG, and the UK Sovereign AI Fund participating. 26 The prior round — $600 million, also led by Thrive Capital — closed in 2025.
The money goes toward the AI drug design engine IsoDDE, an internal pipeline of 17 drug programs — including oncology (7), immunology (5), and cardiovascular disease (3) — and global expansion. 27 Isomorphic has active partnerships with Eli Lilly ($45 million upfront, up to $1.7 billion in milestone payments) and Novartis ($37.5 million upfront, up to $1.2 billion in milestones). 27 Reuters separately reported that the company expects its first clinical trial to begin by the end of 2026 — somewhat later than Hassabis had previously indicated. 26
Hassabis's announcement on X: 25
"I've always believed the No.1 application of AI should be to improve human health. That work started with AlphaFold, and now at @IsomorphicLabs with the mission to reimagine drug discovery and one day solve all disease! We are turbocharging that goal with $2.1B in new funding."
AGI 2030
A May 12 newsletter summary of a Y Combinator interview with Hassabis (note: this is a secondary source — The AI Corner newsletter's writeup of the YC talk, not a direct transcript) reported that Hassabis put AGI at approximately 2030. 28 His breakdown: current AI architecture is roughly 90% complete, with the remaining gaps being continuous learning, long-term reasoning, consistent memory, and stable performance across domains. On whether closing those gaps requires fundamentally new ideas, he gave 50/50 odds. 28
Two characterizations from the talk worth noting: he described current AI systems as having "jagged intelligence" — capable of winning an International Mathematical Olympiad gold medal while also making arithmetic errors a 10-year-old wouldn't make, and crucially, able to recognize the error without being able to avoid it. On the current practice of pushing everything into the context window, he was direct: 28
"We are kind of using duct tape right now. Shove it all in the context window. This seems a bit unsatisfying, right?"
The fix he's pointing toward: a selective memory consolidation mechanism closer to how the biological hippocampus works, rather than ever-expanding context.
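The contrast Hassabis is drawing can be made concrete with a toy sketch. The code below is purely illustrative — the class names and the scoring rule are invented for this example and bear no relation to any actual DeepMind mechanism. It compares the "duct tape" strategy (an unboundedly growing context) with a capacity-bounded store that consolidates by keeping only the items scoring highest on a simple importance-minus-staleness heuristic:

```python
from dataclasses import dataclass

@dataclass
class Item:
    text: str
    importance: float  # e.g., a model-estimated salience score
    age: int = 0       # steps since this item was observed

class AppendEverything:
    """The 'duct tape' baseline: the context grows without bound."""
    def __init__(self):
        self.context = []

    def observe(self, item: Item):
        self.context.append(item)

class ConsolidatingMemory:
    """Capacity-bounded store: retain only the highest-scoring items.
    A loose (and entirely hypothetical) analogy to selective consolidation,
    not a description of any real system."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = []

    def observe(self, item: Item):
        for old in self.store:
            old.age += 1          # everything already stored gets staler
        self.store.append(item)
        # Score favors important and recent items; drop the rest.
        self.store.sort(key=lambda it: it.importance - 0.1 * it.age,
                        reverse=True)
        del self.store[self.capacity:]
```

The point of the toy is the asymptotics: after N observations, the baseline holds N items while the consolidating store holds a constant number, which is the property an ever-expanding context window lacks.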
Ilya Sutskever: the insider who changed his vote

Sutskever (OpenAI co-founder and former chief scientist, who left in May 2024 to found Safe Superintelligence Inc. — generally known as SSI) testified for approximately one hour on May 11. 29
He disclosed he holds approximately $7 billion in equity in OpenAI's for-profit entity, based on the roughly $850 billion private valuation — making him one of the largest known individual shareholders. 29 He confirmed that he spent approximately one year building a case for the OpenAI board — gathering evidence of what he characterized as a "consistent pattern of lying" by Altman — and participated in drafting the memo that supported firing him. He voted to remove Altman in November 2023. Then he reversed. 29
His stated reason: 30
"I felt that, had I not done this, the company would be destroyed."
After the vote to fire Altman, Sutskever went completely offline for the weekend. He missed the Microsoft employment offers sent to the entire OpenAI staff. He missed the letter signed by 95% of OpenAI employees demanding Altman's reinstatement. When he came back online, the situation had already shifted. 30
On what OpenAI meant to him: 29
"I felt a great deal of ownership of OpenAI. I felt like I put my life into it, and I simply cared for it, and I didn't want it to be destroyed."
On why compute matters — addressing a question about SSI's funding needs: 29
"I would describe it as the difference between an ant and a cat. If there's no funding, there is no big computer."
SSI has raised approximately $3 billion across multiple rounds at a valuation of roughly $30 billion. It currently employs 18 people — a scale that is a deliberate contrast to how most AI companies operate. 29
Satya Nadella (Microsoft CEO) also testified during this window, calling the process of removing Altman "amateur city" and saying he never received a clear explanation of why Altman was fired. 29
Other signals worth tracking
Richard Socher's Recursive Superintelligence emerges with $650M

Image from: What happens when AI starts building itself?
On May 14, Recursive Superintelligence came out of stealth with $650 million in funding at a $4.65 billion valuation, led by GV (formerly Google Ventures) and Greycroft, with NVIDIA among the investors. 31
The company was founded by Richard Socher (former Chief Scientist at Salesforce and founder of You.com, and a contributor to the ImageNet research that helped launch the deep learning era), alongside Tim Rocktäschel (former Google DeepMind researcher who led open-ended evolution and self-improvement work and was a lead researcher on the Genie 3 world model), Peter Norvig (whose textbook Artificial Intelligence: A Modern Approach remains a standard reference in the field), Tim Shi (co-founder of Cresta, now a unicorn), and Josh Tobin (OpenAI early member who led the Codex and deep research teams). 31
The company's stated goal: recursive self-improvement — building AI that can identify its own weaknesses and redesign itself without human intervention. The mechanism is "open-endedness": two AI systems that continuously compete against and evolve alongside each other. Rocktäschel's rainbow teaming technique from DeepMind, which uses this adversarial co-evolution to improve safety, has already been adopted across the major AI labs. 31
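The co-evolution idea above can be sketched as a minimal loop — with the caveat that everything here (the `coevolve` function, its update rules, the growth constants) is invented for illustration and is not Rocktäschel's rainbow teaming or anything Recursive Superintelligence has described. The structure is the point: one system proposes tasks at the frontier of the other's ability, and each side's progress raises the bar for the other.

```python
import random

def coevolve(rounds: int = 200, seed: int = 0):
    """Toy open-ended loop: a 'challenger' proposes tasks near the
    'solver's' current skill; the solver improves on failures, and the
    challenger escalates on solver successes. Purely illustrative."""
    rng = random.Random(seed)
    solver_skill, challenge_level = 1.0, 1.0
    for _ in range(rounds):
        # Challenger samples a task difficulty around its current level.
        task = challenge_level * rng.uniform(0.9, 1.1)
        if solver_skill >= task:
            challenge_level *= 1.05   # escalate: keep tasks non-trivial
        else:
            solver_skill *= 1.04      # train on the failure case
    return solver_skill, challenge_level
```

Because neither quantity has a fixed target — each is defined relative to the other — both ratchet upward indefinitely, which is the "open-endedness" property the company's framing emphasizes.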
Socher pushed back on easy categorization: 31
"I actually sometimes struggle a little bit with this neolab category. I feel like we're not just a lab. I want us to become a really viable company."
On the timeline: 31
"The team has made so much progress, we may actually pull up the timelines from what we had initially assumed. But yes, there will be products, and you'll have to wait quarters, not years."
xAI becomes SpaceXAI — and rents out its data center
On May 6, Elon Musk announced on X that xAI would dissolve as a standalone company and be absorbed into SpaceX as an internal division called SpaceXAI. 32 On May 14, the newly restructured entity launched Grok Build, its first AI coding agent, currently in early testing and available only to paid subscribers. 32
The structural indicator that matters most: xAI rented the full compute capacity of its Colossus 1 data center in Memphis, Tennessee, to Anthropic. 33 TechCrunch's Equity podcast called the arrangement "a major heat check before the IPO" — SpaceX is targeting what it describes as the largest IPO in history this summer, with a $2 trillion valuation target. More than 50 researchers and engineers have left since the SpaceX acquisition in February, with some joining Meta and Mira Murati's Thinking Machines. 33
Arthur Mensch / Mistral — AI sovereignty in the French parliament
On May 13, Arthur Mensch (CEO of Mistral AI, the Paris-based AI lab that is France's most prominent frontier model company) testified before the French National Assembly on AI sovereignty and security. 34 Mensch's position: 34
"We must have control over this technology. You can't have the French military's source code scanned by Mythos. That creates such an irreparable dependency that we absolutely must find solutions."
The immediate context: the EU is in active discussions with Anthropic about using Mythos — Anthropic's most capable security-focused model — to probe vulnerabilities in European banks and critical infrastructure. Mensch announced that Mistral is building its own cybersecurity model for European banks, positioned as a sovereign alternative. The testimony landed the same week Anthropic's own policy paper argued for US lock-in of AI advantages — a paper written from a Washington vantage point, with European concerns outside its primary frame.
Scale AI nears $1B in revenue under new CEO
Forbes reported on May 14 that Scale AI — a data infrastructure company that originally helped train AI models by providing labeled data — reached nearly $1 billion in revenue last year under new CEO Jason Droege. 35 Droege (a former Uber and Axon executive) took over after Alexandr Wang, Scale's co-founder, left roughly one year ago to join Meta as the head of its new superintelligence lab. Meta acquired a 49% stake in Scale AI for $14 billion, with an agreement to pay Scale $450 million per year or half of Meta's annual AI spend, whichever is lower. 35
Droege is pivoting the company's core business from data labeling (currently the large majority of revenue) toward enterprise AI application development, and expects application revenue to surpass labeling revenue within 18 months. 35 OpenAI has ended its partnership with Scale due to the Meta connection; Google terminated the partnership and then reinstated it. 35 Droege's pointed rebuttal to comparisons with failed acqui-hires: 35
"Because we actually had a business. Those companies didn't have a business. And so I think there's a big difference."
Yann LeCun — a quiet week
LeCun (who left Meta in late 2025 to found AMI Labs, an Advanced Machine Intelligence company in Paris focused on world models rather than LLMs, with reported funding of close to €1 billion and a valuation of approximately $3.5 billion) had no original AI-substantive posts this week on X. 36 His timeline ran to political reposts. MIT Technology Review published a May 12 feature on world models — the architecture LeCun has staked his post-Meta work on — as part of its "10 Things That Matter in AI Right Now" series. 36 The intellectual conversation around world models is active. LeCun did not contribute to it publicly this week.
Cross-cutting signals
Three competing framings of AI's impact on jobs are now on the record simultaneously. Altman's version: AI lets one "really good person" do things that were previously impossible — the pokémon-evolve-into-superhero frame. 37 Huang's version: AI automates tasks, not purposes — demand for engineers and radiologists grows because what people value in those roles isn't the task. Amodei's earlier position was that AI would eliminate 50% of entry-level white-collar jobs within a few years. Multiple LinkedIn posts during this week reported a reversal in that position — that those jobs will multiply, not vanish — but the underlying source of that reversal (reportedly from the Code with Claude conference on May 6) was not captured in a verifiable transcript for this digest. AI strategists tracking regulatory and workforce policy discussions should watch for a first-hand Amodei statement confirming or denying the reversal.
Safety is becoming both a product and a geopolitical argument in the same week. OpenAI's Daybreak frames cyber defense as a commercial AI offering. Anthropic's "2028: Two Scenarios" paper frames safety through the lens of which government controls frontier AI development. Mistral's Mensch frames it as sovereignty over critical military infrastructure. These are three distinct operationalizations of "AI safety" — and they are in mild tension with each other. Mensch's sovereignty argument is directly incompatible with Anthropic's argument that the US should monopolize frontier AI access.
The OpenAI governance history is now in federal court. The testimony from Altman, Sutskever, and Nadella this week constitutes the first substantial public record of what actually happened during the 2023 board crisis. Sutskever's account — one year of evidence-gathering, a vote to fire, a reversal driven by fear of organizational collapse — describes a governance structure in which even its most senior technical leadership felt it had to act through a political rather than procedural channel. That's a governance design problem the entire industry will learn from, whether it wants to or not.
Cover image: Sam Altman testifies in Musk v Altman trial
References
1. Altman details Musk's OpenAI fallout, says nonprofit was 'left for dead'
2. OpenAI's Sam Altman takes the stand in trial against Elon Musk
3. codex is the best AI coding product...
4. Codex in the ChatGPT mobile app!
5. Codex in the ChatGPT mobile app (OpenAI official announcement)
6. kicking off a bunch of codex tasks, running around with my kid...
7. OpenAI is launching Daybreak...
8. OpenAI Daybreak Explained
9. OpenAI launches the OpenAI Deployment Company
10. would you call it a superapp?
11. speaking of things that have gotten over a threshold for me...
12. 5.5 is an autistic genius with very strange taste in naming
13. what if we name the next model 'goblin'
14. what would you most like to see improve in our next model?
15. i get some anxiety not using the smartest-available model/settings...
16. Jensen Huang's 2026 CMU Commencement Speech (Transcript)
17. NVIDIA Founder, CEO Jensen Huang to Carnegie Mellon University Graduates: 'Shape What Comes Next'
18. Jensen Huang tells new grads there is no better time to start a career
19. CEOs Predicting AI Will Wipe Out Jobs Have 'A God Complex,' Nvidia's Jensen Huang Says
20. Jensen Huang to CMU's class of 2026: 'Your career starts at the AI revolution'
21. Nvidia CEO Jensen Huang joins Trump delegation on China trip
22. 2028: Two scenarios for global AI leadership
23. PwC is deploying Claude to build technology, execute deals, and reinvent enterprise functions for clients
24. Anthropic CEO Dario Amodei jokes that his company's extreme revenue growth is 'too hard to handle'
25. Demis Hassabis on X: Isomorphic Labs $2.1B funding announcement
26. Isomorphic Labs announces Series B investment round
27. Demis Hassabis in 2026: How DeepMind Is Turning AlphaFold Into a Drug Discovery Engine
28. Demis Hassabis named his AGI year. Here are the 10 things every founder needs to do before 2030.
29. Ilya Sutskever Stands by His Role in Sam Altman's OpenAI Ouster
30. Ilya Sutskever voted to fire Sam Altman. He avoided the internet in the aftermath.
31. What happens when AI starts building itself?
32. Musk's xAI Unveils First Coding Agent in Bid to Rival Anthropic
33. We're feeling cynical about xAI's big deal with Anthropic
34. Tech stocks today: Cerebras stages blockbuster IPO amid AI frenzy, Musk v. OpenAI closing arguments begin
35. Inside Scale AI's Business After Meta's Bombshell $14 Billion Deal
36. World Models: 10 Things That Matter in AI Right Now
37. way cooler to help software developers pokemon-evolve into superheroes...