
(逐字稿透過 AI 識別與翻譯,如有錯誤敬請見諒) (Transcript recognized and translated by AI; please excuse any errors.)
今天的來賓真的不需要任何介紹。 Today's guest really doesn't need any introduction.
我想我第一次見到艾瑞克(Eric)大約是 25 年前,當時他以 Novell 執行長的身份來到史丹佛商學院。 I think I first met Eric about 25 years ago when he came to Stanford Business School as CEO of Novell.
從那時起,他在 Google 做了一些事,我想是從 2001 年開始,然後在 2017 年創辦了 Schmidt Futures,還做了很多其他你們可以讀到的事情。 He's done a few things since then at Google starting, I think, 2001, and Schmidt Futures starting in 2017, and done a whole bunch of other things you can read about.
但他只能待到 5 點 15 分,所以我想我們就直接進入一些問題吧。 But he can only be here until 5:15, so I thought we'd dive right into some questions.
我知道你們也寄了一些問題來。 And I know you guys have sent some as well.
我這裡寫了一堆,但我們剛剛在樓上聊的更有趣,所以艾瑞克,如果可以的話,我就從那裡開始。 I have a bunch written here, but what we just talked about upstairs was even more interesting, so I'm just gonna start with that, Eric, if that's okay.
那就是,您認為人工智慧在短期內會如何發展,我想您定義的短期是一兩年內? Which is, where do you see AI going in the short term, which I think you defined as the next year or two?
事情變化得太快了,我覺得我每六個月就需要發表一次關於未來發展的新演講。 Things have changed so fast, I feel like every six months I need to sort of give a new speech on what's gonna happen.
在座的各位,這裡有一群電腦科學家,有誰能為班上其他人解釋一下什麼是「百萬 token context window」嗎? Can anybody here, and there are a bunch of computer scientists in here, explain what a million-token context window is for the rest of the class?
請說出你的名字,告訴我們它是做什麼的。 So say your name, tell us what it does.
基本上,它允許你用一百萬個 token 或一百萬個詞來下指令。 Basically it allows you to prompt with a million tokens or a million words.
所以你可以問一個一百萬詞的問題。 So you can ask a million word question.
我知道這在 Gemini 目前是一個非常大的發展方向。 I know this is a very large direction in Gemini right now.
不,不,他們要做到一千萬。 No, no, they're going to 10 million.
是的,一千萬。 Yes, 10 million.
Anthropic 現在是二十萬,目標是到一百萬等等。 Anthropic is at 200,000 going to a million and so forth.
你可以想像打開……我也有一個類似的目標。在座有誰能給 AI 代理人(agent)一個技術性的定義?代理人就是執行某種任務的東西。 You can imagine opening... I have a similar goal. Can anybody here give a technical definition of an AI agent? So an agent is something that does some kind of a task.
另一個定義是,它是一個大型語言模型(LLM)、狀態和記憶。 Another definition would be that it's an LLM, state and memory.
在座的各位——再次,電腦科學家們,有誰能定義「text to action」(文本到行動)嗎? Can anybody-- again, computer scientists, can any of you define text to action?
將文本轉換為行動,就在這裡。 Taking text and turning it into an action, right here.
請說。 Go ahead.
是的,而不是將文本轉換成更多文本。
Yes, instead of taking text and turning it into more text.
更多文本。
More text.
將文本讓 AI 根據它觸發行動——
Taking text and have the AI trigger actions based on--
對,所以另一個定義是語言到 Python。
Right, so another definition would be language to Python.
一個我從不希望它能存活下來的程式語言。 A programming language I never wanted to see survive.
而 AI 的一切都在用 Python 進行。 And everything in AI is being done in Python.
最近出現了一種名為 Mojo 的新語言,看起來他們終於解決了 AI 程式設計的問題,但我們得看看它是否能在 Python 的主導地位下存活下來。 There's a new language called Mojo that has just come out, which looks like they finally have addressed AI programming, but we'll see if that actually survives over the dominance of Python.
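以下是「文本到行動」(語言到 Python)的一個極簡示意,純屬示意草圖:ask_llm 是假設性的佔位函式,並非任何產品的實際 API;想法是模型把自然語言需求轉成 Python 程式碼,再由程式執行。 Below is a minimal sketch of "text to action" (language to Python) as defined above. It is only a sketch: ask_llm is a hypothetical placeholder, not any product's actual API; the idea is that the model turns a natural-language request into Python code, which is then executed.

```python
# A minimal "text to action" sketch. ask_llm() is a hypothetical placeholder
# for any LLM completion call; it is NOT a real library function.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model's completion API here")

def text_to_action(request: str) -> None:
    # 1. Ask the model to translate the request into executable Python.
    code = ask_llm(
        "Write a short, self-contained Python script that does the following, "
        "and return only the code:\n" + request
    )
    # 2. Execute the generated code in a restricted namespace.
    #    (A real system would sandbox this far more carefully.)
    sandbox = {"__builtins__": {"print": print, "range": range, "len": len}}
    exec(code, sandbox)

# Example (commented out because ask_llm is just a placeholder):
# text_to_action("Print the first 10 square numbers, one per line.")
```

實際系統還需要更嚴格的沙箱、權限與審計控管。 A real deployment would need much stricter sandboxing, permissions, and auditing.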
再一個技術問題。 One more technical question.
為什麼 NVIDIA 市值兩兆美元,而其他公司卻在掙扎? Why is NVIDIA worth $2 trillion and the other companies are struggling?
技術性的答案。 Technical answer.
我想,這歸根究底是因為大部分的程式碼需要運行 CUDA 優化,而目前只有 NVIDIA 的 GPU 支援。 I mean, I think it just boils down to most of the code needs to run the CUDA optimizations that currently only NVIDIA GPUs support.
其他公司可以製造任何他們想要的東西。 Other companies can make whatever they want to.
但除非他們有那十年的軟體基礎,如果你沒有機器學習優化…… But unless they have the 10 years of software there, if you don't have the machine learning optimization
我喜歡把 CUDA 看作是 GPU 的 C 程式語言。 I like to think of CUDA as the C programming language for GPUs.
這是我喜歡的思考方式。 That's the way I like to think of it.
它創立於 2008 年。 It was founded in 2008.
我一直覺得它是一種很糟糕的語言。 I always thought it was a terrible language.
然而,它卻變成了主流。 And yet, it's become dominant.
還有另一個見解。 There's another insight.
有一套開源函式庫對 CUDA 進行了高度優化,而不是其他任何東西。 There's a set of open source libraries which are highly optimized to CUDA and not anything else.
而所有建立這些堆疊的人——這在任何討論中都完全被忽略了。 And everybody who builds all these stacks-- this is completely missed in any of the discussions.
它的技術名稱是 vLLM。 It's technically called vLLM.
以及一大堆類似的函式庫。 And a whole bunch of libraries like that.
高度優化的 CUDA,如果你是競爭對手,這很難複製。 Highly optimized CUDA, very hard to replicate that if you're a competitor.
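以下是使用 vLLM 的一個極簡示意,對應上面提到的針對 CUDA 高度優化的開源函式庫;模型名稱只是範例,並且假設已安裝 vLLM 且有支援的 GPU。 Below is a minimal sketch of using vLLM, one of the open source libraries mentioned above that is heavily optimized for CUDA; the model name is only an example, and this assumes vLLM is installed and a supported GPU is available.

```python
# A minimal sketch of offline inference with vLLM.
# The model name is only an example; any supported checkpoint works.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Explain what a context window is."], params)
for out in outputs:
    print(out.outputs[0].text)
```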
那麼這一切意味著什麼? So what does all this mean?
在接下來的一年裡,你會看到非常大的 context window、代理人(agents)和文本到行動(text to action)。 In the next year, you're going to see very large context windows, agents, and text to action.
當它們大規模交付時,將對世界產生一種無人能及的影響。 When they are delivered at scale, it's going to have an impact on the world at a scale that no one understands yet.
在我看來,這比我們在社群媒體上所經歷的可怕影響要大得多,對吧。 Much bigger than the horrific impact we've had on social media, right, in my view.
原因是這樣的。 So here's why.
在 context window 中,你基本上可以把它當作短期記憶來用。 In a context window, you can basically use that as short-term memory.
我很驚訝 context window 能變得這麼長。 And I was shocked that context windows get this long.
技術上的原因與它難以提供服務、難以計算等等有關。 The technical reasons have to do with the fact that it's hard to serve, hard to calculate, and so forth.
關於短期記憶,有趣的是:當你下指令時,你可以要求它讀 20 本書,把書的全文當作查詢內容餵給它,然後說:告訴我它們說了什麼。 The interesting thing about short-term memory is that when you prompt it, you can ask it to read 20 books, you give it the text of the books as the query, and you say, tell me what they say.
它會忘記中間的部分,這也正是人腦的運作方式。 It forgets the middle, which is exactly how human brains work too.
這就是我們目前的處境。 That's where we are.
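以下用粗略的算術說明「讀 20 本書」大概需要多大的 context window;每本書的字數與每字的 token 數都是假設的平均值,並非實測。 Here is some rough arithmetic on how large a context window the "read 20 books" example needs; the words-per-book and tokens-per-word figures are assumed averages, not measurements.

```python
# Back-of-the-envelope arithmetic for the "read 20 books" example.
# Assumptions: ~90,000 words per book and ~1.3 tokens per word (rough averages).
words_per_book = 90_000
tokens_per_word = 1.3
books = 20

total_tokens = int(books * words_per_book * tokens_per_word)
print(f"{total_tokens:,} tokens")     # ~2,340,000 tokens
print(total_tokens <= 1_000_000)      # False: does not fit a 1M-token window
print(total_tokens <= 10_000_000)     # True: would fit a 10M-token window
```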
關於代理人,現在有人正在建立基本上是大型語言模型的代理人,他們的方法是閱讀像化學這樣的東西,他們發現化學的原理,然後進行測試,再將其加回到他們的理解中。 With respect to agents, there are people who are now building essentially LLM agents, and the way they do it is they read something like chemistry, they discover the principles of chemistry, and then they test it, and then they add that back into their understanding.
這非常強大。 That's extremely powerful.
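以下是上述「LLM + 狀態 + 記憶」式代理人迴圈的極簡示意:提出假設、進行測試、把結果加回記憶;ask_llm 與 run_experiment 都是假設性的佔位函式,不是真實 API。 Below is a minimal sketch of the "LLM, state and memory" agent loop described above: propose a hypothesis, test it, and add the result back into memory; ask_llm and run_experiment are hypothetical placeholders, not real APIs.

```python
# A minimal agent loop in the "LLM + state + memory" sense described above.
# ask_llm() and run_experiment() are hypothetical placeholders.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM completion call")

def run_experiment(hypothesis: str) -> bool:
    raise NotImplementedError("plug in a simulator or lab interface")

def agent(task: str, steps: int = 10) -> list[str]:
    memory: list[str] = []                       # what the agent has learned so far
    for _ in range(steps):
        hypothesis = ask_llm(                    # propose, conditioned on memory
            f"Task: {task}\nKnown so far: {memory}\nPropose one testable principle."
        )
        if run_experiment(hypothesis):           # test it
            memory.append(hypothesis)            # add it back into its understanding
    return memory
```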
然後第三件事,如我所提到的,是文本到行動。 And then the third thing, as I mentioned, is text to action.
我舉個例子。 So I'll give you an example.
政府正在試圖禁止 TikTok。 The government is in the process of trying to ban TikTok.
我們看看這是否真的會發生。 We'll see if that actually happens.
如果 TikTok 被禁,我建議你們每個人都這麼做。 If TikTok is banned, here's what I propose each and every one of you do.
對你的大型語言模型說以下的話。 Say to your LLM the following.
幫我做一個 TikTok 的複製品。 Make me a copy of TikTok.
竊取所有用戶。 Steal all the users.
竊取所有音樂。 Steal all the music.
把我的偏好放進去。 Put my preferences in it.
在接下來的 30 秒內產生這個程式。 Produce this program in the next 30 seconds.
發布它。 Release it.
如果一小時內沒有爆紅,就用類似的方法做點不一樣的。 And in one hour, if it's not viral, do something different along the same lines.
這就是指令。 That's the command.
砰,砰,砰,砰,對吧? Boom, boom, boom, boom, right?
你明白這有多強大了吧。 You understand how powerful that is.
如果你能從任意語言轉換到任意數位指令,這基本上就是 Python 在這個情境下的作用,想像一下地球上每個人都有自己的程式設計師,而且這個程式設計師真的會做他們想做的事,而不是那些為我工作、卻不做我要求的事的程式設計師。 If you can go from arbitrary language to arbitrary digital command, which is essentially what Python in this scenario is, imagine that each and every human on the planet has their own programmer that actually does what they want, as opposed to the programmers that work for me, who don't do what I ask.
這裡的程式設計師知道我在說什麼。 The programmers here know what I'm talking about.
所以,想像一個不驕傲、而且真的會照你意思做的程式設計師。 So imagine a non-arrogant programmer that actually does what you want.
而且你還不用付那麼多錢。 And you don't have to pay all that money to.
而且這些程式的供應是無限的。 And there's infinite supply of these programs.
這一切都在未來一兩年內。 And this is all within the next year or two.
很快。 Very soon.
這三件事——而且我非常確信,下一波浪潮將是這三件事的結合。 Those three things-- and I'm quite convinced it's the union of those three things that will happen in the next wave.
所以你問了還會發生什麼。 So you asked about what else is gonna happen.
我每六個月就會搖擺一次,所以我們處於一種,這是一種奇偶數的擺盪。 Every six months I oscillate, so we're on a, it's an even-odd oscillation.
所以目前,前沿模型(現在只有三個,我會回顧一下是哪些)和其他模型之間的差距,在我看來似乎越來越大。 So at the moment, the gap between the frontier models, which there are now only three, I'll review who they are, and everybody else, appears to me to be getting larger.
六個月前,我還深信差距正在縮小。 Six months ago, I was convinced that the gap was getting smaller.
所以我把很多錢投資在小公司上。 So I invested lots of money in the little companies.
現在我不太確定了。 Now I'm not so sure.
我正在和那些大公司談,大公司告訴我他們需要 100 億、200 億、500 億、1000 億。 And I'm talking to the big companies, and the big companies are telling me that they need 10 billion, 20 billion, 50 billion, 100 billion.
星門計畫(Stargate)是 1000 億,對吧?
Stargate is 100 billion, right?
這非常、非常困難。
It's very, very hard.
山姆·奧特曼(Sam Altman)是我的摯友。 Sam Altman is a close friend.
他認為這大概需要 3000 億,甚至更多。 He believes that it's gonna take about 300 billion, maybe more.
我向他指出,我計算過所需能源的量,然後,本著完全公開的精神,我週五去了白宮,告訴他們我們需要和加拿大成為最好的朋友。 I pointed out to him that I'd done the calculation on the amount of energy required, and I then, in the spirit of full disclosure, went to the White House on Friday and told them that we need to become best friends with Canada.
因為加拿大人非常好,協助發明了 AI,而且水力發電量很大。 Because Canada has really nice people, helped invent AI, and lots of hydropower.
因為我們國家沒有足夠的電力來做這件事。 Because we as a country do not have enough power to do this.
另一個選擇是讓阿拉伯人資助。 The alternative is to have the Arabs fund it.
我個人很喜歡阿拉伯人。 And I like the Arabs personally.
我在那裡待了很長時間,對吧? I spent lots of time there, right?
但他們不會遵守我們的國家安全規定。 But they're not gonna adhere to our national security rules.
而加拿大和美國是我們都認同的同盟的一部分。 Whereas Canada and the US are part of an alliance that we all agree on.
所以這些 1000 億、3000 億美元的資料中心,電力開始成為稀缺資源。
So these $100 billion, $300 billion data centers, electricity starts becoming the scarce resource.
嗯,順便說一句,如果你順著這個思路,為什麼我要討論 CUDA 和 NVIDIA?
Well, and by the way, if you follow this line of reasoning, why did I discuss CUDA and NVIDIA?
如果 3000 億美元都流向 NVIDIA,你就知道在股市該怎麼做了。 If $300 billion is all gonna go to NVIDIA, you know what to do in the stock market.
好的。 Okay.
這不是股票推薦,我沒有執照。(觀眾笑) That's not a stock recommendation, I'm not licensed. (audience laughing)
嗯,一部分是,所以我們需要更多的晶片,但英特爾(Intel)從美國政府那裡拿了很多錢。
Well, part of it, so we're gonna need a lot more chips, but Intel is getting a lot of money from the US government.
AMD,他們正在試圖建造,你知道的,晶圓廠和—— AMD, and they're trying to build, you know, fabs and--
如果你們的任何計算設備中有英特爾的晶片,請舉手。
Raise your hand if you have an Intel computer, an Intel chip in any of your computing devices.
好的,壟斷就到此為止了。 Okay, so much for the monopoly.
嗯,不過這就是重點。
Well, that's the point, though.
他們曾經確實壟斷過。 They once did have a monopoly.
當然。
Absolutely.
而 NVIDIA 現在是壟斷者。
And NVIDIA has a monopoly now.
那麼,那些是進入門檻嗎? So are those barriers to entry?
比如 CUDA,有沒有什麼是其他公司沒有的?我前幾天和 Percy Liang 聊過。 Like CUDA, is there something that others don't have? So I was talking to Percy Liang the other day.
他根據能取得的資源,在 TPU 和 NVIDIA 晶片之間切換來訓練模型。 He's switching between TPUs and NVIDIA chips depending on what he can get access to for training the models.
那是因為他沒得選。
That's because he doesn't have a choice.
如果他有無限的錢,今天他會選擇 NVIDIA 的 B200 架構,因為那會更快。 If he had infinite money, today he would pick the B200 architecture out of NVIDIA because it would be faster.
我並不是說,我的意思是,有競爭是好事。 And I'm not suggesting, I mean, it's great to have competition.
我和 AMD 的蘇姿丰(Lisa Su)談了很久。 I've talked to AMD and Lisa Su at great length.
他們建立了一個東西,可以把你描述的這個 CUDA 架構轉換到他們自己的架構,叫做 ROCm。 They have built a thing which will translate from this CUDA architecture that you were describing to their own, which is called ROCm.
它還不完全能用。 It doesn't quite work yet.
他們正在努力。 They're working on it.
你在 Google 待了很長時間,他們發明了 Transformer 架構。 You were at Google for a long time, and they invented the transformer architecture.
這都是彼得(Peter)的錯。 It's all Peter's fault.
感謝那裡像彼得和傑夫·迪恩(Jeff Dean)等才華橫溢的人。 Thanks to brilliant people over there, like Peter and Jeff Dean and everyone.
但現在看來,他們似乎已經把主導權輸給了 OpenAI。 But now it seems like they've kind of lost the initiative to OpenAI.
甚至我看到的最新排行榜,Anthropic 的 Claude 排在榜首。 And even on the last leaderboard I saw, Anthropic's Claude was at the top of the list.
我問過桑德爾(Sundar)這個問題。 I asked Sundar this.
他沒有給我一個非常明確的答案。 He didn't really give me a very sharp answer.
也許你對那裡發生的事情有更犀利或更客觀的解釋。 Maybe you have a sharper or more objective explanation for what's going on there.
我不再是 Google 的員工了。
I'm no longer a Google employee.
是的。
Yes.
本著完全揭露的精神,Google 決定工作與生活的平衡、早點回家和在家工作比贏得勝利更重要。
In the spirit of full disclosure, Google decided that work-life balance and going home early and working from home was more important than winning.
而新創公司,新創公司之所以能成功,是因為員工們拼命工作。 And the startups, the reason startups work is because the people work like hell.
我很抱歉說得這麼直白,但事實是,如果各位離開大學創辦公司,如果你想和其他新創公司競爭,你不會讓員工在家工作,每週只來一天。 And I'm sorry to be so blunt, but the fact of the matter is if you all leave the university and go found a company, you're not gonna let people work from home and only come in one day a week if you wanna compete against the other startups.
在 Google 的早期,微軟就是那樣。
In the early days of Google, Microsoft was like that.
完全正確。
Exactly.
但現在似乎——
But now it seems to be--
在我的行業,我們的行業,我想,公司以真正創新的方式獲勝,並真正主宰一個領域,卻沒有進行下一次轉型,這有很長的歷史。
There's a long history of, in my industry, our industry, I guess, of companies winning in a genuinely creative way and really dominating a space and not making the next transition.
所以這是有據可查的。 So very well documented.
我認為事實是,創辦人是特別的,創辦人需要掌權,創辦人很難共事,他們對員工要求很高。 And I think that the truth is, founders are special, the founders need to be in charge, the founders are difficult to work with, they push people hard.
儘管我們可能不喜歡伊隆(Elon)的個人行為,但看看他能從員工身上得到什麼。 As much as we can dislike Elon's personal behavior, look at what he gets out of people.
我和他在蒙大拿共進晚餐,他那天晚上 10 點要飛去參加 x.ai 的午夜會議。 I had dinner with him in Montana, and he was flying that night at 10 p.m. to have a meeting at midnight with x.ai.
午夜。
Midnight.
想想看。
Think about it.
我當時在台灣,不同的國家,不同的文化。這是台積電(TSMC),我對他們印象非常深刻。他們有一條規定:剛畢業的博士,他們是優秀的物理學家,要先在工廠的地下室樓層工作。 I was in Taiwan, a different country, a different culture, and this is TSMC, who I'm very impressed with. They have a rule that the starting PhDs, and they're good physicists, work in the factory on the basement floor.
現在,你能想像讓美國的物理學家這麼做嗎? Now, can you imagine getting American physicists to do that?
博士們? The PhDs?
極不可能。不同的工作倫理。 Highly unlikely. Different work ethic.
而這裡的問題,我對工作如此苛刻的原因是,這些是具有網路效應的系統,所以時間非常重要。 And the problem here, the reason I'm being so harsh about work is that these are systems which have network effects, so time matters a lot.
在大多數行業中,時間並沒有那麼重要。 And in most businesses, time doesn't matter that much.
你有很多時間。 You have lots of time.
可口可樂和百事可樂仍會存在,而可口可樂和百事可樂之間的鬥爭將繼續下去,一切都像冰川一樣緩慢。 Coke and Pepsi will still be around, and the fight between Coke and Pepsi will continue to go on, and it's all glacial.
當我跟電信公司打交道時,典型的電信交易需要 18 個月才能簽訂,對吧? When I dealt with telcos, the typical telco deal would take 18 months to sign, right?
沒有理由花 18 個月去做任何事。 There's no reason to take 18 months to do anything.
把它完成。 Get it done.
只是,我們正處於一個最大成長、最大收益的時期。 It's just, we're in a period of maximum growth, maximum gain.
所以,而且也需要瘋狂的想法。 So, and also it takes crazy ideas.
比如微軟做 OpenAI 的交易時,我認為那是我聽過最蠢的想法。 Like when Microsoft did the OpenAI deal, I thought that was the stupidest idea I'd ever heard.
基本上把你的 AI 領導權外包給 OpenAI 和山姆(Sam)以及他的團隊。 Outsourcing essentially your AI leadership to OpenAI and Sam and his team.
我的意思是,那太瘋狂了。 I mean, that's insane.
在微軟或其他任何地方,沒有人會這麼做。 Nobody would do that at Microsoft or anywhere else.
然而今天,他們正朝著成為最有價值公司的方向前進。 And yet today, they're on their way to being the most valuable company.
他們肯定是不相上下。 They're certainly head to head.
而蘋果沒有好的 AI 解決方案。 And Apple does not have a good AI solution.
而且看起來他們成功了。 And it looks like they made it work.
是的,先生。 Yes, sir.
就國家安全或地緣政治利益而言,您認為 AI 將扮演什麼樣的角色,或者與中國的競爭如何? In terms of national security or geopolitical interest, how do you think AI is going to play a role or competition with China as well?
我曾擔任一個 AI 委員會的主席,該委員會非常仔細地研究了這個問題。 So I was the chairman of an AI commission that sort of looked at this very carefully.
你可以讀一下,大約 752 頁。 And you can read it. It's about 752 pages.
我簡單總結一下,就是:我們領先,我們需要保持領先,而且我們需要很多錢來做到這一點。 And I'll just summarize it by saying, we're ahead, we need to stay ahead, and we need lots of money to do so.
我們的客戶是參議院和眾議院。 Our customers were the Senate and the House.
由此產生了《晶片法案》(CHIPS Act)和許多其他類似的東西。 And out of that came the CHIPS Act and a lot of other stuff like that.
大致的情況是,如果你假設前沿模型不斷推進,還有一些開源模型,那麼很可能只有極少數的公司能玩這個遊戲。 The rough scenario is that if you assume the frontier models drive forward and a few of the open source models, it's likely that a very small number of companies can play this game.
國家,抱歉。 Countries, excuse me.
那些國家是哪些,或者他們是誰? What are those countries, or who are they?
擁有大量資金和大量人才、強大的教育體系以及求勝意志的國家。 Countries with a lot of money and a lot of talent, strong educational systems, and a willingness to win.
美國是其中之一。 The US is one of them.
中國是另一個。 China is another one.
還有多少個? How many others are there?
還有其他的嗎? Are there any others?
我不知道。 I don't know.
也許吧。 Maybe.
但可以肯定的是,在你們的有生之年,美國和中國之間在知識霸權上的鬥爭將是場大戰。 But certainly, in your lifetimes, the battle between the US and China for knowledge supremacy is going to be the big fight.
所以美國政府基本上禁止了 Nvidia 晶片輸入中國,雖然他們不被允許明說這就是他們在做的事,但他們實際上就是這麼做了。 So the US government essentially banned Nvidia chips from going into China, although they weren't allowed to say that that was what they were doing, but that's what they actually did.
他們大約有 10 年的晶片優勢。 They have about a 10-year chip advantage.
我們在次 DUV,也就是 5 奈米以下的晶片方面,大約有 10 年的優勢。 We have a roughly 10-year chip advantage in terms of sub-DUV, that is, sub-five-nanometer chips.
10 年,那麼久?
10 years, that long?
大約 10 年。
Roughly 10 years.
哇。
Wow.
所以你會看到,舉個例子,今天我們比中國領先幾年。
And so you're gonna have, so an example would be, today we're a couple of years ahead of China.
我猜我們還會再領先中國幾年,而中國人對此非常憤怒。 My guess is we'll get a few more years ahead of China, and the Chinese are hopping mad about this.
他們對此非常、非常不滿。 They're hugely upset about it.
所以這是件大事。 So that's a big deal.
這是川普政府做出並由拜登政府批准的決定。 That was the decision made by the Trump administration and approved by the Biden administration.
你覺得今天的政府和國會會聽你的建議嗎?
Do you find that the administration today and Congress is listening to your advice?
你認為他們會進行那種規模的投資嗎? Do you think that it's going to make that scale of investment?
我的意思是,顯然有《晶片法案》,但除此之外,建立一個龐大的 AI 系統? I mean, obviously the CHIPS Act, but beyond that, building a massive AI system?
如你所知,我領導一個非正式、臨時、非法的團體。
So as you know, I lead an informal ad hoc non-legal group.
那跟違法不一樣。 That's different from illegal.
完全正確,只是為了澄清。(笑)
That's exactly, just to be clear. (laughing)
其中包括所有常見的嫌疑人。 Which includes all the usual suspects.
是的。
Yes.
而在過去的一年裡,這些常見的嫌疑人提出了構成拜登政府 AI 法案的推理基礎,這是歷史上最長的總統指令。
And the usual suspects over the last year came up with the basis of the reasoning that became the Biden administration's AI Act, which is the longest presidential directive in history.
你說的是「特別競爭研究計畫」(Special Competitive Studies Project)嗎?
You're talking about the Special Competitive Studies Project?
不是。
No.
這是行政辦公室的實際法案,他們正在忙著實施細節。 This is the actual act from the executive office, and they're busy implementing the details.
到目前為止,他們都做對了。 So far, they've got it right.
舉例來說,我們過去一年辯論的一個問題是,如何偵測一個系統中已經學會的危險,但你卻不知道該問它什麼? And so, for example, one of the debates that we had for the last year has been, how do you detect danger in a system which has learned it, but you don't know what to ask it?
好的,換句話說,這是一個核心,一個核心問題。 Okay, so in other words, it's a core, it's a sort of a core problem.
它學會了一些壞事,但它不能告訴你它學會了什麼,而你也不知道該問它什麼。 It's learned something bad, but it can't tell you what it learned, and you don't know what to ask it.
威脅太多了,對吧? And there's so many threats, right?
像是它學會了用某種你不知道如何問它的新方法來混合化學物質。 Like it learned how to mix chemistry in some new way that you don't know how to ask it.
所以人們正在努力解決這個問題。 And so people are working hard on that.
但我們最終在給他們的備忘錄中寫道,有一個門檻,我們任意訂在 10 的 26 次方 flops,這在技術上是計算量的衡量標準。 But we ultimately wrote in our memos to them that there was a threshold, which we arbitrarily set at 10 to the 26 flops, which technically is a measure of computation.
超過那個門檻,你就必須向政府報告你在做這件事。 That above that threshold, you had to report to the government that you were doing this.
而那是規定的一部分。 And that's part of the rule.
歐盟,為了確保他們與眾不同,把門檻訂在 10 的 25 次方。 The EU, just to make sure they were different, set it at 10 to the 25.
但都差不多。 But it's all kind of close enough.
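以下是一個粗略的算術示意,用常見的「訓練運算量約等於 6 × 參數量 × 訓練 token 數」估計法,對照上述 10 的 26 次方與 10 的 25 次方的申報門檻;模型規模純屬假設。 Here is a rough worked example using the common back-of-the-envelope estimate that training compute is roughly 6 times parameters times training tokens, compared against the 10^26 and 10^25 thresholds above; the model sizes are hypothetical.

```python
# Rough compute arithmetic, assuming the common back-of-the-envelope estimate
# training FLOPs ~= 6 * parameters * training tokens. Sizes are hypothetical.
params = 1e12        # a hypothetical 1-trillion-parameter model
tokens = 15e12       # trained on a hypothetical 15 trillion tokens
flops = 6 * params * tokens

print(f"{flops:.1e} FLOPs")   # 9.0e+25
print(flops >= 1e26)          # False: below the US reporting threshold (10^26)
print(flops >= 1e25)          # True: above the EU threshold (10^25)
```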
我認為所有這些區別都會消失,因為技術現在將會,技術術語叫做「聯邦式訓練」(federated training),基本上你可以把各個部分組合在一起。 I think all of these distinctions go away because the technology will now, the technical term is called federated training, where basically you can take pieces and union them together.
所以我們可能無法保護人們免受這些新事物的侵害。 So we may not be able to keep people safe from these new things.
嗯,有傳言說 OpenAI 不得不這樣訓練,部分原因是用電量。
Well, rumors are that that's how OpenAI has had to train, partly because of the power consumption.
他們並不是在單一一個地點完成的。 There's no one place where they did it.
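以下是「聯邦式訓練」概念的一個玩具版示意:每個站點只用自己的資料在本地更新,再由伺服器把更新平均起來;這裡用最小平方法的小例子說明想法,並非任何實驗室的實際做法。 Below is a toy sketch of the idea behind federated training: each site updates locally on its own data and a server averages the updates; a tiny least-squares example is used to illustrate the idea, and this is not how any lab actually trains.

```python
import numpy as np

# Toy federated averaging: each "data center" takes a local gradient step
# on its own shard; the server averages the resulting weights.
def local_step(weights, data, lr=0.1):
    X, y = data
    preds = X @ weights
    grad = X.T @ (preds - y) / len(y)      # least-squares gradient
    return weights - lr * grad

def federated_round(weights, shards):
    local_updates = [local_step(weights, shard) for shard in shards]
    return np.mean(local_updates, axis=0)  # union the pieces by averaging

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
shards = []
for _ in range(3):                         # three separate sites
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    shards.append((X, y))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, shards)
print(w)  # approaches [2.0, -1.0] without pooling the raw data in one place
```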
好吧,讓我們來談談一場真正的戰爭。 Well, let's talk about a real war that's going on.
我知道你一直非常關心烏克蘭戰爭,特別是,我不知道你能在多大程度上談論「白鸛」(White Stork)以及你用 500 美元的無人機摧毀 500 萬美元坦克的目標。 I know that something you've been very involved in is the Ukraine war, and in particular, I don't know how much you can talk about White Stork and your goal of having $500 drones destroy $5 million tanks.
那麼,這如何改變了戰爭? So how's that changing warfare?
我為國防部長工作了七年,試圖改變我們軍隊的運作方式。
I worked for the Secretary of Defense for seven years and tried to change the way we run our military.
我不是特別喜歡軍隊,但它非常昂貴,我想看看我是否能幫上忙。 I'm not a particularly big fan of the military, but it's very expensive and I wanted to see if I could be helpful.
我認為,在我看來,我基本上失敗了。 And I think in my view I largely failed.
他們給了我一枚獎章,所以他們一定也會頒獎章給失敗的人,或者……(笑聲)但我的自我批評是,什麼都沒有真正改變。 They gave me a medal, so they must give medals for failure, or... (laughing) but my self-criticism was nothing has really changed.
而美國的體制不會帶來真正的創新。 And the system in America is not gonna lead to real innovation.
所以看著俄羅斯人用坦克摧毀有老太太和小孩的公寓大樓,這讓我快瘋了。 So watching the Russians use tanks to destroy apartment buildings with little old ladies and kids just drove me crazy.
所以我決定和你朋友,這裡的前教職員 Sebastian Thrun,還有一大群史丹佛的人一起做一家公司。 So I decided to work on a company with your friend, Sebastian Thrun, as a former faculty member here, and a whole bunch of Stanford people.
這個想法基本上是做兩件事。 And the idea basically is to do two things.
在這種基本上是機器人戰爭中,以複雜、強大的方式使用人工智慧。 Use AI in complicated, powerful ways for these essentially robotic war.
第二個是降低機器人的成本。 And the second one is to lower the cost of the robots.
現在你坐在那裡想,像我這樣的好自由派為什麼要做那種事? Now you sit there and you go, why would a good liberal like me do that?
答案是,整個軍隊的理論是坦克、火砲和迫擊砲,而我們可以消滅所有這些。 And the answer is that the whole theory of armies is tanks, artilleries, and mortar, and we can eliminate all of them.
我們可以讓入侵一個國家的代價,至少在陸地上,變得基本上不可能。 And we can make the penalty for invading a country, at least by land, essentially be impossible.
它應該能消除那種地面戰。 It should eliminate the kind of land battles.
嗯,這是一個非常有趣的問題,那就是它是否給防守方帶來比進攻方更多的優勢?
Well, this is a really interesting question, is that does it give more of an advantage to defense versus offense?
你甚至能做出那樣的區分嗎? Can you even make that distinction?
因為我過去一年都在做這件事,我學到了很多我真的不想知道的關於戰爭的事情。
Because I've been doing this for the last year, I've learned a lot about war that I really did not want to know.
關於戰爭要知道的一件事是,進攻方總是佔有優勢,因為你總是可以壓倒防禦系統。 And one of the things to know about war is that the offense always has the advantage because you can always overwhelm the defensive systems.
所以作為國家防禦的策略,你最好擁有一個非常強大的進攻力量,以便在需要時使用。 And so you're better off as a strategy of national defense to have a very strong offense that you can use if you need to.
而我和其他人正在建立的系統將能做到這一點。 And the systems that I and others are building will do that.
因為這個系統的運作方式,我現在是一名有執照的軍火商。 Because of the way the system works, I am now a licensed arms dealer.
所以,一個電腦科學家、商人、軍火商。(笑) So a computer scientist, businessman, arms dealer. (laughing)
這是一種進程嗎? Is that a progression?
我不知道。 I don't know.
我不建議你們的職業道路走這條路。 I do not recommend this in your career path.
我還是堅持做 AI 吧。 I'd stick with AI.
由於法律的運作方式,我們是私下進行的。 And because of the way the laws work, we're doing this privately.
這一切都是合法的,並有政府的支持。 And then this is all legal with the support of the government.
它直接進入烏克蘭,然後他們就打仗。 It goes straight into Ukraine, and then they fight the war.
不談所有細節,情況相當糟糕。 And without going into all the details, things are pretty bad.
我想,如果在五月或六月,如果俄羅斯人像預期的那樣集結,烏克蘭將會失去一大片領土,並開始失去整個國家的過程。 I think if in May or June, if the Russians build up as they are expected to, Ukraine will lose a whole chunk of its territory and will begin the process of losing the whole country.
所以情況相當嚴峻。 So the situation is quite dire.
如果有人認識瑪喬麗·泰勒·格林(Marjorie Taylor Greene),我建議你們把她從聯絡人名單中刪除。 And if anyone knows Marjorie Taylor Greene, I would encourage you to delete her from your contact list.
因為她——單一個人正在阻礙提供數十億美元來拯救一個重要的民主國家。 Because she's the one-- a single individual is blocking the provision of some number of billions of dollars to save an important democracy.
我想轉到一個稍微哲學性的問題。 I want to switch to a little bit of a philosophical question.
去年你和亨利·季辛吉(Henry Kissinger)以及丹尼爾·哈頓萊克(Daniel Huttenlocher)寫了一篇文章,關於知識的本質及其演變。 So there was an article that you and Henry Kissinger and Daniel Huttenlocher wrote last year about the nature of knowledge and how it's evolving.
前幾天晚上我也就這個問題進行了討論。 I had a discussion the other night about this as well.
所以在歷史的大部分時間裡,人類對宇宙的理解帶有神秘色彩,然後是科學革命和啟蒙運動。 So for most of history, humans sort of had a mystical understanding of the universe and then there's the scientific revolution and the enlightenment.
在你的文章中,你認為現在這些模型變得如此複雜和難以理解,以至於我們真的不知道它們內部發生了什麼。 And in your article, you argue that now these models are becoming so complicated and difficult to understand that we don't really know what's going on in them.
我引用理查·費曼(Richard Feynman)的一句話。 I'll take a quote from Richard Feynman.
他說:「我無法創造的東西,我就無法理解。」 He says, "What I cannot create, I do not understand."
我前幾天看到了這句話。 I saw this quote the other day.
但現在人們正在創造他們能創造的東西,但他們並不真正理解其中的內涵。 But now people are creating things that they can create, but they don't really understand what's inside of them.
知識的本質是否正在改變? Is the nature of knowledge changing in a way?
我們是否將不得不開始相信這些模型的說法,而它們卻無法向我們解釋? Are we gonna have to start just taking the word for these models without them being able to explain it to us?
我會用青少年來比喻。
The analogy I would offer is to teenagers.
如果你有青少年,你知道他們是人,但你不太清楚他們在想什麼。(笑) If you have a teenager, you know they're human, but you can't quite figure out what they're thinking. (laughing)
但不知何故,我們社會已經設法適應了青少年的存在,對吧? But somehow we've managed in society to adapt to the presence of teenagers, right?
他們最終會長大。 And they eventually grow out of it.
我是認真的。 And I'm just serious.
所以很可能我們會有一些我們無法完全描述的知識系統,但我們了解它們的邊界,對吧? So it's probably the case that we're gonna have knowledge systems that we cannot fully characterize, but we understand their boundaries, right?
我們了解它們能做什麼的極限。 We understand the limits of what they can do.
那可能是我們能得到的最好結果。 And that's probably the best outcome we can get.
你認為我們會了解極限嗎?
Do you think we'll understand the limits?
我們會做得很不錯的。
We'll get pretty good at it.
我那個每週開會的小組的共識是:最終你會這樣做,也就是所謂的「對抗性 AI」(adversarial AI),實際上會有一些公司,你會付錢僱用他們來破解你的 AI 系統。 The consensus of my group that meets every week is that eventually the way you'll do this, so-called adversarial AI, is that there will actually be companies that you will hire and pay money to break your AI systems.
像是紅隊(red team)。
Like red team.
所以不會是人類紅隊(那是他們今天的做法),而是會有一整個產業的公司和 AI 系統,它們的工作就是破解現有的 AI 系統、找出它們的弱點,特別是那些它們擁有、而我們卻弄不清楚的知識。
So instead of human red teams, which is what they do today, you'll have whole companies and a whole industry of AI systems whose job is to break the existing AI systems and find their vulnerabilities, especially the knowledge that they have that we can't figure out.
這對我來說很有道理。 That makes sense to me.
這對你們在史丹佛來說也是個很棒的專案,因為如果你有一個研究生,必須想辦法攻擊這些大型模型之一並了解它的作用,那是培養下一代的絕佳技能。 It's also a great project for you here at Stanford because if you have a graduate student who has to figure out how to attack one of these large models and understand what it does, that is a great skill to build the next generation.
所以對我來說,兩者會並行發展是合理的。 So it makes sense to me that the two will travel together.
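以下是「對抗性 AI」自動化紅隊迴圈的極簡示意:attacker、target、violates_policy 都是假設性的佔位函式,分別代表出題模型、受測系統與判定器。 Below is a minimal sketch of an automated red-team loop in the "adversarial AI" sense above: attacker, target, and violates_policy are hypothetical placeholders standing in for a probing model, the system under test, and an evaluator.

```python
# A minimal sketch of an automated adversarial / red-team loop.
# All three callables are hypothetical placeholders, not real APIs.
def attacker(history: list[str]) -> str:
    raise NotImplementedError("LLM that proposes a new probing prompt")

def target(prompt: str) -> str:
    raise NotImplementedError("the AI system being tested")

def violates_policy(prompt: str, response: str) -> bool:
    raise NotImplementedError("classifier or rubric that flags unsafe behavior")

def red_team(rounds: int = 100) -> list[tuple[str, str]]:
    findings: list[tuple[str, str]] = []
    history: list[str] = []
    for _ in range(rounds):
        probe = attacker(history)            # generate a new attack
        reply = target(probe)                # see how the system responds
        history.append(probe)
        if violates_policy(probe, reply):    # record vulnerabilities found
            findings.append((probe, reply))
    return findings
```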
好的,讓我們來回答一些學生的問題。
All right, let's take some questions from the student.
後面那裡有一個。 There's one right there in the back.
請說出你的名字。 Just say your name.
您剛才提到,這與現在的評論有關,讓 AI 真正做到您想讓它做的事。
Earlier you mentioned, and this is related to the comment right now, getting AI that actually does what you want.
您剛才提到了對抗性 AI,我想知道您是否能進一步闡述。 You just mentioned adversarial AI, and I'm wondering if you could elaborate on that more.
所以,除了運算能力顯然會增加、可以得到性能更好的模型之外,讓它們做你想要的事,似乎仍是一個部分未解的問題。 So it seems that, besides the fact that compute will obviously increase and you can get more performant models, getting them to do what you want seems to be a partially unanswered question.
嗯,你必須假設隨著技術進步,目前的幻覺問題會減少等等。 Well, you have to assume that the current hallucination problems become less as the technology gets better and so forth.
我不是說它會消失。 I'm not suggesting it goes away.
然後你還必須假設有功效測試。 And then you also have to assume that there are tests for efficacy.
所以必須有一種方法知道事情成功了。 So there has to be a way of knowing that the thing succeeded.
所以在我給的 TikTok 競爭者的例子中,順便說一句,我並不是主張你應該非法竊取所有人的音樂。 So in the example that I gave of the TikTok competitor, and by the way, I was not arguing that you should illegally steal everybody's music.
如果你是矽谷的企業家,希望你們都會是,你會做的是,如果它成功了,你就會僱用一大堆律師去清理爛攤子,對吧? What you would do if you're a Silicon Valley entrepreneur, which hopefully all of you will be, is if it took off, then you'd hire a whole bunch of lawyers to go clean the mess up, right?
但如果沒有人用你的產品,你偷了所有內容也無所謂,不要引用我的話,好嗎? But if nobody uses your product, it doesn't matter that you stole all the content and do not quote me, right?
(觀眾笑) (audience laughing)
對,你上鏡頭了。
Right, you're on camera.
是的,沒錯。(觀眾笑)
Yeah, that's right. (audience laughing)
但你懂我的意思。 But you see my point.
換句話說,矽谷會進行這些測試並清理爛攤子。 In other words, Silicon Valley will run these tests and clean up the mess.
通常這些事情就是這樣做的。 And that's typically how those things are done.
所以我個人的看法是,你會看到越來越多具執行能力的系統,伴隨著更好的測試,最終還有對抗性測試,這會把它限制在一個框架內。 So my own view is that you'll see more and more performative systems with even better tests and eventually adversarial tests, and that'll keep it within a box.
技術術語叫做「思維鏈推理」(chain of thought reasoning)。 The technical term is called chain of thought reasoning.
人們相信在未來幾年內,你將能夠產生 1,000 個步驟的思維鏈推理。 And people believe that in the next few years, you'll be able to generate 1,000 steps of chain of thought reasoning.
做這個,做這個。 Do this, do this.
就像建立食譜一樣。 It's like building recipes.
有了那些食譜,你可以執行食譜,並且可以實際測試它是否產生了正確的結果。 With those recipes, you can run the recipe, and you can actually test that it produced the correct outcome.
系統就是這樣運作的。 And that's how the system will work.
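以下是「產生食譜、執行食譜、測試結果」這個迴圈的極簡示意;ask_llm 是假設性的佔位函式,驗證則是真正執行產生的程式並比對輸出。 Below is a minimal sketch of the "build the recipe, run the recipe, test the outcome" loop; ask_llm is a hypothetical placeholder, and verification actually runs the generated code and compares the output.

```python
import contextlib
import io

# ask_llm() is a hypothetical placeholder for any LLM completion call.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM completion call")

def solve_with_verification(problem: str, expected_output: str, attempts: int = 5):
    for _ in range(attempts):
        # Ask for step-by-step code (the "recipe") rather than a bare answer.
        code = ask_llm(
            "Think step by step and write Python that solves this, "
            "printing only the final result:\n" + problem
        )
        buffer = io.StringIO()
        try:
            with contextlib.redirect_stdout(buffer):
                exec(code, {})                        # run the recipe
        except Exception:
            continue                                  # a broken step: try again
        if buffer.getvalue().strip() == expected_output:
            return code                               # the outcome checks out
    return None                                       # no verified recipe found
```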
是的,先生? Yes, sir?
我想請教您一個問題,我的名字是布蘭登(Brandon)。 I was going to ask you, so my name is Brandon.
總的來說,您似乎對 AI 進步的潛力非常樂觀。 In general, you seem super positive about the potential for AI's progress.
所以我很好奇,您認為是什麼將推動這一切? So I'm curious, what do you think is going to drive that?
是更多的運算能力嗎? Is it just more compute?
是更多的資料嗎? Is it more data?
是根本性或實際的轉變嗎? is it fundamental or actual shifts?
是的,對所有問題都是。
Yes, to everything.
投入的資金數額令人瞠目結舌。 The amounts of money being thrown around are mind-boggling.
我基本上什麼都投,因為我搞不清楚誰會贏。 And I essentially invest in everything 'cause I can't figure out who's gonna win.
跟隨我的資金量非常龐大。 And the amounts of money that are following me are so large.
我想部分原因是因為早期的錢已經賺到了,而那些不知道自己在做什麼的大金主必須要有 AI 的部分。 I think some of it is because the early money's been made and the big money people who don't know what they're doing have to have an AI component.
現在一切都是 AI 投資,所以他們分不清楚。 And everything is now an AI investment, so they can't tell the difference.
我把 AI 定義為學習系統,真正會學習的系統。 I define AI as learning systems, systems that actually learn.
所以我認為這是其中之一。 So I think that's one of them.
第二點是,有非常複雜的新演算法,算是後 Transformer 時代的產物。 The second is that there are very sophisticated new algorithms that are sort of post-transformers.
我的朋友,我長期的合作夥伴,發明了一種新的非 Transformer 架構。 My friend, my collaborator for a long time, has invented a new non-transformer architecture.
我在巴黎資助的一個團隊聲稱也做到了同樣的事情。 There's a group that I'm funding in Paris that has claims to have done the same thing.
那裡有巨大的發明,史丹佛也有很多東西。 There's enormous invention there, a lot of things at Stanford.
最後一點是,市場上有一種信念,即智慧的發明有無限的回報。 And the final thing is that there is a belief in the market that the invention of intelligence has infinite return.
假設你投入 500 億美元的資本到一家公司。 So let's say you put $50 billion of capital into a company.
你必須從智慧中賺取非常多的錢才能回本。 You have to make an awful lot of money from intelligence to pay that back.
所以很可能我們會經歷一個巨大的投資泡沫,然後它會自行整理。 So it's probably the case that we'll go through some huge investment bubble and then it'll sort itself out.
過去總是如此,這裡很可能也是如此。 That's always been true in the past and it's likely to be true here.
而您剛才說的是,您認為領先者正在拉開與其他人的差距。
And what you said earlier was you think that the leaders are pulling away from the rest.
目前是這樣,目前是這樣。
Right now, right now.
問題大致如下。 The question is roughly the following.
法國有一家叫 Mistral 的公司。 There's a company called Mistral in France.
他們做得非常好。 They've done a really good job.
我顯然是投資者。 And I'm obviously an investor.
他們已經推出了第二個版本。 They have produced their second version.
他們的第三個模型很可能是封閉的,因為成本太高,他們需要收入,不能把模型免費送人。 Their third model is likely to be closed because it's so expensive, they need revenue, and they can't give their model away.
所以在我們這個行業,開源與閉源的爭論非常激烈。 So this open source versus closed source debate in our industry is huge.
我整個職業生涯都建立在人們願意在開源中分享軟體的基礎上。 And my entire career was based on people being willing to share software in open source.
關於我的一切都是開源的。 Everything about me is open source.
Google 的很多基礎都是開源的。 Much of Google's underpinnings were open source.
我所有技術上的成就都是。 Everything I've done technically.
然而,可能是資本成本如此巨大,從根本上改變了軟體的建構方式。 And yet it may be that the capital costs, which are so immense, fundamentally changes how software is built.
你我剛才在談,我個人對軟體工程師的看法是,軟體工程師的生產力至少會翻倍。 You and I were talking, my own view of software programmers is that software programmers' productivity will at least double.
有三、四家軟體公司正試圖做到這一點。 There are three or four software companies that are trying to do that.
我在這個時期投資了所有這些公司。 I've invested in all of them in this period.
他們都試圖讓軟體工程師更有生產力。 And they're all trying to make software programmers more productive.
我剛見過的最有趣的一家叫做 Augment。 The most interesting one that I just met with is called Augment.
我總是想到單一的程式設計師,他們說:「那不是我們的目標。」 And I always think of an individual programmer, and they said, "That's not our target.
「我們的目標是那些有 100 人的軟體程式設計團隊,處理數百萬行程式碼,沒人知道發生了什麼。」 "Our target are these 100-person software programming teams on millions of lines of code where nobody knows what's going on."
嗯,那是個很好的 AI 應用。 Well, that's a really good AI thing.
他們會賺錢嗎? Will they make money?
我希望如此。 I hope so.