比尔·盖茨与萨姆·奥尔特曼的对话

如果让人们列举人工智能领域的领军人物,有一个名字你可能会听得最多:萨姆·奥尔特曼(Sam Altman)。他在OpenAI的团队正在用ChatGPT挑战人工智能的极限,我很高兴能和他谈谈下一步的计划。我们的谈话涵盖了为什么今天的人工智能模型将是它们有史以来最“愚蠢”的版本、社会将如何适应技术变革,甚至在我们完善了人工智能之后,人类将到哪里去寻找自己的目标。

比尔·盖茨:我今天的嘉宾是萨姆·奥尔特曼。当然,他是OpenAI的首席执行官。长期以来,他一直是科技行业的创业者和领导者,包括经营Y Combinator,这家公司做了很多了不起的事情,比如资助Reddit、Dropbox、Airbnb。

在我录制本期节目后不久,他被解除了OpenAI首席执行官的职务(尽管事后看只是短暂的),这完全出乎我的意料。解雇发生后的几天里出现了很多变化,包括几乎所有OpenAI员工联名表态支持萨姆,而现在,萨姆又回来了。所以,在你听到我们的对话之前,让我们先和萨姆聊几句,看看他现在过得怎么样。

比尔·盖茨:嘿,萨姆。

萨姆·奥尔特曼:嘿,比尔。

比尔·盖茨:你好吗?

萨姆·奥尔特曼:哦,天哪。这真的太疯狂了,我还好。这是一个非常激动人心的时期。

比尔·盖茨:团队情况怎么样?

萨姆·奥尔特曼:我想,你知道很多人都注意到了这样一个事实,那就是团队从未如此高效、乐观、出色。所以,我猜这也正是藏在所有事情背后的一线希望。

在某种意义上,这是我们成长的真正时刻,我们非常有动力变得更好,变成一个为我们所面临的挑战做好准备的公司。

比尔·盖茨:太棒了。

所以,我们在这次对话中不会讨论那件事;不过,你会听到萨姆对打造安全、负责任的人工智能的承诺。我希望你喜欢这次对话。

欢迎来到《为自己解惑》。我是比尔·盖茨。

比尔·盖茨:今天我们将主要关注人工智能,因为它如此令人兴奋,人们同时也对它感到担忧。欢迎萨姆。

萨姆·奥尔特曼:非常感谢你邀请我来参加节目。

比尔·盖茨:我有幸见证了你们工作一路的进展,但一开始我是非常怀疑的,我没想到ChatGPT能做得这么好。它让我十分惊讶,而我们实际上并不真正理解其中的编码方式。我们知道那些数字,也能看着它们做乘法运算,但莎士比亚究竟被编码在了哪里?你认为我们能对这种表示获得真正的理解吗?

萨姆·奥尔特曼:百分之百可以。要在人脑中做到这一点非常难。你可以说这是一个类似的问题:同样是一堆神经元,它们彼此相连,连接还在不断变化,我们不可能切开你的大脑去观察它是如何演化的;而对这些模型,我们却可以完美地“透视”。目前在可解释性方面已经有一些非常好的工作,我认为随着时间的推移还会有更多进展。

我认为我们终将能够理解这些网络,但我们目前的理解水平还很低。而正如你所预料的,我们已经理解的那一点点,对改进这些模型已经非常有帮助。撇开科学好奇心不谈,我们都有动力去真正理解它们,只是它们的规模实在太庞大了。反过来我们也可以问:莎士比亚被编码在你大脑的哪个位置,又是如何表示的?

比尔·盖茨:我们不知道。

萨姆·奥尔特曼:我们确实不知道。但对于这些我们本应能够完美“透视”、随意观察并进行任何测试的海量数字,却仍然只能说“还不知道”,这反而更让人难以释怀。

比尔·盖茨:我非常确信,在接下来的五年内,我们会理解它。就训练效率和准确性而言,这种理解将让我们做得比今天能做的好得多。

萨姆·奥尔特曼:百分之百同意。在技术发展史上,你会经常看到这种情况:有人先做出了一个经验性的发现,虽然并不清楚背后发生了什么,但它显然行得通;然后,随着科学理解的加深,人们就能把它做得更好。

比尔·盖茨:是的,在物理学、生物学中,有时只是随便一通乱试,然后就“哇”的一声——这究竟是怎么实现的?

萨姆·奥尔特曼:在我们的案例中,构建GPT-1的那个人差不多是独自完成并搞定了它的,这本身就有些令人印象深刻,但当时并没有人深入理解它是如何工作的、为什么有效。后来我们有了扩展定律(scaling laws),可以预测模型会变得多好。这就是为什么当我们告诉你可以做一个演示时,我们相当有信心它会成功。那时模型还没有训练出来,但我们很有信心。这引出了后来的大量尝试,也让我们对正在发生的事情有了越来越科学的认识,但这一切确实是从经验结果先行开始的。

比尔·盖茨:当你展望未来两年,你认为会有哪些重要的里程碑?

萨姆·奥尔特曼:多模态肯定会很重要。

比尔·盖茨:你指的是语音输入、语音输出?

萨姆·奥尔特曼:语音输入、语音输出,然后是图像,最终是视频。显然,人们真的需要这些。我们已经推出了图像和音频,反响比我们的预期要强烈得多。我们能够将其推进得更远,但也许最重要的进步领域将围绕推理能力展开。现在,GPT-4的推理能力还非常有限。还有可靠性,如果你问GPT-4大部分问题10000次,这10000次中可能有一次回答得很好,但它不一定知道是哪一次。而你却希望每次都能得到这10000次中最好的回答,因此可靠性的提升将非常重要。

可定制性和个性化也将非常重要。人们对GPT-4的需求各不相同:不同的风格,不同的假设集,我们将使所有这些成为可能,然后还能让它使用你自己的数据。它能够了解你、你的电子邮件、你的日历、你喜欢的预约方式,并与其他外部数据源连接,所有这些都将是最重要的改进领域。
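
关于上面提到的“从一万次采样中挑出最好的那次回答”,下面给出一个极简的 best-of-N 采样示意,仅用于说明思路,并非 OpenAI 的实际可靠性方案;其中的 score() 打分函数只是一个假设的占位实现,实际使用时需要换成真正的质量评估逻辑。

```python
# best-of-N 采样的极简示意(非 OpenAI 官方做法;score() 为假设的占位函数)
from openai import OpenAI

client = OpenAI()  # 需要预先设置 OPENAI_API_KEY 环境变量

def score(answer: str) -> float:
    """假设的打分函数:这里仅以回答长度作占位,实际应替换为真正的质量评估。"""
    return float(len(answer))

def best_of_n(question: str, n: int = 5, model: str = "gpt-4") -> str:
    # 对同一个问题一次性采样 n 个候选回答
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
        n=n,
        temperature=1.0,  # 保留随机性,让候选回答彼此不同
    )
    candidates = [choice.message.content or "" for choice in resp.choices]
    # 返回得分最高的候选回答
    return max(candidates, key=score)

if __name__ == "__main__":
    print(best_of_n("用一句话解释什么是扩展定律(scaling laws)?"))
```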

比尔·盖茨:在目前的基础算法中,它只是在做前馈和乘法运算,为了生成每一个新词,它本质上都在重复同样的计算。我感兴趣的是,你们是否会走到这样一步:就像求解一个复杂的数学方程那样,可能需要对变换进行任意多次的应用,那时用于推理的控制逻辑可能就得比我们今天所做的复杂得多。

萨姆·奥尔特曼:至少,我们似乎需要某种形式的自适应计算。现在,不管是一个简单的标记(token),还是要解决某个复杂的数学问题,我们在每个标记上花费的计算资源都是一样的。

比尔·盖茨:是的,比如说,“去解决黎曼猜想……”

萨姆·奥尔特曼:那需要大量的计算。

比尔·盖茨:但它用的计算资源跟说个“The”一样。

萨姆·奥尔特曼:对,我们至少得让这一点(自适应计算)行得通。在此之上,我们可能还需要更复杂的东西。
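
顺着“自适应计算”这个话题,下面给出一个纯概念性的玩具示意(并非任何现有模型的实际实现,layer() 与 confidence() 都是假设的占位函数):现状是每个标记不论难易都要跑满全部层,而“早退”(early-exit)式的自适应计算允许模型在“足够确定”时提前停止,从而在简单标记上少花算力。

```python
# 固定计算 vs. 自适应计算(early-exit)的玩具示意;layer()/confidence() 均为假设的占位函数
NUM_LAYERS = 12

def layer(hidden: float, i: int) -> float:
    """占位的“一层计算”:仅代表一层的开销,不对应真实的 Transformer 层。"""
    return hidden * 0.9 + i * 0.01

def confidence(hidden: float) -> float:
    """假设的置信度:真实系统中可能来自一个小的预测头。"""
    return min(1.0, abs(hidden))

def fixed_compute(token_embedding: float) -> tuple[float, int]:
    # 现状:不管 token 难易,都跑满全部层
    h = token_embedding
    for i in range(NUM_LAYERS):
        h = layer(h, i)
    return h, NUM_LAYERS

def adaptive_compute(token_embedding: float, threshold: float = 0.8) -> tuple[float, int]:
    # 自适应:一旦“足够确定”就提前退出,简单 token 少花算力
    h = token_embedding
    for i in range(NUM_LAYERS):
        h = layer(h, i)
        if confidence(h) >= threshold:
            return h, i + 1  # 返回实际用掉的层数
    return h, NUM_LAYERS

print(fixed_compute(1.2))     # 永远用满 12 层
print(adaptive_compute(1.2))  # “容易”的 token 可能几层就退出
print(adaptive_compute(0.1))  # “难”的 token 则跑满全部层
```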

比尔·盖茨:你和我都参加过一场参议院的学习交流会,我很高兴大约有30位参议员到场,帮助他们快速跟上这项技术的进展,毕竟它是一个重大的变革推动者。在吸引政界人士参与这件事上,我认为我们怎么做都不算过分。然而,当他们说“哦,我们在社交媒体上搞砸了,这次应该做得更好”时,社交媒体仍然是一个悬而未决的挑战,其中在两极分化方面存在非常负面的因素,即使是现在,我也不确定我们该如何应对。

萨姆·奥尔特曼:我不明白为什么政府在社交媒体方面不能更有效,但这似乎值得作为一个研究案例去理解,因为他们现在将要面临的是与AI相关的挑战。

比尔·盖茨:这是一个很好的研究案例,那么当你谈论监管时,你是否清楚该构建哪种类型的监管?

萨姆·奥尔特曼:我认为我们正开始弄清楚这一点。在这个领域,很容易就会施加过度的监管,你也可以看到过去发生过很多这样的事。但另一方面,如果我们是对的(当然我们也可能被证明是错的),如果这项技术真的发展到我们认为它会达到的程度,它将影响社会、影响地缘政治力量的平衡,以及其他许多事情。

对于这些目前仍是假设性的、但未来会极其强大的系统(不是GPT-4这种级别,而是算力达到其10万倍或100万倍的系统),我们一直在推动建立一个全球性监管机构的想法,由它来盯住这些超级强大的系统,因为它们确实会产生如此大的全球影响。

我们谈到的一个模式,就是类似国际原子能机构(IAEA)的模式。对于核能,我们当年也做了同样的决定:由于其潜在的全球影响,它需要某种形式的全球性机构,我认为这是说得通的。当然还会有很多短期问题,比如这些模型可以说什么、不可以说什么?我们如何看待版权问题?不同的国家会有不同的考虑,这没问题。

比尔·盖茨:有些人认为,如果一些模型非常强大,我们就会对它们感到害怕——全球核监管之所以行之有效,基本上是因为至少在民用方面,每个人都希望共享安全实践,而且这一点做得非常好。当你涉及核武器方面时,就没有这种情况了。

如果关键在于阻止整个世界做危险的事情,你会希望有一个全球政府,但今天对于许多问题,如气候问题、恐怖主义,可以看到我们很难合作。人们甚至援引中美竞争来解释为什么任何放缓的想法都是不恰当的。难道任何放慢脚步的想法,或者说放慢脚步到足够谨慎的程度,都难以实施吗?

萨姆·奥尔特曼:是的,我认为如果这被理解成要求大家放慢速度,那会非常困难。但如果换一种说法:“做你想做的事,但任何超过某个极高算力门槛的计算集群(考虑到成本,全世界也许只有五个左右这样的集群)都必须接受相当于国际武器核查员的审查”,那里的模型必须开放接受安全审计,在训练期间通过一些测试,并在部署前通过审计和测试。

对我来说,这似乎是可能的。我之前不太确定,但今年我进行了一次环球之旅,与需要参与这一计划的许多国家的元首进行了交谈,他们几乎都表示了支持。这不会让我们免于所有事情,仍然会有一些问题出现在规模较小的系统上,有些情况可能会出现相当严重的错误,但我认为这可以帮助我们应对最高层面的风险。

比尔·盖茨:我确实认为,在最好的情况下,人工智能可以帮助我们解决一些难题。

萨姆·奥尔特曼:当然可以。

比尔·盖茨:包括两极分化的问题,因为它可能会破坏民主,而那将是一个极其糟糕的事情。现在,我们看到人工智能带来了很多生产力的提升,这是非常好的事情。你最兴奋的领域是哪些?

萨姆·奥尔特曼:首先,我始终认为值得记住的是,我们正处在这一长期、连续的曲线上。现在,我们有能够完成任务的人工智能系统。它们当然不能完成一个完整的工作(岗位所做的事情),但它们可以做些任务,并且在那里有生产力的提升。最终,它们将能够做更多类似今天人类工作的事情,我们人类当然也会找到新的、更好的工作。我完全相信,如果你给人们更强大的工具,他们不仅仅可以工作得更快,还可以做一些本质上不同的事情。

现在,我们或许可以将程序员的工作速度提高三倍。这就是我们所看到的,也是我们最兴奋的领域之一,它运行得非常好。但是,如果你能让程序员的效率提高三倍,那就不仅仅是他们能做的事情多了三倍,而是他们能在更高的抽象层次上、使用更多的脑力去思考完全不同的事情。这就好比从打孔卡到更高级的语言,不仅仅是让我们的编程速度快了一点,而是让我们得到了质的提升。我们确实看到了这一点。

当我们看向下一代能够完成更完整任务的人工智能时,你可以把它想象成一个小小的智能体(agent),你可以对它说:“帮我把这整个程序写出来,过程中我会问你几个问题。”它不再只是一次写几个函数,这样就会催生出很多新的东西。然后,它还能做更复杂的事情。有一天,也许会有一个人工智能,你可以对它说:“帮我创建并运营这家公司”。

然后有一天,也许会有一个人工智能,你可以对它说:“去发现新的物理学”。我们现在看到的东西既令人兴奋又美妙,但我认为值得把它放在这样的背景下来看:至少在未来的五年或十年里,这项技术将处于一条非常陡峭的进步曲线上。今天的这些模型,就是它们今后所能达到的最“愚蠢”的状态。

编程可能是我们今天感到最兴奋的一个提高生产力的领域。目前,它已经被大规模部署和使用。医疗保健和教育也是另外两个我们非常期待的快速发展的领域。

比尔·盖茨:有点令人生畏的是,与以往的技术改进不同,这项技术的改进速度非常快,而且没有上限。它可以在很多工作领域达到人类的水平,即使做不出独特的科学研究,它也可以打客服电话和销售电话。我想你和我确实有一些担忧,尽管这是一件好事,但它将迫使我们比以往任何时候都要更快地适应。

萨姆·奥尔特曼:这才是可怕的地方。可怕的并不在于我们必须去适应,也不是说人类缺乏超强的适应能力。我们经历过那些大规模的技术变革,人们所从事的大量工作可以在几代人的时间里发生改变,而在几代人的时间尺度上,我们似乎都能很好地消化这些变化。在过去那些伟大的技术革命中,我们已经看到了这一点。但每一次技术革命都比上一次更快,而这一次将是迄今为止最快的。这正是让我觉得有点可怕的地方:社会需要以多快的速度去适应,以及劳动力市场将随之发生怎样的变化。

比尔·盖茨:人工智能的另一个方面是机器人技术,或者说蓝领类工作,也就是当机器拥有达到人类水平的手和脚的时候。ChatGPT令人难以置信的突破让我们把注意力都放在了白领工作上,这本身没有问题,但我担心人们会因此忽视蓝领工作这一块。你如何看待机器人技术?

萨姆·奥尔特曼:我对此非常兴奋。我们太早开始研究机器人了,所以不得不搁置那个项目。它难在一些“不对”的地方:那些困难并不能帮助我们在机器学习研究真正困难的部分取得进展,我们一直在跟糟糕的模拟器、断掉的腱绳之类的问题打交道。随着时间的推移,我们也越来越意识到,我们首先需要的是智能和认知,然后再去想办法让它适配物理实体。而以我们构建这些语言模型的方式,从语言入手更容易。但我们一直计划回到机器人这个方向上来。

我们已经开始对一些机器人公司进行投资。在物理硬件方面,我终于第一次看到了真正令人兴奋的新平台被建立起来。到时候,我们就能利用我们的模型,就像你刚才说的,利用它们的语言理解能力和未来的视频理解能力,说:“好吧,让我们用机器人做一些了不起的事情吧。”

比尔·盖茨:如果那些已经把腿部做得很好的硬件人员真的把手臂、手掌和手指做出来,然后我们再把它们组合起来,而且价格也不会贵得离谱,那么这将会迅速改变很多蓝领类工作的就业市场。

萨姆·奥尔特曼:是的。当然,如果回到七到十年前,当时的共识性预测是:首先受影响的会是蓝领工作,其次是白领工作,而创造性工作也许永远不会被影响,至少会是最后一个,因为那被认为是某种魔法,是人类独有的东西。

显然,现实恰恰走向了相反的方向。我认为这背后有很多有趣的经验值得总结。对创造性工作来说,GPT模型的“幻觉”是一个特性而不是缺陷,它能让你发现一些新东西;而如果你要让机器人搬动重型机械,你最好做到非常精确。我认为这正说明你必须跟着技术的走向走,你可能有一些先入为主的观念,但有时科学并不往那个方向发展。

比尔·盖茨:那么你手机上最常用的应用是什么?

萨姆·奥尔特曼:Slack。

比尔·盖茨:真的吗?

萨姆·奥尔特曼:是的,我希望我能说是ChatGPT。

比尔·盖茨:(笑)甚至比电子邮件还多?

萨姆·奥尔特曼:远远超过电子邮件。我认为唯一可能超过它的是iMessages,但确实Slack比iMessages还多。

比尔·盖茨:在OpenAI内部,有很多协调工作要做。

萨姆·奥尔特曼:是的,那你呢?

比尔·盖茨:我是Outlook。我是传统的电子邮件派,要么就是浏览器,当然,我的许多新闻都是通过浏览器看来的。

萨姆·奥尔特曼:我没有把浏览器算作一个应用,有可能我使用它的频率更高,但我仍然打赌是Slack,我整天都在使用它。

比尔·盖茨:不可思议。好吧,我们这里有一个黑胶唱片机。我像对其他嘉宾那样,要求萨姆带来一张他最喜欢的唱片。那么,你今天带来了什么?

萨姆·奥尔特曼:我带来了马克斯·里希特重新编曲的维瓦尔第的《新四季》。我工作时喜欢无歌词的音乐,这张唱片既保留了维瓦尔第原作的舒适感,也有我非常熟悉的曲子,但又有足够多新的音符带来完全不同的体验。有些音乐作品,你会因为在人生的关键时期大量地听它们而形成强烈的情感依恋,而《新四季》正是我在我们初创OpenAI时经常听的东西。

我认为这是非常美妙的音乐,它高亢而乐观,完美适配我工作时的需求,我觉得新版本非常棒。

比尔·盖茨:这是由交响乐团演奏的吗?

萨姆·奥尔特曼:是的,是由Chineke!乐团演奏的。

比尔·盖茨:太棒了。

萨姆·奥尔特曼:现在就播吗?

比尔·盖茨:是的,我们来听听。

萨姆·奥尔特曼:这是我们要听的乐章的序曲。

比尔·盖茨:你戴耳机吗?

萨姆·奥尔特曼:我戴。

比尔·盖茨:你的同事们会因为你听古典音乐而取笑你吗?

萨姆·奥尔特曼:我不认为他们知道我在听什么,因为我确实戴着耳机。在寂静中工作对我来说非常困难,我可以做到,但这不是我的自然状态。

比尔·盖茨:这很有趣。我同意,带歌词的歌曲会让我觉得分心,但这更多是一种情绪类型的东西。

萨姆·奥尔特曼:是的,而且我把它调得很轻,我也不能听响亮的音乐,不知为何这是我一直以来的习惯。

比尔·盖茨:太棒了,感谢你带来美妙的音乐。

比尔·盖茨:现在,对我来说,如果真的借助人工智能达到了那种令人难以置信的能力,也就是AGI(通用人工智能)、AGI+(超级通用人工智能),我担心的有三件事:一是坏人控制了系统,如果好人也拥有同样强大的系统,这个问题有望被降到最低;二是系统自己夺取控制权的可能性,出于某些原因,我不太担心这个,但我很高兴有其他人在关注;而真正让我感到困惑的是人类存在的目的。我从“嘿,我擅长研究疟疾、根除疟疾,擅长召集聪明人并为此调配资源”这样的事情中获得了很多兴奋感。

可当机器对我说:“比尔,去打匹克球吧,根除疟疾的事交给我,你只是个思维迟钝的人”,那在哲学上就成了一件令人困惑的事情。我们该如何组织社会?是的,我们要改善教育,但如果真走到那个极端,教育又是为了什么?这里仍有很大的不确定性。而有史以来第一次,这种情况在未来20年内发生的可能性不再是零。

萨姆·奥尔特曼:从事技术工作有很多心理上的困难,但你说的这些对我来说是最困难的,因为我也从中获得了很多满足感。

比尔·盖茨:你确实带来了价值。

萨姆·奥尔特曼:从某种意义上来说,这可能是我做的最后一件难事。

比尔·盖茨:我们的思维是如此围绕着稀缺性组织起来的:教师的稀缺、医生的稀缺、好想法的稀缺。正因为如此,我有时会想,在没有这种稀缺的环境中成长起来的一代人,会如何看待“社会该如何组织、人该做什么”这样的哲学问题,也许他们能想出一个解决方案。我担心自己的思维被稀缺性塑造得太深,以至于连思考这个问题都很困难。

萨姆·奥尔特曼:这也是我告诉自己的,而且我真心相信,虽然我们在某种意义上放弃了一些东西,但我们将会拥有比我们人类更聪明的东西。如果我们能进入这个“后稀缺”世界,我们将会找到新的事情去做。它们会感觉非常不同。也许你不是在解决疟疾问题,而是在决定你喜欢哪个星系,以及你打算如何处理它。

我相信我们永远不会缺少问题,不会缺少获得满足感、为彼此做事的方式,也不会缺少那些“人类为其他人类而进行的人类游戏”,这些依然会非常重要。这一切肯定会有所不同,但我认为唯一的出路就是穿过去。我们必须去做这件事,它必将发生,这已经是一个不可阻挡的技术进程,因为它的价值太大了。我非常非常有信心我们能把它做好,但感觉确实会非常不一样。

比尔·盖茨:把这项技术应用于某些当前的问题,比如给孩子们配一个能帮助激发学习动力的家教,或是发现治疗阿尔茨海默症的药物,我认为怎么做是相当清楚的。至于人工智能能否帮助我们少打仗、少一些两极分化:你可能会觉得,随着智能水平的提升,不搞两极分化是常识,不发动战争也是常识,但我确实认为很多人会对此持怀疑态度。我很希望有人去攻克那些最难的人类问题,比如我们能否彼此和睦相处。如果我们认为人工智能能够帮助人类更好地相处,我认为那将是非常积极的。

萨姆·奥尔特曼:我相信它会在这方面给我们带来意外的惊喜。这项技术会让我们惊讶于它能做的事情有多么多。我们还得拭目以待,但我非常乐观。我同意你的看法,这将是非常大的贡献。

比尔·盖茨:就公平性而言,技术通常很昂贵,比如个人电脑或互联网连接,而降低成本需要时间。我想,运行这些人工智能系统的成本看起来很不错,每次评估的成本会降低很多吗?

萨姆·奥尔特曼:它已经降低了很多。GPT-3是我们推出时间最长、优化最久的模型,在它推出的三年多时间里,我们已经将成本降低了40倍。对于三年的时间来说,这是一个很好的开始。至于GPT-3.5版,我敢打赌,目前我们已经将其成本降低了近10倍。

GPT-4是新产品,所以我们还没有那么多时间来降低成本,但我们会继续。我认为,在我所知道的所有技术中,我们的成本下降曲线是最陡峭的,优于摩尔定律。这不仅是因为我们想出了如何让模型更高效的方法,还因为我们对研究有了更好的理解,我们可以在更小的模型中获得更多的知识和能力。我认为,我们将把智能的成本降低到接近于零的程度,这对社会来说将是一个改头换面的转变。

现在,我对世界的基本模型由两项成本构成:智能的成本和能源的成本。(比尔笑了)这是影响生活质量的两个最大因素,对穷人来说尤其如此,但总体上也是如此。如果你能同时大幅降低这两项成本,你能拥有的东西、你能为人们带来的改善就会非常可观。我们正走在这样一条曲线上,至少在智能方面,我们会真正兑现这一承诺。即使按照目前的价格(这已经是它今后所会达到的最高价格,而且比我们希望的还是高了不少),每月20美元,你就能获得大量的GPT-4使用量,其价值远远超过20美元。而且我们已经把成本降了不少。
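
按萨姆前面给出的数字粗略折算一下(仅为示意性换算,假设成本在三年里大致匀速下降):三年降本约40倍,相当于年化约3.4倍的降本速度;而摩尔定律大约每两年翻一番,折合每年约1.41倍,这与“比摩尔定律更陡峭”的说法是一致的。

```python
# 按对话中的数字做粗略折算(假设:三年间降本速度大致均匀)
total_reduction = 40          # GPT-3 在三年多里成本降低约 40 倍
years = 3

annual_factor = total_reduction ** (1 / years)  # 年化降本倍数 ≈ 3.42
moore_annual = 2 ** (1 / 2)                     # 摩尔定律约每两年翻一番 ≈ 1.41 倍/年

print(f"GPT-3 年化降本约 {annual_factor:.2f} 倍/年")
print(f"摩尔定律折合约 {moore_annual:.2f} 倍/年")
```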

比尔·盖茨:那竞争呢?很多人一下子同时挤进这个赛道是不是一件有趣的事情?

萨姆·奥尔特曼:既让人恼火,又给人动力,还挺有乐趣,(比尔笑了)我相信你也有过类似的感觉。这确实促使我们做得更快、更好,我们对自己的路线很有信心。我认为很多人是在朝着冰球曾经所在的位置滑,而我们是在朝着冰球将要去的方向滑,这种感觉还不错。

比尔·盖茨:我认为人们会对OpenAI的规模之小感到惊讶。你们有多少员工?

萨姆·奥尔特曼:大约500人,所以我们比以前稍微大一些。

比尔·盖茨:但那很小,(笑)要是以谷歌、微软、苹果的标准来看。

萨姆·奥尔特曼:确实很小,我们不仅要经营研究实验室,现在还要经营一家真正的企业和两款产品。

比尔·盖茨:你们各方面的能力都在快速扩张,包括要和世界各地的人交流、倾听各类利益相关方的声音,这对现在的你来说一定非常有意思。

萨姆·奥尔特曼:非常令人着迷。

比尔·盖茨:这是一家员工都很年轻的公司吗?

萨姆·奥尔特曼:比平均年龄要大一些。

比尔·盖茨:好的。

萨姆·奥尔特曼:这里不是一群24岁的程序员。

比尔·盖茨:的确,我的视角有些扭曲了,因为我已经60多岁了。我看到你,你比我年轻,但你说得对,你们有很多人四十多岁了。

萨姆·奥尔特曼:三十多岁、四十多岁、五十多岁(的人)

比尔·盖茨:这不像早期的苹果、微软,那时我们真的还是孩子。

萨姆·奥尔特曼:不是的,我也反思过这个问题。我认为公司普遍变老了,我不知道该如何看待这个问题。我认为这在某种程度上对社会是个不好的迹象,但我在 YC(Y Combinator)追踪过这个问题。随着时间的推移,最优秀的创始人年龄都呈增长趋势。

比尔·盖茨:这很有意思。

萨姆·奥尔特曼:就我们的情况而言,甚至比这个平均水平还要再大一些。

比尔·盖茨:你在YC帮助这些公司的过程中一定学到了很多,我想那对你现在做的事情也是很好的训练。(笑)

萨姆·奥尔特曼:那非常有帮助。

比尔·盖茨:包括看到错误。

萨姆·奥尔特曼:完全可以这么说。OpenAI做了很多与YC建议的标准相反的事情。我们花了四年半时间才推出我们的第一个产品。公司成立之初,我们对产品没有任何概念,我们没有与用户交流。我仍然不建议大多数公司这样做,但在YC学习和见识过这些规则后,我觉得自己明白了何时、如何以及为什么我们可以打破这些规则,我们所做的事情真的与我见过的其他公司大相径庭。

比尔·盖茨:关键是你集结的人才团队,让他们专注于大问题,而不是某些短期的收益问题。

萨姆·奥尔特曼:我认为硅谷的投资者不会在我们需要的水平上支持我们,因为我们必须在研究上花费如此多的资金才能推出产品。我们只是说:“最终模型会足够好,我们知道它会对人们有价值。”但我们非常感激与微软的合作,因为这种超前投资并不是风险投资行业擅长的。

比尔·盖茨:确实不是,而且资本成本相当可观,几乎达到了风险投资所能承受的极限。

萨姆·奥尔特曼:可能已经超过了。

比尔·盖茨:确实可能。在“如何把这个杰出的人工智能组织与一家大型软件公司结合起来”这个问题上,我要给萨蒂亚记上一大功,这种结合的协同效应非常好,甚至可以说一加一远远大于二。

萨姆·奥尔特曼:是的,这很棒。你真说到点上了,这也是我从YC学到的。我们可以说要找世界上最好的人来做这件事。我们要确保我们的目标和AGI的使命是一致的。但除此之外,我们要让人们做自己的事情。我们会意识到这将经历一些曲折,需要一段时间。

我们有一个大致正确的理论,但一路上的很多策略都被证明是大错特错的,我们只是试图遵循科学。

比尔·盖茨:我记得当时去看演示,心里还在想:好吧,这个东西的收入路径会是什么?会是什么样子?而在如今这个狂热的时期,你依然留住了一支令人难以置信的团队。

萨姆·奥尔特曼:是的,优秀的人都希望与优秀的同事共事。

比尔·盖茨:那是一种吸引力。

萨姆·奥尔特曼:那里有一个很深的引力中心。此外,这听起来很陈词滥调,每家公司都这么说,但人们感受到了深深的使命感,每个人都想参与AGI的创建。

比尔·盖茨:那一定很激动人心。当你再次用演示震撼我时,我可以感受到那股能量。我看到了新的人,新的想法,而你们仍以非常不可思议的速度前进着。

萨姆·奥尔特曼:你最常给出的建议是什么?

比尔·盖茨:才能可以分很多种,在我职业生涯的早期,我认为只有纯粹的智商,比如工程智商,当然,你可以将其应用于金融和销售。但这种想法被证明是如此错误,建立一个拥有正确技能组合的团队是如此重要。针对他们的问题,引导他们思考应该如何建立一个拥有所有不同技能的团队,这可能是我认为最有帮助的建议之一。是的,告诉孩子们,数学、科学很酷,如果你喜欢的话,但真正让我惊讶的是才能的混合。

那你呢?你给出的建议是什么?

萨姆·奥尔特曼:关于大多数人对风险的误判。他们害怕离开舒适的工作,去做他们真正想做的事情。实际上,如果他们不这样做,他们回顾自己的一生时就会想,“天啊,我从来没有去创办我想创办的公司,或者我从未尝试成为一名人工智能研究员。”我认为实际上这样风险更大。

与此相关的是,明确自己想要做什么,并向别人提出自己的要求,会有意想不到的收获。很多人受困于把时间花在自己不想做的事情上,而我最常给的建议可能就是想办法解决这个问题。

比尔·盖茨:如果你能让人们从事一份让他们感到有目标的工作,那会更有趣。有时,他们就是这样产生巨大影响的。

萨姆·奥尔特曼:当然。

比尔·盖茨:感谢你的到来,这是一次精彩的对话。在未来的日子里,我相信我们还会有更多的交流,因为我们正努力以最好的方式塑造人工智能。

萨姆·奥尔特曼:非常感谢你的邀请,我真的很享受与你对话。

比尔·盖茨:《为自己解惑》是盖茨笔记的一个节目。特别感谢我今天的嘉宾萨姆·奥尔特曼。

比尔·盖茨:告诉我你的第一台电脑是什么?

萨姆·奥尔特曼:是Mac LC2。

比尔·盖茨:不错的选择。

萨姆·奥尔特曼:是个好东西,我还留着它,它到现在还能用。

以下是英文对话原文:

If you ask people to name leaders in artificial intelligence, there’s one name you’ll probably hear more than any other: Sam Altman. His team at OpenAI is pushing the boundaries of what AI can do with ChatGPT, and I loved getting to talk to him about what’s next. Our conversation covered why today’s AI models are the stupidest they’ll ever be, how societies adapt to technological change, and even where humanity will find purpose once we’ve perfected artificial intelligence.

BILL GATES: My guest today is Sam Altman. He, of course, is the CEO of OpenAI. He’s been an entrepreneur and a leader in the tech industry for a long time, including running Y Combinator, that did amazing things like funding Reddit, Dropbox, Airbnb.

A little while after I recorded this episode, I was completely taken by surprise when, at least briefly, he was let go as the CEO of OpenAI. A lot happened in the days after the firing, including a show of support from nearly all of OpenAI’s employees, and Sam is back. So, before you hear the conversation that we had, let’s check in with Sam and see how he’s doing.

BILL GATES: Hey, Sam. 

SAM ALTMAN: Hey, Bill.

BILL GATES: How are you? 

SAM ALTMAN: Oh, man. It’s been so crazy. I’m all right. It’s a very exciting time.

BILL GATES: How’s the team doing? 

SAM ALTMAN: I think, you know a lot of people have remarked on the fact that the team has never felt more productive or more optimistic or better. So, I guess that’s like a silver lining of all of this.

In some sense, this was like a real moment of growing up for us, we are very motivated to become better, and sort of to become a company ready for the challenges in front of us.

BILL GATES: Fantastic. 

[music]

So, we won’t be discussing that situation in the conversation; however, you will hear about Sam’s commitment to build a safe and responsible AI. I hope you enjoy the conversation.

Welcome to Unconfuse Me. I’m Bill Gates.

[music fades]

BILL GATES: Today we’re going to focus mostly on AI, because it’s such an exciting thing, and people are also concerned. Welcome, Sam.

SAM ALTMAN: Thank you so much for having me.

BILL GATES: I was privileged to see your work as it evolved, and I was very skeptical. I didn’t expect ChatGPT to get so good. It blows my mind, and we don’t really understand the encoding. We know the numbers, we can watch it multiply, but the idea of where is Shakespearean encoded? Do you think we’ll gain an understanding of the representation?

SAM ALTMAN: A hundred percent. Trying to do this in a human brain is very hard. You could say it’s a similar problem, which is there are these neurons, they’re connected. The connections are moving and we’re not going to slice up your brain and watch how it’s evolving, but this we can perfectly x-ray. There has been some very good work on interpretability, and I think there will be more over time. I think we will be able to understand these networks, but our current understanding is low. The little bits we do understand have, as you’d expect, been very helpful in improving these things. We’re all motivated to really understand them, scientific curiosity aside, but the scale of these is so vast. We also could say, where in your brain is Shakespeare encoded, and how is that represented?

BILL GATES: We don’t know.

SAM ALTMAN: We don’t really know, but it somehow feels even less satisfying to say we don’t know yet in these masses of numbers that we’re supposed to be able to perfectly x-ray and watch and do any tests we want to on.

BILL GATES: I’m pretty sure, within the next five years, we’ll understand it. In terms of both training efficiency and accuracy, that understanding would let us do far better than we’re able to do today. 

SAM ALTMAN: A hundred percent. You see this in a lot of the history of technology where someone makes an empirical discovery. They have no idea what’s going on, but it clearly works. Then, as the scientific understanding deepens, they can make it so much better. 

BILL GATES: Yes, in physics, biology, it’s sometimes just messing around, and it’s like, whoa – how does this actually come together?

SAM ALTMAN: In our case, the guy that built GPT-1 sort of did it off by himself and solved this, and it was somewhat impressive, but no deep understanding of how it worked or why it worked. Then we got the scaling laws. We could predict how much better it was going to be. That was why, when we told you we could do a demo, we were pretty confident it was going to work. We hadn’t trained the model, but we were pretty confident. That has led us to a bunch of attempts and better and better scientific understanding of what’s going on. But it really came from a place of empirical results first. 

BILL GATES: When you look at the next two years, what do you think some of the key milestones will be?

SAM ALTMAN: Multimodality will definitely be important.

BILL GATES: Which means speech in, speech out?

SAM ALTMAN: Speech in, speech out. Images. Eventually video. Clearly, people really want that. We’ve launched images and audio, and it had a much stronger response than we expected. We’ll be able to push that much further, but maybe the most important areas of progress will be around reasoning ability. Right now, GPT-4 can reason in only extremely limited ways. Also reliability. If you ask GPT-4 most questions 10,000 times, one of those 10,000 is probably pretty good, but it doesn’t always know which one, and you’d like to get the best response of 10,000 each time, and so that increase in reliability will be important. 

Customizability and personalization will also be very important. People want very different things out of GPT-4: different styles, different sets of assumptions. We’ll make all that possible, and then also the ability to have it use your own data. The ability to know about you, your email, your calendar, how you like appointments booked, connected to other outside data sources, all of that. Those will be some of the most important areas of improvement.

BILL GATES: In the basic algorithm right now, it’s just feed forward, multiply, and so to generate every new word, it’s essentially doing the same thing. I’ll be interested if you ever get to the point where, like in solving a complex math equation, you might have to apply transformations an arbitrary number of times, that the control logic for the reasoning may have to be quite a bit more complex than just what we do today. 

SAM ALTMAN: At a minimum, it seems like we need some sort of adaptive compute. Right now, we spend the same amount of compute on each token, a dumb one, or figuring out some complicated math.

BILL GATES: Yes, when we say, “Do the Riemann hypothesis …”

SAM ALTMAN: That deserves a lot of compute.

BILL GATES: It’s the same compute as saying, “The.”

SAM ALTMAN: Right, so at a minimum, we’ve got to get that to work. We may need much more sophisticated things beyond it.

BILL GATES: You and I were both part of a Senate Education Session, and I was pleased that about 30 senators came to that, and helping them get up to speed, since it’s such a big change agent. I don’t think we could ever say we did too much to draw the politicians in. And yet, when they say, “Oh, we blew it on social media, we should do better,” – that is an outstanding challenge that there are very negative elements to, in terms of polarization. Even now, I’m not sure how we would deal with that. 

SAM ALTMAN: I don’t understand why the government was not able to be more effective around social media, but it seems worth trying to understand as a case study for what they’re going to go through now with AI.

BILL GATES: It’s a good case study, and when you talk about the regulation, is it clear to you what sort of regulations would be constructed?

SAM ALTMAN: I think we’re starting to figure that out. It would be very easy to put way too much regulation on this space. You can look at lots of examples of where that’s happened before. But also, if we are right, and we may turn out not to be, but if we are right, and this technology goes as far as we think it’s going to go, it will impact society, geopolitical balance of power, so many things, that for these, still hypothetical, but future extraordinarily powerful systems – not like GPT-4, but something with 100,000 or a million times the compute power of that, we have been socialized in the idea of a global regulatory body that looks at those super-powerful systems, because they do have such global impact. One model we talk about is something like the IAEA. For nuclear energy, we decided the same thing. This needs a global agency of some sort, because of the potential for global impact. I think that could make sense. There will be a lot of shorter term issues, issues of what are these models allowed to say and not say? How do we think about copyright? Different countries are going to think about those differently and that’s fine.

BILL GATES: Some people think if there are models that are so powerful, we’re scared of them – the reason nuclear regulation works globally, is basically everyone, at least on the civilian side, wants to share safety practices, and it has been fantastic. When you get over into the weapons side of nuclear, you don’t have that same thing. If the key is to stop the entire world from doing something dangerous, you’d almost want global government, which today for many issues, like climate, terrorism, we see that it’s hard for us to cooperate. People even invoke U.S.-China competition to say why any notion of slowing down would be inappropriate. Isn’t any idea of slowing down, or going slow enough to be careful, hard to enforce?

SAM ALTMAN: Yes, I think if it comes across as asking for a slowdown, that will be really hard. If it instead says, “Do what you want, but any compute cluster above a certain extremely high-power threshold” – and given the cost here, we’re talking maybe five in the world, something like that – any cluster like that has to submit to the equivalent of international weapons inspectors. The model there has to be made available for safety audit, pass some tests during training, and before deployment. That feels possible to me. I wasn’t that sure before, but I did a big trip around the world this year, and talked to heads of state in many of the countries that would need to participate in this, and there was almost universal support for it. That’s not going to save us from everything. There are still going to be things that are going to go wrong with much smaller-scale systems, in some cases, probably pretty badly wrong. But I think that can help us with the biggest tier of risks.

BILL GATES: I do think AI, in the best case, can help us with some hard problems.

SAM ALTMAN: For sure.

BILL GATES: Including polarization because potentially that breaks democracy and that would be a super-bad thing. Right now, we’re looking at a lot of productivity improvement from AI, which is overwhelmingly a very good thing. Which areas are you most excited about?

SAM ALTMAN: First of all, I always think it’s worth remembering that we’re on this long, continuous curve. Right now, we have AI systems that can do tasks. They certainly can’t do jobs, but they can do tasks, and there’s productivity gain there. Eventually, they will be able to do more things that we think of like a job today, and we will, of course, find new jobs and better jobs. I totally believe that if you give people way more powerful tools, it’s not just that they can work a little faster, they can do qualitatively different things. Right now, maybe we can speed up a programmer 3x. That’s about what we see, and that’s one of the categories that we’re most excited about it. It’s working super-well. But if you make a programmer three times more effective, it’s not just that they can do three times more stuff, it’s that they can – at that higher level of abstraction, using more of their brainpower – they can now think of totally different things. It’s like going from punch cards to higher level languages didn’t just let us program a little faster, it let us do these qualitatively new things. We’re really seeing that.

As we look at these next steps of things that can do a more complete task, you can imagine a little agent that you can say, “Go write this whole program for me, I’ll ask you a few questions along the way, but it won’t just be writing a few functions at a time.” That’ll enable a bunch of new stuff. And then again, it’ll do even more complex stuff. Someday, maybe there’s an AI where you can say, “Go start and run this company for me.” And then someday, there’s maybe an AI where you can say, “Go discover new physics.” The stuff that we’re seeing now is very exciting and wonderful, but I think it’s worth always putting it in context of this technology that, at least for the next five or ten years, will be on a very steep improvement curve. These are the stupidest the models will ever be.

Coding is probably the single area from a productivity gain we’re most excited about today. It’s massively deployed and at scaled usage at this point. Healthcare and education are two things that are coming up that curve that we’re very excited about too.

BILL GATES: The thing that is a little daunting is, unlike previous technology improvements, this one could improve very rapidly, and there’s kind of no upper bound. The idea that it achieves human levels on a lot of areas of work, even if it’s not doing unique science, it can do support calls and sales calls. I guess you and I do have some concern, along with this good thing, that it’ll force us to adapt faster than we’ve had to ever before.

SAM ALTMAN: That’s the scary part. It’s not that we have to adapt. It’s not that humanity is not super-adaptable. We’ve been through these massive technological shifts, and a massive percentage of the jobs that people do can change over a couple of generations, and over a couple of generations, we seem to absorb that just fine. We’ve seen that with the great technological revolutions of the past. Each technological revolution has gotten faster, and this will be the fastest by far. That’s the part that I find potentially a little scary, is the speed with which society is going to have to adapt, and that the labor market will change.

BILL GATES: One aspect of AI is robotics, or blue-collar jobs, when you get hands and feet that are at human-level capability. The incredible ChatGPT breakthrough has kind of gotten us focused on the white-collar thing, which is super appropriate, but I do worry that people are losing the focus on the blue-collar piece. So how do you see robotics?

SAM ALTMAN: Super-excited for that. We started robots too early, so we had to put that project on hold. It was hard for the wrong reasons. It wasn’t helping us make progress with the difficult parts of the ML research. We were dealing with bad simulators and breaking tendons and things like that. We also realized more and more over time that we first needed intelligence and cognition, and then we could figure out how to adapt it to physicality. It was easier to start with that with the way we built these language models. But we have always planned to come back to it. 

We’ve started investing a little bit in robotics companies. On the physical hardware side, there’s finally, for the first time that I’ve ever seen, really exciting new platforms being built there. At some point, we will be able to use our models, as you were saying, with their language understanding and future video understanding, to say, “All right, let’s do amazing things with a robot.”

BILL GATES: If the hardware guys who’ve done a good job on legs actually get the arms, hands, fingers piece, and then we couple it, and it’s not ridiculously expensive, that could change the job market for a lot of the blue-collar type work, pretty rapidly.

SAM ALTMAN: Yes. Certainly, the prediction, the consensus prediction, if we rewind seven or ten years, was that the impact was going to be blue-collar work first, white-collar work second, creativity maybe never, but certainly last, because that was magic and human.

Obviously, it’s gone exactly the other direction. I think there are a lot of interesting takeaways about why that happened. Creative work, the hallucinations of the GPT models is a feature, not a bug. It lets you discover some new things. Whereas if you’re having a robot move heavy machinery around, you’d better be really precise with that. I think this is just a case of you’ve got to follow where technology goes. You have preconceptions, but sometimes the science doesn’t want to go that way.

BILL GATES: So what application on your phone do you use the most?

SAM ALTMAN: Slack.

BILL GATES: Really? 

SAM ALTMAN: Yes. I wish I could say ChatGPT. 

BILL GATES: [laughs] Even more than e-mail?

SAM ALTMAN: Way more than e-mail. The only thing that I was thinking possibly was iMessages, but yes, more than that. 

BILL GATES: Inside OpenAI, there’s a lot of coordination going on. 

SAM ALTMAN: Yes. What about you? 

BILL GATES: It’s Outlook. I’m this old-style e-mail guy, either that or the browser, because, of course, a lot of my news is coming through the browser. 

SAM ALTMAN: I didn’t quite count the browser as an app. It’s possible I use it more, but I still would bet Slack. I’m on Slack all day.

BILL GATES: Incredible.

BILL GATES: Well, we’ve got a turntable here. I asked Sam, like I have for other guests, to bring one of his favorite records. So, what have we got? 

SAM ALTMAN: I brought The New Four Seasons – Vivaldi Recomposed by Max Richter. I like music with no words for working. That had the old comfort of Vivaldi and pieces I knew really well, but enough new notes that it was a totally different experience. There are pieces of music that you form these strong emotional attachments to, because you listened to them a lot in a key period of your life. This was something that I listened to a lot while we were starting OpenAI.

I think it’s very beautiful music. It’s soaring and optimistic, and just perfect for me for working. I thought the new version is just super great. 

BILL GATES: Is it performed by an orchestra? 

SAM ALTMAN: It is. The Chineke! Orchestra.

BILL GATES: Fantastic. 

SAM ALTMAN: Should I play it?

BILL GATES: Yes, let’s. 

[music – “The New Four Seasons – Vivaldi Recomposed: Spring 1” by Max Richter]

SAM ALTMAN: This is the intro to the sound we’re going for.

[music]

BILL GATES: Do you wear headphones?

SAM ALTMAN: I do.

BILL GATES: Do your colleagues give you a hard time about listening to classical music?

SAM ALTMAN: I don’t think they know what I listen to, because I do wear headphones. It’s very hard for me to work in silence. I can do it, but it’s not my natural state.

BILL GATES: It’s fascinating. Songs with words, I agree, I would find that distracting, but this is more of a mood type thing. 

SAM ALTMAN: Yes, and I have it quiet. I can’t listen to loud music either, but it’s just somehow always what I’ve done. 

BILL GATES: It’s fantastic. Thanks for bringing it.

[music fades]

BILL GATES: Now, with AI, to me, if you do get to the incredible capability, AGI, AGI+, there are three things I worry about. One is that a bad guy is in control of the system. If we have good guys who have equally powerful systems that hopefully minimizes that problem. There’s the chance of the system taking control. For some reasons, I’m less concerned about that. I’m glad other people are. The one that sort of befuddles me is human purpose. I get a lot of excitement that, hey, I’m good at working on malaria, and malaria eradication, and getting smart people and applying resources to that. When the machine says to me, “Bill, go play pickleball, I’ve got malaria eradication. You’re just a slow thinker,” then it is a philosophically confusing thing. How do you organize society? Yes, we’re going to improve education, but education to do what, if you get to this extreme, which we still have a big uncertainty. For the first time, the chance that might come in the next 20 years is not zero.

SAM ALTMAN: There’s a lot of psychologically difficult parts of working on the technology, but this is for me, the most difficult, because I also get a lot of satisfaction from that.

BILL GATES: You have real value added.

SAM ALTMAN: In some real sense, this might be the last hard thing I ever do. 

BILL GATES: Our minds are so organized around scarcity; scarcity of teachers and doctors and good ideas that, partly, I do wonder if a generation that grows up without that scarcity will find the philosophical notion of how to organize society and what to do. Maybe they’ll come up with a solution. I’m afraid my mind is so shaped around scarcity, I even have a hard time thinking of it.

SAM ALTMAN: That’s what I tell myself too, and it’s what I truly believe, that although we are giving something up here, in some sense, we are going to have things that are smarter than us. If we can get into this world of post-scarcity, we will find new things to do. They will feel very different. Maybe instead of solving malaria, you’re deciding which galaxy you like, and what you’re going to do with it.  I’m confident we’re never going to run out of problems, and we’re never going to run out of different ways to find fulfilment and do things for each other and understand how we play our human games for other humans in this way that’s going to remain really important. It is going to be different for sure, but I think the only way out is through. We have to go do this thing. It’s going to happen. This is now an unstoppable technological course. The value is too great. And I’m pretty confident, very confident, we’ll make it work, but it does feel like it’s going to be so different. 

BILL GATES: The way to apply this to certain current problems, like getting kids a tutor and helping to motivate them, or discover drugs for Alzheimer’s, I think it’s pretty clear how to do that. Whether AI can help us go to war less, be less polarized; you’d think as you drive intelligence, and not being polarized kind of is common sense, and not having war is common sense, but I do think a lot of people would be skeptical. I’d love to have people working on the hardest human problems, like whether we get along with each other. I think that would be extremely positive, if we thought the AI could contribute to humans getting along with each other. 

SAM ALTMAN: I believe that it will surprise us on the upside there. The technology will surprise us with how much it can do. We’ve got to find out and see, but I’m very optimistic. I agree with you, what a contribution that would be.

BILL GATES: In terms of equity, technology is often expensive, like a PC or Internet connection, and it takes time to come down in cost. I guess the costs of running these AI systems, it looks pretty good that the cost per evaluation is going to come down a lot?

SAM ALTMAN: It’s come down an enormous amount already. GPT-3, which is the model we’ve had out the longest and the most time to optimize, in the three and a little bit years that it has been out, we’ve been able to bring the cost down by a factor of 40. For three years’ time, that’s a pretty good start. For 3.5, we’ve brought it down, I would bet, close to 10 at this point. Four is newer, so we haven’t had as much time to bring the cost down there, but we will continue to bring the cost down. I think we are on the steepest curve of cost reduction ever of any technology I know, way better than Moore’s Law. It’s not only that we figured out how to make the models more efficient, but also, as we understand the research better, we can get more knowledge, we can get more ability into a smaller model. I think we are going to drive the cost of intelligence down to so close to zero that it will be this before-and-after transformation for society. 

Right now, my basic model of the world is cost of intelligence, cost of energy. [Bill laughs] Those are the two biggest inputs to quality of life, particularly for poor people, but overall. If you can drive both of those way down at the same time, the amount of stuff you can have, the amount of improvement you can deliver for people, it’s quite enormous. We are on a curve, at least for intelligence, we will really, really deliver on that promise. Even at the current cost, which again, this is the highest it will ever be and much more than we want, for 20 bucks a month, you get a lot of GPT-4 access, and way more than 20 bucks’ worth of value. We’ve come down pretty far. 

BILL GATES: What about the competition? Is that kind of a fun thing that many people are working on this all at once?

SAM ALTMAN: It’s both annoying and motivating and fun. [Bill laughs] I’m sure you’ve felt similarly. It does push us to be better and do things faster. We are very confident in our approach. We have a lot of people that I think are skating to where the puck was, and we’re going to where the puck is going. It feels all right. 

BILL GATES: I think people would be surprised at how small OpenAI is. How many employees do you have?

SAM ALTMAN: About 500, so we’re a little bigger than before. 

BILL GATES: But that’s tiny. [laughs] By Google, Microsoft, Apple standards – 

SAM ALTMAN: It’s tiny. We have to not only run the research lab, but now we have to run a real business and two products. 

BILL GATES: The scaling of all your capacities, including talking to everybody in the world, and listening to all those constituencies, that’s got to be fascinating for you right now. 

SAM ALTMAN: It’s very fascinating. 

BILL GATES: Is it mostly a young company?

SAM ALTMAN: It’s an older company than average. 

BILL GATES: Okay.

SAM ALTMAN: It’s not a bunch of 24-year-old programmers. 

BILL GATES: It’s true, my perspective is warped, because I’m in my 60s. I see you, and you’re younger, but you’re right. You have a lot in their 40s.

SAM ALTMAN: Thirties, 40s, 50s.

BILL GATES:  It’s not the early Apple, Microsoft, which we were really kids. 

SAM ALTMAN: It’s not, and I’ve reflected on that. I think companies have gotten older in general, and I don’t know quite what to make of that. I think it’s somehow a bad sign for society, but I tracked this at YC. The best founders have trended older over time. 

BILL GATES: That’s fascinating. 

SAM ALTMAN: Then in our case, it’s a little bit older than the average, even still. 

BILL GATES: You got to learn a lot by your role at Y Combinator, helping these companies. I guess that was good training for what you’re doing now. [laughs]

SAM ALTMAN: That was super helpful. 

BILL GATES: Including seeing mistakes.

SAM ALTMAN: Totally. OpenAI did a lot of things that are very against the standard YC advice. We took four and a half years to launch our first product. We started the company without any idea of what a product would be. We were not talking to users. I still don’t recommend that for most companies, but having learned the rules and seen them at YC made me feel like I understood when and how and why we could break them. We really did things that were just so different than any other company I’ve seen. 

BILL GATES: The key was the talent that you assembled, and letting them be focused on the big, big problem, not some near-term revenue thing.

SAM ALTMAN: I think Silicon Valley investors would not have supported us at the level we needed, because we had to spend so much capital on the research before getting to the product. We just said, “Eventually the model will be good enough that we know it’s going to be valuable to people.” But we were very grateful for the partnership with Microsoft, because this kind of way-ahead-of-revenue investing is not something that the venture capital industry is good at. 

BILL GATES: No, and the capital costs were reasonably significant, almost at the edge of what venture would ever be comfortable with. 

SAM ALTMAN: Maybe past.

BILL GATES: Maybe past. I give Satya incredible credit for thinking through ‘how do you take this brilliant AI organization, and couple it into the large software company?’ It has been very, very synergistic. 

SAM ALTMAN: It’s been wonderful, yes. You really touched on it, though, and this was something I learned from Y Combinator. We said, we are going to get the best people in the world at this. We are going to make sure that we’re all aligned at where we’re going and this AGI mission. But beyond that, we’re going to let people do their thing. We’re going to realize it’s going to go through some twists and turns and take a while. 

We had a theory that turned out to be roughly right, but a lot of the tactics along the way turned out to be super wrong. We just tried to follow the science. 

BILL GATES: I remember going and seeing the demonstration and thinking, okay, what’s the path to revenue on that one? What is that like? In these frenzied times, you’re still holding on to an incredible team.

SAM ALTMAN: Yes. Great people really want to work with great colleagues. 

BILL GATES: That’s an attractive force.

SAM ALTMAN: There’s a deep center of gravity there. Also, it sounds so cliche, and every company says it, but people feel the mission so deeply. Everyone wants to be in the room for the creation of AGI.

BILL GATES: It must be exciting. I can see the energy when you come up and blow me away again with the demos; I’m seeing new people, new ideas. You’re continuing to move at a really incredible speed. 

SAM ALTMAN: What’s the piece of advice you give most often? 

BILL GATES: There are so many different forms of talent. Early in my career, I thought, just pure IQ, like engineering IQ, and of course, you can apply that to financial and sales. That turned out to be so wrong. Building teams where you have the right mix of skills is so important. Getting people to think, for their problem, how do they build that team that has all the different skills, that’s probably the one that I think is the most helpful. Yes, telling kids, you know, math, science is cool, if you like it, but it’s that talent mix that really surprised me. 

What about you? What advice do you give?

SAM ALTMAN: It’s something about how most people are mis-calibrated on risk. They’re afraid to leave the soft, cushy job behind to go do the thing they really want to do, when, in fact, if they don’t do that, they look back at their lives like, “Man, I never went to go start this company I wanted to start, or I never tried to go be an AI researcher.” I think that’s sort of much riskier. 

Related to that, being clear about what you want to do, and asking people for what you want goes a surprisingly long way. A lot of people get trapped in spending their time in not the way they want to do. Probably the most frequent advice I give is to try to fix that some way or other. 

BILL GATES: If you can get people into a job where they feel they have a purpose, it’s more fun. Sometimes that’s how they can have gigantic impact.

SAM ALTMAN: That’s for sure. 

BILL GATES: Thanks for coming. It was a fantastic conversation. In the years ahead, I’m sure we’ll get to talk a lot more, as we try to shape AI in the best way possible. 

SAM ALTMAN: Thanks a lot for having me. I really enjoyed it. 

[music]

BILL GATES: Unconfuse Me is a production of the Gates Notes. Special thanks to my guest today, Sam Altman. 

BILL GATES: Remind me what your first computer was?

SAM ALTMAN: A Mac LC2.

BILL GATES: Nice choice.

SAM ALTMAN: It was a good one. I still have it; it still works. 

本文来自微信公众号:比尔盖茨(ID:gatesnotes),作者:Bill Gates
