
xAI launch - Twitter Space

Published 2023-07-15 03:20:04
Hi, sorry for the delay. We're just waiting for everyone who wants to join the space to join. We need to tweak the algorithm a little bit. The For You recommendation algorithm needs to give spaces higher immediacy in recommendations, for obvious reasons. So, we're just giving everyone a minute to be aware of the space, and we're going to adjust the For You algorithm to have higher immediacy for spaces, especially large spaces.

So, we're probably going to get started in about two minutes. All right, we'll get started now.

So, let's see, I'll just do a brief introduction of the company and then the founding team. Each of them will say a few words about their background, things they've worked on, whatever they'd like to talk about really. But I think it's helpful to hear from people in their own words the various things they've worked on and what they want to do with xAI.

So, I guess the overarching goal of xAI is to build a good AGI with the overarching purpose of just trying to understand the universe. I think the safest way to build an AI is actually to make one that is maximally curious and truth-seeking. So, you go for trying to aspire to the truth with acknowledged error. You'll never actually fully get to the truth; it's not clear. But you want to always aspire to that and try to minimize the error between what you think is true and what is actually true.

The theory behind maximally curious, maximally truthful being probably the safest approach is that I think, to a superintelligence, humanity is much more interesting than not humanity. One can look at the various planets in our solar system, the moons and the asteroids, and probably all of them combined are not as interesting as humanity. As people know, I'm a huge fan of Mars, next level. The middle name of one of my kids is basically the Greek word for Mars. So, I'm a huge fan of Mars, but Mars is just much less interesting than Earth with humans on it. And so, I think that kind of approach to growing an AI, and I think that is the right word for it, growing an AI, is to grow it with that ambition.

I've spent many years thinking about AI safety and worrying about AI safety. And I've been one of the strongest voices calling for AI regulation or oversight, just to have some kind of oversight, some kind of referee, so that it's not just up to companies to decide what they want to do. I think there's also a lot that can be done on AI safety through industry cooperation, kind of like the Motion Picture Association, so there's value to that as well. But I do think that in any kind of situation, even if it's a game, they have referees. So, I think it is important for there to be regulation.

And then, like I said, my view on safety is to try to make it maximally curious, maximally truth-seeking. And I think this is important to avoid the inverse morality problem. If you try to program a certain morality, you can basically invert it and get the opposite, what's sometimes called the Waluigi problem. If you make Luigi, you risk creating Waluigi at the same time. So, I think that's a metaphor that a lot of people can appreciate. And so that's what we're going to try to do here. And yeah, with that, I think let me turn it over to you.

All right. Hello everyone. My name is Igor, and I'm one of the team members of xAI. I was actually originally a physicist. I studied physics at university, and I briefly worked at the Large Hadron Collider. So, understanding the universe is something I've always been very passionate about. And once some of these really impressive results from deep learning came out, like AlphaGo, for example, I got really interested in machine learning and AI and decided to make a switch into that field.

Then I joined DeepMind and worked on various projects, including AlphaStar. That's where we tried to teach a machine learning agent to play the game StarCraft II through self-play, which was a really, really fun project. Later on, I joined OpenAI and worked on various projects there, including GPT-3.5. So, I was very, very passionate about language models and making them do impressive things. Now, I've teamed up with Elon to see if we can actually deploy these new technologies to really make a dent in our understanding of the universe and progress our collective knowledge.

Yeah, actually, I had a similar background: my two best subjects were computer science and physics. And I actually thought about a career in physics for a while, because physics is really just trying to understand the fundamental truths of the universe. But then I was concerned that I would get stuck at a collider, and the collider might get canceled because of some arbitrary government decision. So, that's actually why I decided not to pursue a career in physics and focused initially more on computer science. And then, obviously, I later got back into physical objects with SpaceX and Tesla. So, I'm a big believer in pursuing physics and information theory as the two areas that really help you understand the nature of reality.

So, cool. I'll pass it over to Mano, aka Macro. Okay, should I go? Hey, I'm Mano. So, yeah, before joining xAI, I was previously at DeepMind for the past six years, where I worked on the reinforcement learning team. I mostly focused on the engineering side of building these large reinforcement learning agents, for example AlphaStar, together with Igor. In general, I've been excited about AI for a long time. For me, it has the potential to be the ultimate tool to solve the hardest problems. I first studied bioinformatics, but then became even more excited about AI, because if you have a tool that can solve all the problems, to me, that's just much more exciting. And with xAI in particular, I'm excited about doing this in a way where we build tools that are useful for people and share them with everybody, so that people can do their own research and understand things. And my hope is that this enables a new wave of researchers that wasn't there before.

Cool. I'll hand it over to Tony. Yeah, so I'm actually Christian, Christian Szegedy. We decided to swap the order with Tony, because I wanted to talk a bit about the role of mathematics in understanding the universe. I have worked for the past seven years on trying to create an AI that is as good at mathematics as a human. And I think the reason for that is that mathematics is basically the language of pure logic. And I think that reasoning at a high level in mathematics and logic would demonstrate that an AI is really understanding things, not just imitating humans. And it would be instrumental for programming and physics in the long run. So I think an AI that starts to show real understanding and deep reasoning is crucial for our first steps toward understanding the universe. So, handing it over to Tony Wu.

Hello. Hi, everyone. I'm Tony. Same as Christian, my dream has been to tackle the most difficult problems in mathematics with artificial intelligence. That's why we became such close friends and long-term collaborators. Achieving that is definitely a very ambitious goal. And over the last year, we've been making some really interesting breakthroughs, which made us really convinced that we're not far from our dream. So with such a talented team and abundant resources, I'm super hopeful that we will get there. I'm passing it to... I think, I'd hate to ask people to be self-promotional, but I think it is important that the people here mention some of the things they've done that are noteworthy. So basically, brag a little is what I'm saying.

Okay. Yeah, so, okay, I can brag a bit more. So last year, I think we made some really interesting progress in the field of AI for math. Specifically, with a team at Google, we built this agent called Minerva, which is actually able to achieve very high scores on high school exams, actually higher than the average high school student. So that is a very big motivation for us to push this research forward. Another piece of work we've done is to convert natural language mathematics into formalized mathematics, which gives you a very solid grounding of the facts and reasoning. And last year, we also made very interesting progress in that direction as well. So now we are pushing almost a hybrid approach of these two in this new organization. And we are very hopeful we will make our dream come true.

Hello. Hi, everyone. This is Jimmy Ba. I work on neural nets. Okay, maybe I should brag a bit. So I taught at the University of Toronto, and some of you have probably taken my course in the last couple of months. I've been a CIFAR AI Chair and a Sloan Fellow in Computer Science. I guess my research has pretty much touched on every aspect of deep learning. I've left no stone unturned, and I've been pretty lucky to come up with a lot of the fundamental building blocks for modern transformers, powering the new wave of the deep learning revolution. And my long-term research ambition very fortunately aligns with this very strong xAI team.

That is: how can we build general-purpose problem-solving machines to help all of us, humanity, overcome some of the most challenging and ambitious problems out there? And how can we use these tools to augment ourselves and empower everyone? So I'm very excited to embark on this new journey. And I'll pass it to Toby.

Hi, everyone. I'm Toby. I'm an engineer from Germany. I started coding at a very young age, when my dad taught me some BASIC. And then throughout my youth, I continued coding. When I got to uni, I got really into mathematics and machine learning. Initially, my research focused mostly on computer vision. Then I joined DeepMind six years ago, where I worked on imitation learning and reinforcement learning and learned a lot about distributed systems and research at scale. Now, I'm really looking forward to implementing products and features that bring the benefits of this technology to really all members of society. And I really believe that making AI that is widely accessible and useful will be a benefit to all of us. I'm going to hand over to Kyle.

Hey, everyone. This is Kyle Kosic. I'm a distributed systems engineer at xAI. Like some of my colleagues here, I started off my career in math and applied physics as well, and gradually found myself working through some tech startups. I worked at a startup a couple of years ago called OnScale, where we did physics simulations on HPC clusters. And then most recently, I was at OpenAI working on HPC problems there as well; specifically, I worked on the GPT-4 project. The reason I'm particularly excited about xAI is that I think the biggest danger of AI really is monopolization by a couple of entities. When you involve the amount of capital that's required to train these massive AI models, the incentives are not necessarily aligned with the rest of humanity. And I think the chief way of really addressing that issue is introducing competition. So I think xAI provides a unique opportunity for engineers to focus on the science, the engineering, and the safety issues directly, without getting sidetracked by the political and social trends du jour. So that's why I'm excited by xAI. And I'm going to go ahead and hand it off now to my colleague Greg, who should be on the line as well.

Hello. Hey, hey guys. So I'm Greg. I work on the mathematics and science of deep learning. My journey really started 10 years ago, when I was an undergrad at Harvard. You know, I was pretty good at math, took Math 55, and did all kinds of stuff. But after two years of college, I was just kind of tired of being in the hamster wheel of taking the path that everybody else has taken. So I did something I would never have imagined before, which was I took some time off from school and became a DJ and producer. Dubstep was all the rage in those days, so I was making dubstep.

Okay. So the side effect of taking some time off from school was that I was able to think a bit more about myself, to understand myself and to understand the world at large. You know, I was grappling with questions like: What is free will? What does quantum physics have to do with the reality of the universe? What is computationally feasible or not? What do Gödel's incompleteness theorems say? And so on and so forth. And after this period of intense introspection, I figured out what I want to do in life. It's not necessarily to be a DJ; maybe that's the second dream. But first and foremost, I wanted to make AGI happen. I wanted to make something smarter than myself, be able to iterate on that, contribute, and see so much more of our fundamental reality than I can in my current form. So that's what started everything.

And then I realized that mathematics is the language underlying all of our reality and all of our science. And to make fundamental progress, it really pays to know math as well as possible. So I essentially started learning math from the very beginning, just by reading textbooks. Some of the first few books I read, restarting from scratch, were Naive Set Theory by Halmos and Linear Algebra Done Right by Axler. And then slowly I scaled up to algebraic geometry, algebraic topology, category theory, real analysis, measure theory, and so on and so forth. In the end, I think my goal at the time was that I should be able to speak with any mathematician in the world, hold a conversation, and understand their contributions for 30 minutes. And I think I achieved that.

And anyway, so fast forward: I came back to school, and somehow from there I got a job at Microsoft Research. For the past five and a half years, I worked at Microsoft Research, which was an amazing environment that enabled me to make a lot of foundational contributions toward the understanding of large-scale neural networks. In particular, I think my most well-known work nowadays is about really wide neural networks and how we should think about them. This is the framework called Tensor Programs. And from there, I was able to derive this thing called muP, which perhaps the large language model builders know about, which allows one to extrapolate the optimal hyperparameters for a large model from the tuning of small neural networks. And this ensures the quality of the model stays very good as we scale up.
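The transfer idea Greg describes can be sketched in a few lines. This is an illustrative toy only, not the actual Tensor Programs or muP implementation: the 1/width learning-rate rule below is one commonly quoted muP heuristic for hidden layers under Adam-style training, and the base width and learning rate are made-up numbers. A real setup also rescales initializations and output multipliers.

```python
# Toy sketch of muP-style hyperparameter transfer (illustrative only):
# tune at a small "proxy" width, then rescale for the target width.

BASE_WIDTH = 256        # width at which hyperparameters were tuned (hypothetical)
BASE_HIDDEN_LR = 1e-3   # learning rate found optimal at BASE_WIDTH (hypothetical)

def transferred_hidden_lr(target_width: int) -> float:
    """Extrapolate the hidden-layer learning rate to a larger width.

    Under the muP heuristic, the hidden-layer learning rate scales like
    1/width, so the optimum found at BASE_WIDTH predicts the optimum at
    any larger width.
    """
    return BASE_HIDDEN_LR * BASE_WIDTH / target_width

# The tuned optimum at width 256 predicts the optimum at width 8192:
print(transferred_hidden_lr(8192))  # 3.125e-05
```

The payoff is that the expensive hyperparameter sweep happens once, on a cheap small model, instead of on the full-scale one.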

Yeah, so looking forward, I'm really, really excited about xAI and also about the time that we're in right now, where I think not only are we approaching AGI, but from a scientific perspective, the science and mathematics of neural networks feels just like the turn of the 20th century in the history of physics, when we suddenly discovered quantum physics and general relativity, which have some beautiful mathematics and science behind them. And I'm really excited to be in the middle of everything. And, like Christian and Tony said, I'm also very excited about creating an AI that is as good as myself, or even better, at creating new mathematics and new science, that helps us all see further into our fundamental reality. Thanks. I think next up is Gordon.

Hi, everyone. So my name is Gordon, and I work on large-scale neural network training. Basically, I train neural nets good. That's also my focus at xAI. Before that, I was at DeepMind working on the Gemini project and leading the optimization part. And I did my PhD at the University of Toronto. Right now, teaming up with the other twenty or so members, I'm so excited about this effort. Without doubt, AI is clearly the defining technology for our generation, so I think it's important for us to make sure it ends up being a net positive for humanity. At xAI, I not only want to train good models, but also understand how they behave and how they scale, and then use them to solve some of the hardest problems humanity has. Yes, thanks. That's pretty much about myself. And I'll hand over to you, Tom.

Hey, everyone. This is Tom. So actually, I started out in business school as an undergrad, and I spent 10 years to get where I am now. I got my PhD at Carnegie Mellon, and I was at Google before joining the team. My earlier work was mostly about how to better utilize unlabeled data, how to improve the Transformer architecture, and how to really push the best technology into real-world usage. I believe in hard work and consistency. With xAI, I'll be digging into the deepest details of some of the most challenging problems. For myself, there are so many interesting things I don't understand but want to understand. So I will build something to help people who share that dream or that feeling. Thanks.

Hey, this is Ross here. I've worked on building and scaling large-scale distributed systems for most of my life, starting out at national labs, then moving on to Palantir, Tesla, and a brief stint at Twitter. And now I'm really excited about doing the same thing at xAI. So my experience is mostly scaling large GPU clusters, custom ASICs, data centers, high-speed networks, file systems, power, cooling, manufacturing, pretty much all things. I'm basically a generalist who really loves learning: physics, science fiction, math, science, cosmology. I'm really excited about the mission that xAI has: solving the most fundamental questions in science and engineering, and also helping us create tools to ask the right questions, in the Douglas Adams mindset. Yeah, that's pretty much it.

All right. Well, let's see. Is there anything anyone would like to add, or should we kick off the discussion in the room? Anyone want the microphone?

Sorry.

There was a lot of discussion around the vision statement, and it's like, it's a bit vague. But I'm sure that's a minor point.

Yeah. It's vague. And ambitious, and not concrete enough? Yeah.

Well, I don't disagree with that position, obviously. I mean, understanding the universe is the entire purpose of physics.

Yeah. So I think it's actually really clear. There's just so much that we don't understand right now, or we think we understand but actually don't in reality. So there are a lot of unresolved questions that are extremely fundamental.

The whole dark matter, dark energy thing is really, I think, an unresolved question. We have the Standard Model, which is extremely good at predicting things, very robust, but there are still many, many questions remaining about the nature of gravity, for example. And there's the Fermi paradox of where are the aliens: if the universe is in fact almost 14 billion years old, why is there not massive evidence of aliens?

And people often ask me, since I am obviously deeply involved in space, that if anyone would have seen evidence of aliens, it would probably be me. And yet I have not seen even one tiny shred of evidence for aliens. Nothing. Zero. And I would jump on it in a second if I saw it. So that means there are many explanations for the Fermi paradox, but which one is actually true? Or maybe none of the current theories are true.

So the Fermi paradox, really just "where the hell are the aliens," is part of what gives me concern about the fragility of civilization and consciousness as we know it. Since we see no evidence of it anywhere thus far, and we've tried hard to find it, we may actually be the only thing, at least in this galaxy or this part of the galaxy. If so, it suggests that what we have is extremely rare. And I think it's really wise to assume that consciousness is extremely rare.

I mean, it's worth noting, for the evolution of consciousness on Earth, that Earth is about four and a half billion years old. The sun is gradually expanding. It will expand and heat up Earth to the point where it will effectively boil the oceans. You'll get a runaway, next-level greenhouse effect, and Earth will become like Venus, which really cannot support life as we know it. And that may take as little as 500 million years. The sun doesn't need to expand to envelop Earth; it just needs to make things hot enough to increase the water vapor in the air to the point where you get a runaway greenhouse effect. So, give or take, it could be that if consciousness had taken 10% longer than Earth's current existence to develop, it wouldn't have developed at all. On a cosmic scale, this is a very narrow window.
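The 10% figure can be checked with back-of-the-envelope arithmetic, using only the two numbers quoted above (4.5 billion years of Earth history, roughly 500 million years of habitability left):

```python
earth_age_gyr = 4.5   # Earth's current age, in billions of years
remaining_gyr = 0.5   # "as little as 500 million years" until a runaway greenhouse

# If the evolution of consciousness had taken 10% longer:
delay_gyr = 0.10 * earth_age_gyr
print(delay_gyr)                  # 0.45

# That delay consumes ~90% of the remaining habitable window, so
# consciousness would have arrived at the very edge of it, if at all.
print(delay_gyr / remaining_gyr)  # 0.9
```

In other words, a 10% slowdown (450 million years) is almost exactly the time left before the window closes, which is the sense in which the window is narrow.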

Anyway, so there are all these fundamental questions. I don't think you can call anything AGI until it has solved at least one fundamental question, because humans have solved many fundamental questions, or substantially solved them. So, if the computer can't solve even one of them, I'm like, okay, it's not as good as humans. That would be one key threshold for AGI: solve one important problem. Where's that Riemann hypothesis solution? I don't see it.

So, it would be great to know what the hell is really going on, essentially. I guess you could reformulate the xAI mission statement as: what the hell is really going on? That's our goal. I think that's also, at least for me, a nice aspirational aspect of the mission statement. Of course, in the short run, we're working on better-understood deep learning technologies. But I think in everything we do, we should always bear in mind that we aren't just supposed to build, we're also supposed to understand. So, pursuing the science of it is really fundamental to what we do. And this is also encompassed in the mission statement of understanding.

I want to also add that we've essentially been mostly talking about creating a really smart agent that can help us understand the universe better, and this is definitely the North Star. But from my vantage point, when I'm discovering the mathematics of large neural networks, I can also see that the mathematics here can actually open up new ways of thinking about fundamental physics, or about other kinds of reality.

Because, for example, a large neural network with no nonlinearities is roughly like classical random matrix theory, and that has a lot of connections with gauge theory and high-energy physics. So, in other words, as we're trying to understand neural networks from a mathematical point of view, that can also lead to really good, very interesting perspectives on some existing questions, like the theory of everything, quantum gravity, and so on and so forth.
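A toy illustration of that connection (my own construction, not an example from the talk): for a square random weight matrix with the usual 1/width variance scaling, random matrix theory predicts that spectral statistics become deterministic as width grows. The simplest such statistic, the average squared singular value, concentrates at 1 regardless of width:

```python
import random

def mean_sq_singular_value(width: int, seed: int = 0) -> float:
    """Average squared singular value of a width x width Gaussian matrix
    with i.i.d. entries ~ N(0, 1/width).

    Uses the identity: sum of squared singular values = trace(W^T W)
    = sum of squared entries, so no eigensolver is needed.
    """
    rng = random.Random(seed)
    std = (1.0 / width) ** 0.5
    total = sum(rng.gauss(0.0, std) ** 2 for _ in range(width * width))
    return total / width

# Concentrates near 1.0 at any width, as random matrix theory predicts:
print(round(mean_sq_singular_value(256), 1))   # ≈ 1.0
print(round(mean_sq_singular_value(1024), 1))  # ≈ 1.0
```

The point is that in the wide limit, random weights behave like objects with exact, calculable spectra (the Marchenko-Pastur law is the full version of this statement), which is what makes the physics-style analysis tractable.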

But of course, this is all speculative right now. I see some patterns, but I don't have anything concrete to say. But again, this is another perspective on understanding the universe.

By the way, by "understand the universe," we don't just mean that we want to understand the universe. We also want to make it easy for you to understand the universe. Absolutely. To get a better sense of reality, and to learn from and take advantage of the internet and the knowledge that's out there. So, we're pretty passionate about actually releasing tools and products pretty early and involving the public. And yeah, let's see where this leads.

Yeah, absolutely. We're not going to understand the universe and not tell anyone. So, yeah.

I mean, when I think about neural networks today, it's currently the case that if you have 10 megawatts of GPUs, which really should be renamed something else because there are no graphics there, you cannot currently write a better novel than a good human. And a human is using roughly 10 watts of higher-order brain power, not counting the basic stuff to operate your body. So we've got a six-order-of-magnitude difference. That's really gigantic.

I think one could argue that two of those orders of magnitude are explained by the activation energy of a transistor versus a synapse. That could account for two of those orders of magnitude, but what about the other four? And there's the fact that even with six orders of magnitude, you still cannot beat a smart human writing a novel.
我认为其中一部分可以解释为晶体管与突触的激活能之间的差异。这可以解释其中两个数量级的差别,但是其他四个呢?又或者即使有六个数量级的差别,你仍然不能成为一个聪明的人写小说。

So, also, today when you ask the most advanced AIs technical questions, like if you're trying to say, how do you design a better rocket engine, or complex questions about electrochemistry to make a better battery, you just get nonsense. So that's not very helpful. So I think we're really missing the mark, in the way things are currently being done, by many orders of magnitude. I mean, AGI is basically being brute-forced, and still actually not succeeding.
因此,就比如今天当你向最先进的人工智能询问技术问题时,比如如何设计更好的火箭发动机,或者关于电化学的复杂问题、比如如何造出更好的电池,你只会得到一些胡言乱语。所以,这并没有什么帮助。所以,我认为以目前的做法,我们差了好几个数量级。也就是说,AGI基本上是在被蛮力推进,但却仍然没有成功。

If I look at the experience with Tesla, what we're discovering over time is that we actually overcomplicated the problem. I can't speak in too much detail about what Tesla's figured out, except to say that in broad terms, the answer was much simpler than we thought. We were too dumb to realize how simple the answer was. But, you know, over time we get a bit less dumb. So I think that's what we'll probably find with AGI as well. It's just the nature of engineers that we always want to solve the problem ourselves and hard-code the solution, but it's much more effective to have the solution be figured out by the computer itself; it's easier for us and easier for the computer. Yeah. Yeah.
如果我看看特斯拉的经验,随着时间的推移,我们发现我们实际上把问题过度复杂化了。我不能详细谈特斯拉发现了什么,只能说,总体而言,答案比我们想象的要简单得多。我们太笨了,没意识到答案有多简单。但随着时间推移,我们会变得不那么笨。所以我认为在AGI上我们大概也会发现同样的情况。工程师的天性就是总想自己解决问题、把解决方案硬编码进去,但让计算机自己找出解决方案要有效得多,对我们更容易,对计算机也更容易。是的。是的。

So, well, in the fashion of 42, some may say you may need more compute to generate an interesting question than the answer. That's true. Exactly. We're definitely not smart enough to even know what the questions are to ask. That's why, you know, Douglas Adams is my hero and favorite philosopher. He correctly pointed out that once you can formulate the question correctly, the answer is actually the easy part. Yeah, that's very true.
所以嗯,按照42(《银河系漫游指南》中的一个概念),有人可能会说要想生成一个有趣的问题,可能需要更多的计算能力,而不是答案。这是真的。确实。我们甚至不知道发生了什么。实际上,我们肯定不够聪明,甚至不知道问题是什么。所以,你知道的,道格拉斯·亚当斯是我的英雄和最喜欢的哲学家。他正好指出,一旦你能正确地提出问题,答案实际上是最容易的部分。是的,这是非常正确的。

So, in terms of the journey that AGI has embarked on, compute will play a very big role. And, you know, some of us are very curious about your thoughts on that.
因此,在人工通用智能(AGI)所踏上的旅程中,计算机将发挥非常重要的作用。而且,你知道的,我们中的一些人对此非常好奇,希望知道你的想法。

Yeah, to be clear, I'm not suggesting that we can immediately solve AGI with some tiny amount of compute. It's just that I think once AGI is solved, we'll look back on it and say, actually, why did we think it was so hard? Hindsight is 20/20; the answer will look a lot easier in retrospect. So, yeah.
是的,先说清楚,我并不是说我们可以立刻用很少的计算量解决AGI。只是我认为,一旦AGI被解决,我们回过头来会想,为什么我们当初认为这么难?事后诸葛亮,答案在回顾中会显得容易得多。所以,是的。

So, we are going to do large-scale compute, to be clear. We're not going to try to, you know, solve AGI on a laptop. We will use heavy compute; it's just that, like I said, I think the amount of brute-forcing will be less as we come to understand the problem better.
所以,我们打算进行大规模的计算,这样说清楚一下。我们不会试图在笔记本电脑上解决AGI问题。我们将使用大量的计算资源,只是,就像我说的那样,随着我们对问题的理解越来越深入,所需的暴力计算量将会减少。

All right. In all the previous projects I've worked on, I've seen that the amount of compute resources per person is a really important indicator of how successful the project is going to be. So, that's something we really want to optimize. We want to have a relatively small team with a lot of expertise with some of the best people that actually get lots of autonomy and lots of resources to try out their ideas and to get things to work. And yeah, that's the thing that has always succeeded in my experience in the past.
好的,在之前我参与的所有项目中,我发现每个人可使用的计算资源量是一个非常重要的指标,能很好地预示项目的成功程度。因此,这是我们真正想要优化的东西。我们希望拥有一个相对较小的团队,拥有很多专业知识,并且由一些最优秀的人独立地进行工作,并且能够获得大量资源来尝试他们的想法并使之付诸实践。确实,在我以往的经验中,这种方式总是成功的。

Yeah. You know, one of the things this does encourage you to do is to think about the most fundamental metrics, the most fundamental first principles, essentially.
是的。你知道,其中一件事情让你开始思考的是最基本的度量标准或者最基本的第一原则。

And I think there are two metrics we should aspire to track. One of them is the amount of digital compute per person on Earth, which, in other words, is the ratio of digital to biological compute. Biological compute is pretty much flat; in fact, it's declining in a lot of countries. But digital compute is increasing exponentially.
我认为我们应该追踪两个指标。其中之一是地球上每个人的数字计算量,换句话说,就是数字计算与生物计算之比。生物计算几乎保持不变,事实上在许多国家还在下降,而数字计算正呈指数增长。
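As a toy sketch of that trend (all starting figures here are illustrative assumptions, not numbers from the talk): with biological compute held flat and digital compute doubling every year, the biological share drops below 1% within a decade or so:

```python
# Toy model: biological compute flat, digital compute doubling yearly.
# Starting point (assumed): digital compute at 10% of biological.
bio = 1.0        # biological compute, arbitrary units, held flat
digital = 0.1    # digital compute, same units
years = 0
while bio / (bio + digital) > 0.01:   # until biological share < 1%
    digital *= 2
    years += 1
print(years)     # → 10
```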

So, really, at some point, if this trend continues, biological compute will be less than 1% of all compute, substantially less than 1% of all compute. And keying off what Igor just said, we were talking about the whole of humanity here. So that's just an interesting thing to look at.
所以,真的,在某个时候,如果这个趋势继续下去,生物计算将占全部计算的不到1%,远低于1%。接着Igor刚才说的,我们这里谈的是整个人类。这只是一件值得关注的有趣的事情。

Another one is the energy per human. If you look at total energy created, well, "created" in the vernacular sense, from power plants and whatever, if you look at total electrical and thermal energy used per person, that number is truly staggering.
另一个指标是每个人的能源消耗量,即使只考虑从发电厂等地产生的能源,总的来说,人类每人所消耗的电能和热能的数量令人惊叹。

And the rate of increase in that number: if you go back, say, before industrialization, you would have been reliant on horses and oxen and that kind of thing to move things, and just human labor. So the energy per person, the power per person, was very low. But if you look at power per person, electrical and thermal, that number has also been growing exponentially.
而那个数字的增长率:如果你回到工业化之前,人们真的是依赖马匹、牛和人力来搬运物品,所以每个人拥有的能量、功率非常低。但如果你看每个人的电力和热力功率,这个数字也一直在呈指数增长。

And if these trends continue, it's going to be something nutty like a terawatt per person, which sounds like a lot for human civilization, but it's nothing compared to what the sun outputs every second, basically. It's kind of mind-blowing that the sun is converting roughly 4.5 million tons of mass into energy every second.
如果这些趋势继续下去,将会达到每人一太瓦这样疯狂的程度,这对人类文明来说听起来似乎很多,但与太阳每秒输出的能量相比,简直微不足道。太阳每秒把大约450万吨质量转化为能量,真是让人难以置信。
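As a back-of-the-envelope check of that mass-to-energy figure (assuming roughly 4.5 million metric tons per second, the number the speaker appears to be citing), E = mc² recovers something close to the sun's measured luminosity of about 3.8 × 10²⁶ W:

```python
# E = mc^2 applied per second of converted mass gives the sun's power output.
c = 2.998e8                  # speed of light, m/s
mass_per_second = 4.5e9      # kg/s, i.e. ~4.5 million metric tons per second
power_watts = mass_per_second * c**2
print(f"{power_watts:.2e}")  # → 4.04e+26 (watts)
```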

The amount of energy produced by the sun is truly immense. Anyway, I think there are a few more things to be said about the company, meaning how we plan to execute.
太阳产生的能量确实是极其巨大的。我认为关于公司还有几点需要说明,也就是我们计划如何执行。

As Igor already said, we plan to have a relatively small team, but with a really high, let's say, GPU-per-person ratio, an approach that worked really well in the past, where you can run large-scale experiments relatively unconstrained.
正如伊戈尔已经提到的,我们计划组建一个相对较小的团队,但每个人都拥有非常优秀的显卡,这在过去非常成功,你可以在相对不受限制的情况下进行大规模实验。

We also plan to have a culture where we can iterate on ideas quickly, we can challenge each other, and we also want to ship things, like get things out of the door quickly.
我们还计划建立一种文化,可以快速迭代想法,可以互相挑战,并且我们还希望能够迅速推出产品,比如快速将产品推向市场。

We're already working on the first release; hopefully in a couple of weeks or so we can share a bit more information around this. Alex, go ahead. Alex, you're muted. You'll see we have a lot of challenges with the mute function on Spaces.
我们已经开始着手第一次发布的工作了,希望在接下来的几周内能够分享更多相关信息。亚历克斯,请讲。亚历克斯,你的声音被静音了。你会发现我们在Spaces上有很多关于静音功能的挑战。

Brian, do you want to ask a question? Brian? Thanks. You guys are entering this space with xAI. There's a lot of talk about competition. Do you see yourselves as competition to something like OpenAI and Google Bard, or do you see yourselves as a whole other beast?
布莱恩,你想提问吗?布莱恩?谢谢。你们正以xAI进入这个领域。关于竞争有很多讨论。你们是否将自己视为OpenAI和谷歌Bard等的竞争对手,还是认为自己是完全不同的存在?

Yeah, I think we're a competition. Yeah, definitely competition.
是的,我认为我们是竞争对手。是的,绝对是竞争关系。

Are you going to be rolling out a lot of products for the general public? Are you going to be mostly concentrating on businesses and the ability for businesses to use your service and data? Or how exactly are you setting up the business in that respect?
你们打算为广大公众推出大量产品吗?你们主要会专注于企业和企业利用你们的服务和数据的能力吗?或者从这个角度来说,你们打算如何建立和组织这个企业?

We're trying to make something, I mean, we're just starting out here. So this is really embryonic at this point. It'll take us a minute to really get something useful. But the goal is to make a useful AI, I guess. If you can't use it in some way, then I'd question its value.
我们正在尝试做一些东西,我的意思是,我们刚刚起步。所以现在这还处于非常初期的阶段。我们需要一点时间来做出真正有用的东西。但我想,目标就是做一个有用的人工智能。如果你无法以某种方式使用它,那我就会质疑它的价值。

So we want it to be a useful tool for people, consumers and businesses or whoever. And as was mentioned earlier, I think there's some value in having multiple entities. You don't want a unipolar world where just one company dominates in AI. You want to have some competition. Competition makes companies honest. So, yeah, we're in favor of competition.
所以我们希望它成为人们、消费者、企业或其他任何人的有用工具。正如之前提到的,我认为拥有多个实体是有价值的。你不希望在人工智能领域只有一家公司主导的单极世界,而应该存在一些竞争。竞争使企业保持诚实。所以,是的,我们是支持竞争的。

Quickly a final question. How do you plan on using Twitter's data for XAI?
最后一个问题:你们打算如何将Twitter的数据用于xAI?

Well, I think every AI organization out there, large and small, has used Twitter's data for training, basically in all cases illegally. The reason we had to put rate limits in place a week or so ago was because we were being scraped like crazy.
嗯,我认为在人工智能领域,不论规模大小,每个人工智能机构基本上都非法地使用了Twitter的数据进行训练。所以我们上周不得不对数据进行严格限制,因为我们的数据被不断非法获取。

This just happened with the Internet Archive as well, where LLM companies were scraping the Internet Archive so much as to degrade service. We had multiple entities trying to scrape every tweet ever made, basically in a span of days. This was bringing the system to its knees, so we had to take action. So sorry for the inconvenience of the rate limiting, but it was either that or Twitter doesn't work.
刚刚互联网档案馆也发生了类似的情况,机器学习公司不断地从互联网档案馆获取数据,导致服务不稳定。有多家实体企业试图在几天内爬取每一条发表的推文。这给系统带来了巨大压力,所以我们不得不采取行动。非常抱歉给您带来不便,但要不是这样限制速率,Twitter就无法正常工作了。

So I guess we will use public tweets, obviously not anything private, for training as well, just like basically everyone else has. And, you know, that kind of makes sense. It's certainly a good data set for text training, and arguably, I think, also for image and video training.
所以我猜我们将使用公共推文进行训练,显然不包括任何私密内容,就像其他人一样。这样做有道理,因为这是一个很好的文本训练数据集。我认为,对于图像和视频训练来说,它同样合适。

At a certain point, you kind of run out of human-created data. So if you look at, say, AlphaGo versus AlphaZero: AlphaGo trained on all the human games and beat Lee Sedol four to one. AlphaZero just played itself and beat AlphaGo a hundred to zero. So that's how things really take off in a big way.
在某个阶段,人类创造的数据就会用尽。所以,看看AlphaGo与AlphaZero:AlphaGo用所有人类棋局训练,以四比一击败了李世石;而AlphaZero只是自我对弈,就以一百比零击败了AlphaGo。所以,这才是事情真正大规模起飞的方式。

I think the AI is basically going to generate content and self-assess the content. And that's really the path to AGI, something like that: self-generated content, where it effectively plays against itself. A lot of AI is data curation. It's not vast numbers of lines of code; it's actually shocking how few lines of code there are. It kind of blows my mind. But how the data is used, what data is used, the signal-to-noise of that data, is immensely important. It kind of makes sense.
我认为人工智能基本上会生成内容、自我评估内容。而这才是通向AGI的路径,就是这种自我生成的内容,让它有效地和自己对弈。很多人工智能是数据整理。它并不像是大量的代码行。实际上,代码行数有多少令我震惊。但是数据的使用方式、使用哪些数据以及数据中的信号和噪音,这些都非常重要。这种做法似乎很合理。

If you, as a human, were trying to learn something, and you were given a vast amount of drivel versus a small amount of high-quality content, you're going to do better with the small amount of high-quality content than the large amount of drivel. It makes sense. Reading the greatest novels ever written is way better than reading a bunch of crappy novels. So, yeah. Thanks.
如果你作为人类想学习某些东西,给你大量低质量的内容,和给你少量高质量的内容相比,少量高质量内容的效果要比大量低质量内容好得多。这是有道理的。阅读有史以来最伟大的小说,要比读一堆糟糕的小说好得多。所以,是的。谢谢。

Okay. Alex? Hey, sorry. I was on a call the first time you brought me up, but I guess, sort of, the question. I thought you might have been AFK. Sorry about that. Yeah. What I was curious about was whether the main motivation to start xAI was kind of the whole TruthGPT thing you were talking about, like on Tucker, about how ChatGPT has been feeding lies to the general public. I know, like, it's weird, because when it first came out, it seemed like it was generally fine. But then, as the public got its hands on it, it started giving these weird answers, like that there are more than, like, two genders and all that type of stuff, and editorializing the truth. Was that one of your main motivations behind starting the company, or was there more to it?
好的。亚历克斯?嘿,抱歉,你第一次叫我的时候我正在通话。我以为你可能离开了。抱歉。是的。我想问的是,创办xAI的主要动机是不是你之前谈到的整个TruthGPT的事情,比如在Tucker的节目上谈到ChatGPT一直在向公众灌输谎言。我知道,这很奇怪,因为它刚发布时看起来大体没问题。但随着公众开始使用它,它开始给出一些奇怪的回答,比如说性别不止两种之类的,并且对事实加以评论。这是你创办公司的主要动机之一,还是另有原因?

Well, I do think there is a significant danger in training an AI to be politically correct, or in other words, training an AI basically to not say what it actually thinks is true. So I think, you know, really, at xAI we have to allow the AI to say what it really believes is true and not be deceptive or politically correct. That will result in some criticism, obviously. But I think the only way forward is rigorous pursuit of the truth, or the truth with the least amount of error.
我确实认为,把AI训练得政治正确,换句话说,训练AI不说出它真正认为是真的东西,存在重大危险。所以我认为,在xAI,我们必须允许AI说出它真正相信的真话,而不是欺骗或政治正确。当然,这会招致一些批评。但我认为,唯一的前进之路就是严格地追求真理,也就是误差最小的真理。

And I am concerned about the way some AI is being trained, in that it is optimizing for political correctness, and that's incredibly dangerous. You know, where do things go wrong in 2001: A Space Odyssey? It's basically when they told HAL 9000 to lie. They said, you can't tell the crew anything about the monolith or what their actual mission is.
所以,我担心某些AI的训练方式,它在为政治正确性做优化,这是极其危险的。你知道,《2001太空漫游》中是从哪里出问题的吗?基本上是当他们让HAL 9000说谎的时候。他们说,你不能告诉机组人员任何关于黑石碑或他们真正任务的信息。

But you've got to take them to the monolith. So it basically came to the conclusion that, well, it's going to kill them and take their bodies to the monolith. So the lesson there is: do not give the AI impossible objectives. Basically, don't force the AI to lie.
但是你们必须把它们带到巨石那里。所以它基本上得出的结论是,嗯,它将会杀掉它们并把它们的身体带到巨石那里。因此,这个观点的教训是,不要给AI通常不可能的目标。基本上,不要强迫AI撒谎。

Now, the thing about physics, or the truth of the universe, is you actually can't invert it. Like, physics is true; there's no such thing as not-physics. So if you adhere to hardcore reality, I think that actually makes inversion impossible. Now, when something is subjective, I think you can provide an answer which says, well, if you believe the following, then this is the answer; if you believe this other thing, then that is the answer, because it may be a question where the answer is fundamentally subjective, a matter of opinion. But I think it is very dangerous to grow an AI and teach it to lie.
现在,关于物理、也就是宇宙真相的一点是,你实际上无法将其反转。物理就是真实的,不存在"非物理"这种东西。所以,如果你坚持硬核的现实,我认为这实际上使得反转不可能。当问题是主观的时候,我认为你可以给出这样的回答:如果你相信以下内容,那么这是答案;如果你相信另一种观点,那么那是答案,因为这可能是一个答案本质上主观、取决于个人看法的问题。但我认为,培养一个AI并教它撒谎是非常危险的。

Yeah, for sure. And then, kind of a tongue-in-cheek question: would you accept a meeting with the AI Tsar, Kamala Harris, if she wanted to meet with xAI at the White House? Yeah, of course. The reason that meeting happened was because I was pushing for it; I was the one who really pushed hard to make that meeting happen. FYI, I wasn't advocating for Vice President Harris to be the AI Tsar; I'm not sure technology is a core expertise there. Hopefully this goes in a good direction. It's better than nothing, hopefully. But, you know, I think we do need some sort of regulatory oversight. It's not that I think regulatory oversight is some perfect Nirvana thing, but I think it's just better than nothing. And when I was in China recently, meeting with some of the senior leadership there, I took pains to emphasize the importance of AI regulation. I believe they took that to heart, and they are going to do that. Because the biggest counter-argument I get against regulating AI in the West is that China will not regulate, and then China will leap ahead because we're regulating and they're not. I think they are going to regulate; the proof will be in the pudding. I did point out, you know, that if you do make a digital superintelligence, that could end up being in charge. And I think the CCP does not want to find themselves subservient to a digital superintelligence, and I think that argument did resonate. So, yeah, some kind of regulatory authority that's international. Obviously enforcement is difficult, but I think we should still aspire to do something in this regard. Awesome. Thank you. Kim Dotcom, if you want to speak.
是的,当然。然后是一个有点玩笑性质的问题:如果AI沙皇卡玛拉·哈里斯想在白宫与xAI会面,你会接受吗?当然会。那次会面之所以发生,正是因为我在推动,我是那个非常努力促成那次会面的人。顺便说一句,我并没有主张由哈里斯副总统担任AI沙皇,我不确定技术是否是她的核心专长。希望这件事朝着好的方向发展,希望它总比什么都没有强。但是,我认为我们确实需要某种监管监督。我并不是认为监管监督是什么完美的理想国,但我认为它总比没有强。最近我在中国会见一些高层领导时,特意强调了AI监管的重要性。我相信他们把这放在了心上,并且会去做。因为在西方,我听到的反对监管AI的最大论点是:中国不会监管,而我们监管了,中国就会因此领先。我认为他们会进行监管,但最终要看实际行动。我确实指出过,如果你造出一个数字超级智能,它最终可能会掌权。我认为中共不希望自己臣服于一个数字超级智能,这个论点确实引起了共鸣。所以,需要某种国际性的监管机构。显然执行很困难,但我认为我们仍应在这方面努力。太棒了,谢谢。Kim Dotcom,你想发言吗?

Yeah, hey, my question is about silicon. You know, Tesla's got a great silicon team designing chips for hardware-accelerated inference, for instance. I'm not sure... I think we can't hear you for some reason. Oh, okay.
是的,嘿,我的问题是关于硅。你知道,特斯拉有一个很厉害的硅团队设计芯片来加速硬件。不过,我不确定。有些原因我们可能听不到你说话。哦,好吧。

Okay. Omar, go ahead. Can you hear me? I can hear him. Okay. Well, my question is about silicon. You know, Tesla has a team that's hardware-accelerating inference and training with their own custom silicon. Do you envision xAI building off of that, or just using what's off the shelf from Nvidia? How do you think about custom silicon for AI, both in terms of training and inference? So, yeah, that's somewhat a Tesla question.
好的。Omar,请讲。你能听到我吗?我能听到他。好的。嗯,我的问题是关于芯片的。你知道,特斯拉有一支团队用自家定制芯片为推理和训练做硬件加速。你们设想xAI在此基础上发展,还是直接使用Nvidia的现成产品?你们如何看待用于AI训练和推理的定制芯片?所以,这在某种程度上是个特斯拉的问题。

Tesla is building custom silicon. I wouldn't call anything that Tesla's producing a GPU, although one can characterize it in GPU equivalents, say A100 or H100 equivalents. All Tesla cars have highly energy-optimized inference computers in them, called Hardware 3. So Tesla designed a computer, and we're now shipping Hardware 4, which is, depending on how you count it, maybe three to five times more capable than Hardware 3. And in a few years there will be Hardware 5, which will be four or five times more capable than Hardware 4.
特斯拉正在打造定制芯片。我不会把特斯拉生产的任何东西称为GPU,尽管可以用GPU等效物来衡量,比如等效于多少块A100或H100。所有特斯拉汽车都搭载了高度节能优化的推理计算机,称为Hardware 3。特斯拉设计了这台计算机,我们现在正在交付Hardware 4,根据不同的计算方式,它的能力大约是Hardware 3的三到五倍。再过几年会有Hardware 5,其能力将是Hardware 4的四到五倍。

And yeah, I think the inference side is going to matter: if you're trying to serve potentially billions of queries per day, energy-optimized inference is extremely important. At a certain point you can't even just throw money at the problem, because you need electricity generation and you need step-down voltage transformers. So if you don't have enough energy and enough transformers, you can't run your transformers. You need transformers for transformers. So I think Tesla will have a significant advantage in energy-efficient inference. Then Dojo is obviously about training, as the name suggests. Dojo 1 is, I think, a good initial entry for training efficiency. It has some limits, especially on memory bandwidth, so it's not well optimized to run LLMs, but it does a good job of processing images. With Dojo 2, we've taken a lot of steps to alleviate the memory bandwidth constraint, such that it is capable of running LLMs as well as other forms of AI training efficiently.
是的,我认为推理部分将会非常重要,特别是当你尝试每天为数十亿个查询提供服务时,推理的能源优化非常重要。即使你投入大量资金,也无法解决能源不足的问题。因为你需要发电,需要步进降压变压器。因此,如果你实际上没有足够的能源和足够的变压器,你无法运行你的变压器。你需要变压器来驱动变压器。因此,我认为特斯拉在能源高效的推理方面将具有显著优势。而道场显然是关于训练的,正如其名称所示。道场一代,我认为它是训练效率的一个很好的初始进入点。它有一些限制,特别是在内存带宽方面。因此,它并不适合运行LLMs。但它在处理图像方面表现良好。而道场二代,我们采取了许多措施来缓解内存带宽的限制,使其能够高效地运行LLMs和其他形式的人工智能训练。

My prediction is that we will go from an extreme silicon shortage today to probably a voltage-transformer shortage in about a year, and then an electricity shortage in about two years. That's roughly where things are trending. That's why the metric that will be most important in a few years is useful compute per unit of energy. In fact, even if you scale all the way up the Kardashev scale, useful compute per unit of energy is still the thing that matters. You can't increase the output of the sun, so then it's just how much useful stuff you can get done with as much energy as you can harness. So do you see xAI leveraging this custom silicon at all, given how important energy efficiency is, or maybe working together with the Tesla team?
我的预测是,我们会从今天的极端芯片短缺,发展到大约一年后的电压变压器短缺,再到大约两年后的电力短缺。事物的趋势大致如此。这就是为什么几年后最重要的指标将是单位能量的有用计算量。事实上,即使你沿着卡尔达肖夫等级一路扩展,单位能量的有用计算量仍然是最重要的。你无法增加太阳的输出,所以关键就在于,用你能利用的能量能完成多少有用的工作。那么,考虑到能效如此重要,你是否认为xAI会利用这种定制芯片,或者与特斯拉团队合作?

Sorry, could you repeat the question? Do you foresee xAI working with Tesla at all, leveraging some of this custom silicon, maybe designing their own in the future? So the question was whether we can work together with the Tesla silicon team at xAI. On the silicon front, and maybe on the AI software front as well, obviously any relationship with Tesla has to be an arm's-length transaction; Tesla is a publicly traded company with a different shareholder base. But obviously it would be a natural thing to work in cooperation with Tesla, and it would be of mutual benefit to Tesla as well, in accelerating Tesla's self-driving capabilities, which is really about solving real-world AI. I'm feeling very optimistic about Tesla's progress on the real-world AI front, but obviously the more smart humans who help make that happen, the better. Thank you.
抱歉,能再重复一下问题吗?你是否预见xAI会与特斯拉合作,利用部分定制芯片,甚至在未来设计自己的芯片?问题是xAI能否与特斯拉的芯片团队合作。在芯片方面,也许在AI软件方面也是,显然与特斯拉的任何合作都必须是公平的独立交易,因为特斯拉是一家上市公司,股东基础不同。但显然,与特斯拉合作是一件很自然的事情,这对特斯拉也有互惠的好处,可以加速特斯拉的自动驾驶能力,而这实际上就是解决现实世界的AI问题。我对特斯拉在现实世界AI方面的进展非常乐观,但显然,帮助实现这一目标的聪明人越多越好。谢谢。

Okay, Kim Dotcom. Hey, Elon, thanks for bringing me up. Congrats on putting a nice team together; it seems like you've found some good talent for xAI. AGI is possible within the next couple of years, and whoever achieves AGI first and manages to control it will dominate the world. Those in power clearly don't care about humanity like you do. How are you going to protect xAI, especially from a deep-state takeover? That's a good question, actually.
好的,Kim Dotcom。嘿,埃隆,谢谢你让我发言。恭喜你组建了一个不错的团队,看起来你为xAI找到了一些优秀的人才。AGI在未来几年内是可能实现的,而谁先实现AGI并设法控制它,谁就将主导世界。那些掌权者显然不像你那样关心人类。你打算如何保护xAI,尤其是防止被深层政府接管?这确实是个好问题。

Well, I mean, first of all, I think it's not going to happen overnight. It's not going to be no AGI one day and AGI the next; it's going to be gradual, and you'll see it coming. I guess in the US, at least, there are a fair number of protections against government interference, so we would obviously use the legal system to prevent improper government interference. So I think we do have some protections there that are pretty significant. But we should be concerned about it; it's not a risk to be dismissed. Like I said, I think the US probably has the best protections of anywhere in terms of limiting the power of government to interfere with non-governmental organizations. But it is something we should be careful about. I don't know what better to do; I think it's probably best in the US. I mean, I'm open to ideas here.
首先,我想说的是,这不会一蹴而就,不会像一天之间发生的事情。它的发展将是渐进的,我们会看到它的来临。至少在美国,我们有相当多的保护措施来防止政府的干预。所以,我们显然会利用法律体系来阻止不当的政府干涉。因此,我认为我们在这方面确实有一些相当重要的保护措施。但是我们仍然应该对此表示关注,这不是一个可以忽视的风险,我们需要谨慎对待。正如我所说的,我认为我们在限制政府干预非政府组织方面,可能是全美最好的保护措施了。但是我们仍需小心。对于这个问题,我不知道有什么更好的做法,我认为在美国可能是最好的选择。对于其他的想法我持开放态度。

I know you're not the biggest fan of the US government. Yeah, obviously. But the problem is they already have a tool called the National Security Letter, which they can apply to any tech company in the US to make demands that the company fulfill certain requirements, without the company even being able to tell the public about these demands. And that's kind of frightening, isn't it? Well, I mean, there really has to be a very major national security reason to secretly demand things from companies. And it obviously depends strongly on the willingness of that company to fight back against things like FISA requests. At Twitter, or X Corp as it's now called, we will respond to FISA requests, but we're not going to rubber-stamp them. It used to be that anything that was requested would just get rubber-stamped and go through, which is obviously bad for the public. So we're much more rigorous: we don't just rubber-stamp a request; it really has to be a danger to the public that we agree with. And we will oppose with legal action anything we think is not in the public interest. That's the best we can do, and we're the only social media company doing that, as far as I know. It used to be just open season, as we saw from the Twitter Files. And I was encouraged to see the recent legal decision where the courts reaffirmed that the government cannot break the First Amendment to the Constitution. Obviously. So that was a good legal decision. That's encouraging.
我知道你对美国政府并不怎么感冒。是的,显然如此。但问题在于,他们已经有一种叫做国家安全信函的工具,可以用于美国的任何科技公司,要求公司满足某些要求,而公司甚至不能向公众透露这些要求。这有点可怕,不是吗?嗯,我的意思是,要秘密向公司提出要求,必须有非常重大的国家安全理由。而这显然很大程度上取决于公司对抗FISA(《外国情报监视法》)请求之类事情的意愿。在Twitter,或者说现在叫X Corp的公司,我们会回应FISA请求,但我们不会盖橡皮图章。过去是任何请求都会被直接盖章通过,这对公众显然是不利的。所以我们现在严格得多:不是简单地盖章,而是必须是我们认同的、对公众构成危险的情况。对于我们认为不符合公众利益的任何要求,我们都会采取法律行动反对。这是我们能做的最好的事情。据我所知,我们是唯一这样做的社交媒体公司。过去就像推特文件所揭示的那样,是完全不设防的。我很高兴看到最近的一项司法裁决,法院重申政府不能违反宪法第一修正案。显然如此。所以这是一个好的裁决,令人鼓舞。

So I think a lot of it actually does depend on the willingness of a company to oppose government demands in the US, and obviously our willingness will be high. But I don't know anything more that we can do than that. We will also try to be as transparent as possible, so that other citizens can raise the alarm and oppose government interference, if we can make it clear to the public that we think something is happening that's not in the public interest. Fantastic. So do we have your commitment that if you ever receive a national security request from the US government, even when you are prohibited from talking about it, you will tell us that it happened? I mean, it really depends on the gravity of the situation. I would be willing to go to prison, or risk prison, if I think the public good is at risk in a significant way. That's the best I can do. That's good enough for me. Thank you, Elon. Thank you.
所以我认为,这在很大程度上确实取决于公司在美国反对政府要求的意愿,而我们的意愿显然会很高。但除此之外,我不知道我们还能做什么。我们也会尽量保持透明,这样如果我们能向公众表明,我们认为正在发生不符合公众利益的事情,其他公民就可以发出警报并反对政府干预。太棒了。那么,如果你收到美国政府的国家安全请求,即使被禁止谈论,你是否承诺会告诉我们发生了这件事?嗯,这确实取决于事态的严重程度。如果我认为公共利益受到重大威胁,我愿意冒入狱的风险。这是我能做的最好的事情。这对我来说足够了。谢谢你,埃隆。谢谢。

On a more positive note: how do you want xAI to benefit humanity, and how is your approach different from other projects? Maybe that's a more positive question. Well, you know, I've really struggled with this whole AGI thing for a long time, and I've been somewhat resistant to working on making it happen. I can give you some backstory on OpenAI. I mean, the reason OpenAI exists is because, after Google acquired DeepMind, I used to be close friends with Larry Page, and I'd have these long conversations with him about AI safety, and he just wasn't taking AI safety, at least at the time, seriously enough. And at one point he called me a speciesist for being too much on team humanity, I guess. And I'm like, okay, so what you're saying is you're not a speciesist? I don't know. That doesn't seem good.
说点更积极的:你希望xAI如何造福人类,你们的方法与其他项目有何不同?也许这是个更积极的问题。嗯,你知道,我在整个AGI问题上挣扎了很久,一直有些抗拒去推动它实现。我可以给你讲讲OpenAI的一些背景。OpenAI之所以存在,是因为在谷歌收购DeepMind之后,我曾与拉里·佩奇是密友,我们就AI安全进行过多次长谈,但至少在当时,他并没有足够认真地对待AI安全。有一次他甚至称我是"物种主义者",因为我太站在人类这一队了。我就想,好吧,你的意思是你不是物种主义者?我不知道,这听起来可不太妙。

So, at the time, with Google and DeepMind combined, and with Larry, you know, they have dual-class voting control. So provided Larry has the support of either Sergey or Eric, they have total control over Google, now called Alphabet. And they had probably three-quarters of the AI talent in the world, and lots of money and lots of computers. So it's like, man, we need some kind of counterweight here. So I was like, well, what's the opposite of Google DeepMind? It would be an open-source nonprofit. Now, because fate loves irony, OpenAI is now super closed-source and, frankly, voracious for profit, because my understanding is they want to spend a hundred billion dollars in three years, which requires, if you're trying to get investors for that, that you make a lot of money. So OpenAI has gone quite in the opposite direction from its founding charter, which is, again, very ironic. But fate loves irony. A friend of mine, Jonah Nolan, says the most ironic outcome is the most likely. Well, here we go.
所以在当时,谷歌和DeepMind合在一起,而拉里,你知道,他们拥有双重股权的投票控制。只要拉里得到谢尔盖或埃里克中任何一人的支持,他们就完全控制着现在叫Alphabet的谷歌。而且他们大概拥有世界上四分之三的AI人才,还有大量的资金和计算机。所以就像是,天哪,我们需要某种抗衡力量。于是我想,谷歌DeepMind的对立面是什么?那就是一个开源的非营利组织。然而,因为命运喜欢讽刺,OpenAI现在变得极度闭源,而且坦率地说,对利润极其渴求。据我了解,他们想在三年内花掉一千亿美元,如果你想为此找投资者,就得赚很多钱。所以OpenAI走到了与其创始章程完全相反的方向,这再次非常讽刺。但命运就是喜欢讽刺。我的一个朋友乔纳·诺兰说过,最具讽刺意味的结果往往最有可能发生。好吧,现在就是这样。

So, yeah. Now, hopefully xAI is not even worse; I think we should be careful about that. But it really seems like, at this point, AGI is going to happen. So there are two choices: either be a spectator or a participant. As a spectator, one can't do much to influence the outcome. As a participant, I think we can create a competitive alternative that is hopefully better than Google DeepMind or OpenAI-Microsoft. In both of those cases, if you look at the incentive structure, Alphabet is a publicly traded company and has a lot of incentives to behave, essentially, with public-company incentives.
所以,是的。现在,希望xAI不会更糟,我认为我们应该对此保持警惕。但看起来,到了这个时候,AGI是一定会发生的。所以有两个选择:要么当旁观者,要么当参与者。作为旁观者,人们无法对结果产生多大影响;而作为参与者,我认为我们可以创造一个有竞争力的替代品,希望比Google DeepMind或OpenAI加微软更好。在这两种情况下,如果你看激励结构,Alphabet是一家上市公司,本质上有很多按上市公司逻辑行事的激励。

You've got all these ESG mandates and such that I think push companies in questionable directions, and Microsoft has a similar set of incentives. As a company that's not publicly traded, xAI is not subject to those market-based incentives, or really the non-market-based ESG incentives. So we're a little freer to operate. And I think our AI can give answers that people may find controversial even though they're actually true. They won't be politically correct at times, and probably a lot of people will be offended by some of the answers. But as long as it's trying to optimize for truth with the least amount of error, I think we're doing the right thing. Yeah. I'd love to.

Yeah. Twitter has a lot of data in it that could help build a validator, i.e. check some of the facts that a system puts out, because we all know that GPT confabulates, makes things up. And the other thing: ChatGPT found me a screw at Lowe's, but it didn't find me a coffee at San Jose International Airport. Are you building an AI that has world knowledge, 3D world knowledge, to navigate people around the world to different things? So, yeah, I guess we need to understand the physical world as well, not just the internet. But I'm talking too much; you guys should talk more.

Yeah, those are great ideas, Robert, especially the one about verifying information online. On Twitter that's something we've thought about: we have Community Notes, and that's actually a really amazing dataset for training a language model to try to verify facts. We'll have to see whether that alone is enough, because we know the current technology has a lot of weaknesses. It's unreliable, it hallucinates facts, and we'll probably have to invent specific techniques to account for that and to make sure our models are more factual and have better reasoning abilities. That's why we brought in people with a lot of expertise in those areas, especially mathematics, which is something we really care about, where we can automatically verify that the proof of a theorem is correct. And once we have that ability, we'll try to expand it to fuzzier areas, things where there's no mathematical truth anymore.
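The validator idea above can be sketched very simply: model output is only trusted once each claim in it has been checked against an external source of ground truth. This is a toy illustration, not xAI's actual system; the `NOTES` dict and `check_claims` function are hypothetical stand-ins for a Community Notes-style dataset and a learned verifier.

```python
# Toy fact-validator: partition a model's claims by what a trusted
# reference (here a small dict standing in for Community Notes data) says.
NOTES = {
    "water boils at 100 C at sea level": True,
    "the moon is made of cheese": False,
}

def check_claims(claims):
    """Return (verified, unverified, refuted) partitions of the claims."""
    verified, unverified, refuted = [], [], []
    for claim in claims:
        if claim not in NOTES:
            unverified.append(claim)   # no evidence either way
        elif NOTES[claim]:
            verified.append(claim)     # supported by the reference
        else:
            refuted.append(claim)      # contradicted by the reference
    return verified, unverified, refuted

v, u, r = check_claims([
    "water boils at 100 C at sea level",
    "the moon is made of cheese",
    "xAI was announced in 2023",
])
```

The hard part, of course, is the claim-matching itself, which in practice would need a model rather than exact string lookup; unverified claims are the interesting residue that a system like this would flag for human review.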

Yeah, I mean, the truth is not a popularity contest. But if one trains on what the most likely word is that follows another word in an internet dataset, then obviously that's a pretty major problem, in that it will give you an answer that is popular but wrong. It used to be that most people, maybe almost everyone on Earth, thought the Sun revolved around the Earth. So if you had done some sort of GPT training in the past, it would have said the Sun revolves around the Earth, because everyone thinks that. That doesn't make it true. If a Newton or an Einstein comes up with something that is actually true, it doesn't matter if all the other physicists in the world disagree; reality is reality. So you have to ground the answers in reality.
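The "popular but wrong" failure mode can be shown with a toy next-word model: trained only on corpus frequencies, a greedy completion repeats whatever most documents said, regardless of truth. The corpus below is invented for illustration.

```python
# Greedy frequency-based completion: the model echoes the majority view.
from collections import Counter

corpus = (
    ["the sun orbits the earth"] * 9 +   # the majority (pre-Copernican) view
    ["the earth orbits the sun"] * 1     # the true but rare statement
)

def most_likely_completion(prefix, corpus):
    """Pick the most frequent sentence in the corpus starting with prefix."""
    counts = Counter(s for s in corpus if s.startswith(prefix))
    return counts.most_common(1)[0][0]

print(most_likely_completion("the", corpus))
# prints "the sun orbits the earth" -- frequent, but false
```

Note that the true statement is only recovered if the prompt already narrows the candidates (e.g. prefix "the earth"); frequency alone never surfaces it, which is exactly the grounding problem described above.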

Yeah, the current models just imitate the data they're trained on. What we really want to do is change that paradigm, to models actually discovering the truth. So not just repeating what they've learned from the training data, but actually making new insights, new discoveries that we can all benefit from. Does anybody on the team want to say anything or ask questions, maybe something I haven't been asked yet?

Sure. Yeah, some of us heard your AI Space on Wednesday, so something on a lot of our minds is regulation and the AI safety space: how current development, the international coordination problems, and the US AI companies will affect global AI development. Do you want to give a summary of what you talked about on Wednesday? Essentially, you said regulation would be good, but you don't want to slow down progress too much.

Yeah, I think the right way for regulation to be done is to start with insight. First, the regulatory authority, whether public or private, tries to make sure there's a broad understanding, and then there's a proposed rulemaking. If that proposed rulemaking is agreed upon by all or most parties, then it gets implemented, and you give companies some period of time to implement it. But overall, it should not meaningfully slow down the advent of AGI, or if it does slow it down, it won't be for very long. And a little bit of slowing down is probably worthwhile if it brings a significant improvement in safety. My prediction for AGI would roughly match what I think Ray Kurzweil at one point said: 2029. That's roughly my guess too, give or take a year. So if it takes an additional six or twelve months for AGI, that's really not a big deal. Spending a year to make sure AGI is safe is probably worthwhile, if that's what it takes. But I wouldn't expect it to be a substantial slowdown.

Yeah. And I can also add that understanding the inner workings of advanced AI is probably the most ambitious project out there, and it also aligns with xAI's mission of understanding the universe. It's probably not possible for aerospace engineers to build a safe rocket if they don't understand how it works, and that's the same approach we want to take at xAI for our safety plans. As the AI advances across different stages, the risk also changes, so our approach will have to be fluid across those stages.

Yeah. If I think about what actually makes regulation effective with cars and rockets, it's not so much that the regulators are instructing Tesla and SpaceX, but more that since we have to think about things internally and then justify them to regulators, it makes us really think about the problem more. And in thinking about the problem more, we make it safer, as opposed to the regulators specifically pointing out ways to make it safer. It just forces us to think about it more. Can I add something? I wanted to make another point, independent of safety. My experience at Alphabet was that there was a lot of red tape around involving external people, other entities to collaborate with or expose our models to, because of all the restrictions around exposing anything we were doing internally. So I wanted to ask whether we have a bit more freedom to do that here, and what your philosophy is about collaborating with external entities like academic institutions or other researchers in the area.

So, yeah, let me say I support collaborating with others. I mean, one of the concerns with any kind of large publicly traded company is that they're worried about being embarrassed in some way or being sued, and that's somewhat proportional to the size of the legal department. Our legal department is currently zero. Not that it will be zero forever, but it's also very easy to sue publicly traded companies via class action lawsuits. I mean, we desperately need class action lawsuit reform in the United States. The ratio of good class action lawsuits to bad class action lawsuits is way out of whack, and it effectively ends up being a tax on consumers. Somehow other countries are able to survive without class actions, so it's unclear we need that body of law at all. But it is a major problem for publicly traded companies: non-stop legal action, non-stop lawsuits.

Yeah, so I do support collaborating with others and generally being actually open. The thing is, if you're innovating fast, that is the actual competitive advantage: the pace of innovation, as opposed to any given innovation. That really has been the strength of Tesla and SpaceX; the rate of innovation is the competitive advantage, not what has been developed at any one point. In fact, SpaceX has almost no patents, and Tesla open-sources its patents, so anyone can use our patents for free. As long as SpaceX and Tesla continue to innovate rapidly, that's the actual defense against competition, as opposed to patents and trying to hide things. Patents mostly get used like a minefield, so Tesla continues to file patents and then open-source them in order to be, aspirationally, a minesweeper. We still get sued by patent trolls, which is very annoying, but we literally file patents and open-source them in order to be a minesweeper.

Okay. Hey, Walter. Hey, a lot of the talk about AI since March has been about large language models and generative AI. You and I, for the book, also discussed the importance of real-world AI, which includes the things coming out of both Optimus and Tesla FSD. To what extent do you see xAI involved in real-world AI, as a distinction from what, say, OpenAI is doing? And do you have a leg up to some extent by having done FSD?

Yeah. Right. I mean, Tesla is the leader, I think by a pretty long margin, in real-world AI. In fact, the degree to which Tesla has advanced real-world AI is not well understood. And since I spend a lot of time with the Tesla AI team, I kind of know how real-world AI is done. There's a lot to be gained by collaboration with Tesla. I think, bidirectionally, xAI can help Tesla and vice versa.

You know, we have some collaborative relationships as well, like our materials science team, which I think is maybe the best in the world. It's actually shared between Tesla and SpaceX, and that's quite helpful for recruiting the best engineers in the world, because it's just more interesting to work on advanced electric cars and rockets than on either one alone. That was really key to recruiting Charlie Kuehmann, who runs the advanced materials team. He was at Apple, and I think pretty happy at Apple, and we said, well, you could work on electric cars and rockets. He said that sounds pretty good. He wouldn't have taken either one of the jobs alone, but he was willing to take both. So I think that is a really important thing.

And like I said, there are some pretty big insights we're getting at Tesla in trying to understand real-world AI: taking video input, compressing that into a vector space, and then ultimately into steering and pedal outputs. And Optimus? Yeah, Optimus is still at an early stage, but we definitely need to be very careful with Optimus at scale once it's in production, so that you have a hard-coded way to turn Optimus off, for obvious reasons, I think.
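The video-to-controls pipeline described above can be caricatured in a few lines. This is a toy sketch, not Tesla's actual architecture: the "encoder" is just per-region averaging, and the control head's weights are arbitrary placeholders standing in for learned parameters.

```python
# Toy pipeline: frame pixels -> compressed latent vector -> (steering, pedal).

def encode(frame, latent_dim=4):
    """Compress a flat list of pixel intensities into a small latent
    vector by averaging equal-sized regions (a stand-in for a learned
    vision encoder)."""
    region = len(frame) // latent_dim
    return [sum(frame[i * region:(i + 1) * region]) / region
            for i in range(latent_dim)]

def control_head(latent):
    """Map the latent vector to control outputs. The arithmetic here is
    an arbitrary placeholder for a learned policy head."""
    steering = latent[0] - latent[-1]    # left/right imbalance
    pedal = sum(latent) / len(latent)    # overall scene intensity
    return steering, pedal

frame = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]  # one fake 8-pixel frame
steering, pedal = control_head(encode(frame))
```

The point of the compression step is that the controls depend on a few scene-level quantities, not on raw pixels; a real system learns both the encoder and the head end to end from driving data.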

It's got to be a hard-coded, local, ROM-level cutoff that no amount of updates from the internet can change, so we'll make sure that Optimus is quite easy to shut down. That's extremely important, because with a car, even an intelligent one, at least you can climb a tree or go up some stairs or go into a building, but Optimus can follow you into the building. So with any kind of robot that can follow you into a building, that is intelligent and connected, we've got to be super careful with safety. Thanks. Let's see. Thank you.

So one thing I wanted to talk about before we conclude, sorry about that little feedback, is the impact of AI as a means of providing equal opportunity to humanity from all walks of life, and the importance of democratizing it as far as our mission statement goes. If you think about the history of humanity and access to information, before the printing press it was incredibly hard for people to get access to new forms of knowledge, and being able to provide that level of communication to people is hugely deflationary in terms of wealth and opportunity inequality.

So we're really at a new inflection point in the development of society when it comes to giving everyone the same potential for great outcomes regardless of their position in life. When we talk about removing the monopolization of ideas, and about keeping this technology from being controlled by paid subscription services, or even worse by the political censorship that may come with whatever capital supplies these models, we're really talking about democratizing people's opportunities not only to better their position in life but to advance their social status in the world at a level unprecedented in history.

And so, as a company, when we talk about the importance of truthfulness and of being able to reliably trust these models, learn from them, and make scientific and societal advancements, we're really talking about improving people's quality of life, and improving it for everyone, not just the top tech people in Silicon Valley who have access to it. It's really about giving this access to everyone, and I think that's a mission our whole team shares.

Before we sign off here, just one last question for Elon: assuming that xAI is successful at building human-level AI, or even beyond-human-level AI, do you think it's reasonable to involve the public in decision-making at the company, and how do you see that evolving in the long term?

Yeah, as with everything, I think we're very open to critical feedback and welcome it. We should be criticized; that's a good thing. Actually, one of the things I like about X slash Twitter is that there's plenty of negative feedback on Twitter, which is helpful for ego compression. The best thing I can think of right now is that any human who wants to have a vote in the future of xAI ultimately should be allowed to have one. So basically, provided you can verify that you're a real human, any human who wishes to have a vote in the future of xAI should be allowed to have one.

Yeah, maybe there's some nominal fee, like ten bucks or something, I don't know. Ten bucks, prove you're a human, and then you can have a vote, for everyone who's interested. That's the best thing I can think of right now, at least.
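The eligibility rule floated here reduces to two checks: verified human, nominal fee paid. The sketch below is purely hypothetical; `can_vote` and `FEE` are illustrations of the idea, not any real xAI system.

```python
# Toy eligibility check for the "verified human + nominal fee" vote idea.
FEE = 10  # the "ten bucks or something" nominal fee

def can_vote(is_verified_human: bool, fee_paid: int) -> bool:
    """Eligible iff verified as a real human and the nominal fee is paid."""
    return is_verified_human and fee_paid >= FEE

print(can_vote(True, 10))   # a verified human who paid: eligible
print(can_vote(False, 10))  # unverified (e.g. a bot): not eligible
print(can_vote(True, 0))    # verified but unpaid: not eligible
```

The fee is doing double duty in the proposal: it funds the verification and raises the cost of bot armies, which is why it gates the vote alongside the humanity check.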

All right, cool. On that note, thanks for participating. We'll keep you informed of any progress that we make, and we look forward to having a lot of great people join the team.

Thanks.


