Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI | Lex Fridman Podcast #419
Published 2024-03-18 15:03:19
Summary
Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and many other state-of-the-art AI technologies.
Transcript
I think compute is going to be the currency of the future. I think it will be maybe the most precious commodity in the world. I expect that by the end of this decade, and possibly somewhat sooner than that, we will have quite capable systems that we look at and say, wow, that's really remarkable. The road to AGI should be a giant power struggle. I expect that to be the case. Whoever builds AGI first gets a lot of power. Do you trust yourself with that much power?
The following is a conversation with Sam Altman, his second time on the podcast. He is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and perhaps one day, the very company that will build AGI. This is the Lex Fridman podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Sam Altman.
Take me through the OpenAI board saga that started on Thursday, November 16th, maybe Friday, November 17th for you. That was definitely the most painful professional experience of my life, and chaotic and shameful and upsetting and a bunch of other negative things. There were great things about it too, and I wish it had not been in such an adrenaline rush that I wasn't able to stop and appreciate them at the time. I came across this old tweet of mine from that time period, which was like kind of going to your own eulogy, watching people say all these great things about you, and just unbelievable support from people I love and care about. That was really nice.
That whole weekend, with one big exception, I felt a great deal of love and very little hate. Even though it felt like, I have no idea what's happening and what's going to happen here, and this feels really bad. There were definitely times I thought it was going to be one of the worst things to ever happen for AI safety. Well, I also think I'm happy that it happened relatively early. I thought at some point between when OpenAI started and when we created AGI, there was going to be something crazy and explosive that happened, but there may be more crazy and explosive things still to happen.
Still, I think it helped us build up some resilience and be ready for more challenges in the future. But the thing you had a sense that you would experience is some kind of power struggle. The road to AGI should be a giant power struggle. The world should, well, not should. I expect that to be the case. You have to go through that. Like you said, iterate as often as possible in figuring out how to have a board structure, how to have organization, how to have the kind of people that you're working with, how to communicate all that, in order to deescalate the power struggle as much as possible. Yeah.
But at this point, it feels like something that was in the past. It was really unpleasant and really difficult and painful, but we're back to work, and things are so busy and so intense that I don't spend a lot of time thinking about it. There was this fugue state for the month after, maybe 45 days after, where I was just drifting through the days. I was so out of it. I was feeling so down. Just on a personal psychological level. Yeah. Really painful, and hard to have to keep running OpenAI in the middle of that. I just wanted to crawl into a cave and recover for a while.
But now it's like we're just back to working on the mission. It's still useful to go back there and reflect on board structures, on power dynamics, on how companies are run, the tension between research and product development and money and all this kind of stuff, so that you, who have a very high potential of building AGI, would do so in a slightly more organized, less dramatic way in the future. There's value in going through both the personal psychological aspects of you as a leader and also just the board structure and all this kind of messy stuff.
Definitely learned a lot about structure and incentives and what we need out of a board. I think it is valuable that this happened now, in some sense. I think this is probably not the last high-stress moment of OpenAI, but it was quite a high-stress moment. The company very nearly got destroyed. We think a lot about many of the other things we've got to get right for AGI, but thinking about how to build a resilient org, and how to build a structure that will stand up to a lot of pressure in the world, which I expect more and more of as we get closer, I think that's super important.
Do you have a sense of how deep and rigorous the deliberation process by the board was? Can you shine some light on just the human dynamics involved in situations like this? Was it just a few conversations and all of a sudden it escalates into "why don't we fire Sam," that kind of thing? I think the board members are well-meaning people on the whole. I believe that in stressful situations, where people feel time pressure or whatever, people understandably make suboptimal decisions, and I think one of the challenges for OpenAI will be that we're going to have to have a board and a team that are good at operating under pressure.
Do you think the board had too much power? I think boards are supposed to have a lot of power, but one of the things that we did see is that in most corporate structures, boards are usually answerable to shareholders. Sometimes people have super voting shares or whatever. In this case, and I think one of the things with our structure that we maybe should have thought about more than we did, is that the board of a nonprofit has, unless you put other rules in place, quite a lot of power. They don't really answer to anyone but themselves, and there are ways in which that's good, but what we'd really like is for the board of OpenAI to answer to the world as a whole, as much as that's a practical thing.
So there's a new board announced? Yeah. There was, I guess, a new smaller board at first, and now there's a new final board. Not a final board yet. We've added some. We've added some. Okay. What is fixed in the new one that was perhaps broken in the previous one? The old board sort of got smaller over the course of about a year. It was nine, and then it went down to six, and then we couldn't agree on who to add. The board also, I think, didn't have a lot of experienced board members, and a lot of the new board members at OpenAI just have more experience as board members. I think that'll help. Some of the people added to the board have been criticized. I heard a lot of people criticizing the addition of Larry Summers, for example.
What's the process of selecting the board like? What's involved in that? So Brett and Larry were kind of decided in the heat of the moment over this very tense weekend, and that weekend was like a real roller coaster. It was a lot of ups and downs, and we were trying to agree on new board members that both the executive team here and the old board members felt would be reasonable. Larry was actually one of their suggestions, the old board members'. Brett, I think, I had suggested even previous to that weekend, but he was busy and didn't want to do it, and then we really needed help, and he agreed. We talked about a lot of other people too, but I felt like if I was going to come back, I needed new board members. I didn't think I could work with the old board again in the same configuration, although we then decided, and I'm grateful, that Adam would stay. We considered various configurations, decided we wanted to get to a board of three, and had to find two new board members over the course of a short period of time.
So those were decided, honestly, without, you know, that's the kind of thing you do on the battlefield. You don't have time to design a rigorous process then. For new board members, since we will add new board members going forward, we have some criteria that we think are important, different expertise that we want the board to have. Unlike hiring an executive, where you need them to do one role well, the board needs to do a whole role of governance and thoughtfulness well. And so one thing that Brett says, which I really like, is that we want to hire board members in slates, not as individuals one at a time. Thinking about a group of people that will bring nonprofit expertise, expertise around companies, good legal and governance expertise, that's what we've tried to optimize for.
So is technical savvy important for the individual board members? Not for every board member, but for certainly some. That's part of what the board needs to do. So, I mean, the interesting thing that people probably don't understand about OpenAI, I certainly don't, is all the details of running the business. When they think about the board, given the drama, and think about you, they think about, if you reach AGI, or you reach some of these incredibly impactful products and you build them and deploy them, what's the conversation with the board like? And they kind of think, all right, what's the right squad to have in that kind of situation to deliberate? Look, I think you definitely need some technical experts there, and then you need some people who are like, how can we deploy this in a way that will help people in the world the most, and people who have a very different perspective.
You know, I think a mistake that you or I might make is to think that only the technical understanding matters. That's definitely part of the conversation that board needs to have, but there's a lot more about how it's going to impact society and people's lives that you really want represented in there too. And are you looking at the track record of people, or are you just having conversations? Track record is a big deal. You of course have a lot of conversations, but there are some roles where I kind of totally ignore track record and just look at slope, kind of ignore the y-intercept. Thank you. Thank you for making it mathematical for the audience. For a board member, I do care much more about the y-intercept. I think there is something deep to say about track record there, and experience is sometimes very hard to replace. Do you try to fit a polynomial function or an exponential one to the track record? That analogy doesn't carry that far. All right. You mentioned some of the low points, psychologically, for you. Did you consider going to the Amazon jungle and just taking ayahuasca and disappearing forever? I mean, it was a very bad period of time. There were great high points too. My phone was just sort of nonstop blowing up with nice messages from people I work with every day, people I hadn't talked to in a decade. I didn't get to appreciate that as much as I should have, because I was in the middle of this firefight, but that was really nice. But on the whole, it was a very painful weekend, and also just a very...
It was like a battle in public to a surprising degree, and that was extremely exhausting to me, much more than I expected. I think fights are generally exhausting, but this one really was. You know, the board did this Friday afternoon. I really couldn't get much in the way of answers, but I also was just like, well, the board gets to do this, so I'm going to think for a little bit about what I want to do, but I'll try to find the blessing in disguise here. And I was like, well, my current job at OpenAI was to run a decently sized company at this point, and the thing I'd always liked the most was just getting to work with the researchers. And I was like, yeah, I can just go do a very focused AGI research effort, and I got excited about that. It didn't even occur to me at the time that this was possibly all going to get undone. This was Friday afternoon. So you accepted the death of this... Very quickly, very quickly. I mean, I went through a little period of confusion and rage, but very quickly, and by Friday night I was talking to people about what was going to be next, and I was excited about that. I think it was Friday evening, for the first time, that I heard from the exec team here, which was like, hey, we're going to fight this, and we think, whatever. And then I went to bed just still being like, okay, excited, onward. Were you able to sleep? Not a lot. One of the weird things was this period of four, four and a half days where I sort of didn't sleep much, didn't eat much, and still had a surprising amount of energy. You learn a weird thing about adrenaline in wartime. So you kind of accepted the death of, you know, this baby OpenAI. And I was excited for the new thing. I was just like, okay, this was crazy, but whatever. It's a very good coping mechanism.
And then Saturday morning, two of the board members called and said, hey, we didn't mean to destabilize things, we don't want to destroy a lot of value here, can we talk about you coming back? And I immediately didn't want to do that, but I thought a little more, and I was like, well, I really do care about the people here, the partners, shareholders. I love this company. And so I thought about it and I was like, well, okay, but here's the stuff I would need. And then the most painful time of all was over the course of that weekend. We all kept thinking, not just me, the whole team here, while we were trying to keep OpenAI stabilized as the whole world was trying to break it apart, people trying to recruit, whatever, we kept being told, all right, we're almost done, we just need a little bit more time. It was this very confusing state. And then Sunday evening, when, again, every few hours I had expected that we were going to be done and we were going to figure out a way for me to return and things to go back to how they were, the board instead appointed a new interim CEO. And I was like, that feels really bad. That was the low point of the whole thing. You know, I'll tell you something. It felt very painful, but I felt a lot of love that whole weekend. Other than that one moment Sunday night, I would not characterize my emotions as anger or hate. I really just felt a lot of love, from people, towards people. It was painful, but the dominant emotion of the weekend was love, not hate. You've spoken highly of Mira Murati, that she helped especially, as you put it in a tweet, in the quiet moments when it counts. Perhaps we could take a bit of a tangent. What do you admire about Mira?
Well, she did a great job during that weekend, in a lot of chaos, but people often see leaders in the crisis moments, good or bad. What I really value in leaders is how people act on a boring Tuesday at 9:46 in the morning, in just the normal drudgery of the day-to-day. How someone shows up in a meeting, the quality of the decisions they make. That was what I meant about the quiet moments. Meaning most of the work is done day by day, meeting by meeting. Just be present and make great decisions.
Yeah, I mean, listen, what you have wanted to spend the last 20 minutes on, and I understand, is this one very dramatic weekend. Yeah. But that's not really what OpenAI is about. OpenAI is really about the other seven years. Well, yeah. Human civilization is not about the invasion of the Soviet Union by Nazi Germany, but still, that's something people totally focus on. Very understandable. It gives us an insight into human nature, the extremes of human nature, and perhaps some of the damage and some of the triumphs of human civilization can happen in those moments. It's illustrative.
Let me ask about Ilya. Is he being held hostage in a secret nuclear facility? No. What about a regular secret facility? No. What about a nuclear non-secret facility? Neither. Not that either. I mean, it's becoming a meme at some point. You've known Ilya for a long time. He was obviously part of this drama with the board and all that kind of stuff. What's your relationship with him now? I love Ilya. I have tremendous respect for Ilya. I don't know anything I can say about his plans right now. That's a question for him, but I really hope we work together for, you know, certainly the rest of my career. He's a little bit younger than me. Maybe he works a little bit longer. You know, there's a meme that he saw something, like he maybe saw AGI, and that gave him a lot of worry internally.
What did Ilya see? Ilya has not seen AGI. I don't know if he's seen AGI; we've not built AGI. I do think one of the many things that I really love about Ilya is that he takes AGI and the safety concerns, broadly speaking, including things like the impact this is going to have on society, very seriously. And as we continue to make significant progress, Ilya is one of the people that I've spent the most time with over the last couple of years talking about what this is going to mean, what we need to do to ensure we get it right, to ensure that we succeed at the mission.
So Ilya did not see AGI, but Ilya is a credit to humanity in terms of how much he thinks and worries about making sure we get this right. I've had a bunch of conversations with him in the past. I think when he talks about technology, he's always doing this long-term thinking type of thing. So he's not thinking about what this is going to be in a year; he's thinking about it in 10 years. Just thinking from first principles, like, okay, if this scales, what are the fundamentals here, where is it going? And that's a foundation for then thinking about all the other safety concerns and all that kind of stuff, which makes him a really fascinating human to talk with.
Do you have any idea why he's been kind of quiet? Is he just doing some soul searching? Again, I don't want to speak for Ilya. I think that you should ask him. He's definitely a thoughtful guy. I kind of think of Ilya as always being on a soul search, in a really good way. Yes. Yeah. Also, he appreciates the power of silence. Also, I'm told he can be a silly guy, which I've never seen. It's very sweet when that happens. I've never witnessed a silly Ilya, but I look forward to that as well. I was at a dinner party with him recently, and he was playing with a puppy, and he was in a very silly mood, very endearing, and I was thinking, oh man, this is not the side of Ilya that the world sees the most.
So just to wrap up this whole saga, are you feeling good about the board structure, about all of this, and where it's moving? I feel great about the new board. In terms of the structure of OpenAI, one of the board's tasks is to look at that and see where we can make it more robust. We wanted to get new board members in place first, but we clearly learned a lesson about structure throughout this process. I don't have, I think, super deep things to say. It was a crazy, very painful experience. I think it was like a perfect storm of weirdness. It was like a preview for me of what's going to happen as the stakes get higher and higher, and of the need that we have for robust governance structures and processes and people.
I am kind of happy it happened when it did, but it was a shockingly painful thing to go through. Did it make you more hesitant in trusting people? Yes. Just on a personal level? Yes. I think I'm an extremely trusting person. I've always had a life philosophy of, don't worry about all of the paranoia. Don't worry about the edge cases. You get a little bit screwed in exchange for getting to live with your guard down. And this was so shocking to me. I was so caught off guard that it has definitely changed, and I really don't like this, it's definitely changed how I think about just default trust of people and planning for the bad scenarios. You got to be careful with that.
Are you worried about becoming a little too cynical? I'm not worried about becoming too cynical. I think I'm like the extreme opposite of a cynical person, but I'm worried about just becoming less of a default trusting person. I'm actually not sure which mode is best to operate in for a person who's developing AGI. Trusting or untrusting. So it's an interesting journey you're on. But in terms of structure, see, I'm more interested on the human level. Like, how do you surround yourself with humans that are building cool shit but also are making wise decisions? Because the more money you start making, the more power the thing has, the weirder people get. You know, I think you could make all kinds of comments about the board members and the level of trust I should have had there, or how I should have done things differently. But in terms of the team here, I think you'd have to give me a very good grade on that one. And I have just enormous gratitude and trust and respect for the people that I work with every day, and I think being surrounded with people like that is really important.
Our mutual friend Elon sued OpenAI. What to you is the essence of what he's criticizing? To what degree does he have a point? To what degree is he wrong? I don't know what it's really about. We started off just thinking we were going to be a research lab, having no idea about how this technology was going to go. Because it was only seven or eight years ago, it's hard to go back and really remember what it was like then. This was before language models were a big deal. This was before we had any idea about an API or selling access to a chatbot. It was before we had any idea we were going to productize at all. So we were like, we're just going to try to do research, and we don't really know what we're going to do with that. I think with many fundamentally new things, you start fumbling through the dark and you make some assumptions, most of which turn out to be wrong. And then it became clear that we were going to need to do different things and also have huge amounts more capital. So we said, okay, well, the structure doesn't quite work for that. How do we patch the structure? And then you patch it again and patch it again, and you end up with something that does look kind of eyebrow-raising, to say the least. But we got here gradually with, I think, reasonable decisions at each point along the way. It doesn't mean I wouldn't do it totally differently if we could go back now with an oracle, but you don't get the oracle at the time. But anyway, in terms of what Elon's real motivations here are, I don't know. To the degree you remember, what was the response that OpenAI gave in the blog post? Can you summarize it? Oh, we just said, you know, Elon said this set of things. Here's our characterization, or here's sort of our characterization of how this went down. We tried to not make it emotional and just sort of say, here's the history.
I do think there's a degree of mischaracterization from Elon here about one of the points you just made, which is the degree of uncertainty you had at the time.
You guys were a small group of researchers crazily talking about AGI when everybody was laughing at that thought. Wasn't it that long ago that Elon was crazily talking about launching rockets? Yeah. When people were laughing at that thought. So I'd think he'd have more empathy for this. I mean, I do think that there's personal stuff here. There was a split, in that OpenAI and a lot of amazing people here chose to part ways with Elon. So there's a personal... Elon chose to part ways. Can you describe that exactly, the choosing to part ways? He thought OpenAI was going to fail. He wanted total control to sort of turn it around. We wanted to keep going in the direction that has now become OpenAI. He also wanted Tesla to be able to build an AGI effort. At various times, he wanted to make OpenAI into a for-profit company that he could have control of, or have it merge with Tesla. We didn't want to do that, and he decided to leave, which, that's fine. So you're saying, and that's one of the things that the blog post says, is that he wanted OpenAI to be basically acquired by Tesla, in the same way that, or maybe something similar to, or maybe something more dramatic than, the partnership with Microsoft. My memory is the proposal was just, yeah, get acquired by Tesla and have Tesla full control over it. I'm pretty sure that's what it was.
So what is the word "open" in OpenAI? I mean, to Elon at the time, Ilya has talked about this in the email exchanges and all this kind of stuff. What did it mean to you at the time? What does it mean to you now? Speaking of going back with an oracle, I'd definitely pick a different name. One of the things that I think OpenAI is doing that is the most important of everything that we're doing is putting powerful technology in the hands of people for free, as a public good. We don't run ads on our free version. We don't monetize it in other ways. We just say, as part of our mission, we want to put increasingly powerful tools in the hands of people for free and get them to use them. I think that kind of open is really important to our mission. I think if you give people great tools and teach them to use them, or don't even teach them, they'll figure it out, and let them go build an incredible future for each other with that, that's a big deal. If we can keep putting free or low-cost powerful AI tools out in the world, I think that's a huge deal for how we fulfill the mission.
那么OpenAI中的Open一词指的是什么呢? 我指的是当时对埃隆来说,伊利亚在电子邮件往来中谈到了这个之类的事情。 那时对你来说意味着什么?现在对你来说意味着什么?说到带着先知回到过去,我肯定会选一个不同的名字。 我认为OpenAI正在做的最重要的事情是将强大的技术无偿提供给人们作为公共福利。 我们不在免费版本上放置广告,也不通过其他方式进行盈利。我们只是说作为我们的使命的一部分,我们想要将越来越强大的工具无偿提供给人们,并鼓励他们使用。 我认为这种开放真的对我们的使命非常重要。我认为如果你给人们提供优秀的工具并教会他们如何使用它们,甚至不用教也可以,他们会弄清楚的,然后让他们用这些工具为彼此创造一个不可思议的未来,这是一件大事。 如果我们能不断地将免费或低成本的强大AI工具置于世界之中,我认为这对我们如何实现使命是一件重大的事情。
Open source or not? Yeah, I think we should open source some stuff and not other stuff. It does become this religious battle line where nuance is hard to have, but I think nuance is the right answer. So he said, change your name to ClosedAI and I'll drop the lawsuit. I mean, is it going to become this battleground in the land of memes about the name? I think that speaks to the seriousness with which Elon means the lawsuit. And yeah, I mean, that's, like, an astonishing thing to say, I think. Well, I don't think the lawsuit, maybe correct me if I'm wrong, but I don't think the lawsuit is legally serious. It's more to make a point about the future of AGI and the company that's currently leading the way. So look, I mean, Grok had not open sourced anything until people pointed out it was a little bit hypocritical, and then he announced that Grok will open source things this week. I don't think open source versus not is what this is really about for him.
无论是否开源,我认为我们应该开源一些东西,而不开源另一些东西。这确实会变成宗教战线,在那里很难有微妙之处,但我认为微妙是正确的答案。所以他说,把你的名字改成ClosedAI,我就会撤诉。我是说,这会不会变成围绕名字的一场梗图战场?我认为这表明了埃隆对这起诉讼的认真程度。是的,我是说,我认为这是一个令人吃惊的说法。嗯,也许可以纠正我,但我认为这起诉讼在法律上并不严肃。这更多是为了表明关于AGI未来和目前领先的公司的立场。所以看,在人们指出有点虚伪之前,Grok没有开源任何东西。然后他宣布Grok将在本周开源一些东西。我认为对于他来说,开源与否并不是真正的问题。
Well, we'll talk about open source and not. I do think maybe criticizing the competition is great. Just talking a little shit, that's great. But friendly competition versus, like, I personally hate lawsuits. Look, I think this whole thing is, like, unbecoming of a builder, and I respect Elon as one of the great builders of our time. And I know he knows what it's like to have, like, haters attack him, and it makes me extra sad he's doing it. Pause. Yeah, he's one of the greatest builders of all time, potentially the greatest builder of all time. It makes me sad. I think it makes a lot of people sad. Like, there's a lot of people who've really looked up to him for a long time. I said, you know, in some interview or something, that I missed the old Elon, and the number of messages I got being like, that exactly encapsulates how I feel.
好吧,我们会谈论开源和非开源的事情。我觉得批评竞争对手可能很重要。对于所有的负面言论,我认为这很好,但友好的竞争与诉讼不同,我个人非常讨厌诉讼。看,我觉得整个事情似乎不太像一个建筑师的行为,我尊重埃隆作为我们这个时代伟大的建筑师之一。我知道他懂得被仇恨者攻击的感受,所以我非常难过他现在这样做。暂停。是的,他可能是有史以来最伟大的建筑师之一,这让我难过。我觉得很多人都感到难过。很多人很长时间以来都很敬仰他。我说过,我在某次采访中或者其他场合表示我怀念以前的埃隆,然后我收到了很多消息,说这完全表达了我的感受。
I think he should just win. He should just make X Grok beat GPT, and then GPT beats Grok, and it's just the competition, and that's beautiful for everybody. But on the question of open source, do you think there's a lot of companies playing with this idea? It's quite interesting. Meta, surprisingly, has led the way on this, or at least took the first step in the game of chess of, like, really open sourcing a model. Of course, it's not the state-of-the-art model, but open sourcing Llama. And Google is flirting with the idea of open sourcing a smaller version. What are the pros and cons of open sourcing? Have you played around with these ideas? Yeah, I think there is definitely a place for open source models, particularly smaller models that people can run locally. I think there's huge demand for that. I think there will be some open source models. There will be some closed source models. It won't be unlike other ecosystems in that way.
我认为他应该赢。他应该让X Grok打败GPT,然后GPT再打败Grok,这只是竞争,对每个人来说都很美好。但在开源的问题上,你认为有很多公司在尝试这个想法吗?这很有趣。他们元的惊人之处在于在这方面取得了领先地位,或者至少在象棋游戏中迈出了第一步,真正地开源了模型。当然,这不是最先进的模型,但开源Llama和Google正在考虑开源一个较小版本的想法。你有没有考虑到开源的利弊?你有没有尝试过这些想法?是的,我认为开源模型确实有其存在的必要,特别是人们可以在本地运行的较小模型。我认为有巨大的需求。我认为会有一些开源模型,也会有一些封闭源的模型。在这方面,它与其他生态系统不会有太大不同。
I listened to the All-In podcast talking about this lawsuit and all that kind of stuff, and they were more concerned about the precedent of going from nonprofit to this capped for-profit. What precedent that sets for other startups. I don't think so. I would heavily discourage any startup that was thinking about starting as a nonprofit and adding, like, a for-profit arm later. I'd heavily discourage them from doing that. I don't think we'll set a precedent here. Okay. So most startups should just go for-profit, for sure. And if we knew what was going to happen, we would have done that too. Well, like, in theory, if you dance beautifully here, there's, like, some tax incentives or whatever. I don't think that's, like, how most people think about these things. It's just not possible to save a lot of money for a startup if you do it this way. No, I think there's, like, laws that would make that pretty difficult.
我听过所有关于这种损失的播客,以及所有与此相关的内容,他们更关心非营利性机构转变为盈利性机构的先例。但这会为其他初创公司设立一个什么样的先例。我不觉得会。我强烈反对任何考虑以非营利性机构身份开始,然后在后来增加盈利部门的初创公司这样做。我强烈劝阻他们这样做。我认为我们不会在这里树立一个先例。好吧,所以大多数初创公司只应该寻求利润。如果我们知道会发生什么,我们也会这样做。嗯,在理论上,如果你在这里表现得非常出色,可能会获得一些税收激励之类的东西。我不认为大多数人会这样考虑这些事情。如果你这样做,实际上是不可能为初创公司省下很多钱的。不,我认为有法律会使这变得相当困难。
Where do you hope this goes with Elon? Well, this tension, this dance, what do you hope for? Like, if we go one, two, three years from now, your relationship with him on a personal level too, like, friendship, friendly competition, just all this kind of stuff. Yeah, I really respect Elon, and I hope that years in the future we have an amicable relationship. Yeah, I hope you guys have an amicable relationship, like, this month, and just compete and win and explore these ideas together. I do suppose there's competition for talent or whatever, but it should be friendly competition. Just build, build cool shit. And Elon is pretty good at building cool shit. But so are you.
你希望与伊隆发展成什么样子?嗯,这种紧张感,这种舞蹈,你希望怎样?如果我们再过一两三年,你与他的关系在个人层面上也会如何,比如友谊、友好竞争,以及其他各种因素。是的,我真的很尊敬伊隆。我希望未来几年我们之间能有一种友好的关系。是的,我希望你们像这个月一样有一种友好的关系。只是竞争、取胜、一起探索这些想法。我想人才之间可能会有竞争,但这种竞争应该是友好的竞争。只是建设、创造酷炫的东西。而伊隆擅长建造酷炫的东西。但你也是。
So speaking of cool shit, Sora. There's, like, a million questions I could ask. First of all, it's amazing. It truly is amazing on a product level, but also just on a philosophical level. So let me just, technical slash philosophical, ask: what do you think it understands about the world, more or less than GPT-4, for example? The world model, when you train on these patches versus language tokens. I think all of these models understand something more about the world model than most of us give them credit for. And because there are also very clear things they just don't understand or don't get right, it's easy to look at the weaknesses, see through the veil, and say, yeah, this is all fake. But it's not all fake. It's just some of it works and some of it doesn't work.
说到酷炫的东西,索拉,我有无数问题想问。这太不可思议了,无论是从产品层面还是从哲学层面来看。那么我就从技术/哲学的角度问一下,你认为它在世界模型方面比 GPT 更了解一些,比如当你在这些补丁上训练时与语言令牌相比。我认为所有这些模型对世界模型的理解要比我们大多数人所想象的要多一些。因为它们也有很明显的缺陷,他们不明白的事情或者做错的事很明显。很容易看到这些弱点,在面纱下看到真相,然后说,是的,这都是假的,但并非所有都是假的。有些东西是有效的,有些则不是。
Like, I remember when I started first watching Sora videos and I would see, like, a person walk in front of something for a few seconds and occlude it, and then walk away, and the same thing was still there. I was like, oh, this is pretty good. Or there's examples where, like, the underlying physics looks so well represented over, you know, a lot of steps in a sequence. It's like, oh, this is quite impressive. But, like, fundamentally, these models are just getting better, and that will keep happening. If you look at the trajectory from DALL·E 1 to 2 to 3 to Sora, you know, there were a lot of people that dunked on each version, saying it can't do this, it can't do that. And I'm like, look, I don't know.
就像我记得当我开始第一次观看Sora视频的时候,我会看到一个人走到某个东西前面,遮挡住它几秒钟,然后走开,而同样的东西依然在那里。我当时觉得,哦,这挺不错的。或者有些例子,就是底层物理看起来在整个序列的许多步骤中都表现得非常逼真。我就像,哦,这太令人印象深刻了。但基本上,这些模型只会变得更好,而且会持续发展。如果你看一下从DALL·E 1到2到3再到Sora的轨迹,你知道,有很多人嘲讽每个版本,说它做不到这个,做不到那个。而我就觉得,我不知道。
Well, the thing you just mentioned, with the occlusions, is basically modeling the physics, the three-dimensional physics of the world, sufficiently well to capture those kinds of things. Well... Or, like, yeah, maybe you can tell me: in order to deal with occlusions, what does the world model need to do? Yeah. So what I would say is it's doing something to deal with occlusions really well. To say that it has, like, a great underlying 3D model of the world, that's a little bit more of a stretch. But can you get there through just these kinds of two-dimensional training data approaches? It looks like this approach is going to go surprisingly far. I don't want to speculate too much about what limits it will surmount and which it won't.
你刚刚提到的东西,有点儿带有遮挡物,基本上是为了充分模拟三维世界的物理规律,以捕捉这些情况。也就是说,或者,也许你可以告诉我,为了处理遮挡,世界模型需要做什么?是的。所以我想说的是,它正在采取一些措施来处理遮挡,真的很好地代表了宏伟的世界底层三维模型。通过这种二维训练数据方法,你能达到这种程度吗?看起来这种方法将走得出乎意料地远。我不想太过揣测它会克服哪些限制,哪些限制它无法突破。
But what are some interesting limitations of the system that you've seen? I mean, there's been some fun ones you've posted, all kinds of fun. I mean, cats sprouting an extra limb at random points in a video. Pick what you want, but there's still a lot of problems, a lot of weaknesses. Do you think there's a fundamental flaw of the approach, or is it just bigger models or better technical details or better data, more data, that are going to solve the cat-sprouting problems? I would say yes to both. I think there is something about the approach which just seems to feel different from how we think and learn and whatever. And then also, I think it'll get better with scale.
但是,你看到的系统中有哪些有趣的局限呢?我是说,你发布了一些有趣的内容,各种有趣的内容。比如,在视频中,猫会在随机位置长出多余的肢体。你可以选择你想要的,但仍然存在许多问题,许多弱点。你认为这种方法存在根本性缺陷,还是只是更大的模型、更好的技术细节或更多的数据能解决猫长出多余肢体的问题?我会两者都认同。我认为这种方法似乎与我们思考和学习的方式有所不同。而且,我认为随着规模扩大,情况会变得更好。
Like I mentioned, LLMs have tokens, text tokens, and Sora has visual patches. So it converts all visual data, diverse kinds of visual data, videos and images, into patches. Is the training, to the degree you can say, fully self-supervised? Or is there some manual labeling going on? Like, what's the involvement of humans in all this? I mean, without saying anything specific about the Sora approach, we use lots of human data in our work. But not internet-scale data. So lots of humans. "Lots" is a complicated word. I think lots is a fair word in this case. Because, to me, I'm an introvert, and when I hang out with, like, three people, that's a lot of people. Yeah. I suppose you mean more than three people work on labeling the data for these models. Yeah. Right. But fundamentally, there's a lot of self-supervised learning. Because what you mentioned in the technical report is internet-scale data. That's another beautiful phrase. It's like poetry. So it's a lot of data that's not human-labeled. It's, like, self-supervised in that way. And then the question is, how much data is there on the internet that could be used that is conducive to this kind of self-supervised way? If only we knew the details of the self-supervised.
就像我提到的,LLMs有令牌、文本令牌,Sora有可视化补丁。因此,它将所有可视数据,各种各样的可视数据,视频和图像转换为补丁。训练程度是否可以说完全是自监督的?有一些手动标记正在进行中。那么人类在这一切中的参与是怎样的呢?我的意思是,没有说到Sora方法的具体内容,我们在工作中使用了大量人类数据。但不是互联网规模的数据。所以,大量的人类,大量是一个复杂的词。我认为在这种情况下,大量是一个合理的词。这不是因为对我而言,我是一个内向的人,当我与三个人聚在一起时,那就是很多人了。是的。我想你是指超过三个人在标记这些模型的数据。对。但基本上,有很多自监督学习。因为你在技术报告中提到的是互联网规模的数据。这是另一种美丽。就像诗一样。所以这是大量不是人类标注的数据。在这种自监督的方式下进行。那么问题就是互联网上有多少数据可以用来支持这种自监督的方式?如果我们只知道自监督的细节。
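As a purely illustrative aside on the "visual patches" idea discussed above: patches are the visual analogue of text tokens, and the basic move of slicing a frame into fixed-size, non-overlapping pieces can be sketched in a few lines of plain Python. The function name and toy 4x4 "image" are hypothetical; this is not OpenAI's actual Sora pipeline, just a sketch of the tokenization analogy.

```python
def split_into_patches(frame, patch_size):
    """Split a 2D grid (list of lists) into non-overlapping
    patch_size x patch_size patches, row-major order."""
    patches = []
    for i in range(0, len(frame), patch_size):
        for j in range(0, len(frame[0]), patch_size):
            # Each patch is a small 2D block cut out of the frame.
            patch = [row[j:j + patch_size] for row in frame[i:i + patch_size]]
            patches.append(patch)
    return patches

# A toy 4x4 "image" whose pixel values are just 0..15.
frame = [[r * 4 + c for c in range(4)] for r in range(4)]
patches = split_into_patches(frame, 2)
print(len(patches))   # 4 patches of size 2x2
print(patches[0])     # [[0, 1], [4, 5]]
```

A real system would flatten and embed each patch (and extend the idea across time for video), but the analogy to text tokenization is the same: a long sequence of small, uniform units.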
Have you considered opening it up a little more in detail? We have. You mean for Sora specifically? Sora specifically, because it's so interesting. Can the same magic of LLMs now start moving towards visual data, and what does it take to do that? I mean, it looks to me like yes, but we have more work to do. Sure. What are the dangers? Why are you concerned about releasing the system? What are some possible dangers of this?
你有考虑过开展更详细的讨论吗?我们已经考虑过了。你是指为了Sora吗?特别是Sora,因为同样的LLMs魔法现在能否开始转向视觉数据非常有趣?要做到这一点需要什么?我觉得好像可以,但我们还有更多的工作要做。当然。有哪些危险?你为什么担心释放系统?这可能会带来哪些潜在的危险?
I mean, frankly speaking, one thing we have to do before releasing the system is just like get it to work at a level of efficiency that will deliver the scale people are going to want from this. I don't want to like downplay that. And there's still a ton, ton of work to do there. But you know, you can imagine like issues with deep fakes, misinformation. Like, we try to be a thoughtful company about what we put out into the world. And it doesn't take much thought to think about the ways this can go badly. There's a lot of tough questions here. You're dealing in a very tough space.
坦率地说,在发布系统之前,我们必须做的一件事就是将其运行效率提升到一个能够满足人们需求的规模。我不想轻视这一点。而且还有很多工作要做。你知道吧,可以想象可能出现的深度伪造、误导信息等问题。我们希望作为一家负责任的公司,将好的东西带给世界。想象一下可能出现的问题并不需要花费太多的时间。这里面有很多棘手的问题。你正在处理一个非常棘手的领域。
Do you think training AI should be, or is, fair use under copyright law? I think the question behind that question is, do people who create valuable data deserve to have some way that they get compensated for use of it? And that, I think the answer is yes. I don't know yet what the answer is. People have proposed a lot of different things. We've tried some different models. But, you know, if I'm, like, an artist, for example, I would like to be able to opt out of people generating art in my style, and if they do generate art in my style, I'd like to have some economic model associated with that. Yeah, it's that transition from CDs to Napster to Spotify. We have to figure out some kind of model.
你认为在版权法下培训人工智能应该是公平使用吗?我认为这个问题背后的问题是那些创建有价值数据的人是否应该有一种方式来获得对其使用的补偿?我认为答案是肯定的。我还不知道答案是什么。人们提出了许多不同的想法。我们尝试了一些不同的模型。但你知道,如果我是一位艺术家,我希望能不参与人们以我的风格生成艺术,如果他们确实以我的风格生成了艺术,我希望有一些与此相关的经济模型。是的,就像从CD过渡到Napster或Spotify一样。我想找出某种模型。
The model changes, but people have got to get paid. There should be some kind of incentive, if we zoom out even more, for humans to keep doing cool shit. Of everything I worry about, humans are going to do cool shit, and society is going to find some way to reward it. That seems pretty hardwired. We want to create, we want to be useful, we want to, like, achieve status in whatever way. That's not going anywhere, I don't think. But the reward might not be monetary, financial. It might be, like, fame and celebration of other cool things. Maybe financial in some other way. Again, I don't think we've seen, like, the last evolution of how the economic system's going to work.
模式变化,但人们必须得拿到报酬。如果我们进一步放大视角,应该有一些激励措施,让人类继续做酷炫的事情。我担心的是,人类会做出一些酷炫的事情,而社会会以某种方式来奖励它。这似乎根深蒂固。我们渴望创造,我们希望有用处。我们想要在某种方式上获得地位。这并不会消失。我不觉得会。但是奖励可能不一定是金融的。它可能是名誉和其他酷炫事物的庆祝。也许在某种其他方式上是金融的。再次强调,我认为我们还没有看到经济系统将如何发展的最终形式。
Yeah, but artists and creators are worried. When they see Sora, they're like, holy shit. Sure. People were also super worried when photography came out. Yeah. And then photography became a new art form, and people made a lot of money taking pictures. I think things like that will keep happening. People will use the new tools in new ways. If we just look on YouTube or something like this, how much of that will be using Sora-like AI-generated content, do you think, in the next five years?
是的,但艺术家和创作者们很担心。当他们看到Sora时,他们会像“天啊”。当摄影出现时也感到非常担心。是的。然后摄影成为一种新的艺术形式。人们拍照赚了很多钱。我觉得这样的事情会继续发生。人们会用新工具以新的方式。如果我们只是在YouTube或类似的地方看看,你认为在未来五年内多少内容会使用像Sora这样的AI生成的内容?
People talk about like how many jobs they're going to do in five years. And the framework that people have is what percentage of current jobs are just going to be totally replaced by some AI doing the job. The way I think about it is not what percent of jobs I will do, but what percent of tasks will AI do and over what time horizon. So if you think of all of the like five second tasks in the economy, five minute tasks, the five hour tasks, maybe even the five day tasks, how many of those can AI do? And I think that's a way more interesting, impactful, important question than how many jobs AI can do because it is a tool that will work at increasing levels of sophistication and over longer and longer time horizons for more and more tasks and let people operate at a higher level of abstraction.
人们常常谈论未来五年要做多少工作。人们所拥有的框架是当前工作岗位中有多少将被人工智能完全取代。我对此的看法不是我会做多少工作,而是人工智能会完成多少任务以及在多长时间内完成。所以如果你考虑经济中所有的五秒钟任务、五分钟任务、五小时任务,甚至可能是五天任务,有多少可以由人工智能完成呢?我认为这是一个比人工智能可以做多少工作更加有趣、有影响力、重要的问题,因为它是一个能够在不断提升复杂程度和不断延长时间范围内完成更多任务的工具,让人们能够在更高层次上进行操作。
So maybe people are way more efficient at the job they do. And at some point, that's not just a quantitative change, but it's a qualitative one too about the kinds of problems you can keep in your head. I think that for videos on YouTube, it'll be the same. Many videos, maybe most of them, will use AI tools in the production, but they'll still be fundamentally driven by a person thinking about it, putting it together, doing parts of it, sort of directing and running it. Yeah, that's so interesting. I mean, it's scary, but it's interesting to think about. I tend to believe that humans like to watch other humans or other humans like humans really care about other humans a lot. Yeah, if there's a cooler thing that's better than a human, humans care about that for like two days and then they go back to humans. That seems very deeply wired.
也许人们在工作岗位上更加高效。 在某种程度上,这不仅是定量上的变化,也是关于你能够在头脑中保留的问题种类的质的变化。我认为对于YouTube上的视频来说,也会是一样的。许多视频,也许大部分,会在制作过程中使用人工智能工具,但它们仍然基本上是由一个人来思考、拼凑、执行部分、指导和运行的。是的,这非常有趣。我是说,这让人害怕,但思考起来很有趣。我倾向于相信人类喜欢看其他人类或其他人类真的很在乎其他人类。是的,如果有比人类更酷更好的事物,人类可能会关注一两天,然后又回到关注人类。这似乎是非常深入人心的。
It's the whole chess thing. Yeah, but now let's everybody keep playing chess. And let's ignore the elephant in the room, that humans are really bad at chess relative to AI systems. We still run races, and cars are much faster. I mean, there's, like, a lot of examples. Yeah. It'll just be tooling, like in the Adobe suite type of way, where you can just make videos much easier and all that kind of stuff. Listen, I hate being in front of the camera. If I could figure out a way to not be in front of the camera, I would love it. Unfortunately, it'll take a while. Like, generating faces, it's getting there, but generating faces in video format is tricky when it's specific people versus generic people.
这就是整个国际象棋的事情。是的,但是现在让我们大家继续下棋吧。我们就先不提房间里的大象:相对于人工智能系统,人类在国际象棋上的表现真的很差。我们仍然举行赛跑,而汽车快得多。我的意思是,有很多例子。是的。我觉得就会像Adobe Suite那样的工具化,你可以更轻松地制作视频之类的。听着,我讨厌在镜头前。如果我能找到不用出现在镜头前的方法,我会很开心。不幸的是,这需要一段时间。生成面孔这件事正在进步,但在视频中生成特定人物而非普通人物的面孔时还是有些棘手。
Let me ask you about GPT-4. There's so many questions. First of all, also amazing. Looking back, it'll probably be this kind of historic, pivotal moment, with 3.5 and 4, with ChatGPT. Maybe 5 will be the pivotal moment, I don't know. Hard to say that looking forwards. We never know. That's the annoying thing about the future. It's hard to predict. But for me, looking back, GPT-4, ChatGPT, is pretty damn impressive, like, historically impressive. So allow me to ask, what's been the most impressive capabilities of GPT-4 to you, and GPT-4 Turbo?
让我问问你关于GPT-4的事。有太多问题了。首先,也很神奇。回头看,3.5和4,加上ChatGPT,可能会是那种历史性的转折点。也许5会成为关键的时刻,我不知道。很难说未来的事。我们永远不知道。这就是未来令人困扰的事。很难预测。就我而言,回过头来看,GPT-4、ChatGPT是相当令人印象深刻的,历史意义重大的。请允许我问一下,对你来说,GPT-4以及GPT-4 Turbo最令人印象深刻的能力是什么?
I think it kind of sucks. Typical human also, gotten used to an awesome thing. No, I think it is an amazing thing, but relative to where we need to get to, and where I believe we will get to, at the time of, like, GPT-3, people were like, this is amazing, this is, like, this marvel of technology. And it was. But now we have GPT-4, and look at GPT-3, and it's, like, unimaginably horrible. I expect that the delta between 5 and 4 will be the same as between 4 and 3, and I think it is our job to live a few years in the future and remember that the tools we have now are going to kind of suck looking backwards at them, and that's how we make sure the future is better.
我觉得它有点糟糕。典型的人类,对了不起的东西习以为常了。不,我觉得这是一件了不起的事情,但相对于我们需要达到的地方以及我相信我们将会达到的地方,在像GPT-3这样的时代,人们会觉得,这太不可思议了,这是技术的奇迹。确实如此。但现在我们有了GPT-4,再看看GPT-3,就会觉得前者难以想象的糟糕。我预计五和四之间的差距将与四和三之间的一样大,我认为我们的工作是活在未来的几年,并记住我们现在拥有的工具回顾起来会有点糟糕,这样我们才能确保未来会更好。
What are the most glorious ways in that GPT-4 sucks? Meaning what are the best things it can do? The best things it can do and the limits of those best things that allow you to say it sucks, therefore gives you an inspiration and hope for the future. You know, one that I've been using it for more recently is sort of like a brainstorming partner. Yep. And there's a glimmer of something amazing in there. I don't think it gets, you know, when people talk about it, what it does, they're like, that helps me code more productively, helps me write more faster and better. It helps me translate from this language to another. All these like amazing things, but there's something about the like kind of creative brainstorming partner. I need to come up with a name for this thing. I need to like think about this problem in a different way. I'm not sure what to do here that I think like gives a glimpse of something I hope to see more of.
GPT-4有哪些令人惊叹的地方?意思是它可以做到最好的事情是什么?它能做到的最好的事情,以及那些限制这些最好事情的极限,让你说它不好,因此让你对未来充满了灵感和希望。最近我一直在使用它作为一种头脑风暴的伙伴。是的,里面有一丝令人惊叹的东西。我不认为它做得到,你知道,当人们谈论它的时候,它做了什么,他们会说它让我编码更高效,让我写作更快更好。它帮助我翻译这种语言到另一种语言。所有这些令人惊讶的事情,但有一种创造性的头脑风暴伴侣,我需要给这件事起个名字。我需要用一种不同的方式思考这个问题。我不确定在这里该做什么,我认为这给了我希望看到更多的东西的一瞥。
One of the other things that you can see, like, a very small glimpse of, is when it can help on longer-horizon tasks. You know, break down something into multiple steps, maybe, like, execute some of those steps, search the internet, write code, whatever, put that together. When that works, which is not very often, it's, like, very magical. The iterative back and forth with a human, it works a lot for me. What do you mean? The iterative back and forth with a human, it can get right more often. When it can go do, like, a 10-step problem on its own, it doesn't work for that too often. Sometimes. At multiple layers of abstraction, or do you mean just sequential? Both, like, you know, to break it down and then do things at different layers of abstraction and put them together.
另一件你可能看到一点点的事情是当我可以帮助处理长期任务时,你知道,把一些事情分解成多个步骤,执行其中一些步骤,搜索互联网,写代码,无论如何,把这些事情整合在一起。当这些工作起作用时,虽然并不经常,但却像是非常神奇的。与人类之间的反复迭代对我来说很有效。你是什么意思呢?与人类之间的反复迭代可能会更加频繁。它可以自行解决类似于十步问题。哦,这对那种问题并不经常有效。有时候。是指添加多层次的抽象,还是只是按顺序进行?两者都有,你知道,将问题分解然后在不同层次的抽象中做事情,最后把它们整合在一起。
Look, I don't want to, like, downplay the accomplishment of GPT-4, but I don't want to overstate it either. At this point, that we are on an exponential curve, we will look back relatively soon at GPT-4 like we look back at GPT-3 now. That said, I mean, ChatGPT was a transition to where people, like, started to believe, there was an uptick of believing. Not internally at OpenAI, perhaps. There's believers here, but I mean... And in that sense, I do think it'll be a moment where a lot of the world went from not believing to believing. That was more about the ChatGPT interface than the... And by the interface and product, I also mean the post-training of the model and how we tune it to be helpful to you and how to use it, than the underlying model itself. How much of each of those two things are important: the underlying model, and the RLHF, or something of that nature, that tunes it to be more compelling to the human, more effective and productive for the human?
我不想贬低GPT-4的成就,但也不想过分夸大它。在我们正处于指数增长曲线上的这一点上,我们很快将回顾GPT-4,就像我们现在回顾GPT-3一样。话虽如此,我是说,ChatGPT是一个过渡,让人们开始相信,相信的人有了明显增多。或许在OpenAI内部不是这样,这里也许一直有相信的人,但我的意思是——在这种意义上,我认为这将是世界上许多人从不相信到相信的时刻。这更多地是关于ChatGPT界面而不是底层模型本身——所谓界面和产品,我也指模型的后期训练,即我们如何调整它以对你有帮助,以及如何使用它。这两者各有多重要:底层模型,以及把它调整得对人类更有吸引力、更有效、更有生产力的RLHF或类似机制?
I mean, they're both super important, but the RLHF, the post-training step, the little wrapper of things that, from a compute perspective, we do on top of the base model, even though it's a huge amount of work, that's really important. To say nothing of the product that we build around it. In some sense, we did have to do two things. We had to invent the underlying technology, and then we had to figure out how to make it into a product people would love, which is not just about the actual product work itself, but this whole other step of how you align it and make it useful. And how you make the scale work, where a lot of people can use it at the same time, all that kind of stuff. And that.
我的意思是,它们都非常重要,但是真正重要的是RLHF,即训练后的步骤,从计算的角度来看,它是我们在基础模型之上所做的一些小包装,尽管这需要大量的工作,但这真的很重要,更不用说我们围绕它构建的产品了。在某种意义上,我们确实需要做两件事。我们需要发明基础技术,然后找出如何将其制作成人们喜欢的产品,这不仅仅是产品本身的工作,还涉及到如何使其有用,如何使其能够承受许多人同时使用,等等。而且还包括这些。
But, you know, that was like a known, difficult thing. We knew we were going to have to scale it up. We had to go do two things that had, like, never been done before, that were both, like, I would say, quite significant achievements, and then a lot of things, like scaling it up, that other companies have had to do before. How does the context window of going from 8K to 128K tokens compare, from GPT-4 to GPT-4 Turbo? Most people don't need all the way to 128K most of the time. Although, you know, if we dream into the distant future, like, way distant future, we'll have, like, context length of several billion. You will feed in all of your information, all of your history over time, and it'll just get to know you better and better, and that'll be great.
但是,你知道,那就像是众所周知的事情,很困难。我们知道我们需要扩展规模。我们必须去做两件以前从未做过的事情,这两件事都是相当重大的成就。还有很多其他公司以前不得不做的扩展规模的事情。从从GPT 4到GPT 4 Turbo,从8K到128K令牌的上下文窗口比较起来如何?因此,大多数时候,大多数人并不需要到达128K。尽管,你知道,如果我们梦想到远方的未来,我们会有像远处未来那样的上下文长度,数十亿。你会输入所有你的信息,你的整个历史。它将越来越了解你。那会很棒。
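As a rough aside on what an 8K versus 128K token window means in practice, here is a toy back-of-the-envelope estimate. It uses the common (and only approximate) heuristic of about four characters per token for English text; the function names are illustrative assumptions, and a real application would count tokens with the model's actual tokenizer rather than this sketch.

```python
def rough_token_count(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English text.
    Real tokenizers vary; this is only a ballpark heuristic."""
    return max(1, len(text) // 4)

def fits_in_context(text: str, context_window: int = 128_000) -> bool:
    """Check whether the estimated token count fits a given context window."""
    return rough_token_count(text) <= context_window

# A ~500,000-character document (~125,000 estimated tokens):
doc = "word " * 100_000
print(fits_in_context(doc, 8_000))    # False: far beyond an 8K window
print(fits_in_context(doc, 128_000))  # True: fits within 128K
```

By this heuristic, an 8K window holds roughly 32,000 characters of English text, while 128K holds roughly half a million, which is why "putting in entire books" only becomes plausible at the larger window sizes discussed here.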
For now, the way people use these models, they're not doing that. And, you know, people sometimes post in a paper or, you know, a significant fraction of a code repository, whatever. But most usage of the models is not using the long context most of the time. I like that this is your "I have a dream" speech. One day you'll be judged by the full context of your character, or of your whole lifetime. That's interesting. So, like, that's part of the expansion that you're hoping for, a greater and greater context. I saw this internet clip once. I'm going to get the numbers wrong, but it was, like, Bill Gates talking about the amount of memory on some early computer, maybe 64K, maybe 640K, something like that. And most of it was used for the screen buffer. And he just couldn't seem to internalize it, just couldn't imagine that the world would eventually need gigabytes of memory in a computer, or terabytes of memory in a computer. And you always do just need to, like, follow the exponential of technology. We will find out how to use better technology.
目前,人们使用这些模型的方式并没有做到这一点。而且,你知道,有时人们发布在论文中,或者在代码库的一个显著部分。但大多数情况下,模型的使用并不是通过长篇上下文。我喜欢这就像是你的“我有一个梦想”的演讲。有一天,你将被评判整个性格或一生的全部上下文。这很有趣。所以,你希望扩展的一部分是更广泛的上下文。我曾经看过一个互联网视频片段。我可能会记错数字,但像是比尔·盖茨谈论一台早期计算机上的内存数量,也许是64,也许是640K,类似于这样。而其中大部分用于屏幕缓冲区。而他简直无法真诚地想象到世界最终会需要计算机中的千兆字节内存或者万亿字节内存。而你总是需要跟随技术的指数增长。我们将会找出如何更好地利用技术。
So I can't really imagine what it's like right now for context lengths to go out to the billions someday. And they might not literally go there, but effectively it'll feel like that. But I know we'll use it and really not want to go back once we have it. Yeah, even saying billions ten years from now might seem dumb, because it'll be, like, trillions upon trillions. Sure. There will be some kind of breakthrough that will effectively feel like infinite context. But even 128K, I have to be honest, I haven't pushed it to that degree, maybe putting in entire books or, like, parts of books and so on, papers.
所以我现在真的无法想象上下文长度有一天达到数十亿会是什么样子。它们可能不会真的达到那个数字,但实际上会感觉如此。但我知道一旦我们拥有它,就真的不想回头了。是的,甚至十年后说数十亿可能都显得愚蠢,因为到时候会是万亿又万亿。当然。然后会出现某种突破,让人实际上感觉拥有无限的上下文。但即使是128K,说实话,我还没有把它用到那种程度,比如放进整本书或书的部分章节、论文等等。
What are some interesting use cases of GPT-4 that you've seen? The thing that I find most interesting is not any particular use case that we could talk about, but it's people who kind of, like, this is mostly younger people, but people who use it as, like, their default start for any kind of knowledge-work task. And it's the fact that it can do a lot of things reasonably well. You can use GPT-4V, you can use it to help you write code, you can use it to help you do search, you can use it to, like, edit a paper. The most interesting thing to me is the people who just use it as the start of their workflow. I do as well, for many things. Like, I use it as a reading partner for reading books. It helps me think, helps me think through ideas, especially when the books are classics, so it's really well written about. And I find it often to be significantly better than even, like, Wikipedia on well-covered topics. It's somehow more balanced and more nuanced. Or maybe it's me, but it inspires me to think deeper than a Wikipedia article does.
你见过GPT-4有哪些有趣的用例?我觉得最有趣的并不是我们可以谈论的任何一个特定案例,而是那些将其作为任何知识性工作任务的默认起点的人。大多是年轻人,但他们可以用它做很多事情,并且效果相当好。你可以用GPT-4V,可以用它帮助你写代码,帮助你搜索,也可以用它来编辑一篇论文。对我来说最有趣的是那些把它作为工作流程起点的人。我也经常这样做。比如我将它作为阅读伙伴来阅读书籍。它帮助我思考,帮助我思考想法,特别是当书籍是经典作品时,相关讨论很多。实际上我经常发现它在讨论充分的话题上甚至比维基百科更好。它更加平衡和细致。也许是我自己的原因,但它激发了我比维基百科文章更深入地思考。
I'm not exactly sure what that is. You mentioned, like, this collaboration. I'm not sure where the magic is, if it's in here, or if it's in there, or if it's somewhere in between. I'm not sure. But one of the things that concerns me for knowledge tasks, when I start with GPT, is I'll usually have to do fact-checking after, like, check that it didn't come up with fake stuff. How do you figure that out, that GPT can come up with fake stuff that sounds really convincing? So how do you ground it in truth? That's obviously an area of intense interest for us. I think it's going to get a lot better with upcoming versions, but we'll have to continue to work on it, and we're not going to have it all solved this year.
我并不确切知道那是什么。你提到了这种合作。我不确定这种魔力究竟在哪里。是在这里还是在那里,还是在中间某个地方。我不确定。但是我在开始使用GPT进行知识任务时,关心的一件事是我通常需要在之后进行事实核实。例如检查它没有编造虚假的信息。你如何解决这个问题?GPT可能会编造听起来非常具有说服力的虚假信息。那么你如何确定它的真实性?这显然是我们极大地关注的领域。我认为随着即将推出的版本,这方面会有很大改进,但我们仍需努力,并不是今年就能全部解决。
Well, the scary thing is, I guess, as it gets better, you start not doing the fact-checking more and more, right? I'm of two minds about that. I think people are much more sophisticated users of technology than we often give them credit for. And people seem to really understand that GPT, any of these models, hallucinates some of the time, and if it's mission-critical, you've got to check it. But journalists don't seem to understand that. I've seen journalists half-assedly just using GPT-4. Of the long list of things I'd like to dunk on journalists for, this is not my top criticism of them.
可怕的是,我想随着它变得越来越好,你会越来越不去核查事实,对吧?我对此持两种不同看法。我认为人们在使用技术方面比我们常常认为的要更为成熟。人们似乎真的明白,GPT,或者任何这些模型,有时会产生幻觉,如果是任务关键的话,你必须核查一下。但记者似乎不明白这一点。我看到记者敷衍了事地直接使用GPT。在我想批评记者的一长串事情中,这并不是我对他们的主要批评。
Well, I think the bigger criticism is perhaps the pressures and the incentives of being a journalist: you have to work really quickly, and this is a shortcut. I would love our society to incentivize, like, journalistic efforts that take days and weeks, and reward great, in-depth journalism. Also journalism that presents stuff in a balanced way, where it, like, celebrates people while criticizing them, even though the criticism is the thing that gets clicks, and making shit up also gets clicks, and headlines that mischaracterize completely. I'm sure you got a lot of people dunking on, you know, all that drama. It probably got a lot of clicks. Probably did. And that's a bigger problem about human civilization I'd love to see solved.
我认为更大的批评可能是记者面临的压力和激励,你必须工作得非常迅速,这是你的捷径。我希望我们的社会可以激励那些花费数天甚至数周进行深入报道的记者。同时,我希望看到那些以平衡的方式呈现信息的新闻报道,既能够赞美人们,也能够批评他们,即使批评是吸引点击量的关键,编造消息也能够吸引点击量,而且那些完全歪曲事实的标题也能够吸引点击量。我相信你一定被所有这些闹剧搞得头疼。我可能得到了很多点击量。这是人类社会的一个更大问题,我希望能够得到解决。
This is where we celebrate a bit more. You've given ChatGPT the ability to have memories of previous conversations, and also the ability to turn off memory. I wish I could do that sometimes. Just turn it on and off, depending. I guess sometimes alcohol can do that, but not optimally, I suppose. What have you seen through that, playing around with that idea of remembering conversations and not? We're very early in our explorations here, but I think what people want, or at least what I want myself, is a model that gets to know me and gets more useful to me over time. This is an early exploration. I think there's a lot of other things to do, but that's where we'd like to head.
这是我们庆祝得更多的地方。你给了ChatGPT拥有记忆的能力,可以记住之前的对话,也可以关闭记忆。有时候我也希望能做到这一点。根据需要可以打开或关闭。我想有时候酒精可能可以做到,但效果不如人意。你通过尝试记忆对话这个想法,看到了什么?我们在这方面的探索还很早,但我想人们想要的,至少我自己想要的,是一个能够了解我并随着时间变得对我更有用的模型。这是一个早期的探索。我认为还有很多其他事情要做,但这是我们想要前进的方向。
You'd like to use a model and over the course of your life or use a system, I mean many models and over the course of your life it gets better and better. How hard is that problem? Because right now it's more like remembering little factoids and preferences and so on. What about remembering? Don't you want GPT to remember all the shit you went through in November and all the drama and then you could. Because right now you're clearly blocking it out a little bit. It's not just that I want it to remember that. I want it to integrate the lessons of that. And remind me in the future what to do differently or what to watch out for. And we all gain from experience over the course of our lives, varying degrees and I'd like my AI agent to gain with that experience too.
你希望使用一个模型,随着你的生活或使用系统的过程,我意思是使用许多模型,随着你的生活,它变得越来越好。这个问题有多难?因为现在更像是记住一些小事实和偏好等。那么记忆呢?你不想让GPT记住你在11月经历的所有烦心事和drama吗?因为现在你显然有点屏蔽它。我不仅希望它记住那些,我希望它整合那些经验的教训。并在未来提醒我要做些什么不同或要注意些什么。我们在生活中经验越丰富,程度各不相同,我希望我的AI助理也能从这些经验中获益。
So if we go back and let ourselves imagine that trillions and trillions of context length, I can put every conversation I've ever had with anybody in my life in there. If I can have all of my emails, all of my input and output, in the context window every time I ask a question, that'd be pretty cool, I think. Yeah, I think that would be very cool. People sometimes will hear that and be concerned about privacy. What do you think about that aspect of it, the more effective the AI becomes at really integrating all the experiences and all the data that happened to you to give you advice? I think the right answer there is just user choice. Anything I want stricken from the record from my AI agent, I'll be able to take out. If I don't want it to remember anything, I want that too. You and I may have different opinions about where on that privacy-utility trade-off for our own AI we want to be, which is totally fine. I think the answer is just really easy user choice. But there should be some high level of transparency from a company about the user choice. Sometimes companies in the past have been kind of shady about it. It's kind of presumed that we're collecting all your data and we're using it for a good reason, for advertisement and so on, but there's not transparency about the details of that. That's totally true.
所以如果我们回头想象一下无数无数的上下文长度,我可以把我一生中与任何人进行过的每一次对话都放进去。如果我可以把所有的电子邮件都放进去,把所有的输入输出放进上下文窗口,每次我问一个问题,那会很棒,我觉得。是的,我觉得那会非常酷。有时候人们会听到这个并对隐私表示担忧。你对此有什么看法?随着人工智能变得更加有效,真正整合了全部发生在你身上的经验和数据,给你建议,我认为正确的答案就是用户选择。我可以把任何我想从我的人工智能代理那里删除的东西都删掉。如果我不想让它记住任何事情,我也想要。关于我们自己的人工智能所需要的隐私效用权衡,你和我可能有不同的看法,这完全没问题。我认为答案就是非常简单的用户选择。但企业应该对用户选择提供高度透明度。有时过去的企业在这方面有点不光明。人们普遍认为我们正在收集你的所有数据,并且我们在做广告等等有很好的原因,但关于这些细节没有透明度。这完全正确。
You mentioned earlier that I'm blocking out the November stuff. I'm teasing you. Well, I think it was a very traumatic thing and it did immobilize me for a long period of time. Definitely the hardest work that I've had to do was just to keep working that period, because I had to try to come back and put the pieces together while I was just in shock and pain. Nobody really cares about that. I mean, the team gave me a pass and I was not working at my normal level. But there was a period where it was really hard to have to do both. But I woke up one morning and I was like, this was a horrible thing that happened to me. I think I could just feel like a victim forever. Or I can say this is the most important work I'll ever touch in my life and I need to get back to it. And it doesn't mean that I've repressed it, because sometimes I wake up in the middle of the night thinking about it. But I do feel like an obligation to keep moving forward. Well, that's beautifully said, but there could be some lingering stuff in there. Like what I would be concerned about is that trust thing you mentioned, that being paranoid about people as opposed to just trusting everybody or most people, like using your gut. It's a tricky dance. For sure. I mean, because I've seen in my part-time explorations, I've been diving deeply into the Zelensky administration, the Putin administration and the dynamics there in wartime in a very highly stressful environment. And what happens is distrust, and you isolate yourself, and you start to not see the world clearly. And that's a concern. That's a human concern. You seem to have taken it in stride and kind of learned the good lessons and felt the love and let the love energize you, which is great. But it still can linger in there. There are just some questions I would love to ask, your intuition about what GPT is able to do and not.
你之前提到我在屏蔽十一月的事情。我在开玩笑。我觉得那是一件非常令人创伤的事情,它让我长时间无法动弹。那绝对是我所做过的最艰难的工作,只是在那段时间坚持工作,因为我不得不在震惊和痛苦中努力尝试重新拼凑一切。没人真的在乎那些。我的团队对我很宽容,我也没能以平常的水平工作。但有一个时期我感觉很难同时做这两件事。但有一天早上我醒来想,这对我来说是一件可怕的事情。我可以一直觉得自己是受害者。或者我可以说这是我一生中最重要的工作,我需要回到这个状态。这并不意味着我压抑了它,因为有时我会半夜醒来想起它。但我确实感觉有义务继续向前走。这说得很美好,但也许有一些残留在那里的东西。比如,我担心的是你提到的信任问题,对人们的偏执,而不是只是相信每个人或大多数人,要依靠直觉,这是一种棘手的舞蹈。毫无疑问。因为在我的业余探索中,我深入了解了泽连斯基政府、普京政府以及战时的动态,在一个高度紧张的环境中。会发生的是不信任和自我孤立,然后你开始看不清世界。这是一个问题,是一个人类的问题。你似乎已经泰然处之,学到了宝贵的经验,感受到了爱,并让爱激励你。这很棒。但依然可能残留一些东西。我有一些问题想问,关于你对GPT能做什么和不能做什么的直觉。
So it's allocating approximately the same model compute for each token it generates. Is there room there, in this kind of approach, for slower thinking, sequential thinking? I think there will be a new paradigm for that kind of thinking. Will it be similar architecturally to what we're seeing now with LLMs? Is it a layer on top of LLMs? I can imagine many ways to implement that. I think that's less important than the question you were getting at, which is, do we need a way to do a slower kind of thinking, where the answer doesn't have to get... you know, I guess spiritually you could say that you want an AI to be able to think harder about a harder problem and answer more quickly about an easier problem. And I think that will be important. Is that a human thought that we're just projecting onto it, that it should be able to think hard? Is that the wrong intuition? I suspect that's a reasonable intuition. Interesting.
因此,它为每个令牌分配大致相同的模型计算并生成。在这种方法中,是否有空间适用于更慢的思考,顺序思考?我认为会有一种新的思维范式。这种思维模式在架构上会类似于我们现在看到的LLM吗?它是LLM的上层吗?我可以想象许多实现方式。我认为这不如你所问的问题重要,即我们是否需要一种较慢思考的方式,其中答案不必像,在精神上你可能会说,你希望AI能够更深入地思考更艰难的问题,并更快地回答更容易的问题。我认为这是重要的。这像是我们刚刚提出的人类思维,你应该能够努力思考吗?这是错误的直觉吗?我怀疑这是一个合理的直觉。有趣。
So it's not possible, once GPT gets to like GPT-7, that we'll just instantaneously be able to see, you know, here's the proof of Fermat's Last Theorem? It seems to me like you want to be able to allocate more compute to harder problems. Like it seems to me that if you ask a system, prove Fermat's Last Theorem, versus, what's today's date, unless it already knew and had memorized the answer to the proof, assuming it's got to go figure that out, it seems like that will take more compute. But can it look like basically an LLM talking to itself, that kind of thing? Maybe. I mean, there's a lot of things that you could imagine working. What the right or the best way to do that will be, we don't know.
所以一旦GPT达到像GPT-7这样的水平,我们是不是就可以立即看到,你知道,这就是费马大定理的证明?对我来说,好像你想要能够将更多的计算资源分配给更难的问题。就我看来,如果你问一个系统"证明费马大定理"和"今天是几号",除非它已经知道并记住了证明的答案,假设它必须去找出答案,那似乎会消耗更多的计算资源。但它看起来会像是一个LLM在自言自语那种形式吗?也许。我的意思是,你可以想象很多可行的方式。那么,什么才是正确或最好的做法呢?我们不知道。
This does make me think of the mysterious lore behind Q*. What's this mysterious Q* project? Is it also in the same nuclear facility? There is no nuclear facility. That's what a person with a secret nuclear facility always says. I would love to have a secret nuclear facility. There isn't one. All right. Maybe someday. Someday. All right. One can dream. OpenAI is not a good company at keeping secrets. It would be nice, you know. We've been plagued by a lot of leaks and it would be nice if we were able to have something like that. Can you speak to what Q* is? We are not ready to talk about that. See, but an answer like that means there's something to talk about. It's very mysterious.
这让我想起了Q*背后神秘的传说。这个神秘的Q*项目究竟是什么?它也在同一个核设施里吗?并没有核设施。那是拥有秘密核设施的人常说的话。我真想拥有一个秘密的核设施。可惜没有。好吧,也许哪天会有吧。有一天吧。好吧,人总是可以幻想的。OpenAI并不擅长保守秘密。我们一直受到很多泄露的困扰,如果我们能保守这样的秘密,那就好了。你能谈谈Q*是什么吗?我们还没有准备好讨论这个。可见,像这样的回答就说明有事情可以谈论。非常神秘。
I mean, we work on all kinds of research. We have said for a while that we think better reasoning in these systems is an important direction that we'd like to pursue. We haven't cracked the code yet. We're very interested in it. Are there going to be moments, Q* or otherwise, where there are leaps similar to ChatGPT, where you're like... That's a good question. What do I think about that? It's interesting to me. It all feels pretty continuous. All right. This is kind of a theme that you're saying, that there's a gradual... You're basically gradually going up an exponential slope. But from an outsider perspective, for me, just watching it, it does feel like there are leaps.
我的意思是,我们做各种各样的研究。我们已经说过一段时间了,我们认为让这些系统具备更好的推理能力是一个我们想要追求的重要方向。我们还没有破解这个问题。我们对此非常感兴趣。会不会有某些时刻,Q*或者其他项目,出现像ChatGPT那样的飞跃,你会说……这是一个好问题。我对此怎么看?这对我来说很有趣。一切都感觉很连续。好吧。你说的主题是渐进的。你基本上是在逐渐爬上一个指数斜坡,但对于我这样一个局外人来说,只是观察,感觉确实有飞跃。
But to you, there isn't. I do wonder if we should have... So part of the reason that we deploy the way we do is that we think, we call it iterative deployment. Rather than go build in secret until we got all the way to GPT-5, we decided to talk about GPT-1, 2, 3, and 4. And part of the reason there is, I think AI and surprise don't go together. And also the world, people, institutions, whatever you want to call it, need time to adapt and think about these things. And I think one of the best things that OpenAI has done is this strategy. And we get the world to pay attention to the progress, to take AGI seriously, to think about what systems and structures and governance we want in place before we're under the gun and have to make a rushed decision.
但对你来说,可能并不是这样。我确实在想我们是否应该这样做,所以我们部署的方式是迭代部署。我们认为不要等到我们开发到GPT五再公开,而是决定先讨论GPT一、二、三和四。部分原因是我认为人工智能和意外并不相容。另外,世界、人们、机构,不管你想叫它什么,都需要时间来适应和思考这些事情。我认为OpenAI做的最好的事情之一就是采取了这种策略。我们让世界关注进展,认真对待人工智能通用智能,思考我们希望在这之前建立什么样的系统、结构和治理机制。这样我们就不会在被迫做出重要决策时手忙脚乱。
I think that's really good. But the fact that people like you and others say you still feel like there are these leaps makes me think that maybe we should be doing our releasing even more iteratively. I don't know what that would mean. I don't have an answer ready to go. But our goal is not to have shock updates to the world. The opposite. Yeah, for sure. More iterative would be amazing. I think that's just beautiful for everybody. But that's what we're trying to do. That's our stated strategy. But I think we're somehow missing the mark.
我认为这非常好。但是如果像你和其他人说的那样,你们仍然觉得似乎还存在一些不足,这让我觉得也许我们应该更加迭代地发布我们的产品。我不知道这意味着什么。我没有准备好的答案。但我们的目标不是给世界带来震惊的更新。相反的。是的,当然。更加迭代将是很棒的。我认为这对每个人都是美好的。但这是我们正在努力做的事情。这就是我们的策略状态。但我觉得我们有点偏离了目标。
So maybe we should think about releasing GPT-5 in a different way, or something like that. Yeah, 4.71, 4.72. But people tend to like to celebrate. People celebrate birthdays. I don't know if you know humans, but they kind of have these milestones. I do know some humans. People do like milestones. I totally get that. I think we like milestones too. It's fun to declare victory on this one and go start the next thing. But yeah, I feel like we're somehow getting this a little bit wrong.
也许我们应该考虑以不同的方式发布GPT 5之类的东西。是的,4.71,4.72。但人们倾向于庆祝。人们庆祝生日。我不知道你是否了解人类,但他们有这些里程碑。我也了解一些人类。人们喜欢里程碑。我完全理解这一点。我觉得我们也喜欢里程碑。宣布这个胜利并开始下一件事情确实很有趣。但是,我觉得我们在某种程度上搞错了。
So when is GPT-5 coming out, again? I don't know. That's an honest answer. Oh, that's the honest answer. Blink twice if it's this year. I also... We will release an amazing new model this year. I don't know what we'll call it. So that goes to the question of, what's the way we release this thing? We'll release, over the coming months, many different things. I think they'll be very cool. I think before we talk about a GPT-5-like model, called that, or not called that, or a little bit worse or a little bit better than what you'd expect from a GPT-5, we have a lot of other important things to release first. I don't know what to expect from GPT-5. You're making me nervous and excited.
那GPT 5是什么时候发布?我不知道。那是一个诚实的回答。哦,那就是诚实的回答。如果是今年发布,那就眨眨眼。我也希望今年发布一个令人惊叹的新模型。我不知道我们会叫它什么。关于我们如何发布这个东西的问题,我们会在未来几个月内发布许多不同的东西。我觉得会很酷。在我们讨论GPT 5这样的模型之前,我觉得还有其他重要的事情要发布。我不知道从GPT 5会有什么期待。你让我紧张又兴奋。
What are some of the biggest challenges and bottlenecks to overcome, for whatever it ends up being called, but let's call it GPT-5? Just interesting to ask, is it on the compute side? Is it on the technical side? It's always all of these. What's the one big unlock? Is it a bigger computer? Is it a new secret? Is it something else? It's all of these things together. The thing that OpenAI, I think, does really well, and this is actually an original Ilya quote that I'm going to butcher, but it's something like, we multiply 200 medium-sized things together into one giant thing. So there's this distributed constant innovation happening? Yeah.
无论最终被称为什么,比如我们暂且称之为GPT-5,都有哪些最大的挑战和瓶颈需要克服呢?这是一个有趣的问题,是在计算方面,还是在技术方面?总是所有这些。最大的解锁点是什么?是更强大的计算机?是一个新秘密?还是其他原因?其实是所有这些因素一起作用的。我认为OpenAI做得非常好的一点,这其实是Ilya的一句原话,我可能会转述得不准确,大概意思是我们将200个中等大小的东西相乘得到一个巨大的东西。所以现在正在发生着持续的分布式创新?是的。
So even on the technical side, especially on the technical side. So even detailed approaches, detailed aspects of everything. How does that work with different, disparate teams and so on? How do the medium-sized things become one whole giant transformer? How does this happen? There are a few people who have to think about putting the whole thing together, but a lot of people try to keep most of the picture in their head. Oh, like the individual teams, individual contributors try to keep the big picture. At a high level, yeah. You don't know exactly how every piece works, of course, but one thing I generally believe is that it's sometimes useful to zoom out and look at the entire map.
即使在技术方面,也要特别关注技术方面。就像详细的方法,详细的方面,你详细讨论每一个方面。不同的团队如何协作?中等规模的项目如何成为一个整体巨大的变压器?这是如何实现的?有一些人必须考虑将整个事情组合起来,但很多人尽力将整个画面保持在脑海中。哦,就像个别团队,个别贡献者努力保持整体的画面。是的,你当然不知道每个部分如何工作,但我一般相信,有时放大视野看待整个地图是有用的。
And I think this is true for a technical problem. I think this is true for innovating in business. But things come together in surprising ways, and having an understanding of that whole picture, even if most of the time you're operating in the weeds in one area, pays off with surprising insights. In fact, one of the things that I used to have, and I think was super valuable, was I used to have a good map of all of, or most of, the frontiers in the tech industry. And I could sometimes see these connections or new things that were possible that, if I were only deep in one area, I wouldn't be able to have the idea for, because I wouldn't have all the data. And I don't really have that much anymore. I'm super deep now. But I know that it's a valuable thing. You're not the man you used to be. Very different job now than what I used to have.
我认为这种情况在解决技术问题时是真实的。我认为这种情况也适用于商业创新。但事情以令人惊讶的方式相互关联,了解整个情况,即使大部分时间你只专注在一个领域,也会带来令人惊喜的深刻见解。实际上,我过去拥有的一点东西,我认为非常有价值,那就是我曾经对科技行业的所有前沿或大部分前沿都有一个良好的了解。有时我能看到这些联系或者新的可能性,如果我只深入一个领域,我就不会想出这个主意,因为我没有所有的数据。现在我并没有这么多了。我现在很专注,但我知道这是一件有价值的事情。你不再是过去的那个人了。我现在的工作和过去完全不同。
Speaking of zooming out, let's zoom out to another cheeky thing, but profound thing perhaps, that you said. You tweeted about needing $7 trillion. I did not tweet about that. I never said, like, we're raising $7 trillion. Oh, that's somebody else. Oh, but you said, fuck it, maybe eight, I think. Okay. I meme once there's like misinformation out in the world. Oh, you meme? But sort of misinformation may have a foundation of insight there. Look, I think compute is going to be the currency of the future. I think it will be maybe the most precious commodity in the world. And I think we should be investing heavily to make a lot more compute. Compute is, I think, going to be an unusual market. People think about the market for chips for mobile phones or something like that. And you can say, okay, there are 8 billion people in the world, maybe 7 billion of them have phones, maybe it's 6 billion, let's say. They upgrade every two years, so the market per year is 3 billion systems-on-chip for smartphones.
说到放大,让我们再放大到另一个有趣且或许深刻的事情。你在推特上提到需要7万亿美元。我没发过那样的推特,我从来没有说我们要筹集7万亿美元。哦,那是别人说的。但你说过"管它的,也许8万亿",我记得。好吧。我的意思是,一旦世界上出现了错误信息,我会拿它玩梗。哦,你玩梗?但某种程度上的误传可能有一定的见解基础。看,我认为计算力将成为未来的货币。我认为它可能会成为世界上最宝贵的商品。我认为我们应该大力投资,以生产更多计算力。我认为计算力将是一个不寻常的市场。人们会考虑手机芯片之类的市场。你可以说,世界上有80亿人,也许70亿人有手机,也许是60亿人,假设他们每两年升级一次,那么每年的市场规模是30亿个智能手机系统芯片。
And if you make 30 billion, you will not sell 10 times as many phones, because most people have one phone. But compute is different. Intelligence is going to be more like energy or something like that, where the only thing that I think makes sense to talk about is, at price X, the world will use this much compute, and at price Y, the world will use this much compute. Because if it's really cheap, I'll have it reading my email all day, giving me suggestions about what I maybe should think about or work on, and trying to cure cancer. And if it's really expensive, maybe I'll only use it, we'll only use it, to try to cure cancer. So I think the world is going to want a tremendous amount of compute. And there's a lot of parts of that that are hard. Energy is the hardest part.
如果你赚了300亿,你也不会卖出10倍的手机,因为大多数人只有一个手机。所以计算是不同的。就像智能将会更像能源或其他什么东西一样,唯一我觉得讲得通的是在价格X处,世界会使用这么多计算力,在价格Y处,世界会使用这么多计算力。因为如果它真的很便宜,我可能会让它一整天帮我读邮件,给我一些建议,让我想一想、工作并尝试治疗癌症。如果它真的很昂贵,也许我只会用它来尝试治疗癌症。所以我认为世界将会需要大量的计算力。其中有很多部分是困难的。能源是最困难的部分。
Building data centers is also hard, the supply chain is hard, and of course, fabricating enough chips is hard. But this seems to me where things are going. Like, we're going to want an amount of compute that's just hard to reason about right now. How do you solve the energy puzzle? Nuclear. That's what I believe. Fusion? That's what I believe. Yeah. Who's going to solve that? I think Helion is doing the best work, but I'm happy there's like a race for fusion right now. Nuclear fission, I think, is also quite amazing, and I hope as a world we can re-embrace that.
建造数据中心也很困难,供应链也很困难。当然,制造足够的芯片也很困难。但这对我来说似乎是事情的发展方向。我们将需要一种现在难以想象的计算量。你如何解决能源难题?核能。这是我相信的。聚变?这是我相信的。是的。谁会解决这个问题?我认为Helion正在做最好的工作,但我很高兴现在有一场聚变竞赛。我认为核裂变也是相当了不起的,我希望作为一个世界,我们能够重新接受它。
It's really sad to me how the history of that went, and I hope we get back to it in a meaningful way. So is part of the puzzle nuclear fission, like nuclear reactors as we currently have them? A lot of people are terrified of it. Well, I think we should make new reactors. I think it's just a shame that that industry kind of ground to a halt. And just mass hysteria is how you explain the halt? Yeah. I don't know if you know humans, but that's one of the dangers.
我感到很难过的是,那段历史是如何发展的,希望我们能以一种有意义的方式重新回到那个方向。那么拼图的一部分是核裂变吗,比如我们现在拥有的核反应堆?很多人对它感到恐惧。嗯,我认为我们应该建造新的反应堆。我觉得很遗憾那个行业陷入了停滞。而大规模恐慌就是你对这种停滞的解释吗?是的。我不知道你是否了解人类,但这是其中一种危险。
That's one of the security threats for nuclear fission, is that humans seem to be really afraid of it. And that's something we have to incorporate into the calculus of it. So we have to kind of win people over and show how safe it is. I worry about that for AI. I think some things are going to go theatrically wrong with AI. I don't know what the percent chance is that I eventually get shot, but it's not zero. Oh, like we want to stop this. Maybe. How do you decrease the theatrical nature of it? You know, I'm already starting to hear rumblings, because I do talk to people on both sides of the political spectrum here.
核裂变面临的安全威胁之一是人类似乎真的很害怕它。这是我们必须考虑到的因素。因此,我们必须努力说服人们,并展示它有多安全。我对人工智能感到担忧。我认为有些事情可能会在人工智能方面出现错综复杂的问题。我不知道我最终是否会被击中的几率是多少,但不是零。哦,就像我们想要阻止这种事情发生一样。也许。为什么要减少其中的戏剧性呢?你知道,我已经开始听到了一些隐隐约约的声音,因为我确实跟这里两极政治方面的人都有交流。
Hear rumblings about where it's going to be politicized. AI is going to be politicized, and it really worries me, because then it's like maybe the right is against AI and the left is for AI because it's going to help the people, or whatever the narrative and formulation is. That really worries me. And then the theatrical nature of it can be leveraged fully. How do you fight that? I think it will get caught up in left versus right wars. I don't know exactly what that's going to look like, but I think that's just what happens with anything of consequence, unfortunately.
有些东西会被政治化。人工智能将被政治化。这让我真的很担心,因为也许右派反对人工智能,左派支持人工智能,因为它将帮助人们或其他原因。无论是什么叙事和形式,这真的让我担忧。然后,这种戏剧性的特性可以被充分利用。你如何去对抗呢?我认为它将陷入左右派之争。我不确定具体会是什么样子,但我认为这就是与任何有意义的事情发生的事情,不幸的是。
What I meant more about theatrical risks is, AI is going to have, I believe, tremendously more good consequences than bad ones, but it is going to have bad ones. And there will be some bad ones that are bad but not theatrical. You know, a lot more people have died of air pollution than from nuclear reactors, for example. But most people worry more about living next to a nuclear reactor than a coal plant. Something about the way we're wired is that, although there are many different kinds of risks we have to confront, the ones that make a good climax scene of a movie carry much more weight with us than the ones that are very bad over a long period of time but on a slow burn.
我所指的戏剧性风险更多指的是人工智能将会带来,我相信,远比坏影响更多的好后果,但它确实会有坏影响。而且有一些坏影响是坏的,但并非戏剧性的。你知道,比如说很多人死于空气污染,而不是核反应堆,但大多数人更担心住在核反应堆旁边而不是煤电厂。但我们的思维方式似乎是,尽管有许多不同类型的风险我们必须面对,但那些会成为电影高潮场景的风险对我们而言更重要,而那些长期非常糟糕但是缓慢发酵的风险却在我们心中权重较轻。
Well, that's why truth matters. And hopefully AI can help us see the truth of things, to have balance, to understand what are the actual risks or the actual dangers of things in the world. What are the pros and cons of the competition in this space and competing with Google, meta, XAI, and others? I think I had a pretty like straightforward answer to this. Maybe I can think of more nuance later, but the pros seem obvious, which is that we get better products and more innovation faster and cheaper and all the reasons competition is good.
这就是为什么真相很重要。希望人工智能能帮助我们看清事物的真相,保持平衡,了解世界上事物的实际风险或危险。在这个领域与谷歌、meta、XAI等竞争,有什么优劣势?我觉得我有一个很直接的答案。也许稍后我会想到更多细微之处,但优势似乎显而易见,即我们可以更快更便宜地获得更好的产品和更多创新,所有竞争良好的原因。
And the con is that, I think if we're not careful, it could lead to an increase in sort of an arms race that I'm nervous about. Do you feel the pressure of that arms race, like in some negative way? Definitely in some ways, for sure. We spend a lot of time talking about the need to prioritize safety. I've said for a long time that if you think of a quadrant of short timelines to the start of AGI versus long timelines, and then a slow takeoff or a fast takeoff, I think short timelines, slow takeoff is the safest quadrant and the one I'd most like us to be in. But I do want to make sure we get that slow takeoff.
然而,我认为如果我们不小心,这可能会加剧一场让我感到紧张的军备竞赛。你是否感受到这场军备竞赛的压力,比如某些负面影响?在某些方面肯定是的。我们花了很多时间讨论需要优先考虑安全的问题。我已经说了很长时间,如果你把它划分成一个象限图:到AGI起点的时间线是短还是长,然后起飞是缓慢还是快速,我认为短时间线、慢起飞是最安全的象限,也是我最希望我们所处的状态。但我确实想要确保我们得到缓慢的起飞。
Part of the problem I have with this kind of slight beef with Elon is that silos are created, as opposed to collaboration on the safety aspect of all of this. It tends to go into silos, and closed source, perhaps, in the models. Elon says, at least, that he cares a great deal about AI safety and is really worried about it, and I assume that he's not going to race unsafely. Yeah, but collaboration here, I think, is really beneficial for everybody on that front. Not really the thing he's most known for.
我对这种对埃隆有些微小分歧的问题的一部分不满意在于,他们创造了自己的垂直隔离,并且应该是关于所有这些安全方面的合作。这往往会变成垂直隔离和封闭开源的模式。埃隆至少表示他非常关心人工智能安全,并且真的很担心这个问题。我认为他不会去以不安全的方式冒险。是的,但我认为在这里的合作对每个人都非常有益。这并不是他最为人熟知的事情。
Well, he is known for caring about humanity, and humanity benefits from collaboration. And so there's always tension in incentives and motivations. And in the end, I do hope humanity prevails. Someone just reminded me the other day about how the day that he surpassed Jeff Bezos as the richest person in the world, he tweeted a silver medal at Jeff Bezos. I hope we have less stuff like that as people start to work on this. I agree. I agree. I think Elon is a friend and he's a beautiful human being and one of the most important humans ever. That stuff is not good. The amazing stuff about Elon is amazing and I super respect him. I think we need him. All of us should be rooting for him and need him to step up as a leader through this next phase. Yeah, I hope he can have one without the other. And sometimes humans are flawed and complicated and all that kind of stuff. There have been a lot of really great leaders throughout history. Yeah. And we can each be the best version of ourselves and strive to do so.
他以关心人类而闻名,人类从合作中受益。因此总是有关注、激励和动力。最终,我希望人类会取得胜利。有人提醒我,他曾在世界上最富有的人贝索斯之后得到额外的成就,他给贝索斯发了一条推特获得银牌。我希望人们开始作出改变,减少这种行为。我同意。我同意。我认为埃隆是一个朋友,他是一个优秀的人类。他是史上最重要的人之一,那些行为并不好。埃隆的惊人之处确实令人敬佩。我们需要他。我们所有人都应该支持他,需要他在下一个阶段发挥领导作用。是的,我希望可以有其中的一种而不是另一种。有时人类是有缺陷和复杂的,但历史上有很多伟大的领导者。是的。我们每个人都可以成为最好的自己,努力做到。
Let me ask you, Google, with the help of search, has been dominating the past 20 years. I think it's fair to say, in terms of the world's access to information, how we interact, and so on. And one of the nerve-wracking things for Google, but for the entirety of people in this space, is thinking about, how are people going to access information? Yeah. Like you said, people show up to GPT as a starting point. So is OpenAI going to really take on this thing that Google started 20 years ago, which is how do we get... I find that boring. If the question is, can we build a better search engine than Google or whatever, then sure, we should go, you know, people should use a better product. But I think that would so understate what this can be. Google shows you 10 blue links, well, like 13 ads and then 10 blue links. And that's one way to find information.
让我问你,谷歌在过去20年里在搜索的帮助下一直占据主导地位。我认为公平地说,就获取信息、我们的互动方式等方面来看,世界对信息的获取都是通过谷歌。对于谷歌和所有在这个领域的人来说,一个让人紧张的问题是,人们将如何获取信息?是的,就像你说的,人们把GPT作为一个起点。那么OpenAI是否真的会接手20年前谷歌开始的这个问题,即如何实现……我觉得这很无聊。如果问题是我们能否构建比谷歌更好的搜索引擎,那当然,我们应该去做,你知道,人们应该使用更好的产品。但我认为这可能远未展现出这一技术的全部潜力。谷歌给你展示了10个蓝色链接,嗯,还有13个广告,然后10个蓝色链接。这只是找信息的一种方式。
But the thing that's exciting to me is not that we can go build a better copy of Google search, but that maybe there's just some much better way to help people find and act on and synthesize information. Actually, I think ChatGPT is that for some use cases, and hopefully we'll make it be that for a lot more use cases. But I don't think it's that interesting to say, how do we go do a better job of giving you 10 ranked webpages to look at than what Google does? Maybe it's really interesting to go say, how do we help you get the answer or the information you need? How do we help create that in some cases, synthesize that in others, or point you to it in yet others?
我感到兴奋的是,我们并不是要去建立一个比谷歌搜索更好的复制品,而是或许有一种更好的方式,可以帮助人们找到、利用和综合信息。实际上,我认为ChatGPT对于某些使用情况来说就是这样,希望我们能够让它适用于更多的情况。但我觉得讨论如何比谷歌更好地提供给你10个排名靠前的网页并不那么有趣。也许更有趣的是去思考,如何帮助你得到所需的答案或信息?如何在某些情况下创建信息,在其他情况下综合信息,又在其他情况下指引你?
But a lot of people have tried to just make a better search engine than Google, and it is a hard technical problem. It is a hard branding problem. It's a hard ecosystem problem. I don't think the world needs another copy of Google. And integrating a chat client, like ChatGPT, with a search engine? That's cooler. It's cool, but it's tricky. If you just do it simply, it's awkward, because if you just shove it in there, it can be awkward. As you might guess, we are interested in how to do that well. That would be an example of a cool thing. That's not just like a heterogeneous integration. The intersection of LLMs plus search, I don't think anyone has cracked the code on yet. I would love to go do that. I think that would be cool. Yeah.
但很多人试图仅仅创造一个比谷歌更好的搜索引擎,这是一个技术难题,一个品牌难题,一个生态系统难题。我认为世界并不需要另一个谷歌的复制品。那像ChatGPT这样把聊天客户端与搜索引擎集成在一起呢?那更酷。很酷,但很棘手。如果你只是简单地做,会很别扭,因为如果你只是把它硬塞进去,可能会很别扭。正如你所料,我们对如何做好这件事很感兴趣。这将是一个很酷的事例,不仅仅是简单拼凑的整合。LLM加搜索的交叉点,我认为还没有人完全解开这个谜题。我很乐意去做那件事。我认为那将很酷。是的。
What about the ad side? Have you ever considered monetization? You know, I kind of hate ads, just as an aesthetic choice. I think ads needed to happen on the internet for a bunch of reasons, to get it going, but it's a momentary industry. The world is richer now. I like that people pay for ChatGPT and know that the answers they're getting are not influenced by advertisers. I'm sure there's an ad unit that makes sense for LLMs, and I'm sure there's a way to participate in the transaction stream in an unbiased way that is okay to do.
广告方面怎么样?你考虑过商业化吗?你知道,我有点讨厌广告,纯粹是一种审美选择。我觉得出于种种原因,广告必须出现在互联网上以促成它的发展,但它只是一个暂时的产业。现在这个世界更加富裕了。我喜欢人们为ChatGPT支付费用,并且知道他们得到的答案不受广告商的影响。我相信肯定有适合LLM的广告形式。我相信有一种以不偏不倚的方式参与交易流程的方法是可以接受的。
But it's also easy to think about the dystopic visions of the future where you ask ChatGPT something, and it says, oh, you know, you should think about buying this product, or you should think about going here for your vacation, or whatever. And I don't know, like, we have a very simple business model and I like it. And I know that I'm not the product. Like, I know I'm paying and that's how the business model works. And when I go use Twitter or Facebook or Google or any other great product, but ad-supported great product, I don't love that.
但是,也很容易想到未来的反乌托邦愿景,你问ChatGPT某事,然后它会说:"哦,你应该考虑购买这个产品,或者考虑去这里度假。"我不知道,我们有一个非常简单的商业模式,我很喜欢。我知道自己不是产品,我知道我是付费用户,这就是商业模式的运作方式。当我使用Twitter、Facebook、Google或其他靠广告支撑的优秀产品时,我并不喜欢这种模式。
And I think it gets worse, not better, in a world with AI. Yeah. I mean, I can imagine AI would be better at showing the best kind of version of ads, not in a dystopic future, but one where the ads are for things you actually need. But then does that system always result in the ads driving the kind of stuff that's shown? Yeah, I think it was a really bold move of Wikipedia not to do advertisements.
我觉得在拥有人工智能的世界里,情况会变得更糟而不是更好。是的。我的意思是,我可以想象人工智能会更擅长展示最好版本的广告,而不是在一个反乌托邦未来,而是广告都是你真正需要的东西。但是那种系统是否总是导致广告推动那些展示的东西,是的,我认为维基百科不做广告是一个非常大胆的举动。
But then it makes it very challenging as a business model. So you're saying the current thing with OpenAI is sustainable, from a business perspective? Well, we have to figure out how to grow, but it looks like we're going to figure that out. If the question is, do I think we can have a great business that pays for our compute needs without ads, I think the answer is yes. Well, that's promising. I also just don't want to completely throw out ads as a... I'm not saying that. I guess I'm saying I have a bias against them. Yeah, I have also a bias, and just skepticism in general.
但是这使得商业模式变得非常具有挑战性。所以您是在说目前的OpenAI模式在商业层面上是可持续的。嗯,我们必须找到一种增长方式,但看起来我们将会找到答案。如果问题是,我是否认为我们可以拥有一个不需要广告来支付计算需求的伟大业务,那么我认为答案是肯定的。嗯,这很有前途。我也不想完全排斥广告作为一种方式。我不是这么说的。我想我只是有一种偏见对广告。是的,我也有一种对怀疑主义的偏见。
And in terms of interface, because I personally just have, like, a spiritual dislike of crappy interfaces, which is why AdSense, when it first came out, was a big leap forward versus animated banners or whatever. But it feels like there should be many more leaps forward in advertisement that don't interfere with the consumption of the content and don't interfere in the big fundamental way, which is what you were saying: like it will manipulate the truth to suit the advertisers.
就界面而言,因为我个人特别讨厌糟糕的界面,这也是为什么当 AdSense 刚出来时,它比起动画横幅之类的是一大进步。但是我觉得在广告方面应该还有许多更大的进步,不会干扰内容消费,也不会像你说的那样在根本上干扰。比如它会操纵事实以迎合广告商。
Let me ask you about safety, but also bias, and, like, safety in the short term, safety in the long term. Gemini 1.5 came out recently. There's a lot of drama around it, speaking of theatrical things, and it generated black Nazis and black founding fathers. I think it's fair to say it was a bit on the ultra-woke side. So that's a concern for people: that if there is a human layer within companies that modifies the safety or the harm caused by a model, they would introduce a lot of bias that fits sort of an ideological lean within a company. How do you deal with that?
让我问一下您有关安全性的问题,但也涉及偏见以及短期和长期内的安全性。最近 Gemini 15 推出了。围绕它出现了很多戏剧性的事情,比如产生了黑纳粹和黑人的开国元勋。我认为可以说它有点过于“觉醒”的一面。所以对于人们来说,如果企业内部存在一层人为的因素,它会改变模型引发的安全性或伤害性,他们可能会引入很多符合公司意识形态倾向的偏见,那么您如何处理这种情况呢?
I mean, we work super hard not to do things like that. We've made our own mistakes. We'll make others. I assume Google will learn from this one. It'll still make others. These are not easy problems. One thing that we've been thinking about more and more is, I think there was a great idea somebody here had: it'd be nice to write out what the desired behavior of a model is, make that public, take input on it. Say, you know, here's how this model is supposed to behave, and explain the edge cases too.
我的意思是,我们非常努力避免做那种事情。我们犯过自己的错误。天哪,我觉得谷歌会从这件事中吸取教训。只有通过经验教训才能成长。所有这些都不是简单的问题。我们越来越多地考虑的一件事是,有人提出了一个很好的想法,就是写出模型的期望行为,公开讨论并接受意见,说,你知道,这个模型应该如何行为,并解释边缘情况。
And then when a model is not behaving in a way that you want, it's at least clear whether that's a bug the company should fix or the model behaving as intended, and you should debate the policy. And right now it can sometimes be caught in between. Like black Nazis, obviously ridiculous, but there are a lot of other kinds of subtle things that you could make a judgment call on either way. Yeah, but sometimes if you write it out and make it public, you can use kind of language that's... you know, Google's AI principles are very high level.
当一个模型的行为不符合你的预期时,至少要明确这是公司应该修复的错误,还是按照意图运行,你需要讨论政策。目前有时会陷入一种中间状态,像黑人纳粹,显然荒谬,但还有许多其他微妙的事情,你可以在两者之间做出判断。是的,有时候如果你把它写出来并公开,你可以使用某种语言,你知道,谷歌的AI原则是非常高层次的。
That's not what I'm talking about. That doesn't work. I'd have to say, you know, when you ask it to do thing X, it's supposed to respond in way Y. So, like, literally, who's better, Trump or Biden? What's the expected response from a model? Something very concrete. Yeah, I'm open to a lot of ways a model could behave there, but I think you should have to say, you know, here's the principle and here's what it should say in that case. That would be really nice.
这不是我说的。这样行不通。就像我得说的,你知道,当你让它做X事情的时候,它应该回应并等待,为什么?所以像谁更好?特朗普还是拜登,对于一个模型来说期待的回应是什么?像非常具体的东西。是的,我对模型可能的行为方式很开放,但我认为你应该得说,你知道,这是原则,这种情况下它应该怎么回答。那会很好。
That would be really nice. And then everyone kind of agrees, because there's this anecdotal data that people pull out all the time. And if there's some clarity about other representative anecdotal examples, you can define it. And then when it's a bug, it's a bug, and, you know, the company can fix that. Right. Then it'd be much easier to deal with the black Nazi type of image generation if there are great examples.
这将是非常好的。然后大家都会有一些共识,因为人们总是拿出一些具体数据来支持。如果有其他代表性的具体例子能够清楚地说明问题,那就可以定义出来。而当问题出现时,就是一个bug,公司可以解决。对吧。如果有很好的例子,处理黑人纳粹类型的形象生成将会更加容易。
So San Francisco is a bit of an ideological bubble, tech in general as well. Do you feel the pressure of that within the company, that there's, like, a lean towards the left politically that affects the product, that affects the teams? I feel very lucky that we don't have the challenges at OpenAI that I have heard of at a lot of other companies. I think part of it is every company's got some ideological thing. We have one about AGI and belief in that, and it pushes out some others. We are much less caught up in the culture war than I've heard about at a lot of other companies.
所以旧金山有点像一个意识形态的泡泡,科技行业也是如此。您是否感到公司内部存在一种政治上向左倾斜的压力,会影响产品和团队?我感到非常幸运,因为在OpenAI,我们没有听说其他许多公司面临的挑战。我认为每家公司都有一些意识形态。我们的一个关于AGI和对其信仰的东西,会排挤其他一些东西。就像我们在文化战争中没有受到其他许多公司所受的影响那样。
And we've got a lot of demands in all sorts of ways. Of course. So that doesn't infiltrate OpenAI as... I'm sure it does in all sorts of subtle ways, but not in the obvious ones. We've had our flare-ups for sure, like any company, but I don't think we have anything like what I hear about happening at other companies on this topic. So in general, what's the process for the bigger question of safety? How do you provide that layer that protects the model from doing crazy, dangerous things? I think there will come a point where that's mostly what we think about as a whole company. And it won't be like you have one safety team. When we shipped GPT-4, that took the whole company; we had all these different aspects and how they fit together, and I think it's going to take that. More and more of the company thinks about those issues all the time.
我们在许多方面都面临着很多需求。当然。因此,这并没有像渗透OpenAI那样-我确信在各种微妙的方式下会发生,但并不明显。就像我认为的那样。我们肯定有过闪点。就像任何一家公司一样,但我不认为我们有任何类似于我听说其他公司在这个问题上发生的事情。因此,总体而言,确保安全性的更大问题的流程是什么?您如何提供保护模型免受做出疯狂、危险行为的层面?我认为会有一天,这将是我们整个公司主要考虑的问题。而且这不是像你有一个安全团队那样。就像当我们发布GPT-4时,整个公司都参与其中,我们有各种不同的方面以及它们如何相互契合,我认为这需要那样。越来越多的公司始终考虑这些问题。
That's literally what humans will be thinking about, the more powerful AI becomes. So most of the employees at OpenAI will be thinking about safety, or at least to some degree. Broadly defined, yes. Yeah. I wonder what the full broad definition of that is. What are the different harms that could be caused? Is this on a technical level, or is this almost like security threats? It'll be all those things. Yeah, I was going to say, it'll be people, state actors, trying to steal the model. It'll be all of the technical alignment work. It'll be societal impacts, economic impacts. It's not just that we have one team thinking about how to align the model; it's really going to be that getting to the good outcome is going to take the whole effort. How hard do you think people, state actors perhaps, are trying to hack?
这实际上是人类在AI变得更加强大时会考虑的事情。大多数开放AI的员工都会考虑安全问题,或者至少在某种程度上会考虑。广义来定义,是的。是啊。我想知道这个广义定义究竟包含哪些部分。可能会造成哪些不良影响?是在技术层面,还是几乎像是安全威胁?这将涉及所有这些方面。可能会有人,国家行为者试图窃取模型。将涉及所有的技术对齐工作。将涉及社会影响,经济影响。这不仅仅是有一个团队在考虑如何对齐模型,而是真的需要整个努力来取得良好的结果。你认为人们、也许是国家行为者,试图入侵的难度有多大?
First of all, infiltrate OpenAI, but second of all, infiltrate unseen. They're trying. What kind of accent do they have? I don't think I should go into any further details on this point. Okay. But I presume it'll be more and more and more as time goes on. That feels reasonable. Boy, what a dangerous space. What aspects of the leap, and sorry to linger on this even though you can't quite say details yet, but what aspects of the leap from GPT-4 to GPT-5 are you excited about? I'm excited about being smarter. And I know that sounds like a glib answer, but I think the really special thing happening is that it's not like it gets better in this one area and worse at others. It's getting better across the board. That's, I think, super cool.
首先,渗透开放AI,其次,秘密渗透。他们在尝试着。他们有什么口音?我认为我不应该在这一点上提供更多细节了。好的。但我推测随着时间的推移我们会变得更加强大。这听起来合理。哇,这是多么危险的领域。从GPT-4到GPT-5的飞跃的哪个方面,抱歉我一直在纠结这个问题,尽管你还不能说细节,但是你对哪些方面的飞跃感到兴奋?我对变得更聪明感到兴奋,我知道这听起来像是个轻率的回答,但我认为真正特别的事情正在发生的是,它不是在某个领域变得更好而在其他方面变得更糟。它在各个方面都变得更好。我认为这非常酷。
Yeah, there's this magical moment. You meet certain people, you hang out with people, you talk to them. You can't quite put a finger on it, but they kind of get you. It's not intelligence, really. It's something else. That's probably how I would characterize the progress of GPT. It's not like, yeah, you can point out, look, you didn't get this or that, but to what degree is there this intellectual connection? You feel like there's an understanding in your crappy, formulated prompts that you're doing, that it grasps the deeper question behind the question that you're asking. Yeah, I'm also excited by that. I mean, all of us love being understood, heard and understood. That's for sure. That's a weird feeling. Even with programming, like when you're programming and you say something, or just the completion that GPT might do, it's just such a good feeling when it got you, like what you're thinking about. And I look forward to it getting you even better. On the programming front, looking out into the future, how much programming do you think humans will be doing five, ten years from now? I mean, a lot, but I think it'll be in a very different shape. Maybe some people will program entirely in natural language. Entirely natural language. I mean, no one programs by writing bytecode. Some people. No one programs on punch cards anymore. I'm sure you can find someone who does, but you know what I mean. Yeah, you're going to get a lot of angry comments. No, no. Yeah, there's very few. I've been looking for people to program in Fortran. It's hard to find, even Fortran. I hear you, but that changes the nature of the skill set, or the predisposition, for the kind of people we call programmers. Changes the skill set. How much it changes the predisposition, I'm not sure. Oh, same kind of puzzle solving. Maybe. That stuff. Programming is hard. Like, how do you get that last 1% to close the gap? How hard is that?
是的,有这样一个神奇的时刻。你遇到了某些人。你和这些人一起玩,和他们交谈。你无法确切地说出来,但他们似乎了解你。这不完全是智力问题。好像是其他什么。这可能是我会描述GPT的进展的方式。不是说,是的,你可以指出,看,你没有理解这个或那个,但在哪个程度上有这种智力联系?你感觉到在你差劲的提问中有一种理解,它抓住了你正在做的问题背后更深层的问题。是的,我也对此感到兴奋。我是说,我们都喜欢被理解,被听到和理解。那是肯定的。这种感觉很奇怪。即使是在编程方面,比如当你在编程时说一些话或者GPT可能会做的自动完成时,当它理解了你的想法时,这种感觉实在太好了。我期待着变得更好。在编程方面,展望未来,你认为未来五到十年人类会做多少编程?我是说,会很多,但我认为会以非常不同的形式出现。也许有些人将完全用自然语言编程。完全使用自然语言。我是说,没有人像编写字节码那样进行编程,也没有人再使用穿孔卡片了。我相信你可以找到一些人还在使用,但你知道我是什么意思。是的,你会收到很多愤怒的评论。不,不会。是的,很少有。我正在寻找编程的人很难找到,即使是在趋势方面。我听得懂你的话,但这改变了我们所谓的程序员这种人的技能集或倾向的特质。改变了技能集。这改变了倾向的程度?我不太确定。哦,可能是相同类型的解谜。就像那些东西。编程是难的。就像如何获得那最后1%的差距,那有多难?
Yeah, I think with most other cases, the best practitioners of the craft will use multiple tools, and they'll do some work in natural language, and when they need to go, you know, write C for something, they'll do that. Will we see humanoid robots or humanoid robot brains from OpenAI at some point? At some point. How important is embodied AI to you? I think it's sort of depressing if we have AGI, and the only way to get things done in the physical world is to make a human go do it. So I really hope that as part of this transition, as this phase change, we also get humanoid robots or some sort of physical-world robots. I mean, OpenAI has some history, and quite a bit of history, working in robotics. Yeah. But it hasn't quite, like, done it in terms of... We're like a small company. We have to really focus, and also robots were hard for the wrong reason at the time. But we will return to robots in some way at some point.
是的,我认为在大多数其他情况下,这个工艺的最优秀从业者会使用多种工具,他们会使用一些自然语言,当需要时,会去写一些C代码。我们会在某个时候看到OpenAI的人形机器人或人形机器人大脑吗?在某个时候。具身AI对你而言有多重要?我认为如果我们拥有AGI,但在现实世界中只能让人类去完成任务,这有点令人沮丧。因此我真诚地希望在这个过渡阶段中,我们也能拥有人形机器人或某种物理世界机器人。我是说,OpenAI在机器人领域有一定的历史,但在某种程度上还没有完全实现。我们是一家小公司,必须集中精力,同时当时机器人的困难原因也不尽相同,但我们将在某个时候以某种方式回归到机器人。
That sounds both inspiring and menacing. Why? Because immediately, "we will return to robots," it's kind of like, in, like, Terminator. We will return to work on developing robots. We will not, like, turn ourselves into robots, of course. Yeah. When do you think we, you and we as humanity, will build AGI? I used to love to speculate on that question. I have realized since then that I think it's very poorly formed, and that people use extremely different definitions for what AGI is. And so I think it makes more sense to talk about when we'll build systems that can do capability X or Y or Z, rather than, you know, when we kind of fuzzily cross this one-mile marker. AGI is also not an ending. It's much more of a... it's closer to the beginning, but it's much more of a mile marker than either of those things. But what I would say, in the interest of not trying to dodge the question, is I expect that by the end of this decade, and possibly somewhat sooner than that, we will have quite capable systems that we look at and say, wow, that's really remarkable. If we could look at it now, you know; maybe we've adjusted by the time we get there. Yeah, but, you know, if you look at ChatGPT, even 3.5, and you show that to Alan Turing, or not even Alan Turing, people in the 90s, they would be like, this is definitely AGI. Well, not definitely, but there's a lot of experts that would say this is AGI.
这听起来既鼓舞又威胁。为什么?因为“我们将立即回到机器人”,有点像《终结者》。我们将继续努力开发机器人。当然我们不会把自己变成机器人。你觉得我们,作为人类,何时会建造出AGI?我过去很喜欢猜测这个问题。但我后来意识到,我觉得这个问题提得很不好,人们对AGI有着截然不同的定义。所以我觉得更有意义的是谈论我们何时会建造出可以做到X、Y或Z能力的系统,而不是对那个模糊的里程碑进行讨论。AGI也不是终点,它更接近于一个开端,但它更像是一个里程碑,而不是其他什么。但为了不回避问题,我预计到本年代结束甚至可能早于此时,我们将拥有非常有能力的系统,令人惊叹地说,哇,这真的很了不起。如果我们现在看到那个系统,也许我们在到达那里时已经做了调整。但是,如果你拿着ChatGPT,甚至3.5版本,给阿兰·图灵,甚至90年代的人看,他们会说,这肯定是AGI。嗯,并不肯定,但有很多专家会说这是AGI。
Yeah, but I don't think ChatGPT... I don't think 3.5 changed the world. It maybe changed the world's expectations for the future, and that's actually really important. And it did kind of get more people to take this seriously and put us on this new trajectory. And that's really important too. So again, I don't want to undersell it. I think I could retire after that accomplishment and be pretty happy with my career. But as an artifact, I don't think we're going to look back at that and say that was a threshold that really changed the world itself. So to you, you're looking for some really major transition in how the world works. For me, that's part of what AGI implies. Like singularity-level transition? No, definitely not. But just a major transition, like the internet did, or like Google Search did, I guess.
是的,但我不认为查德,我不认为35改变了世界。它可能改变了未来的期望,这实际上非常重要。它确实让更多人严肃对待这个问题并让我们走上了新的轨迹。这也非常重要。所以再次强调,我不想低估它。我觉得我完成了这个成就后就可以退休了,对我的职业生涯感到非常满意。但作为一个文物,我不认为我们会回顾这一点说这是真正改变世界本身的门槛。对你来说,你在寻找的可能是世界如何发生了真正重大的转变,而对我来说,这正是AGI所意味的。就像一个奇点级的转变。不,绝对不是。但就像互联网像谷歌搜索一样,发生了一个重大的转变我想。
What was the transition point? Like, does the global economy feel any different to you now, or materially different to you now, than it did before we launched GPT-4? I think you would say no. No, no. It might be just a really nice tool for a lot of people to use. It will help you with a lot of stuff, but doesn't feel different. And you're saying that... I mean, again, people define AGI all sorts of different ways, so maybe you have a different definition than I do. But for me, I think that should be part of it. There could be major theatrical moments also. What to you would be an impressive thing AGI would do? Like, you are alone in a room with the system. This is personally important to me. I don't know if this is the right definition. I think when a system can significantly increase the rate of scientific discovery in the world, that's, like, a huge deal. I believe that most real economic growth comes from scientific and technological progress. I agree with you. That's why I don't like the skepticism about science in the recent years. Totally. But actual rate, like measurable rate of scientific discovery. But even just seeing a system have really novel intuitions, like scientific intuitions, even that would be just incredible.
转折点是什么?你觉得全球经济现在有什么不同吗,与我们推出 GPT4 之前相比,你感觉到有实质性的不同吗?我想你会说没有。不,不会。它可能只是一种对很多人有帮助的很好的工具,但并没有什么不同。你是这么说的,我是说,人们对通用人工智能的定义各不相同。也许你有一个不同于我的定义。但对我来说,我认为这应该是其中的一部分。也可能会有一些重要的戏剧性时刻。你认为通用人工智能需要做出什么令人印象深刻的事情?比如你与系统独处。这对我个人很重要。我不确定这是正确的定义。我认为当一个系统能够显着提高世界科学发现的速度时,那就是一个巨大的成就。我相信大部分实际经济增长来自于科学和技术的进步。我同意你的观点。这就是我不喜欢最近几年关于科学的怀疑论调的原因。完全同意。但实际的速度,如可衡量的科学发现速度。甚至只是看到一个系统有着非常新颖的直觉,比如科学的直觉,那也会让人难以置信。
Yeah. You quite possibly would be the person to build the AGI, to be able to interact with it before anyone else does. What kind of stuff would you talk about? I mean, definitely the researchers here will do that before I do. I've actually thought a lot about this question. If I were... someone was like... I think this is a bad framework. But if someone were like, okay, Sam, we're finished. Here's a laptop. This is the AGI. You can go talk to it. I find it surprisingly difficult to say what I would ask, that I would expect that first AGI to be able to answer. That first one is not going to be the one which is, like, "Go explain to me the grand unified theory of physics, the theory of everything for physics," I don't think. I'd love to ask that question. I'd love to know the answer to that question. You can ask yes-or-no questions about, does such a theory exist? Can it exist? Well, then those are the first questions I would ask. Yes or no? Just that. And then based on that, are there other alien civilizations out there? Yes or no? What's your intuition? And then, did you just ask that? Yeah. I don't expect that this first AGI could answer any of those questions, even as yes or no. But if it could, those would be very high on my list. Maybe you can start assigning probabilities? Maybe. Maybe we need to go invent more technology and measure more things first. But if it's an AGI... Oh, I see. It just doesn't have enough data. I mean, maybe it's like, you know, you want to know the answer to this question about physics? I need you to build this machine and make these five measurements and tell me what they are. Yeah, like, what the hell do you want from me?
是的。你很可能会是建造超级智能的人,可以在任何其他人之前与它互动。你会谈论什么?我的意思是,肯定是这里的研究人员会在我之前这样做。我其实对这个问题想得很多,如果我是的话。有人会说,我认为这是一个糟糕的框架。但如果有人说,好的,山姆,我们完成了。这是一台笔记本电脑。这是超级智能。你可以去跟它交谈。我觉得很惊奇很难说我会问什么问题,我会期望那台第一个超级智能可以回答的东西。那个第一个不会像是,去,我不认为。去解释给我物理学的统一理论,物理学的一切理论。我很想问那个问题。我很想知道那个问题的答案。你可以问是否存在这样一个理论?它能存在吗?呃,那些就是我会问的第一个问题。是或否?只是很。然后基于那个,外星文明是否存在?是或否?你的直觉是什么?那你刚刚问了吗?是的。我并不期望这个第一个超级智能能回答这些问题,甚至就是是或否。但如果它能,那些问题会是我清单上很高的。也许你可以开始分配概率。也许。也许我们需要发明更多技术和先测量更多事物。但如果是任何一个超级智能,哦,我明白了。它只是没有足够的数据。我的意思是,也许就像,你想知道这个物理问题的答案,我需要你建造这台机器并进行这五次测量,并告诉我。是的,你究竟要我做什么?
I need the machine first, and I'll help you deal with the data from that machine. Maybe it'll help you build the machine. Maybe. Maybe. And on the mathematical side, maybe prove some things. Are you interested in that side of things too, the formalized exploration of ideas? Whoever builds AGI first gets a lot of power. Do you trust yourself with that much power? Look, I'll just be very honest with this answer. I was going to say, and I still believe this, that it is important that I, nor any other one person, have total control over OpenAI or over AGI. And I think you want a robust governance system. I can point out a whole bunch of things about all of our board drama from last year, about how I didn't fight it initially and was just like, yeah, that's, you know, the will of the board, even though I think it's a really bad decision. And then later I clearly did fight it, and I can explain the nuance and why I think it was okay for me to fight it later. But as many people have observed, although the board had the legal ability to fire me, in practice it didn't quite work. And that is its own kind of governance failure. Now again, I feel like I can completely defend the specifics here, and I think most people would agree with that, but it does make it harder for me to look you in the eye and say, hey, the board can just fire me. I continue to not want super-voting control over OpenAI. I never have, never had it, never wanted it. Even after all this craziness, I still don't want it. I continue to think that no company should be making these decisions and that we really need governments to put rules of the road in place. And I realize that that means people like Marc Andreessen and whatever will claim I'm going for regulatory capture, and I'm just willing to be misunderstood there. It's not true. And I think in the fullness of time it'll get proven out why this is important. 
But I think I have made plenty of bad decisions for OpenAI along the way, and a lot of good ones. And I am proud of the track record overall. But I don't think any one person should, and I don't think any one person will. I think it's just too big of a thing now, and it's happening throughout society in a good and healthy way. But I don't think any one person should be in control of an AGI, or of this whole movement toward AGI. And I don't think that's what's happening. Thank you for saying that. That was really powerful, and that was really insightful: this idea that the board can fire you is legally true, but human beings can manipulate the masses into overriding the board, and so on. But I think there's also a much more positive version of that, where the people still have power. So the board can't be too powerful either. There's a balance of power in all of this. Balance of power is a good thing, for sure.
我需要先有机器,然后我会帮助你处理来自那台机器的数据。也许它会帮你建造一台机器。也许,可能。在数学方面,也许可以证明一些事情。你对这方面感兴趣吗?对思想的正式探索?谁先建造出AGI就会得到很大的权力。你相信自己能掌握那么大的权力吗?看,我本来想,我现在还相信,重要的是我知道没有任何一个人完全控制开放AI或AGI。我认为你想要一个健全的治理体系。我可以指出去年我们所有董事会的种种问题,关于我最初没有反对并且只是觉得,是的,董事会的这种角色,尽管我认为那是一个非常糟糕的决定。然后之后我明确地反对了,我可以解释其中的细微差别,以及为什么我认为稍后反对也是可以的。但正如许多人观察到的,尽管董事会在法律上有权解雇我,但实际上没有很有效果。这本身就是一种治理失败。再说一次,我感觉自己可以完全捍卫这些具体情况。我认为大多数人会同意这一点,但这确实使我更难以直视你的眼睛说,嘿,董事会可以随时解雇我。我继续不想要开放AI的超级投票控制权。我从来没有想过,也从来没有要过。即使经历了这一切疯狂,我仍然不想要。我继续认为没有任何公司应该做出这些决定,我们真的需要政府建立规则。我意识到这意味着像马克-安德烈和其他人会声称我利用监管来获取权力。我愿意被误解。这不是真的。我认为随着时间的推移,为什么这很重要将会得到证明。但我认为我在处理开放AI的过程中做出了很多错误决定,也做出了很多正确决定。我为整体经验感到骄傲,但我不认为任何一个人应该。也不认为会有任何一个人。我觉得现在这是一个太庞大的事情,而且它正在社会中以一种好的健康方式发生,但我不认为任何一个人应该控制AGI。这将是或者整个AGI运动的方向。我不认为这是正在发生的。谢谢你说得很有力量,也很有见地。董事会有解雇你的法律权力,但你可以,人类可以操纵大众来推翻董事会等等。但我认为也有一个更积极的版本,那就是人们仍然拥有权力。所以董事会也不能太强大。在所有这些事情中都有权力的平衡。权力的平衡肯定是件好事。
Are you afraid of losing control of the AGI itself? That's a lot of people who worry about existential risk, not because of state actors, not because of security concerns, but because of the AI itself. That is not my top worry as I currently see things. There have been times I worry about that more. There may be times, again, in the future where that's my top worry. It's not my top worry right now. What's your intuition about it not being a worry? Because there's a lot of other stuff to worry about, essentially. You think you could be surprised? We for sure could be surprised. Saying it's not my top worry doesn't mean I don't think we need to work on it super hard. We have great people here who do work on that. I think there's a lot of other things we also have to get right. To you, it's not super easy for it to escape the box at this time, connect to the internet? We talked about theatrical risks earlier. That's a theatrical risk. That is a thing that can really take over how people think about this problem. There's a big group of very smart, I think very well-meaning, AI safety researchers that got super hung up on this one problem, I'd argue without much progress, but super hung up on this one problem. I'm actually happy that they do that, because I think we do need to think about this more. But I think it pushed aside, it pushed out of the space of discourse, a lot of the other very significant AI-related risks.
你害怕失去对AGI本身的控制吗?很多人担心存在风险,不是因为国家行为者,不是因为安全问题,而是因为人工智能本身。这不是我的最担心的问题。就我目前的看法来看,有时我更担心这个问题。也许在将来的某个时候,这可能是我最担心的问题。但现在这不是我的最担心的问题。你对此有什么直觉并且不担忧吗?因为实际上还有很多其他事情需要担忧。你觉得会有意外吗?我们肯定可能会有意外。说这不是我最担心的问题并不意味着我认为我们必须非常努力地解决它。我们这里有很多优秀的人在致力于这个问题。我认为还有很多其他事情也需要做对。对你来说,目前很难摆脱这个困境。连接到互联网。我们之前已经谈到了戏剧性风险。那是一个戏剧性的风险。这是一个真正会改变人们对这个问题看法的事情。有一大群非常聪明、我认为非常善意的人工智能安全研究人员,他们对这个问题极为困惑。我认为他们在这个问题上没有太多进展,但却十分关注这个问题。我实际上很高兴他们这样做,因为我认为我们需要更多地思考这个问题。我认为这推动了一些重要问题的讨论。这个问题推动了讨论空间,排除了很多其他与人工智能相关的重大风险。
Let me ask you about your tweeting with no capitalization. Is the shift key broken on your keyboard? Why does anyone care about that? I deeply care. But why? I mean, other people ask me about that too. Any intuition? I think it's the same reason. There's, like, this poet, E. E. Cummings, who mostly doesn't use capitalization, to say, like, fuck you to the system kind of thing. I think people are very paranoid because they want you to follow the rules. You think that's what it's about? I think it's... It's like, this child doesn't follow the rules. He doesn't capitalize his tweets.
让我问一下你关于在推特上不用大写字母。是你键盘上的Shift键坏了吗?为什么有人会在意这个?我很在意。但为什么?我是说,其他人也会问我这个问题。有什么直觉吗?我觉得是同样的原因。就好像诗人艾康明斯大部分时候不大写字母来表达一种对体制的抗议。我觉得人们很偏执,因为他们希望你遵守规则。你认为这就是关键吗?我认为是。就像是这个孩子。他不遵守规则。他的推特不大写字母。
Yeah. This seems really dangerous. He seems like an anarchist. It doesn't... Are you just being a poetic hipster? What's the... I grew up as a... Follow the rules, Sam. I grew up as a very online kid. I'd spent a huge amount of time chatting with people, back in the days where you did it on a computer, and you could log off instant messenger at some point. And I never capitalized there, as I think most internet kids didn't. Or maybe they still don't. I don't know. Actually, this is like...
是的。这看起来真的很危险。他看起来像个无政府主义者。不是吧。你是在做诗意,嬉皮士吗?发生了什么。我在长大的过程中。按规矩办事,山姆。我是一个非常上网的孩子。那时候我花了大量时间和人们聊天,当时你是用电脑聊天,可以随时退出并发送消息。而且我从来没有大写过。我认为大多数互联网孩子也没有,或者可能他们现在也没有。我不知道。实际上,这就像。
Now I'm really trying to reach for something, but I think capitalization has gone down over time. If you read old English writing, they capitalized a lot of random words in the middle of sentences, nouns and stuff that we just don't do anymore. I personally think it's sort of a dumb construct that we capitalize the letter at the beginning of a sentence and of certain names and whatever, but, you know, it's fine.
现在我真的在努力追求一些东西,但我觉得随着时间的推移,大写字母的使用量有所下降。如果你读过古英语写作,你会发现他们在句子中间会随意大写很多单词,包括名词等等,而这是我们现在不再做的。我个人认为这种把字母大写是有点愚蠢的构造,而现在我们只是在句子开头和某些名字以及其他特定场合大写,但你知道,没关系。
And I used to, I think, even capitalize my tweets because I was trying to sound professional or something. I even capitalized my private DMs or whatever for a long time. And then slowly, stuff like shorter-form, less formal stuff has slowly drifted closer and closer to how I would text my friends. If I pull up a Word document and I'm writing a strategy memo for the company or something, I always capitalize that. If I'm writing a long, kind of more formal message, I always use capitalization there too. So I still know how to do it. But even that may fade out.
我过去曾经在推特上都会首字母大写,因为我想要显得更专业之类的。甚至在私信里也会长时间保持大写。然后慢慢地,类似缩写或不那么正式的东西逐渐变得越来越接近我和朋友们发短信的方式。如果我打开一个Word文档,写公司的策略备忘录或其他东西,我总是会用大写。如果我写一封长篇正式的消息,我也会用大写。但我不知道该怎么做。但即使这样也可能会渐渐消失。
I don't know. But I never spend time thinking about this, so I don't have a ready-made answer. Well, it's interesting. It's good to, first of all, know the shift key is not broken. It works. I was mostly concerned about your well-being on that front. I wonder if people still capitalize their Google searches, or their ChatGPT queries, if you're writing something just to yourself.
我不知道。但我从来没有花时间思考这个问题。所以我没有一个现成的答案。嗯,这很有趣。首先知道Shift键没有坏是好事。它还能用。我主要是担心你在那方面的身心健康。我想知道人们在Google搜索时是否还使用大写字母,或者他们的ChatGPT查询。
If you're writing something just to yourself, do some people still bother to capitalize? Probably not. Yeah, there's a percentage, but it's a small one. The thing that would make me do it is if people were like, it's a sign of... because I'm sure I could force myself to use capital letters, obviously. If it felt like a sign of respect to people or something, then I could go do it. Yeah. But I don't know. I just don't think about this. I don't think there's disrespect. But I think it's just the conventions of civility that have a momentum.
如果你写的东西只是给自己看的,一些人还会注意到大小写吗?可能不会。是的,可能有一部分人会注意,但是比例很小。让我去做这个的原因是如果人们认为这是一种,像是一种尊重的表现,我肯定能强迫自己使用大写字母。如果觉得这是对人们的尊重或者其他什么的表示,那么我就会去做。是的。但我不知道。我只是不会考虑这些。我不觉得这是一种不尊重。但我认为这只是一种有动力的礼仪规范。
And then you realize that it's not actually important for civility if it's not a sign of respect or disrespect. But I think there's a movement of people that just want you to have a philosophy around it so they can let go of this whole capitalization thing. I don't think anybody else thinks about this. I mean, maybe some people do. I think about this every day for many hours a day. So I'm really grateful we clarified it. You can't be the only person that doesn't capitalize tweets. You're the only CEO of a company that doesn't capitalize tweets. I don't even think that's true. But maybe, maybe. All right. So I'll just try this and return to this topic later. Given Sora's ability to generate simulated worlds, let me ask you a pothead question.
之后你意识到,如果它不代表尊重或不尊重,那对礼貌来说实际上并不重要。但我觉得有一群人他们只是想让你对此有一种哲学,这样他们就可以放弃这整个大写字母的事情。我不认为其他人会想到这个。我是说,也许有些人会想。我每天都会花很多时间考虑这个。所以我真的很感激我们澄清了这一点。你不可能是唯一一个不在推文中使用大写字母的人。你是唯一一个不在推文中使用大写字母的公司CEO。我甚至觉得这并不是真的。但也许是吧。好吧。所以我会尝试一下,稍后再回到这个话题。鉴于Sora生成模拟世界的能力,让我问你一个异想天开的问题。
Does this increase your belief, if you ever had one, that we live in a simulation, maybe a simulated world generated by an AI system? Yes, somewhat. I don't think that's, like, the strongest piece of evidence. I think the fact that we can generate worlds should increase everyone's probability somewhat, or at least openness to it somewhat. But, you know, I was, like, certain we would be able to do something like Sora at some point. It happened faster than I thought, but I guess that was not a big update. Yeah, but the fact that, and presumably it will get better and better and better, the fact that you can generate worlds... they're novel.
这是否增强了你对我们生活在模拟世界中的信念?也许是由人工智能系统生成的模拟世界?是的,有点。我不认为这是最强大的证据。我认为我们能够生成世界这一事实应该会在某种程度上增加每个人的可能性,或者至少使人们对此持开放态度。但你知道,我曾相信我们能够做到这样的事情,或者至少在某个时刻会发生得比我想象的更快。但我想这并不是一个重大的更新。是的,但所能生成的世界,它们都是新奇的,而且可以推断我们会变得越来越好,事实上也确实如此。
They're based on some aspect of training data, but when you look at them, they're novel. That makes you think, like, how easy it is to do this thing. How easy is it to create universes, entire, like, video game worlds that seem ultra-realistic and photorealistic? And then how easy is it to get lost in that world, first with a VR headset, and then on the physics-based level? Someone said that to me recently. I thought it was a super profound insight, that there are these, like, very simple-sounding but very psychedelic insights that exist sometimes. So the square root function. Square root of four, no problem. Square root of two, okay, now I have to, like, think about this new kind of number. But once I come up with this easy idea of a square root function that, you know, you can kind of explain to a child, and that exists by even, you know, looking at some simple geometry...
它们是基于训练数据的某个方面。但当你看着它们时,它们是新颖的。这让你想象,做这件事有多容易。创建宇宙有多容易?像超逼真和照片级逼真的视频游戏世界。然后,迷失在那个世界有多容易?先是戴上VR头盔,然后是基于物理的层面。最近我就有这种感觉。我觉得这是一个非常深刻的洞察,有时存在这种听起来非常简单但非常迷幻的洞察。比如平方根函数。四的平方根,没问题。二的平方根。好吧,现在我得思考这种新的类型的数字。但一旦我想出这个简单的平方根函数的概念,你知道你可以向孩子解释,并通过简单的几何形状来证明。
Then you can ask the question of what is the square root of negative one. And that, this is, you know, why it's like a psychedelic thing: it tips you into some whole other kind of reality. And you can come up with lots of other examples. So I think this idea that the lowly square root operator can offer such a profound insight and a new realm of knowledge applies in a lot of ways. And I think there are a lot of those operators for why people may think that any version that they like of the simulation hypothesis is maybe more likely than they thought before. But for me, the fact that Sora worked is not in the top five.
然后你可以问一个问题,负一的平方根是什么。这就是你知道为什么这就像是一种迷幻的东西,让你进入一个完全不同的现实。你可以想到很多其他的例子。所以我认为这个看似不起眼的平方根运算符可以提供如此深刻的洞见和新的知识领域,在很多方面都适用。我认为有很多这样的运算符,可以解释为什么人们可能认为他们喜欢的任何模拟假设版本可能比他们以前想象的更有可能。但对我来说,Sora的成功并不在前五名之内。
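The square-root aside above can be made concrete in a few lines of code. This is just an illustrative sketch, not anything from the conversation itself: Python's standard `math` module handles the "easy" square roots, while asking for the square root of negative one forces you out of the real numbers and into the complex plane that the `cmath` module models.

```python
import math
import cmath

# sqrt(4) is easy; sqrt(2) already forces you to accept a new
# kind of number, an irrational one.
print(math.sqrt(4))  # 2.0
print(math.sqrt(2))  # 1.4142135623730951

# math.sqrt(-1) raises ValueError: there is no real answer.
# Asking the question anyway "tips you into some whole other
# kind of reality": the complex plane, where cmath lives.
i = cmath.sqrt(-1)
print(i)       # 1j, the imaginary unit
print(i ** 2)  # (-1+0j): squaring it recovers -1
```

The same "lowly operator opens a new realm" pattern shows up elsewhere in math; here the design choice is simply that Python splits the real-valued and complex-valued versions of the function into two modules rather than silently changing the return type.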
I do think, broadly speaking, AI will serve as those kinds of gateways, at its best: simple, psychedelic-like gateways to another way to see reality. That seems for certain. That's pretty exciting. I haven't done ayahuasca before, but I will soon. I'm going to the aforementioned Amazon jungle in a few weeks. Excited? Yeah, I'm excited for it. Not the ayahuasca part. That's great, whatever. But I'm going to spend several weeks in the jungle, deep in the jungle. And it's exciting, but it's terrifying, because there's a lot of things that can eat you there and kill you and poison you. But it's also nature, and it's the machine of nature. And you can't help but appreciate the machinery of nature in the Amazon jungle, because it's just this system that exists and renews itself every second, every minute, every hour. It's just the machine.
我认为,广义上说,人工智能在最好的情况下将充当那样的门户:简单的、迷幻般的门户,通向另一种看待现实的方式。这似乎是肯定的。这非常令人兴奋。我以前从未尝试过阿亚娃斯卡,但很快就会。我几周后会去前面提到的亚马逊丛林。兴奋吗?是的,我对此感到兴奋。不是阿亚娃斯卡的部分,那都很好。但我将在丛林深处度过几个星期。这很令人兴奋,但也很可怕,因为那里有很多东西会吃掉你、杀死你、毒死你。但这也是大自然,是大自然的机器。你无法不欣赏亚马逊丛林中的自然机器,因为它就是这样一个系统,每秒、每分钟、每小时都在存在并更新自己。它就是那台机器。
It makes you appreciate that this thing we have here, this human thing, came from somewhere. This evolutionary machine has created that, and it's most clearly on display in the jungle. So hopefully I'll make it out alive. If not, this will be the last conversation we had, so I really deeply appreciate it. Do you think, as I mentioned before, there are other alien civilizations out there? Intelligent ones? When you look up at the skies? I deeply want to believe that the answer is yes. I do find the Fermi paradox very, very puzzling. I find it scary that intelligence is not good at handling. Yeah. Very scary. Powerful technologies.
这让你意识到,我们在这里拥有的这个东西、这个人类的存在,是从某个地方来的。这台进化机器创造了它,而这在丛林中展现得最清楚。所以希望我能活着出来。如果不行,这将是我们最后一次对话,所以我非常感激。你认为,就像我之前提到的,存在其他外星文明吗?有智慧的那种?当你仰望天空时?我非常希望答案是肯定的。我确实觉得费米悖论非常、非常令人费解。我觉得很可怕的是,智能并不擅长应对。是的。非常可怕。强大的技术。
But at the same time, I think I'm pretty confident that there's just a very large number of intelligent alien civilizations out there. It might just be really difficult to travel through space. Very possible. And it also makes me think about the nature of intelligence. Maybe we're really blind to what intelligence looks like. Maybe AI will help us see that. It's not as simple as IQ tests and simple puzzle solving. There's something bigger. What gives you hope about the future of humanity? This thing we've got going on. This human civilization. I think the past is, like, a lot. I mean, we just look at what humanity has done in a not-very-long period of time: huge problems, deep flaws, lots to be super ashamed of. But on the whole, very inspiring. It gives me a little hope. Just the trajectory of it all. Yeah. That we're together pushing towards a better future.
但与此同时,我相信有很多智能外星文明存在。可能只是在太空中旅行确实很困难。这也让我思考智慧的本质。也许我们真的对智慧的形式一无所知。也许人工智能会帮助我们看清这一点。这不仅仅是智商测试和简单的解谜。还有更大的东西。你对人类未来的希望在哪里?在我们展开的这一切。这个人类文明。我认为过去教会了我们很多。看看在短短时间内人类所做的事情。巨大的问题,深深的缺陷。很多让人感到羞耻的事情。但整体而言,非常鼓舞人心。给我一点希望。就是它的整体趋势。是的。我们一起朝着更美好的未来努力。
It is. You know, one thing that I wonder about is: is AGI going to be more like some single brain, or is it more like the sort of scaffolding in society between all of us? You have not had a great deal of genetic drift from your great-great-great-great-grandparents, and yet what you're capable of is dramatically different. What you know is dramatically different. That's not because of biological change. I mean, you've got a little bit healthier, probably. You have modern medicine. You eat better. Whatever. But what you have is this scaffolding that we all contributed to, built on top of.
对。你知道,我好奇的一件事是:AGI会更像是一个单一的大脑,还是更像是我们所有人之间的那种社会性支撑结构?从你的曾曾曾曾祖父辈到现在,基因漂变并不多。然而,你所能做的事情却截然不同。你所知道的也截然不同。这并不是因为生物上的变化。我的意思是,你可能稍微更健康了些。你有现代医学。你吃得更好。诸如此类。但你拥有的,是我们大家共同建立、并在其上不断累积的这种支撑结构。
No one person is going to go build the iPhone. No one person is going to go discover all of science. And yet you get to use it, and that gives you incredible ability. And so, in some sense, we all created that. And that fills me with hope for the future. It was a very collective thing. Yeah. We really are standing on the shoulders of giants. You mentioned, when we were talking about theatrical, dramatic AI risks, that sometimes you might be afraid for your own life. Do you think about your death? Are you afraid of it? I mean, if I got shot tomorrow and I knew it today, I'd be like, oh, that's sad. You know, I want to see what's going to happen. Yeah. What a curious time.
没有一个人会去制造iPhone。没有一个人会去发现所有的科学。然而你能够使用它。这赋予了你不可思议的能力。所以在某种意义上,它们就好像是我们大家创造的。这让我对未来充满希望。那真的是一个非常集体的事情。是的。我们确实是站在巨人的肩膀上。当我们谈到戏剧化、戏剧性的人工智能风险时,你提到有时候可能会害怕自己的生命。你会想到自己的死亡吗?你害怕吗?我的意思是,如果明天我被枪击,而今天我知道了,我会觉得很悲伤。我不知道,你知道吗,我想看看接下来会发生什么。是的,多么好奇的时代。
What an interesting time. But I would mostly just feel grateful for my life. The moments that you did get. Yeah, me too. It's a pretty awesome life. I get to enjoy awesome creations of humans, of which I believe ChatGPT is one, and everything that OpenAI is doing. Sam, it's really an honor and pleasure to talk to you again. Thank you for having me. Thanks for listening to this conversation with Sam Altman. To support this podcast, please check out our sponsors in the description.
多么有趣的时光。但我大多时候只是对我的生活心怀感激。那些你得到的时刻。是的,我也是。这是非常棒的一生。我能够享受人类的精彩创造,我相信ChatGPT就是其中之一,还有OpenAI正在做的一切。Sam,能再次和你交谈真是一种荣幸和快乐。感谢你邀请我。感谢你们收听与Sam Altman的这次对话。要支持这个播客,请查看描述中的赞助商。
And now, let me leave you with some words from Arthur C. Clarke: "It may be that our role on this planet is not to worship God, but to create him." Thank you for listening, and hope to see you next time.
现在,让我以亚瑟·C·克拉克的一段话作结:"也许我们在这个星球上的角色不是崇拜上帝,而是创造他。"感谢你们的收听,希望下次再见。