No Priors Ep. 15 | With Kelvin Guu, Staff Research Scientist, Google Brain
Published 2023-05-04 10:00:22
Summary
How do you personalize AI models? A popular school of thought in AI is to simply dump all the data you need into pre-training or fine-tuning. But that approach is costly and less controllable than using an AI model as a reasoning engine over an external data source, which is why the intersection of retrieval and LLMs has become an increasingly interesting topic.
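For illustration, here is a minimal, self-contained sketch of the retrieval-augmented pattern discussed in the episode: knowledge stays in an external corpus that the model reasons over, rather than being folded into pre-training or fine-tuning. The word-overlap retriever and the `call_llm` stub are illustrative assumptions for this sketch, not REALM's or Google's actual implementation.

```python
def retrieve_top_k(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Score documents by word overlap with the question (a toy stand-in for a learned retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(corpus,
                    key=lambda doc: len(q_words & set(doc.lower().split())),
                    reverse=True)
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; here it just echoes the grounded prompt."""
    return f"[LLM would answer using]:\n{prompt}"

def answer(question: str, corpus: list[str]) -> str:
    # Ground the model in retrieved text instead of retraining it on new data.
    context = "\n".join(retrieve_top_k(question, corpus))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return call_llm(prompt)

if __name__ == "__main__":
    docs = [
        "REALM augments language model pre-training with a latent knowledge retriever.",
        "FLAN fine-tunes language models on instructions phrased in natural language.",
    ]
    print(answer("What does REALM add to pre-training?", docs))
```

One practical point this sketch highlights: editing `docs` changes the system's behavior immediately, with no retraining cost, which is the controllability argument made above.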
Kelvin Guu, Staff Research Scientist at Google, wants to make machine learning cheaper, easier, and more accessible. Kelvin joins Sarah and Elad this week to talk about the newer methods his team is working on in machine learning, training, and language understanding. He did some of the earliest work on retrieval-augmented language models (REALM) and on training LLMs to follow instructions (FLAN).
00:00 - Introduction
01:44 - Kelvin’s background in math, statistics and natural language processing at Stanford
03:24 - The questions driving the REALM Paper
07:08 - Frameworks around retrieval augmentation & expert models
10:16 - Why modularity is important
11:36 - FLAN Paper and instruction following
13:28 - Updating model weights in real time and other continuous learning methods
15:08 - Simfluence Paper & explainability with large language models
18:11 - ROME paper, "model surgery," and exciting research areas
19:51 - Personal opinions and thoughts on AI agents & research
24:59 - How the human brain compares to AGI regarding memory and emotions
28:08 - How models become more contextually available
30:45 - Accessibility of models
33:47 - Advice to future researchers