The Future of Life

Future of Life Institute


About Us

FLI catalyzes and supports research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges. Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI's opinions or views.

Latest Episodes

FLI Podcast: Distributing the Benefits of AI via the Windfall Clause with Cullen O'Keefe

As with the agricultural and industrial revolutions before it, the intelligence revolution currently underway will unlock new degrees and kinds of abundance. Powerful forms of AI will likely generate never-before-seen levels of wealth, raising critical questions about its beneficiaries. Will this newfound wealth be used to provide for the common good, or will it become increasingly concentrated in the hands of the few who wield AI technologies? Cullen O'Keefe joins us on this episode of the FLI Podcast for a conversation about the Windfall Clause, a mechanism that attempts to ensure that the abundance and wealth created by transformative AI benefits humanity globally.

Topics discussed in this episode include:
-What the Windfall Clause is and how it might function
-The need for such a mechanism given AGI-generated economic windfall
-Problems the Windfall Clause would help to remedy
-The mechanism for distributing windfall profit and the function for defining such profit
-The legal permissibility of the Windfall Clause
-Objections and alternatives to the Windfall Clause

You can find the page for this podcast here: https://futureoflife.org/2020/02/28/distributing-the-benefits-of-ai-via-the-windfall-clause-with-cullen-okeefe/

Timestamps:
0:00 Intro
2:13 What is the Windfall Clause?
4:51 Why do we need a Windfall Clause?
6:01 When we might reach windfall profit and what that profit looks like
8:01 Motivations for the Windfall Clause and its ability to help with job loss
11:51 How the Windfall Clause improves allocation of economic windfall
16:22 The Windfall Clause assisting in a smooth transition to advanced AI systems
18:45 The Windfall Clause as assisting with general norm setting
20:26 The Windfall Clause as serving AI firms by generating goodwill, improving employee relations, and reducing political risk
23:02 The mechanism for distributing windfall profit and desiderata for guiding its formation
25:03 The windfall function and desiderata for guiding its formation
26:56 How the Windfall Clause differs from a new taxation scheme
30:20 Developing the mechanism for distributing the windfall
32:56 The legal permissibility of the Windfall Clause in the United States
40:57 The legal permissibility of the Windfall Clause in China and the Cayman Islands
43:28 Historical precedents for the Windfall Clause
44:45 Objections to the Windfall Clause
57:54 Alternatives to the Windfall Clause
1:02:51 Final thoughts

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

64 MIN · 2 h ago

AIAP: On the Long-term Importance of Current AI Policy with Nicolas Moës and Jared Brown

From Max Tegmark's Life 3.0 to Stuart Russell's Human Compatible and Nick Bostrom's Superintelligence, much has been written and said about the long-term risks of powerful AI systems. When considering concrete actions one can take to help mitigate these risks, governance- and policy-related solutions become an attractive area of consideration. But just what can anyone do in the present-day policy sphere to help ensure that powerful AI systems remain beneficial and aligned with human values? Do today's AI policies matter at all for AGI risk? Jared Brown and Nicolas Moës join us on today's podcast to explore these questions and the importance of AGI-risk-sensitive persons' involvement in present-day AI policy discourse.

Topics discussed in this episode include:
-The importance of current AI policy work for long-term AI risk
-Where we currently stand in the process of forming AI policy
-Why persons worried about existential risk should care about present-day AI policy
-AI and the global community
-The rationality and irrationality around AI race narratives

You can find the page for this podcast here: https://futureoflife.org/2020/02/17/on-the-long-term-importance-of-current-ai-policy-with-nicolas-moes-and-jared-brown/

Timestamps:
0:00 Intro
4:58 Why it's important to work on AI policy
12:08 Our historical position in the process of AI policy
21:54 For long-termists and those concerned about AGI risk, how is AI policy today important and relevant?
33:46 AI policy and shorter-term global catastrophic and existential risks
38:18 The Brussels and Sacramento effects
41:23 Why is racing on AI technology bad?
48:45 The rationality of racing to AGI
58:22 Where is AI policy currently?

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

71 MIN · 1 w ago

FLI Podcast: Identity, Information & the Nature of Reality with Anthony Aguirre

Our perceptions of reality are based on the physics of interactions ranging from millimeters to miles in scale. But when it comes to the very small and the very massive, our intuitions often fail us. Given the extent to which modern physics challenges our understanding of the world around us, how wrong could we be about the fundamental nature of reality? And given our failure to anticipate the counterintuitive nature of the universe, how accurate are our intuitions about metaphysical and personal identity? Just how seriously should we take our everyday experiences of the world? Anthony Aguirre, cosmologist and FLI co-founder, returns for a second episode to offer his perspective on these complex questions. This conversation explores the view that reality fundamentally consists of information and examines its implications for our understandings of existence and identity.

Topics discussed in this episode include:
- Views on the nature of reality
- Quantum mechanics and the implication...

105 MIN · FEB 1

AIAP: Identity and the AI Revolution with David Pearce and Andrés Gómez Emilsson

In the 1984 book Reasons and Persons, philosopher Derek Parfit asks the reader to consider the following scenario: You step into a teleportation machine that scans your complete atomic structure, annihilates you, and then relays that atomic information to Mars at the speed of light. There, a similar machine recreates your exact atomic structure and composition using locally available resources. Have you just traveled, Parfit asks, or have you committed suicide? Would you step into this machine? Is the person who emerges on Mars really you? Questions like these –– those that explore the nature of personal identity and challenge our commonly held intuitions about it –– are becoming increasingly important in the face of 21st century technology. Emerging technologies empowered by artificial intelligence will increasingly give us the power to change what it means to be human. AI-enabled bio-engineering will allow for human-species divergence via upgrades, and as we arrive at AGI and ...

123 MIN · JAN 16

On Consciousness, Morality, Effective Altruism & Myth with Yuval Noah Harari & Max Tegmark

Neither Yuval Noah Harari nor Max Tegmark needs much in the way of introduction. Both are avant-garde thinkers at the forefront of 21st century discourse around science, technology, society and humanity's future. This conversation represents a rare opportunity for two intellectual leaders to apply their combined expertise — in physics, artificial intelligence, history, philosophy and anthropology — to some of the most profound issues of our time. Max and Yuval bring their own macroscopic perspectives to this discussion of both cosmological and human history, exploring questions of consciousness, ethics, effective altruism, artificial intelligence, human extinction, emerging technologies and the role of myths and stories in fostering societal collaboration and meaning. We hope that you'll join the Future of Life Institute Podcast for our final conversation of 2019, as we look toward the future and the possibilities it holds for all of us.

Topics discussed include:
-Max and Yuval's ...

60 MIN · JAN 1

FLI Podcast: Existential Hope in 2020 and Beyond with the FLI Team

As 2019 is coming to an end and the opportunities of 2020 begin to emerge, it's a great time to reflect on the past year and our reasons for hope in the year to come. We spend much of our time on this podcast discussing risks that could lead to the extinction, or the permanent and drastic curtailing, of the potential of Earth-originating intelligent life. While this is important and useful, much has been done at FLI and in the broader world to address these issues in service of the common good. It can be skillful to reflect on this progress to see how far we've come, to develop hope for the future, and to map out our path ahead. This podcast is a special end-of-the-year episode focused on meeting and introducing the FLI team, discussing what we've accomplished and are working on, and sharing our feelings and reasons for existential hope going into 2020 and beyond.

Topics discussed include:
-Introductions to the FLI team and our work
-Motivations for our projects and existentia...

99 MIN · 2019 DEC 28

AIAP: On DeepMind, AI Safety, and Recursive Reward Modeling with Jan Leike

Jan Leike is a senior research scientist who leads the agent alignment team at DeepMind. His team is one of three within DeepMind's technical AGI group; each focuses on a different aspect of ensuring advanced AI systems are aligned and beneficial. Jan's journey in the field of AI has taken him from a PhD on a theoretical reinforcement learning agent called AIXI to empirical AI safety research focused on recursive reward modeling. This conversation explores his movement from theoretical to empirical AI safety research — why empirical safety research is important and how it has led him to his work on recursive reward modeling. We also discuss research directions he's optimistic will lead to safely scalable systems, more facets of his own thinking, and other work being done at DeepMind.

Topics discussed in this episode include:
-Theoretical and empirical AI safety research
-Jan's and DeepMind's approaches to AI safety
-Jan's work and thoughts on recursive reward modeling
-AI safety...

58 MIN · 2019 DEC 17

FLI Podcast: The Psychology of Existential Risk and Effective Altruism with Stefan Schubert

We could all be more altruistic and effective in our service of others, but what exactly is it that's stopping us? What are the biases and cognitive failures that prevent us from properly acting in service of existential risks, statistically large numbers of people, and long-term future considerations? How can we become more effective altruists? Stefan Schubert, a researcher at the University of Oxford's Social Behaviour and Ethics Lab, explores questions like these at the intersection of moral psychology and philosophy. This conversation explores the steps that researchers like Stefan are taking to better understand psychology in service of doing the most good we can.

Topics discussed include:
-The psychology of existential risk, longtermism, effective altruism, and speciesism
-Stefan's study "The Psychology of Existential Risks: Moral Judgements about Human Extinction"
-Various works and studies Stefan Schubert has co-authored in these spaces
-How this enables us to be more altruistic...

58 MIN · 2019 DEC 3

Not Cool Epilogue: A Climate Conversation

In this brief epilogue, Ariel reflects on what she's learned during the making of Not Cool, and the actions she'll be taking going forward.

4 MIN · 2019 NOV 28

Not Cool Ep 26: Naomi Oreskes on trusting climate science

It's the Not Cool series finale, and by now we've heard from climate scientists, meteorologists, physicists, psychologists, epidemiologists and ecologists. We've gotten expert opinions on everything from mitigation and adaptation to security, policy and finance. Today, we're tackling one final question: why should we trust them? Ariel is joined by Naomi Oreskes, Harvard professor and author of seven books, including the newly released "Why Trust Science?" Naomi lays out her case for why we should listen to experts, how we can identify the best experts in a field, and why we should be open to the idea of more than one type of "scientific method." She also discusses industry-funded science, scientists' misconceptions about the public, and the role of the media in propagating bad research.

Topics discussed include:
-Why Trust Science?
-5 tenets of reliable science
-How to decide which experts to trust
-Why non-scientists can't debate science
-Industry disinformation
-How to comm...

51 MIN · 2019 NOV 27
