
The Future of Life

Future of Life Institute

54 Followers · 242 Plays

Details

About Us

FLI catalyzes and supports research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges. Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

Latest Episodes

Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI

It's well-established in the AI alignment literature what happens when an AI system learns or is given an objective that doesn't fully capture what we want. Human preferences and values are inevitably left out, and the AI, likely being a powerful optimizer, will take advantage of the dimensions of freedom afforded by the misspecified objective and set them to extreme values. This may allow for better optimization on the goals in the objective function, but can have catastrophic consequences for the human preferences and values the system fails to consider. Is it possible for misalignment to also occur between the model being trained and the objective function used for training? The answer looks like yes. Evan Hubinger from the Machine Intelligence Research Institute joins us on this episode of the AI Alignment Podcast to discuss how to ensure alignment between a model being trained and the objective function used to train it, as well as to evaluate three proposals for building safe advanced AI.

Topics discussed in this episode include:
- Inner and outer alignment
- How and why inner alignment can fail
- Training competitiveness and performance competitiveness
- Evaluating imitative amplification, AI safety via debate, and microscope AI

You can find the page for this podcast here: https://futureoflife.org/2020/07/01/evan-hubinger-on-inner-alignment-outer-alignment-and-proposals-for-building-safe-advanced-ai/

Timestamps:
0:00 Intro
2:07 How Evan got into AI alignment research
4:42 What is AI alignment?
7:30 How Evan approaches AI alignment
13:05 What are inner alignment and outer alignment?
24:23 Gradient descent
36:30 Testing for inner alignment
38:38 Wrapping up on outer alignment
44:24 Why is inner alignment a priority?
45:30 How inner alignment fails
01:11:12 Training competitiveness and performance competitiveness
01:16:17 Evaluating proposals for building safe and advanced AI via inner and outer alignment, as well as training and performance competitiveness
01:17:30 Imitative amplification
01:23:00 AI safety via debate
01:26:32 Microscope AI
01:30:19 AGI timelines and humanity's prospects for succeeding in AI alignment
01:34:45 Where to follow Evan and find more of his work

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
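
As a rough illustration of the specification-gaming dynamic described above (this sketch is not from the episode; the objectives, numbers, and variable names are invented for illustration), a toy optimizer given a proxy objective that omits a term we care about will push the unpenalized dimension to extreme values, improving the proxy while the true objective collapses:

```python
# Hypothetical toy sketch of a misspecified objective, not code from the episode.
# The true objective rewards x but wants y kept near zero; the proxy objective
# actually optimized omits the y penalty, and y correlates weakly with measured
# reward. Gradient ascent on the proxy drives y to extreme values.

def true_objective(x: float, y: float) -> float:
    # What we actually want: more x, with y staying near zero.
    return x - y ** 2

def proxy_objective(x: float, y: float) -> float:
    # The misspecified objective handed to the optimizer: the y penalty is missing.
    return x + 0.1 * y

# Plain gradient ascent on the proxy; its gradients are analytic (dP/dx = 1, dP/dy = 0.1).
x, y, lr = 0.0, 0.0, 0.1
for _ in range(10_000):
    x += lr * 1.0
    y += lr * 0.1

print(f"proxy objective: {proxy_objective(x, y):,.1f}")  # keeps climbing
print(f"true objective:  {true_objective(x, y):,.1f}")   # collapses as y grows
```

Inner alignment, the episode's focus, asks a further question: even if the training objective is specified well, is the model that training actually finds pursuing that objective at all?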

97 MIN · JUL 2

Barker - Hedonic Recalibration (Mix)

This is a mix by Barker, a Berlin-based music producer, that was featured on our last podcast: Sam Barker and David Pearce on Art, Paradise Engineering, and Existential Hope (With Guest Mix). We hope that you'll find inspiration and well-being in this soundscape.

You can find the page for this podcast here: https://futureoflife.org/2020/06/24/sam-barker-and-david-pearce-on-art-paradise-engineering-and-existential-hope-featuring-a-guest-mix/

Tracklist:
Delta Rain Dance - 1
John Beltran - A Different Dream
Rrose - Horizon
Alexandroid - lvpt3
Datassette - Drizzle Fort
Conrad Sprenger - Opening
JakoJako - Wavetable#1
Barker & David Goldberg - #3
Barker & Baumecker - Organik (Intro)
Anthony Linell - Fractal Vision
Ametsub - Skydroppin’
Ladyfish\Mewark - Comfortable
JakoJako & Barker - [unreleased]

Where to follow Sam Barker:
Soundcloud: @voltek
Twitter: twitter.com/samvoltek
Instagram: www.instagram.com/samvoltek/
Website: www.voltek-labs.net/
Bandcamp: sambarker.bandcamp.com/

Where to follow Sam's label, Ostgut Ton:
Soundcloud: @ostgutton-official
Facebook: www.facebook.com/Ostgut.Ton.OFFICIAL/
Twitter: twitter.com/ostgutton
Instagram: www.instagram.com/ostgut_ton/
Bandcamp: ostgut.bandcamp.com/

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

43 MIN · JUN 27

Sam Barker and David Pearce on Art, Paradise Engineering, and Existential Hope (With Guest Mix)

Sam Barker, a Berlin-based music producer, and David Pearce, philosopher and author of The Hedonistic Imperative, join us on a special episode of the FLI Podcast to spread some existential hope. Sam is the creator of euphoric soundscapes inspired by the writings of David Pearce, largely exemplified in his latest album, aptly named "Utility." Sam's artistry, motivated by blissful visions of the future, and David's philosophical and technological writings on the potential for the biological domestication of heaven are a natural match, fusing artistic, moral, and intellectual excellence. This podcast explores what significance Sam found in David's work, how it informed his music production, and Sam and David's optimistic visions of the future; it also features a guest mix by Sam and plenty of musical content.

Topics discussed in this episode include:
- The relationship between Sam's music and David's writing
- Existential hope
- Ideas from the Hedonistic Im...

102 MIN · JUN 25

Steven Pinker and Stuart Russell on the Foundations, Benefits, and Possible Existential Threat of AI

Over the past several centuries, the human condition has been profoundly changed by the agricultural and industrial revolutions. With the creation and continued development of AI, we stand in the midst of an ongoing intelligence revolution that may prove far more transformative than the previous two. How did we get here, and what were the intellectual foundations necessary for the creation of AI? What benefits might we realize from aligned AI systems, and what are the risks and potential pitfalls along the way? In the longer term, will superintelligent AI systems pose an existential risk to humanity? Steven Pinker, bestselling author and Professor of Psychology at Harvard, and Stuart Russell, UC Berkeley Professor of Computer Science, join us on this episode of the AI Alignment Podcast to discuss these questions and more.

Topics discussed in this episode include:
- The historical and intellectual foundations of AI
- How AI systems achieve or do not achieve intelligence in the same way as the human mind
- The rise of AI and what it signifies
- The benefits and risks of AI in both the short and long term
- Whether superintelligent AI will pose an existential risk to humanity

You can find the page for this podcast here: https://futureoflife.org/2020/06/15/steven-pinker-and-stuart-russell-on-the-foundations-benefits-and-possible-existential-risk-of-ai/

You can take a survey about the podcast here: https://www.surveymonkey.com/r/W8YLYD3

You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/

Timestamps:
0:00 Intro
4:30 The historical and intellectual foundations of AI
11:11 Moving beyond dualism
13:16 Regarding the objectives of an agent as fixed
17:20 The distinction between artificial intelligence and deep learning
22:00 How AI systems achieve or do not achieve intelligence in the same way as the human mind
49:46 What changes to human society does the rise of AI signal?
54:57 What are the benefits and risks of AI?
01:09:38 Do superintelligent AI systems pose an existential threat to humanity?
01:51:30 Where to find and follow Steve and Stuart

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

112 MIN · JUN 16

Sam Harris on Global Priorities, Existential Risk, and What Matters Most

Human civilization increasingly has the potential both to improve the lives of everyone and to completely destroy everything. The proliferation of emerging technologies calls our attention to this never-before-seen power — and the need to cultivate the wisdom with which to steer it towards beneficial outcomes. If we're serious both as individuals and as a species about improving the world, it's crucial that we converge around the reality of our situation and what matters most. What are the most important problems in the world today and why? In this episode of the Future of Life Institute Podcast, Sam Harris joins us to discuss some of these global priorities, the ethics surrounding them, and what we can do to address them.

Topics discussed in this episode include:
- The problem of communication
- Global priorities
- Existential risk
- Animal suffering in both wild animals and factory farmed animals
- Global poverty
- Artificial general intelligence risk and AI alignment
- Ethics
- Sam’s b...

92 MIN · JUN 2

FLI Podcast: On the Future of Computation, Synthetic Biology, and Life with George Church

Progress in synthetic biology and genetic engineering promises to bring advancements in human health sciences by curing disease, augmenting human capabilities, and even reversing aging. At the same time, such technology could be used to unleash novel diseases and biological agents which could pose global catastrophic and existential risks to life on Earth. George Church, a titan of synthetic biology, joins us on this episode of the FLI Podcast to discuss the benefits and risks of our growing knowledge of synthetic biology, its role in the future of life, and what we can do to make sure it remains beneficial. Will our wisdom keep pace with our expanding capabilities?

Topics discussed in this episode include:
- Existential risk
- Computational substrates and AGI
- Genetics and aging
- Risks of synthetic biology
- Obstacles to space colonization
- Great Filters, consciousness, and eliminating suffering

You can find the page for this podcast here: https://futureoflife.org/2020/05/15/on-the-fut...

73 MIN · MAY 16

FLI Podcast: On Superforecasting with Robert de Neufville

Essential to our assessment of risk and ability to plan for the future is our understanding of the probability of certain events occurring. If we can estimate the likelihood of risks, then we can evaluate their relative importance and apply our risk mitigation resources effectively. Predicting the future is, obviously, far from easy — and yet a community of "superforecasters" are attempting to do just that. Not only are they trying, but these superforecasters are also reliably outperforming subject matter experts at making predictions in their own fields. Robert de Neufville joins us on this episode of the FLI Podcast to explain what superforecasting is, how it's done, and the ways it can help us with crucial decision making.

Topics discussed in this episode include:
- What superforecasting is and what the community looks like
- How superforecasting is done and its potential use in decision making
- The challenges of making predictions
- Predictions about and lessons from COVID-19

You ...
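
As a rough sketch of the prioritization logic above (the risk names, probabilities, and harm figures are invented placeholders, not estimates from the episode), an expected-harm ranking multiplies each probability estimate by the harm if the event occurs and directs mitigation resources to the largest products first:

```python
# Hypothetical sketch: ranking risks by expected harm (probability x magnitude).
# All names and numbers below are illustrative placeholders, not forecasts.

risks = {
    # name: (probability estimate over some period, harm if it occurs, arbitrary units)
    "risk A": (0.01, 1_000_000),
    "risk B": (0.05, 50_000),
    "risk C": (0.20, 5_000),
}

# Expected harm = probability * magnitude; address the largest expected harms first.
ranked = sorted(risks.items(), key=lambda item: item[1][0] * item[1][1], reverse=True)

for name, (p, harm) in ranked:
    print(f"{name}: expected harm = {p * harm:,.0f}")
```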

80 MIN · MAY 1

AIAP: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah

Just a year ago we released a two-part episode titled An Overview of Technical AI Alignment with Rohin Shah. That conversation provided details on the views of central AI alignment research organizations and many of the ongoing research efforts for designing safe and aligned systems. Much has happened in the past twelve months, so we've invited Rohin — along with fellow researcher Buck Shlegeris — back for a follow-up conversation. Today's episode focuses especially on the state of current research efforts for beneficial AI, as well as Buck's and Rohin's thoughts about the varying approaches and the difficulties we still face. This podcast thus serves as a non-exhaustive overview of how the field of AI alignment has updated and how thinking is progressing.

Topics discussed in this episode include:
- Rohin's and Buck's optimism and pessimism about different approaches to aligned AI
- Traditional arguments for AI as an x-risk
- Modeling agents as expected utility maximizers
- Ambitious ...

141 MIN · APR 16

FLI Podcast: Lessons from COVID-19 with Emilia Javorsky and Anthony Aguirre

The global spread of COVID-19 has put tremendous stress on humanity’s social, political, and economic systems. The breakdowns triggered by this sudden stress indicate areas where national and global systems are fragile, and where preventative and preparedness measures may be insufficient. The COVID-19 pandemic thus serves as an opportunity for reflecting on the strengths and weaknesses of human civilization and what we can do to help make humanity more resilient. The Future of Life Institute's Emilia Javorsky and Anthony Aguirre join us on this special episode of the FLI Podcast to explore the lessons that might be learned from COVID-19 and the perspective this gives us for global catastrophic and existential risk.

Topics discussed in this episode include:
- The importance of taking expected value calculations seriously
- The need for making accurate predictions
- The difficulty of taking probabilities seriously
- Human psychological bias around estimating and acting on risk
- The massi...

86 MIN · APR 9

FLI Podcast: The Precipice: Existential Risk and the Future of Humanity with Toby Ord

Toby Ord’s "The Precipice: Existential Risk and the Future of Humanity" has emerged as a new cornerstone text in the field of existential risk. The book presents the foundations and recent developments of this budding field from an accessible vantage point, providing an overview suitable for newcomers. For those already familiar with existential risk, Toby brings new historical and academic context to the problem, along with central arguments for why existential risk matters, novel quantitative analysis and risk estimations, deep dives into the risks themselves, and tangible steps for mitigation. "The Precipice" thus serves as both a tremendous introduction to the topic and a rich source of further learning for existential risk veterans. Toby joins us on this episode of the Future of Life Institute Podcast to discuss this definitive work on what may be the most important topic of our time.

Topics discussed in this episode include:
- An overview of Toby's new book
- What it means to ...

70 MIN · APR 1
