NOTE: April 2023

As of April 2023, there is a lot of new interest in the field of AI Alignment. However, this repo has been unmaintained since I gave up hope of us solving alignment in time as a species, almost three years ago.

AI Safety Support is perhaps one of the definitive resources right now.

I will, however, accept PRs on this repo.

Awesome Artificial Intelligence Alignment


Welcome to Awesome AI Alignment - a curated list of awesome resources for getting into and staying in touch with research in AI Alignment.

AI Alignment is also known as AI Safety, Beneficial AI, Human-aligned AI, Friendly AI, etc.

If you are a newcomer to this field, start with the Crash Course below.

Pull requests are welcome.

Table of Contents

A Crash Course for a Popular Audience

Watch These Two TED Talks

Read These Blogposts by Tim Urban

Read More about Real Research on AI Safety

Books

Courses

Research Agendas

Literature Reviews

Technical Papers

Agent Foundations

Machine Learning

Frameworks / Environments

Talks

Popular

Technical

Blogposts

Communities / Forums

Institutes / Research Groups

Technical Research

Policy and Strategy Research

Podcasts

Episodes in Popular Podcasts

Dedicated Podcasts

  • AI Alignment Podcast by Lucas Perry [Future of Life Institute]
  • 80000hours Podcast by Rob Wiblin

Events

Newsletters

Other Lists Like This
