Random Resources

AGI Safety resources

This is a compiled list of the best resources and organisations I’m aware of working on Artificial Intelligence Safety, with the specific goal of reducing existential risk to humanity. It is likely non-exhaustive, so please contact me with suggestions if you feel something is missing.

I helped make it as a resource for participants in the AGI Safety Fundamentals programme, which I help run with Effective Altruism Cambridge. This is a global, interactive programme where participants engage with experts on the arguments motivating the field of AI Safety.

Timesheet template – Google Sheets
A simple but effective time-tracking spreadsheet that I built. I’d recommend it to anyone with a self-driven schedule who feels any stress about whether they’re spending time on the right stuff.

I filled it out at the end of each day to see how much time I was spending on each of the two major projects I was working on. It helped me realise I was neglecting one, and ultimately led me to decide to focus on the other full-time.

I now have a more proactive schedule: I plan my weeks in advance and condense them into daily plans that I try to stick to.
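
If you’d rather script the same idea than use a spreadsheet, here is a minimal Python sketch of end-of-day time logging. It isn’t part of the actual template; the timelog.csv filename and project names are just illustrative.

```python
import csv
from collections import defaultdict
from datetime import date
from pathlib import Path

LOG = Path("timelog.csv")  # illustrative filename, not part of the template

def log_day(hours_by_project: dict[str, float]) -> None:
    """Append today's hours per project to the log."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "project", "hours"])
        for project, hours in hours_by_project.items():
            writer.writerow([date.today().isoformat(), project, hours])

def totals() -> dict[str, float]:
    """Sum the hours logged against each project so far."""
    sums: dict[str, float] = defaultdict(float)
    with LOG.open(newline="") as f:
        for row in csv.DictReader(f):
            sums[row["project"]] += float(row["hours"])
    return dict(sums)

# At the end of the day, record where the time went and review the balance:
log_day({"project A": 5.5, "project B": 1.0})
print(totals())  # e.g. {'project A': 5.5, 'project B': 1.0}
```

Reviewing the running totals each day surfaces the same signal the spreadsheet gives: whether one project is quietly being neglected.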

Newsletter recommendations

  • Rohin Shah’s Alignment Newsletter – keeps you up to date with the latest blog posts and papers in AI alignment. Technically focused, but sometimes covers governance developments.
  • Jack Clark’s weekly Import AI newsletter – good for keeping up with the latest developments in AI. Slightly geared towards people thinking about AI governance, but covers technical advances of general interest.
  • Chartr – a weekly email telling three stories from current affairs through data. I find it builds a quick intuition for the scale of phenomena, and I remember these much better than the words in a news article!

Getting started in Python and Machine learning

  • A guide I wrote with my advice for learning Python and machine learning (both the theory and the practice in Python) for the first time – see the short sketch after this list for a taste of the practical side.
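
To give a flavour of what the practical side looks like, the snippet below trains and evaluates a small classifier with scikit-learn. It’s a minimal sketch of my own choosing, not an excerpt from the guide.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a small built-in dataset and hold out a quarter of it for testing.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Fit a simple baseline model and check how well it generalises.
model = LogisticRegression(max_iter=1000)  # max_iter raised to ensure convergence
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```

The point of an exercise like this is less the model itself and more the workflow: load data, split it, fit, and evaluate on data the model hasn’t seen.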

Talks I have given

  • ‘Intro to AI safety (why worry?)’ – intended for a beginner audience, and doesn’t assume much ML knowledge (though it helps). It tries to clarify exactly what we are concerned about in AI Safety without too much hand-waving, and to motivate why these are not easy problems to solve. It’s long, but I think being thorough is more important.
  • ‘Overview of where and how you can do alignment and governance work’ – makes the case that there are actually careers you can pursue to do something about the problem. Having introduced it concretely, it’s nice to show people that there are tangible things they can aim for to have an impact. This talk is more subjective, reflecting my own take on what helps.
  • If the slides would be useful to you, you are welcome to use them. If you do, I’d appreciate it if you 1) give attribution, 2) let me know that you’re going to use them, and 3) while letting me know, tell me if you have any feedback, whether you changed anything, and how your audience responded!

Other recommendations