Given the rapidly advancing capabilities of Artificial Intelligence (AI), I believe aligning AI with human values is a priority for navigating the coming century. I have therefore dedicated my career to reducing potential risks from the development of Artificial General Intelligence (AGI).
I currently lead AI Safety field-building strategy at Cambridge Effective Altruism CIC. I aim to direct individuals with the skills needed to address problems in AI Alignment and Governance towards the opportunities where they can make a difference, and to create new opportunities where they don’t already exist.
As part of that effort, I run the global AGI Safety Fundamentals programme. The programme provides discussion groups, facilitated by experts in alignment, to help budding Alignment and Governance researchers develop their thinking on key ideas in the field. You can read more about the impact of that programme here.
Previously, I collaborated with researchers at the Future of Humanity Institute to develop and empirically investigate provably safe Machine Learning algorithms; we developed Bayesian Reinforcement Learning methods based on pessimism. Before that, I spent two years in industry as a Machine Learning engineer and technical product manager.
I like to have fun too – you can catch me bouldering and playing the guitar!
GitHub – Reinforcement Learning implementations, e.g. OpenAI Spinning Up, my previous research on pessimistic agents, and more.
Resources I’ve made or contributed to that may be useful to others, including information on AGI Safety and a time-tracking template.
Multiphysics analysis with CAD-based parametric breeding blanket creation for rapid design iteration. Completed during an Undergraduate Summer Research Placement at the Culham Centre for Fusion Energy, 2017.