I believe there is a reasonable chance that we develop Artificial General Intelligence (AGI) this century, and that preparing for it is a priority for navigating the next 100 years safely. I work on AI safety field building, helping people work on the alignment and governance of advanced AI systems ahead of time.
I am a co-founder of a non-profit called BlueDot Impact. It aims to direct people with the skills needed to address pressing global problems towards opportunities where they can make a difference, including AGI alignment and governance, alternative proteins, and more.
My main project with that organisation is running the global AGI Safety Fundamentals Programme. The programme provides discussion groups, facilitated by experts in alignment, that help budding alignment and governance researchers develop their thinking on ideas in the field.
Previously, I collaborated with researchers at the Future of Humanity Institute to develop and empirically investigate provably safe machine learning algorithms; we developed Bayesian reinforcement learning methods using pessimism. Before that, I spent two years in industry as a machine learning engineer and technical product manager.
I like to have fun too – you can catch me bouldering and playing the guitar!
Get in touch
If you don’t have my email, you can get in touch here.