People are talking about the risks of AI and the importance of AI alignment. But what does this mean in practice? And what can be done about it? This talk attempts to inject some formal rigour into both of those questions. If there is time, we'll also look at why answers in this area are so fraught and varied, and why expertise is of limited use.
Stuart Armstrong's research at the Future of Humanity Institute centres on formal decision theory, general existential risk, the risks and possibilities of Artificial Intelligence (AI), assessing expertise and predictions, and anthropic (self-locating) probability.
He has been working on several methods for analysing the likelihood of certain outcomes and for making decisions under the resulting uncertainty, as well as on specific measures for reducing AI risk. His collaboration with DeepMind on interruptibility has been mentioned in over 100 media articles.
His Oxford D.Phil. was in parabolic geometry, calculating the holonomy of projective and conformal Cartan geometries. He later transitioned into computational biochemistry, designing several new ways to rapidly compare putative bioactive molecules for the virtual screening of medicinal compounds.
After the event there will be a short presentation by the Cambridge Critical Thinking Society, followed by an informal discussion group on the evening's talk, open to anyone who wishes to continue the debate.