Is (Artificial) Intelligence Unbounded?

Transcribed and lightly edited for clarity; any errors are mine. Original conversation between Dwarkesh Patel and Paul Christiano is hosted here.

Dwarkesh Patel: Is there an upper bound on intelligence? Not in the near term, but in terms of superintelligence, at some point, how far do you think it can go?

Paul Christiano (answering speculatively): It seems like it's going to depend a little bit on what is meant by intelligence. It reads as a question similar to “is there an upper bound on strength?” or something. There are a lot of forms... I think there are arbitrarily smart input-output functions; but if you hold fixed the amount of compute, there is some smartest one. If you ask, “what's the best set of 10^40 operations?”, there are only finitely many of them. So there does exist some “best one” for any particular notion of best that you have in mind. For the unbounded question, where you're allowed to use arbitrary description complexity and compute, then probably not. I mean, there is some “optimal conduct.” If I have some goal in mind, and I ask what action best achieves it, if you imagine a little box [which could intelligently carry out your instructions] embedded in the universe... I think there is an optimal input-output behavior. So I guess in that sense, I think there is an upper bound to intelligence. But it's not attainable in the physical universe, because it's definitely exponentially slow.
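The finiteness argument above can be made concrete with a toy sketch (my illustration, not from the conversation): if you fix the input size, the set of possible input-output behaviors is finite, so for any scoring rule an optimum exists — finding it by brute enumeration is just astronomically expensive at realistic sizes. The XOR goal below is an arbitrary stand-in for "the notion of best you have in mind."

```python
from itertools import product

k = 2  # input size in bits (tiny, purely illustrative)
inputs = list(product([0, 1], repeat=k))

def score(table):
    # Hypothetical goal: reward agreement with XOR of the input bits.
    return sum(out == (a ^ b) for (a, b), out in zip(inputs, table))

# Enumerate every possible input-output behavior (truth table)
# and take the best one; with k-bit inputs there are 2**(2**k) of them.
behaviors = list(product([0, 1], repeat=len(inputs)))
best = max(behaviors, key=score)

assert len(behaviors) == 2 ** (2 ** k)  # 16 behaviors for k = 2
assert score(best) == len(inputs)       # the optimum matches XOR on every input
```

At k = 2 there are only 16 behaviors; at k = 6 there are already 2^64, which is the sense in which "there exists a best one" gives no practical way to reach it.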

Dwarkesh: Because of [for example, heat dissipation], it might just be physically impossible to instantiate something smarter [than this optimal “little box”].

Paul: For example, if you imagine what the best thing is, it would almost certainly involve, say, simulating every possible universe it might be in, modulo moral constraints, which I don't know if you want to include. So that would be very, very slow. It would involve simulating... I don't know exactly how slow, but, like, double-exponentially slow.
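A back-of-envelope count (my gloss, not from the conversation) suggests where a double exponential could come from: if a candidate universe is modeled as an n-bit state with a deterministic update rule, there are 2^n states and (2^n)^(2^n) candidate rules, so "simulate every possible universe" scales double-exponentially in n under this toy model.

```python
def num_states(n):
    # An n-bit state has 2**n possible configurations.
    return 2 ** n

def num_rules(n):
    # A deterministic update rule maps each state to some state,
    # so there are (2**n) ** (2**n) candidate rules -- double-exponential in n.
    s = num_states(n)
    return s ** s

for n in range(1, 4):
    print(n, num_states(n), num_rules(n))
# n=1: 2 states, 4 rules; n=2: 4 states, 256 rules; n=3: 8 states, 16,777,216 rules
```

Even at n = 3 the hypothesis space is already in the tens of millions; nothing resembling a physical universe fits in this kind of enumeration.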