Physical Limits in Designing Super Intelligence (Infinitely Fast Computer)

By Alexander Chislenko

I once had a thoroughly impractical idea for pushing maximum computer speed beyond the evident physical limitations, and I hope it is appropriate to share it in this discussion. The 'conventional' limitations are determined by:

1. The minimal possible size of computing elements;

2. The maximal density of memory that follows from it, which, with our (my) current knowledge of physics, can be put at around 10**100 elements (of Planck, not atomic, size!) per cubic meter;

3. The speed of communication (let's assume c is the limit);

4. Architecture. Let's assume it is parallel, and that 100% of useful transactions happen between physically adjacent computing elements. This gives us (we don't care about a few extra orders of magnitude here) 10**40 operations per second on each of 10**100 elements = 10**140 ops per cubic meter per second, the ultimate limit on the density of intelligence, times the volume of the Universe... (I am sure, though, that (1) this kind of intelligence would find further ways to improve itself, and (2) we cannot possibly make any relevant statement about the features of such complex objects.)

5. Connectedness of the computer. No matter how well we arrange the elements, the computer should still be an integrated system, which means that any element should be able to communicate with any other one, and that can take up to a whole light-meter of signal travel time (a few nanoseconds) for a system of just 10**100 elements of complexity... (A back-of-envelope check of these numbers is sketched right after this list.)
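
For concreteness, here is a back-of-envelope Python sketch of the numbers above, assuming Planck-sized elements and light-speed signalling between nearest neighbours; the exponents come out a few orders of magnitude above the round 10**100 and 10**40 figures, which is within the slack the text waves off.

# Rough Planck-scale computing estimates (a sketch, not a claim about real hardware).
PLANCK_LENGTH = 1.616e-35  # m
PLANCK_TIME = 5.391e-44    # s
C = 2.998e8                # m/s

# Item 2: memory density -- one element per Planck volume.
elements_per_m3 = (1.0 / PLANCK_LENGTH) ** 3          # ~2.4e104

# Item 4: each element switches once per Planck time (nearest-neighbour signalling at c).
ops_per_element_per_s = 1.0 / PLANCK_TIME             # ~1.9e43

ops_per_m3_per_s = elements_per_m3 * ops_per_element_per_s   # ~4.4e147

# Item 5: connectedness -- worst-case signal time across a 1 m computer.
crossing_time_s = 1.0 / C                              # ~3.3e-9 s

print(f"elements per m^3      : {elements_per_m3:.1e}")
print(f"ops per element per s : {ops_per_element_per_s:.1e}")
print(f"ops per m^3 per s     : {ops_per_m3_per_s:.1e}")
print(f"1 m crossing time     : {crossing_time_s:.1e} s")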

We can remove this last limitation by effectively increasing the dimensionality of physical space, which lets us squeeze larger volumes into smaller sizes (diameters) of 'computers'. To do that, we could create, around a certain point (the 'center of the computer'), a number of bottle-shaped, half-open space-time bubbles (Planckeons). If each of them has an internal volume of 1 cubic meter, and their 'necks' are small enough that, say, a million of them fit within a 1-meter vicinity of our center, then we have a computing space with a radius of 2 meters and a volume of one million cubic meters. By shrinking the necks further, putting zillions of new bubbles inside each bubble, and so on, we can pack any volume into any size, and thus build a machine with NO SPEED LIMIT. As for the problems of energy dissipation, we can simply close each sub-bubble as soon as we get the result of the computational task assigned to it, and let the spawned universes digest their own heat, keeping our master space clean and cool. (A toy model of this nesting argument is sketched below.)
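
Purely as an illustration of the scaling (not the physics), here is a toy Python model of the nesting argument: if every bubble holds a million one-cubic-meter sub-bubbles, the reachable interior volume grows geometrically with nesting depth while the outer shell stays roughly two meters across. The function and parameter names are invented for the sketch.

# Toy model of the nested-bubble packing argument (illustrative only;
# the parameters are made up for the sketch, not derived from physics).

def interior_volume(levels, bubbles_per_level=10**6, volume_per_bubble_m3=1.0):
    """Total interior volume (m^3) reachable from the master space
    when each bubble holds `bubbles_per_level` sub-bubbles, `levels` deep."""
    total = 0.0
    bubbles = 1  # start from the single master region
    for _ in range(levels):
        bubbles *= bubbles_per_level         # every bubble spawns a million sub-bubbles
        total += bubbles * volume_per_bubble_m3
    return total

# Interior capacity explodes with depth while the outer radius stays ~2 m,
# so the worst-case signal distance no longer grows with the machine's capacity.
for depth in (1, 2, 3):
    print(f"depth {depth}: {interior_volume(depth):.1e} m^3 inside a ~2 m shell")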

Well, it all just shows that we can easily solve theoretical problems with theoretical technologies. The really interesting problem in developing extremely sophisticated computers (and other systems), in my opinion, is the evident fact that humans:

1) are increasingly unable to understand the skyrocketing complexity of the systems they are developing;

2) are unwilling to incorporate superior design techniques into themselves (I did some polling on this), which is their only chance to catch up, at the expense of 'human nature' in its narrow human sense;

3) are not willing to let go of control. This situation cannot last long, and it will definitely explode long before design techniques reach any fundamental physical limits.

About bruceleeeowe
An engineering student and independent researcher. I'm researching and studying quantum physics (field theories). Also searching for alien life.

2 Responses to Physical Limits in Designing Super Intelligence (Infinitely Fast Computer)

  1. Mark Louis says:

    How about using extra dimensions fabricated through gravitational waves travelling at the speed of light? Highly compact extra dimensions stacked one over another could offer a relatively much higher volume for computing, quantum buffs. In my opinion, it's the only way to create super intelligence. Won't you write about it? 🙂
    mark.

  2. Torbjörn Larsson, OM says:

    Nope. Even if nature arranges itself into algorithmic structures, as it seems to, we can only use matter states for computation.

    These put an entropic limit on memory per volume, something like 10^88 states in the observable universe, IIRC. (Which is decreasing with the universe's expansion, by the way. The final state will have a single worn-down galaxy's worth of computational resources.)

    You gain *a lot* of resources on the computations themselves, though, if Deutsch is correct (see "The Fabric of Reality"). Then the quantum many-worlds universe does the computation, which decoheres back into your memory answer space if you do it wisely. So there is no practical limit there.

    Another way to look at it is to assume that the holographic principle of theoretical physics really is true. Then there is a two-dimensional boundary on everything, not only on matter states. A thin weave to do computation on, but an efficient one.

    Though if the purpose is intelligence, we all know, or should know, that it is nuts to just throw resources at the problem. Biological brains took millions of years to evolve. All the intelligence we know of is embodied, so a computer would just sit there.
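
As a rough order-of-magnitude companion to the figures in the comment above, here is a Python sketch of the holographic bound for the observable universe: horizon area in Planck units, divided by four, using standard round values for the Hubble constant and Planck length (assumed here, not taken from the comment). It lands near 10^122 bits, comfortably above the ~10^88 matter-state estimate quoted earlier.

# Rough holographic bound for the observable universe (order-of-magnitude sketch;
# H0 and the Planck length are standard round values assumed here).
import math

PLANCK_LENGTH = 1.616e-35          # m
C = 2.998e8                        # m/s
H0 = 70e3 / 3.086e22               # Hubble constant, s^-1 (70 km/s/Mpc)

hubble_radius = C / H0                               # ~1.3e26 m
horizon_area = 4 * math.pi * hubble_radius ** 2      # m^2
max_bits = horizon_area / (4 * PLANCK_LENGTH ** 2)   # Bekenstein-Hawking style bound

print(f"Hubble radius : {hubble_radius:.1e} m")
print(f"max bits      : {max_bits:.1e}")   # ~1e122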
