A supercomputer around the Sun. What for?

Or: a Matrioshka-brain-like system (Dyson swarm, Server Sky) as a navigational computer, other uses for it, and various matters of scale.

First of all, let's determine the scale of the system. Let's say 500 W per node: 100 W for the processor(s), 300 W for the graphics card(s) (roughly), and 100 W for everything else.

The power output of the Sun is 3.82e26 W; at 40% conversion efficiency that gives about 3.056e23 nodes in that super-cluster.
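
A quick back-of-envelope check of that node count (the 3.82e26 W, the 40% efficiency and the 500 W per node are simply the figures assumed above):

```python
# Rough node count for the swarm, using the figures above.
SOLAR_LUMINOSITY_W = 3.82e26   # total power output of the Sun
EFFICIENCY = 0.40              # assumed collector-to-electricity conversion efficiency
POWER_PER_NODE_W = 500.0       # 100 W CPU + 300 W GPU + 100 W overhead

usable_power_w = SOLAR_LUMINOSITY_W * EFFICIENCY
nodes = usable_power_w / POWER_PER_NODE_W
print(f"nodes: {nodes:.3e}")   # ~3.056e+23
```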

So far (in this and the previous century) we have produced fewer than about 1e14 processors. (That is a wild guess on my part, embedded processors included, and they are 99% of the things we usually call a processor. The Quora question "How many microprocessors does Intel sell per year?" contains an interesting link.)

The point is that even if you dedicate 1e9 factories to producing nodes for the MB and its spare parts, it makes sense to replicate all the technologies needed for production in each production node (essentially copy Earth's manufacturing capacity and scale it a bit), because the scale of the operation is far beyond anything we have done on Earth so far. This way each production node is self-sufficient.

In Earth's orbit, at a distance of 1 a.u. from the Sun, 1e9 factories the size of the Earth would cover about half (0.47) of the surface of the 1 a.u. sphere around the Sun. (I just find that an amusing illustration; they do not have to be that big, but the collectors have to be about twice as large.)
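
A sketch of where that fraction comes from, assuming each factory presents an Earth-sized disc towards the Sun and comparing the total disc area to the area of the 1 a.u. sphere:

```python
import math

AU_M = 1.496e11            # 1 astronomical unit in metres
EARTH_RADIUS_M = 6.371e6   # mean Earth radius
N_FACTORIES = 1e9

sphere_area = 4 * math.pi * AU_M**2            # area of the 1 a.u. sphere
disc_area = math.pi * EARTH_RADIUS_M**2        # cross-section of one Earth-sized factory
coverage = N_FACTORIES * disc_area / sphere_area
print(f"coverage fraction: {coverage:.2f}")    # ~0.45, roughly half (the post quotes 0.47)
```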

The problems on the way to digital heaven.

One of the main problems, in terms of feasibility, will be developing (not building) those 1e9 production units: they have to be fully automatic in production, installation, replacement of broken units, and recycling of them.

So the first fundamental cost is developing such an automatic system, partly by copying existing Earth/Moon/Mars/Venus/Mercury/whatever technologies and complementing them with automation, turning them into a fully automated system that builds our supercomputer.

The second fundamental cost is developing a fully automated system for extraction and resource supply during the initial stages of the supercluster's growth. We need matter to build the thing from, and considering the scale it is a herculean task by itself, but it is possible.

The maintenance cost consists of overseeing the supercluster, making decisions about it, and monitoring it, just in case. No other maintenance is needed; this system will work for as long as the Sun keeps burning.

In general, producing a low-heat processor is not required, because the energy flux is limited by the solar flux at that distance from the star; the maximum temperature of the system is therefore limited by how hot a black body can get at that distance from the Sun. (See the notes.)

Depending on the configuration, and on how the calculation units are arranged (as a thin layer or as a more globular structure), radiation-hardened processors may not be required, because the construction may carry a pretty thick protective layer (kilometres thick).
But radiation-hardened processors are produced today for aerospace needs, and replacing the units damaged by flares would consume only a small percentage of the manufacturing capacity the entire system has; it can be done as a rolling upgrade. For the same reason, there is no need for processors with a decade-long lifespan. As an example, some processors flown on Voyager still work today, so decade-long lifespans are perfectly achievable; but such long-lived processors are not really what you want, since replacing them with more efficient units as soon as it makes sense is a good thing.

However, radiation-hardened processors may also be produced with other, non-silicon technologies based on the nano vacuum tube principle; different variations are being tested in labs.

Navigation problem.

Let's be straight: using such a system to solve the navigation problem is overkill.

For a navigation system it is enough to place beacons every 5,000,000 km; an example of such a system, some of its consequences, and some use cases are described in this answer of mine, here.

The entire solar system out to 10 a.u. may be covered by about 100 million such nodes, each with roughly the computing power of a modern laptop; networked together, they are far more than enough to solve any navigation problem one may face in the solar system.
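
A rough check of the 100-million figure, assuming one beacon per cube with a 5,000,000 km side, out to 10 a.u. (the spacing itself is justified in the next paragraph):

```python
import math

AU_KM = 1.496e8        # 1 a.u. in km
R_KM = 10 * AU_KM      # cover the system out to 10 a.u.
SPACING_KM = 5e6       # one beacon roughly every 5,000,000 km

volume_km3 = (4 / 3) * math.pi * R_KM**3
cell_km3 = SPACING_KM**3          # one beacon "owns" a cube of this size
beacons = volume_km3 / cell_km3
print(f"beacons: {beacons:.2e}")  # ~1.1e8, on the order of 100 million
```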

The 5,000,000 km spacing is based on the LISA project, where 5 million km is the planned distance between probes for detecting gravitational waves. I do not know to what precision they have to keep that distance, but I guess that if it is good enough for detecting gravitational waves, it is good enough for navigation purposes. (See the note on LISA precision.)

Notes

The mass of the computer system would be on the order of an Earth-like planet's mass (roughly half an Earth mass) if the average is 10 kg per calculation node.
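
The arithmetic behind that note, using the node count from the power budget above and the assumed 10 kg per node:

```python
NODES = 3.056e23          # node count from the power budget above
MASS_PER_NODE_KG = 10.0   # assumed average, structure and cooling included
EARTH_MASS_KG = 5.97e24

total_mass_kg = NODES * MASS_PER_NODE_KG
print(f"{total_mass_kg:.2e} kg = {total_mass_kg / EARTH_MASS_KG:.2f} Earth masses")
# ~3.1e24 kg, about half an Earth mass
```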

The distance from the Sun is about 1 a.u. because that is roughly the temperature at which current systems can still work, even without active cooling. The black-body temperature at that distance is roughly 120-130 °C.
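
That temperature can be checked with the Stefan-Boltzmann law; a flat black absorber at 1 a.u., radiating from its sunlit side, settles near that value (the 1361 W/m² solar constant is the standard figure, not something from the post):

```python
SOLAR_CONSTANT_W_M2 = 1361.0   # solar flux at 1 a.u.
SIGMA = 5.670e-8               # Stefan-Boltzmann constant, W m^-2 K^-4

# Equilibrium: absorbed flux = sigma * T^4 (black body, radiating from the sunlit side)
t_kelvin = (SOLAR_CONSTANT_W_M2 / SIGMA) ** 0.25
print(f"{t_kelvin:.0f} K = {t_kelvin - 273.15:.0f} °C")   # ~394 K, ~121 °C
```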

Assuming about 1 cubic metre of volume per calculation node (on average) and 1e9 such clusters, one cluster is roughly a sphere about 90 km in diameter; so the biggest things there (in size) are the energy collector and the cooling system, with tiny specks of actual processing and production units attached to them. (Or the calculation units form a thin layer on the collection system, about 0.01 m thick.)
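
Where the ~90 km figure comes from, under the same assumptions (1 m³ per node on average, 1e9 clusters):

```python
import math

NODES = 3.056e23
VOLUME_PER_NODE_M3 = 1.0   # assumed average volume per calculation node
N_CLUSTERS = 1e9

cluster_volume_m3 = NODES * VOLUME_PER_NODE_M3 / N_CLUSTERS
radius_m = (3 * cluster_volume_m3 / (4 * math.pi)) ** (1 / 3)
print(f"cluster diameter: {2 * radius_m / 1000:.0f} km")   # ~84 km, call it ~90 km
```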

  • An example of such a system is given in this answer by Thucydides; it is called "Server Sky". That question itself is very similar to this one.

The speed and time needed to build the system depend mostly on the energy efficiency of the processes used; that is the only limiting factor of the project in terms of the actual building. (It may take less than 25 years to build such a system.)

Note: tracking all small objects in the star system

I'm actually a big fan of tracking every object in the solar system down to 3 litres in volume, including and especially the Oort cloud, because I think it is a very precious database that would let us look in great detail into the past of our star system and the history of events that took place around it, millions or maybe hundreds of millions of years ago. Which stars passed by, the spectrum and intensity of their light, their distance, their trajectory relative to our system, their composition. Records about the planets of our own system (trajectories, the evolution and stability of the system, the atmospheric composition of our planets, whether we once had planets we no longer have), maybe about planets in other systems that flew near ours, and maybe even more. All of that may be recorded, like a hologram, in the Oort cloud, in the composition of big, small and tiny asteroids there, layer by layer.

I'm not sure, though, whether such a task (looking into the past of our system) really needs an MB; it would definitely help. But considering how energy-inefficient current systems are compared to biological brains, and that the problem is well suited to parallel processing (an excellent case for quantum computing), I would not dedicate all the available energy to this task; 1% is more than enough. There are more important tasks, and better equipment will probably exceed the needs of this task even on 1% of the Sun's energy.

But just tracking objects for navigation purposes is a much simpler task.

It does, however, need more sophisticated equipment than what we have at the moment, and a slightly different approach than just building a super-cluster.

The main problem to solve is tracking the relative positions and velocities of those bodies. This database may be huge, but the nice thing is that for navigation purposes we do not have to keep it in a data centre. The optimisation is exactly the same as for simulating matter interactions at the molecular level: each piece of information is grouped by the position of the objects it describes. It also helps that in some places the bodies naturally form groups, like the asteroid groups at the Lagrange points.
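
A minimal sketch of that grouping idea: hash every position/velocity record into a coarse spatial cell, so a ship or a local base station only ever queries the handful of cells along its route. The cell size, the class and the record layout here are illustrative assumptions, not a real catalogue format.

```python
from collections import defaultdict

CELL_KM = 5e6   # coarse cell size; roughly the beacon spacing used above

def cell_of(pos_km):
    """Map an (x, y, z) position in km to an integer cell index."""
    return tuple(int(c // CELL_KM) for c in pos_km)

class RegionIndex:
    """Position/velocity records grouped by coarse spatial cell."""
    def __init__(self):
        self.cells = defaultdict(list)

    def report(self, body_id, pos_km, vel_km_s, epoch_s):
        self.cells[cell_of(pos_km)].append((body_id, pos_km, vel_km_s, epoch_s))

    def near(self, pos_km):
        """Return records in the cell containing pos_km and its 26 neighbours."""
        cx, cy, cz = cell_of(pos_km)
        out = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    out.extend(self.cells.get((cx + dx, cy + dy, cz + dz), []))
        return out

# Usage: one (hypothetical) labelled body reports in, a query near it finds it.
index = RegionIndex()
index.report("body_0001", (2.1e8, 4.4e8, 1.0e7), (-12.0, 5.5, 0.1), epoch_s=0.0)
print(len(index.near((2.1e8, 4.4e8, 1.0e7))))   # 1
```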

Two or three observations of an object are enough to predict its position for a long time with good accuracy. Sometimes we lose previously observed objects, but most of the time it works. The objects between Earth and Jupiter, for example, have orbital periods of about 1-11 years. Considering all that, if an object reports its position once every 3 months, we will easily be able to predict its positions in the future, between those updates.

The farther the body, the longer it takes to complete its orbit. Saturn's orbital period is about 29.5 years, so for objects at 10+ a.u. it is enough (maybe) to report their position once every 10 years for us to predict it with high accuracy.
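
A sketch of why a few observations per orbit suffice: once the orbital elements are known, plain two-body Kepler propagation predicts the position between reports. The elements in the example are made up, and perturbations (discussed further down) are deliberately ignored:

```python
import math

MU_SUN = 1.32712440018e11   # km^3 / s^2, gravitational parameter of the Sun

def propagate_mean_anomaly(a_km, m0_rad, dt_s):
    """Advance the mean anomaly of a heliocentric orbit by dt seconds."""
    n = math.sqrt(MU_SUN / a_km**3)          # mean motion, rad/s
    return (m0_rad + n * dt_s) % (2 * math.pi)

def eccentric_anomaly(m_rad, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) by Newton iteration."""
    E = m_rad if e < 0.8 else math.pi
    for _ in range(50):
        d = (E - e * math.sin(E) - m_rad) / (1 - e * math.cos(E))
        E -= d
        if abs(d) < tol:
            break
    return E

# Example: a body at a = 2.7 a.u. (main belt), e = 0.1, 3 months between reports
a_km, e = 2.7 * 1.496e8, 0.1
m = propagate_mean_anomaly(a_km, m0_rad=0.0, dt_s=90 * 86400)
E = eccentric_anomaly(m, e)
true_anom = 2 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                           math.sqrt(1 - e) * math.cos(E / 2))
r_km = a_km * (1 - e * math.cos(E))
print(f"after 90 days: r = {r_km:.3e} km, true anomaly = {math.degrees(true_anom):.1f} deg")
```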

Thus if we label all the bodies we would like to keep track of with some kind of marker, and give the markers the ability to determine their position, to connect on demand, and to report their positions on a schedule, we get a position-velocity database in distributed form across the solar system, and the data will be where it is needed for navigation, where we find use for it.

Each label should keep track of its nearest neighbours, say the 10-100 objects nearby, the same way a cellular network works, based on relative proximity. Each individual label operates on a small amount of information and therefore does not need to be anything exciting in terms of computing power; most of the time it will be in sleep mode, accumulating energy for the next connection session.

Those labels have to be small, energy-efficient, robust, able to upgrade themselves, and able to track smaller bodies around them (visually or by other means) if given such a task. Most importantly, they have to repair themselves; it is not acceptable to send something out every time one needs a repair. They should also be able to discover objects that need to be labelled, and actually label them: the von Neumann probe approach.

The main point for navigation is that the labels are activated only when they need to be. A ship plans some trajectory and requests information about that trajectory, and corrections to it, from this solar-system-wide network; the system asks the base stations along the route, and they check their information about known marked objects on it.

It depends on the ship's travel speed, but it is enough to know the situation 100,000 km ahead and around; that gives from tens of minutes to hours to react and make corrections if needed. And the amount of information that has to be taken into account is greatly reduced, both for the ship and for the system as a whole.

It is hard to say how many bodies should be labelled, because their distribution across the system is not even. My laptop can fairly easily integrate the interactions between the planets (8 bodies) with 10-second steps; it takes about 17 seconds per simulated year on a single processor thread, which is roughly 40 million calculations per second (I may be a bit off with the numbers, especially the timing, and I can't check at the moment). So, to be safe, assume you can check intersections against about 1 million objects per second on a fairly typical PC. (You do not have to check collisions between the objects themselves, because that is already known, and most of the time they do not collide anyway.)
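
To make the rough laptop figure concrete, here is a simple count of how many pairwise gravity evaluations a naive direct integrator performs per simulated year; the per-second throughput in the comment is an assumption in the same ballpark as the paragraph above, not a measurement:

```python
def pairwise_evaluations(n_bodies, step_s, years):
    """Force-pair evaluations for a naive direct-summation integrator."""
    steps = int(years * 365.25 * 86400 / step_s)
    pairs = n_bodies * (n_bodies - 1) // 2
    return steps * pairs

evals = pairwise_evaluations(n_bodies=8, step_s=10, years=1)
print(f"{evals:.2e} pair evaluations per simulated year")   # ~8.8e7
# At a few million pair evaluations per second on one core, that is on the
# order of 10-20 seconds per simulated year, consistent with the ~17 s/year
# laptop figure above.
```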

So with a ship speed of 100 km/s and typical object speeds of 10-30 km/s, it is easy to navigate among 1 million objects in a volume of about 300,000 km³, which is about 3 objects per km³. Probably even dust does not reach that density in the inner solar system. And you still do not need supercomputers.
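
The density and check-rate claims, written out (the 1e6 checks per second is the rough PC figure assumed above):

```python
objects = 1e6
volume_km3 = 3e5
print(f"{objects / volume_km3:.1f} objects per km^3")   # ~3.3

# At ~1e6 intersection checks per second, re-sweeping the whole set of
# tracked objects in that volume takes about a second.
checks_per_s = 1e6
print(f"{objects / checks_per_s:.0f} s to sweep all objects once")
```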

MB raw power vs. solar system small bodies

Using MB power to calculate the interactions between all bodies in the system, and thereby predict their positions with high accuracy, would be great, but there are some problems.

First of all, we have to find all those bodies and determine their positions and velocities. From a distance we can do that only with limited precision; the farther a body is from our detectors, the lower the precision.

Second: at the moment we have a hard time determining the masses of these objects. There are different ways to make a good guess, but we cannot be sure at the moment, and probably not in the near future either.

Both of these make the calculation imprecise; not useless, but imperfect. (There are also other non-gravitational factors, lots of them, for example the Yarkovsky effect, or simply industrial activity in the solar system.)

But the system itself, all those bodies with our beacons attached to them, "computes" those interactions very well and very precisely, at zero energy cost to us, for free. I like free.

We also have to explore our system; it is not enough to know just the positions. We should know where and what we can take, by composition and by amount, and we should turn that knowledge into science. For all of that we have to be where those objects are. We should be able to test them, to taste them.

Small bodies: the problem

The real problem is those markers, or labels.

At different technological levels it makes sense to mark objects of different sizes. At our current level, or rather the level we will have once SpaceX builds the BFR, it will be perfectly fine to mark 1 km sized bodies and keep an eye on less significant objects by using those 1 km bodies as observation stations and base stations for the system. That makes sense for the inner solar system.

The better the technology, the farther we can go and the smaller the objects we can mark. It makes no sense to mark an object if the marker is bigger than the object itself.

Starshot nanocraft could probably be used as labels for small bodies, with a bigger base in the region that produces them and sends them to the bodies we want to keep track of.

Those Starshot markers are very limited in their capabilities, but they can still be useful.

The real breakthrough becomes possible with nanosystems that can repair themselves and reconfigure themselves into the systems we want to build from them. "Grey goo" is often used to depict such smart matter and such capabilities, but I have not seen a blueprint for it, and as it is imagined by many it has huge holes in the plan and significant limitations.

The technology from the answer I already mentioned above is perfectly capable of doing the job in a safe and predictable manner; it is something in between macro-machines and nano-machines, the best of both worlds.

With them, we are perfectly capable of marking the whole solar system.

Problem with “huge fleets of vessels to roam the system”

The problem is that reactive propulsion is inefficient (the Tsiolkovsky rocket equation), there is a limited number of places to visit on a regular basis (maybe 1e9 places), and there is a limited amount of mass in our system, so only a limited number of ships is possible. My personal recommendation is to travel in ships 30 km in diameter: a safe and comfortable way of travelling, with high military yield in case of need. If we extracted all the building matter (everything except hydrogen and helium) from the Sun to build them, it would be enough for about 736,481,481,481 such ships. If the ships are smaller, say 1 km in size, the number is about four orders of magnitude bigger, and so on; the count scales with the cube of the size.
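
A rough reconstruction of where a number of that order could come from; the 1.4% metal fraction of the Sun's mass and the rocky bulk density are my assumptions, so treat the result as order-of-magnitude only:

```python
import math

SUN_MASS_KG = 1.989e30
METAL_FRACTION = 0.014          # assumed fraction of solar mass that is not H or He
SHIP_DIAMETER_M = 30_000
SHIP_DENSITY_KG_M3 = 2600       # assumed bulk density, roughly rocky

ship_volume_m3 = (4 / 3) * math.pi * (SHIP_DIAMETER_M / 2) ** 3
ship_mass_kg = ship_volume_m3 * SHIP_DENSITY_KG_M3
ships = SUN_MASS_KG * METAL_FRACTION / ship_mass_kg
print(f"~{ships:.1e} ships of 30 km diameter")   # a few times 1e11

# Shrinking the ship to 1 km diameter divides its mass by 30^3 = 27,000,
# so the count grows by roughly four orders of magnitude.
```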

The point is that a huge fleet is not so huge for a star system, because a star system is pretty big, and the numbers themselves are not that huge even for modern computers.

P.S. In short: yes, an MB is overkill for navigation.

Note: LISA precision

From their page
https://www.elisascience.org/articles/elisa-mission/elisa-technology

The expected distance changes are tiny, a few parts in 10^21 or 10^22 of the separation of the spacecraft.
