Notes about a comment on “Kantian space”

“How did Kantians in the 1800s and afterward address non-Euclidean Geometry? I have strong issues with Kant’s reasoning, since it implicitly relies on the assumption that Euclidean Geometry is the fundamental basis of spatial reasoning, where triangles inherently must all have angles which sum to 180 degrees. In non-Euclidean Geometry, there are triangles which ARE triangles yet do not have angle sums equal to 180 degrees. Same with mathematical reasoning of 5+7=12, which assumes Base 10 and some underlying logic axioms, but does not apply, say, to other number bases such as Base 9, or to clouds (5 clouds + 7 clouds may combine into 1 cloud). Did Kantians address these assumptions of Kant versus later mathematical developments? Your suggestion that it is about the structure of geometry goes to Kant’s thinking and other philosophers’ thinking that you only need logic and some structural reasoning to reach universal truths. But logic and reasoning are flawed due to their implicit assumptions, such as focusing on one type of geometry or counting system as opposed to other types. When one considers the possibility that 5+7 does not always equal 12 (unless one lays out, in detail, the implicit assumptions being used), one should see that Kant’s arguments are flawed.”

(Edit: I should have started the whole post with “In my opinion:”) The particular geometry of Euclid is just an example; the Kantian arguments are not about specific geometries but about the structure of any kind of geometry itself, so they don’t rely on those assumptions. The notion can be understood just as abstractly as the notion of space evolved over time in linear algebra and physics. Even if we don’t count the specific properties of fields, rings and spaces, there is always an idea of “in-ness”, and you can’t get rid of it this easily by mentioning non-Euclidean geometry. It was hinted in the comment that every system is “flawed” by having a perspective, because its truths are inherently bound by the frames of that given system. Maybe I’m defending the coherence theory of truth way too much here, but the comment itself presupposes a similar theory of truth, so I’m trying to reply in the same frame of reference. This is not a universal truth, at least we don’t know that from just this; it may even be just a human bias, depending on your further views.

The thing is that we are talking about geometries and a priori principles, not about some universal truth (which seems like a quite strange idea in itself). What the a priori notion of space means is that there is something general about all spaces, any space, and it is the same generalization that mathematics is formalizing by discovering all the possible spaces and the relations within them. (Type theory should be especially interesting for philosophy.)
Kant’s examples may be outdated, but the arguments themselves are not affected by that, because there is some kind of “space-ness” in everything we think about; it’s not just a semantic problem of using phrases like “in my head” or “out of the question”. My personal opinion is that the categories of a priori and a posteriori would need a rework themselves, but there is still a sharp distinction between at least two types of ideas. (Here’s an analogy within set theory: you could think of sets as a priori and relations or attributes, like being well-ordered, as a posteriori.) The very crucial nature of space is certainly mysterious, but it nevertheless seems to be a fundamental “base” for interpreting, imagining or understanding anything at all. I think an interesting addition is how modern physics seems to agree with this view in a way, because according to the current models, wherever there is anything (forces), there is space as well. Could it be any other way?

The Simulation Argument

I found similar thoughts about the issue on another webpage, expressed much more clearly. For a better explanation of this critique you can visit this website by Brian Eggleston. A short summary would be: “[…] the expectation that he assigns to the number of simulated people is not independent of the prior probability of the existence of other worlds.”

It is clear that Bostrom’s argument is mainly about probabilities, and honestly I love it a lot. As the probability of a post-human civilization capable of running ancestor simulations goes up, and their resources supposedly let them run as many of these as they are capable of, the probability that you are living in such a simulated world increases as well.

Why? If you try to imagine all the possible worlds and pick out the ones that are simulated by a post-human civilization, it’s rather clear that the more simulations they run, the larger the share of possible worlds taken up by these simulations (in the original argument it’s actually 1 host + n simulations). Yet I think it’s a pretty premature scenario to consider, given that we can’t measure the number of all possible worlds. Given a big enough (and, to be sure, expanding) universe, as the number of post-human civilizations and their resource management capabilities increase, the number of simulations increases exponentially as well.
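As a toy sketch (my own illustration, not a formula from Bostrom’s paper), the “1 host + n simulations” ratio can be written out like this, with made-up values for n:

```python
# Toy sketch of the "1 host + n simulations" ratio from the argument.
# The values of n below are made-up numbers for illustration only.

def p_simulated(n_simulations: int) -> float:
    """Chance of being in one of n simulations rather than the single host,
    when every world is counted as equally likely."""
    return n_simulations / (1 + n_simulations)

print(p_simulated(99))    # 99 simulated worlds out of 100 total -> 0.99
print(p_simulated(9999))  # the ratio approaches 1 as n grows
```

The point the code makes visible is only the counting step: the more simulations run, the larger the simulated share of worlds, provided there really is just one host.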

Yet I propose that just by dropping the “ancestor” part of Bostrom’s argument, it becomes a slightly different problem that completely encloses the original argument itself. The simulation argument is really about multi-verses (or possible worlds) and nested simulations (simulations within simulations, their running time and resource usage).

The real problem is that we don’t know how many universes are possible, and we don’t know how big even our own universe is. Whatever enormous number someone thinks of, there is always a bigger number in theory. In reality, how could we measure all the possible universes? Without that information the simulation argument fails to prove (that is, prove with logical necessity) that our reality is simulated, because it is certainly possible that we are the very first civilization capable of such a thing. Highly unlikely given Bostrom’s assumptions? Sure. Impossible? Not in the least.

To get a bit more technical, a probability is always given as the ratio of examined cases to all cases. So to get a formula such as the one in the original paper from 2003, you need something like Nsw/Naw, where Nsw is the number of simulated worlds and Naw is the number of all possible worlds, whether or not they are simulations run by a civilization capable of such a universe simulation. Let me quote the relevant part of his paper.


“[…] we can then see that at least one of the following three propositions must be true.” This refers to the three propositions listed in his paper. What is really important is seeing how Bostrom claims to give a probability that will always be around 0.9 unless there are no civilizations either capable of or interested in running such an ancestor simulation (exactly the three propositions above). This is because the original argument only considers a single possible world, where you are either in the host universe or within one of the running simulations. Given this environment, your chance of being in the host is clearly as little as 1% if most of the premises are true (and what gives weight to the whole paper is that all of the considered scenarios about consciousness, post-human civilizations’ capabilities, etc. are quite likely).

Bostrom only counts a single possible universe that has simulations running in it, and asks what the chances are that we are not in a simulation within a host universe, given the high probability of a post-human civilization capable of running an intensive number of simulations. You can clearly see a problem for yourself if you consider the scenario of simulations within simulations, as a first step asking: what are the chances that the mother/host/original/non-simulated universe is simulated itself? (I’ll go by the name host universe from here on.)

As you go deeper along the simulation chain, will the probability of a possible world being a simulation rise or fall? It will rise, and the chance of being in the host universe will become vanishingly small within just a few thousand steps, because of all the cases there is only one that is non-simulated. Even if you accept the single-possible-world view, you need to know whether there are any simulated universes within the simulated universes to know the probabilities for sure. However, if you accept the view that there could be multi-verses, then with 5 host universes and 5 simulations the probability of being within a non-simulated environment becomes 5/10. Unless we assume infinite energy, keeping the simulations at least equally quantized/precise/complex will also cause an exponential rise in resource usage, which leads to the question: are these simulations run only until a certain technological stage, before the simulated ancestors can develop simulations of their own, or are they run indefinitely? The first and more likely case also limits their number, which is most probably why Bostrom considers ancestor simulations instead of simulations in general. It is also clear that his argument doesn’t work in a retro-causal way: if you are not in a simulation now, then the future possibility of civilizations running such simulations won’t affect your universe.
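The nesting effect can be sketched numerically. Assuming, purely for illustration, that every universe in the chain runs k simulations of its own down to some depth, the host’s share of all worlds shrinks geometrically:

```python
# Toy model of nested simulations: the host runs k simulations,
# each of those runs k simulations, and so on, down to `depth` levels.
# k and depth are illustrative assumptions, not claims from the paper.

def host_probability(k: int, depth: int) -> float:
    """Chance of being the single non-simulated (host) universe
    when every level runs k simulations, nested `depth` levels deep."""
    total_worlds = sum(k ** level for level in range(depth + 1))  # 1 + k + k^2 + ...
    return 1 / total_worlds

print(host_probability(2, 1))   # 1/(1 + 2), only one level of nesting
print(host_probability(2, 10))  # 1/2047, already well below a tenth of a percent
```

Even with modest k, the host’s share collapses long before “a few thousand steps”; with k = 1 the decline is merely linear, which shows how much the answer depends on whether simulations run simulations at all.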

You really don’t need to consider multi-verses to see my point. How big is our universe? Do we even know? How many civilizations could there be that are already running such simulations? Even with a single possible universe, we can therefore classify civilizations into two eras: pre-simulation and post-simulation, the latter being capable of running an ancestor simulation. If the number of ancestor simulations is indeed as high as Bostrom claims, then the number of post-simulation civilizations must be correspondingly lower: as the number of universe simulations gets higher, the possible number of post-simulation civilizations goes down. We are talking about such an immense amount of resources and processing power that it’s difficult to tell for sure, but if we hypothesize that computing power capable of simulating a structure like our reality is even possible, then it’s equally dubious that these simulations would be run indefinitely.

What we are really talking about is the size of these host or simulated universes. To our best knowledge, our universe is expanding. If we consider this as resource usage for a simulated environment, the host universe must expand and/or keep up with the fast-increasing computational demand of such a universe as well; and with the physical limits of computation in mind, even if those limits are hard to grasp with a human mind, they exist and are very real. If you consider the host universe having very different physics, then we are not very far from the possibility of multi-verses either.

Why is this multi-verse thing important? The human mind is very bad with numbers in general, and especially with big numbers, but let’s assume we have 1,000 simulations for every world and 5 possible worlds. That already gives us a 5/5,005 chance (5 hosts out of 5 hosts + 5,000 simulations) of being in a non-simulated environment. With 50 possible worlds and 1,000 simulations each it’s 50/50,050, and so on. The number of simulations will always be limited within a single universe, as resources are very likely finite, especially considering the physical limits of computation, while the number of possible worlds is very difficult to evaluate. What is crucial to consider for a better understanding of Bostrom’s argument is:

  1. the uptime of simulations,
  2. the resource (computing) limitations of universes,
  3. the number of possible worlds (multi-verses),
  4. physical limits of computation (Landauer’s principle, etc.)
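Points 2 and 3 above pull in opposite directions, which a toy calculation (my own, with made-up numbers) can make visible: if resource limits cap the total number of simulations while the number of possible host worlds keeps growing, the chance of being non-simulated rises:

```python
# Toy sketch: a fixed total simulation budget (resource limits, point 2)
# against a growing number of possible host worlds (point 3).
# All numbers are illustrative assumptions only.

def p_non_simulated(n_hosts: int, n_simulations_total: int) -> float:
    """Chance of being in a non-simulated (host) world when all
    worlds, host or simulated, are counted as equally likely."""
    return n_hosts / (n_hosts + n_simulations_total)

SIMULATION_BUDGET = 5_000  # assumed fixed total across all hosts

for hosts in (5, 50, 5_000, 5_000_000):
    print(f"{hosts:>9} hosts -> {p_non_simulated(hosts, SIMULATION_BUDGET):.4f}")
```

With 5 hosts the chance is about 0.1%, but with five million hosts it is above 99%: without knowing the number of possible worlds, the ratio Nsw/Naw simply cannot be pinned down.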

All these simulation arguments change nothing but reinvent theology in an algorithmic guise, and such a world view will eventually face the same age-old issues if its proponents really follow their theory through. If I remember well, even when The Matrix came out, people were already talking about how it relates to the Indian concept of Brahman and the like. See also the 9th question here.