Computer Science - Logic in Computer Science, Physics - History and Philosophy of Physics, F.1, and F.m

Abstract

The simulation hypothesis has recently excited renewed interest, especially in the physics and philosophy communities. However, the hypothesis specifically concerns computers that simulate physical universes, which means that to properly investigate it we need to couple computer science theory with physics. Here I do this by exploiting the physical Church-Turing thesis. This allows me to introduce a preliminary investigation of some of the computer-science-theoretic aspects of the simulation hypothesis. In particular, building on Kleene's second recursion theorem, I prove that it is mathematically possible for us to be in a simulation that is being run on a computer by us. In such a case, there would be two identical instances of us; the question of which of those is ``really us'' is meaningless. I also show how Rice's theorem provides some interesting impossibility results concerning simulation and self-simulation; briefly describe the philosophical implications of fully homomorphic encryption for (self-)simulation; and briefly investigate the graphical structure of universes simulating universes simulating universes, among other issues. I end by describing some of the possible avenues for future research that this preliminary investigation reveals. Comment: 44 pages of text, 5 pages of references, 10 pages of appendices
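The self-simulation result above rests on Kleene's second recursion theorem, which guarantees that programs exist that can operate on (and reproduce) their own descriptions. As a minimal illustration (a standard quine construction, not code from the paper), the Python sketch below verifies that a program's output can be exactly its own source:

```python
import io
import contextlib

# A quine: a program whose output is exactly its own source code.
# Kleene's second recursion theorem guarantees such fixed points exist
# for any computable transformation of program text.
quine = "s = %r\nprint(s %% s)"
program = quine % quine  # the full source of the self-reproducing program

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(program)  # run the program, capturing what it prints

assert buf.getvalue().rstrip("\n") == program
print("verified: the program's output is its own source")
```

Composing such a self-reproducing program with a simulator of physics is, roughly, the intuition behind a universe that runs a simulation of itself.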

Economics - General Economics, 91A6, 91A10, 91A20, 91A28, and J.4

Abstract

It is known that a player in a noncooperative game can benefit by publicly restricting his possible moves before play begins. We show that, more generally, a player may benefit by publicly committing to pay an external party an amount that is contingent on the game's outcome. We explore what happens when external parties -- whom we call ``game miners'' -- discover this fact and seek to profit from it by entering an outcome-contingent contract with the players. We analyze various structured bargaining games between miners and players for determining such an outcome-contingent contract. These bargaining games include playing the players against one another, as well as allowing the players to pay the miner(s) for exclusivity and first-mover advantage. We establish restrictions on the strategic settings in which a game miner can profit and bounds on the game miner's profit. We also find that game miners can lead to both efficient and inefficient equilibria. Comment: 25 pages, 1 figure
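The basic mechanism can be seen in a toy setting (the payoff numbers below are illustrative choices of mine, not an example from the paper). In this 2x2 game the row player's dominant move leads to a mediocre equilibrium; publicly committing to pay a hypothetical miner 3 units whenever that move is played shifts the equilibrium in the row player's favor, and in equilibrium nothing is ever paid:

```python
def pure_nash(payoffs):
    """Return the pure-strategy Nash equilibria of a 2-player game.

    payoffs[(r, c)] = (u1, u2) for row move r and column move c.
    """
    rows = {r for r, _ in payoffs}
    cols = {c for _, c in payoffs}
    return [
        (r, c)
        for (r, c), (u1, u2) in payoffs.items()
        if all(payoffs[(rr, c)][0] <= u1 for rr in rows)
        and all(payoffs[(r, cc)][1] <= u2 for cc in cols)
    ]

# Illustrative game: row move B dominates T, so the unique equilibrium
# is (B, R), giving the row player a payoff of only 2.
base = {('T', 'L'): (3, 3), ('T', 'R'): (1, 1),
        ('B', 'L'): (4, 1), ('B', 'R'): (2, 2)}
print(pure_nash(base))  # [('B', 'R')]

# The row player publicly commits to pay a miner 3 units whenever he
# plays B. Now T dominates, the equilibrium shifts to (T, L), and the
# row player earns 3 -- while paying the miner nothing, as B is unplayed.
committed = {(r, c): (u1 - 3 * (r == 'B'), u2)
             for (r, c), (u1, u2) in base.items()}
print(pure_nash(committed))  # [('T', 'L')]
```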

Physics - History and Philosophy of Physics and Condensed Matter - Statistical Mechanics

Abstract

The epistemic arrow of time is the fact that our knowledge of the past seems to be both of a different kind and more detailed than our knowledge of the future. As with the other arrows of time, it has often been speculated that the epistemic arrow arises due to the second law of thermodynamics. In this paper we investigate the epistemic arrow of time using a fully formal framework. We begin by defining a memory system as any physical system whose present state can provide information about the state of the external world at some time other than the present. We then identify two types of memory system in our universe, along with an important special case of the first type, which we distinguish as a third type of memory system. We show that two of these types of memory system are time-symmetric, able to provide knowledge about both the past and the future. However, the third type of memory system exploits the second law of thermodynamics in every instance of it that we find in our universe. The result is that in our universe, this type of memory system only ever provides information about the past. Finally, we argue that human memory is of this third type, completing the argument. Our analysis is indebted to prior work in Wolpert 1992, but expands and improves upon that work in several respects. Comment: 24 pages

Wolpert, David H., Price, Michael H., Crabtree, Stefani A., Kohler, Timothy A., Jost, Jürgen, Evans, James, Stadler, Peter F., Shimao, Hajime, and Laubichler, Manfred D.

Journal of Computer Applications in Archaeology. February 8, 2024, Vol. 7 Issue 1, p134, 19 p.

Many dynamical systems consist of multiple, co-evolving subsystems (degrees of freedom). These subsystems often depend upon each other in a way that restricts the overall system's dynamics. How does this network of dependencies affect the system's thermodynamics? Prior studies in the stochastic thermodynamics of multipartite processes (MPPs) have approached this question by restricting the system to allow only one subsystem to change state at a time. However, in many real systems, such as chemical reaction networks or electronic circuits, multiple subsystems must change state together. Therefore, studies of MPPs do not apply to such systems. Here, we investigate the thermodynamics of composite processes, in which subsets of subsystems are allowed to change state simultaneously. These subsets correspond to the subsystems that interact with a single mechanism (e.g., a thermal or chemical reservoir) that is coupled to the system. An MPP is simply a (subcase of a) composite process in which all such subsets have cardinality one. We demonstrate the power of the composite systems framework to study the thermodynamics of multiple, co-evolving subsystems. In particular, we derive thermodynamic uncertainty relations for information flows in composite processes. We also derive strengthened speed limits for composite processes. Our results apply to a much broader class of dynamical systems than do results for MPPs, and could guide future studies of the thermodynamics of distributed computational systems. Comment: 8 pages, 2 figures

Tasnim, Farita, Freitas, Nahuel, and Wolpert, David H.

Subjects

Condensed Matter - Statistical Mechanics and Computer Science - Information Theory

Abstract

In many complex systems, whether biological or artificial, the thermodynamic costs of communication among their components are large. These systems also tend to split information transmitted between any two components across multiple channels. A common hypothesis is that such inverse multiplexing strategies reduce total thermodynamic costs. So far, however, there have been no physics-based results supporting this hypothesis. This gap existed partially because we have lacked a theoretical framework that addresses the interplay of thermodynamics and information in off-equilibrium systems. Here we present the first study that rigorously combines such a framework, stochastic thermodynamics, with Shannon information theory. We develop a minimal model that captures the fundamental features common to a wide variety of communication systems, and study the relationship between the entropy production of the communication process and the channel capacity, the canonical measure of the communication capability of a channel. In contrast to what is assumed in previous works not based on first principles, we show that the entropy production is not always a convex and monotonically increasing function of the channel capacity. However, those two properties are recovered for sufficiently high channel capacity. These results clarify when and how to split a single communication stream across multiple channels. Comment: 15 pages, 3 figures
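For readers unfamiliar with the canonical measure mentioned above: the capacity of the simplest noisy channel, the binary symmetric channel with flip probability p, is 1 - H2(p) bits per use, where H2 is the binary entropy. This is a textbook formula, independent of the paper's thermodynamic model; a minimal sketch:

```python
from math import log2

def bsc_capacity(p):
    """Capacity (bits per use) of a binary symmetric channel
    that flips each transmitted bit with probability p."""
    if p in (0.0, 1.0):
        return 1.0  # deterministic channel: no information lost
    h = -p * log2(p) - (1 - p) * log2(1 - p)  # binary entropy H2(p)
    return 1.0 - h

print(bsc_capacity(0.0))  # 1.0: noiseless channel, one bit per use
print(bsc_capacity(0.5))  # 0.0: pure noise, nothing gets through
print(bsc_capacity(0.11))  # ~0.5: roughly half a bit per use
```

Inverse multiplexing trades one high-capacity channel for several lower-capacity ones; the paper's question is when the thermodynamic cost per unit of capacity favors that split.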

Stochastic thermodynamics is formulated under the assumption of perfect knowledge of all thermodynamic parameters. However, in any real-world experiment, there is non-zero uncertainty about the precise values of the temperatures, chemical potentials, energy spectrum, etc. Here we investigate how this uncertainty modifies the theorems of stochastic thermodynamics. We consider two scenarios. In the first, which we call the ``effective'' scenario, we fix the (unknown, randomly generated) experimental apparatus and then repeatedly observe (stochastic) trajectories of the system for that fixed apparatus. In contrast, in the second, ``phenomenological'' scenario, the (unknown) apparatus is re-generated for each trajectory. We derive expressions for thermodynamic quantities in both scenarios. We also discuss the physical interpretation of entropy production (EP) in the effective scenario, derive the effective mismatch cost, and provide a numerical analysis of the effective thermodynamics of a quantum dot implementing bit erasure with uncertain temperature. We then analyze the protocol for moving between two state distributions that maximizes effective work extraction. Next, we investigate the effective thermodynamic value of information, focusing on the case where there is a delay between the initialization of the system and the start of the protocol. Finally, we derive the detailed and integrated fluctuation theorems (FTs) for the phenomenological EP. In particular, we show how the phenomenological FTs account for the fact that the longer a trajectory runs, the more information it provides concerning the precise experimental apparatus, and therefore the less EP it generates. Comment: 27 pages, 4 figures

Mathematics - Logic and Physics - History and Philosophy of Physics

Abstract

We introduce a framework that can be used to model both mathematics and human reasoning about mathematics. This framework involves ``stochastic mathematical systems'' (SMSs), which are stochastic processes that generate pairs of questions and associated answers (with no explicit referents). We use the SMS framework to define normative conditions for mathematical reasoning, by defining a ``calibration'' relation between a pair of SMSs. The first SMS is the human reasoner, and the second is an ``oracle'' SMS that can be interpreted as deciding whether the question-answer pairs of the reasoner SMS are valid. To ground this notion, we understand the answers to questions given by this oracle to be the answers that would be given by an SMS representing the entire mathematical community in the infinite long run of the process of asking and answering questions. We then introduce a slight extension of SMSs that allows us to model both the physical universe and human reasoning about the physical universe. We then define a slightly different calibration relation appropriate for the case of scientific reasoning. In this case the first SMS represents a human scientist predicting the outcomes of future experiments, while the second SMS represents the physical universe in which the scientist is embedded, with the question-answer pairs of that SMS being specifications of the experiments that will occur and the outcomes of those experiments, respectively. Next we derive conditions justifying two important patterns of inference in both mathematical and scientific reasoning: i) the practice of increasing one's degree of belief in a claim as one observes increasingly many lines of evidence for that claim, and ii) abduction, the practice of inferring a claim's probability of being correct from its explanatory power with respect to some other claim that is already taken to hold for independent reasons. Comment: 43 pages of text, 6 pages of references, 11 pages of appendices

Real-world computers have operational constraints that cause nonzero entropy production (EP). In particular, almost all real-world computers are ``periodic'', iteratively undergoing the same physical process, and ``local'', in that subsystems evolve whilst physically decoupled from the rest of the computer. These constraints are so universal because decomposing a complex computation into small, iterative calculations is what makes computers so powerful. We first derive the nonzero EP caused by the locality and periodicity constraints for deterministic finite automata (DFA), a foundational system of computer science theory. We then relate this minimal EP to the computational characteristics of the DFA. We thus divide the languages recognised by DFA into two classes: those that can be recognised with zero EP, and those that necessarily incur nonzero EP. We also demonstrate the thermodynamic advantages of implementing a DFA with a physical process that is agnostic about the inputs that it processes.
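A DFA itself is easy to render concretely. The sketch below is a generic textbook construction (the example language is my choice, not one analyzed in the paper): it recognises the language of bit strings containing an even number of 1s, using a transition table over two states:

```python
def run_dfa(transitions, start, accepting, word):
    """Run a DFA given as a dict {(state, symbol): next_state};
    return True iff the word is accepted."""
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]
    return state in accepting

# DFA over {0, 1} recognising strings with an even number of 1s.
# State tracks the parity of 1s seen so far; 'even' is both the
# start state and the sole accepting state.
transitions = {('even', '0'): 'even', ('even', '1'): 'odd',
               ('odd', '0'): 'odd', ('odd', '1'): 'even'}

print(run_dfa(transitions, 'even', {'even'}, '1010'))  # True (two 1s)
print(run_dfa(transitions, 'even', {'even'}, '0111'))  # False (three 1s)
```

The paper's question is what such a state-updating process must dissipate when implemented as a periodic, local physical system.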

Physics - History and Philosophy of Physics and Computer Science - Computation and Language

Abstract

In this essay I will consider a sequence of questions. The first questions concern the biological function of intelligence in general, and cognitive prostheses of human intelligence in particular. These will lead into questions concerning human language, perhaps the most important cognitive prosthesis humanity has ever developed. While it is traditional to rhapsodize about the cognitive power encapsulated in human language, I will emphasize how horribly limited human language is - and therefore how limited our cognitive abilities are, despite their being augmented with language. This will lead to questions of whether human mathematics, being ultimately formulated in terms of human language, is also deeply limited. I will then combine these questions to pose a partial, sort-of, sideways answer to the guiding concern of this essay: what can we ever discern about that which we cannot even conceive? Comment: 39 pages, 10 pages of which are references

The past two decades have seen a revolution in statistical physics, generalizing it to apply to systems of arbitrary size, evolving while arbitrarily far from equilibrium. Many of these new results are based on analyzing the dynamics of the entropy of a system that is evolving according to a Markov process. These results comprise a sub-field called ``stochastic thermodynamics''. Some of the most powerful results in stochastic thermodynamics were traditionally concerned with single, monolithic systems, evolving by themselves, ignoring any internal structure of those systems. In this chapter I review how in complex systems, composed of many interacting constituent systems, it is possible to substantially strengthen many of these traditional results of stochastic thermodynamics. This is done by ``mixing and matching'' those traditional results, to each apply to only a subset of the interacting systems, thereby producing a more powerful result at the level of the aggregate, complex system. Comment: 17 pages text, 31 pages appendices, 1 figure, to appear in "Encyclopedia of Entropy across the Disciplines"

Wolpert, David H., Price, Michael H., Crabtree, Stefani A., Kohler, Timothy A., Jost, Jürgen, Evans, James, Stadler, Peter F., Shimao, Hajime, and Laubichler, Manfred D.

Historical processes manifest remarkable diversity. Nevertheless, scholars have long attempted to identify patterns and categorize historical actors and influences with some success. A stochastic process framework provides a structured approach for the analysis of large historical datasets that allows for detection of sometimes surprising patterns, identification of relevant causal actors both endogenous and exogenous to the process, and comparison between different historical cases. The combination of data, analytical tools and the organizing theoretical framework of stochastic processes complements traditional narrative approaches in history and archaeology. Comment: 20 pages, 4 figures

Wolpert, David H., Price, Michael H., Crabtree, Stefani A., Kohler, Timothy A., Jost, Jürgen, Evans, James, Stadler, Peter F., Shimao, Hajime, and Laubichler, Manfred D.

Journal of Computer Applications in Archaeology. May 9, 2024, Vol. 7 Issue 1, p244, 2 p.

Previously derived "global" thermodynamic speed limit theorems (SLTs) state that increasing the maximum speed with which a system can evolve between two given probability distributions over its states requires the system to produce more entropy in its evolution. However, these theorems ignore that many systems are not monolithic, but instead comprise multiple subsystems that interact according to an (often sparse) network. Indeed, most naturally occurring and human-engineered systems of increasing complexity can be decomposed into sets of co-evolving subsystems, where there exist a priori constraints on the dynamics of each subsystem, restricting which other subsystems can affect its dynamics. Here we derive three new SLTs that account for the thermodynamic effects of such constraints. Our first new speed limit strengthens the global speed limit. While our other two SLTs do not have this guarantee, in some situations they are even stronger than our first speed limit. Our results establish that a stochastically evolving system will, on average, produce more entropy in evolving between two distributions within a given time simply due to its comprising multiple, co-evolving subsystems. We illustrate our results with numerical calculations involving a model of two cells sensing and storing information about their environment. Comment: 8 pages, 3 figures; Supplementary information provided as a separate PDF

We consider the problem of driving a finite-state classical system from some initial distribution $p$ to some final distribution $p'$ with vanishing entropy production (EP), under the constraint that the driving protocols can only use some limited set of energy functions $\mathcal{E}$. Assuming no other constraints on the driving protocol, we derive a simple condition that guarantees that such a transformation can be carried out, which is stated in terms of the smallest probabilities in $\{p,p'\}$ and a graph-theoretic property defined in terms of $\mathcal{E}$. Our results imply that a surprisingly small amount of control over the energy function is sufficient (in particular, any transformation $p\to p'$ can be carried out as soon as one can control some one-dimensional parameter of the energy function, e.g., the energy of a single state). We also derive a lower bound on the EP under more general constraints on the transition rates, which is formulated in terms of a convex optimization problem.

The important recent book by G. Schurz appreciates that the no-free-lunch (NFL) theorems have major implications for the problem of (meta-)induction. Here I review the NFL theorems, emphasizing that they do not only concern the case where there is a uniform prior -- they prove that there are ``as many priors'' (loosely speaking) for which any induction algorithm $A$ out-generalizes some induction algorithm $B$ as vice-versa. Importantly though, in addition to the NFL theorems, there are many ``free lunch'' theorems. In particular, the NFL theorems can only be used to compare the ``marginal'' expected performance of an induction algorithm $A$ with the marginal expected performance of an induction algorithm $B$. There is a rich set of free lunches which instead concern the statistical correlations among the generalization errors of induction algorithms. As I describe, the meta-induction algorithms that Schurz advocates as a ``solution to Hume's problem'' are just an example of such a free lunch based on correlations among the generalization errors of induction algorithms. I end by pointing out that the prior that Schurz advocates, which is uniform over bit frequencies rather than bit patterns, is contradicted by thousands of experiments in statistical physics and by the great success of the maximum entropy procedure in inductive inference. Comment: 14 pages
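The claim that NFL concerns only marginal expected performance can be checked exhaustively in a tiny setting of my own construction (not one taken from the paper or the book): targets are all functions f: {0,1,2} -> {0,1}, both learners see f on {0, 1} and must predict f(2), and a majority-label predictor and a minority-label predictor err on exactly the same number of targets:

```python
from itertools import product

def majority(labels):
    # Predict the more common observed label (ties broken towards 1).
    return 1 if sum(labels) * 2 >= len(labels) else 0

def minority(labels):
    return 1 - majority(labels)

errors = {'majority': 0, 'minority': 0}
# Enumerate every target function f: {0, 1, 2} -> {0, 1} as a triple.
for f in product([0, 1], repeat=3):
    seen = f[:2]                       # training data: f(0) and f(1)
    errors['majority'] += majority(seen) != f[2]
    errors['minority'] += minority(seen) != f[2]

print(errors)  # both learners err on exactly 4 of the 8 targets
```

Under a uniform prior over targets, the unseen value f(2) is independent of the training data, so no prediction rule can do better on average than any other; the free lunches arise only in joint, correlational quantities that this marginal count cannot see.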

In the real world, one almost never knows the parameters of a thermodynamic process to infinite precision. Reflecting this, here we investigate how to extend stochastic thermodynamics to systems with uncertain parameters, including an uncertain number of heat baths / particle reservoirs, uncertainty in the precise values of the temperatures / chemical potentials of those reservoirs, uncertainty in the energy spectrum, uncertainty in the control protocol, etc. We formalize such uncertainty with an (arbitrary) probability measure over all transition rate matrices satisfying local detailed balance (LDB). This lets us define effective thermodynamic quantities by averaging over all LDB-obeying rate matrices. We show that the resultant effective entropy violates the second law of thermodynamics. In contrast to the effective entropy though, the expected stochastic entropy, defined as the ensemble average of the effective trajectory-level entropy, satisfies the second law. We then explicitly calculate the second-order correction to the second law for the case of one heat bath with uncertain temperature. We also derive the detailed fluctuation theorem for expected effective trajectory entropy production for this case, and derive a lower bound for the associated expected work. Next, to ground these formal considerations with experimentally testable bounds on allowed energetics, we derive a bound on the maximal work that can be extracted from systems with arbitrarily uncertain temperature. We end by extending previous work on the ``thermodynamic value of information'' to allow for uncertainty in the time-evolution of the rate matrix.
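One quick way to see why such uncertainty cannot simply be absorbed into an ordinary effective apparatus (an illustration of mine, not a calculation from the paper): averaging Boltzmann distributions over an uncertain temperature generally yields a state that is not Boltzmann at any single temperature. For three equally spaced energy levels, a Gibbs state must have equal successive probability ratios, and a temperature mixture breaks this:

```python
from math import exp

def gibbs(energies, beta):
    """Boltzmann distribution at inverse temperature beta."""
    w = [exp(-beta * e) for e in energies]
    z = sum(w)
    return [x / z for x in w]

energies = [0.0, 1.0, 2.0]  # three equally spaced energy levels

# Equal-weight mixture of two inverse temperatures, modelling
# uncertainty about which apparatus was actually prepared.
p_cold = gibbs(energies, 2.0)
p_hot = gibbs(energies, 0.5)
p_eff = [(a + b) / 2 for a, b in zip(p_cold, p_hot)]

# A Gibbs state on equally spaced levels satisfies p1/p0 == p2/p1.
r1, r2 = p_eff[1] / p_eff[0], p_eff[2] / p_eff[1]
print(round(r1, 4), round(r2, 4))  # unequal ratios: p_eff is not Gibbs
```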