Computer Science - Logic in Computer Science, Physics - History and Philosophy of Physics, F.1, and F.m

Abstract

The simulation hypothesis has recently excited renewed interest, especially in the physics and philosophy communities. However, the hypothesis specifically concerns \textit{computers} that simulate physical universes, which means that to investigate it properly we need to couple computer science theory with physics. Here I do this by exploiting the physical Church-Turing thesis. This allows me to introduce a preliminary investigation of some of the computer-science-theoretic aspects of the simulation hypothesis. In particular, building on Kleene's second recursion theorem, I prove that it is mathematically possible for us to be in a simulation that is being run on a computer \textit{by us}. In such a case, there would be two identical instances of us; the question of which of those is ``really us'' is meaningless. I also show how Rice's theorem provides some interesting impossibility results concerning simulation and self-simulation; briefly describe the philosophical implications of fully homomorphic encryption for (self-)simulation; and briefly investigate the graphical structure of universes simulating universes simulating universes, among other issues. I end by describing some of the possible avenues for future research that this preliminary investigation reveals. Comment: 44 pages of text, 5 pages of references, 10 pages of appendices
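The recursion-theorem construction invoked in this abstract can be glimpsed in miniature in a quine, a program that computes its own complete source code. The following Python sketch (a standard illustration of the general mechanism, not code from the paper) builds such a two-line program and checks that running it reproduces exactly its own source, i.e., that it is a fixed point of the "print your own description" map.

```python
import io
import contextlib

# The two-line program below prints exactly its own source text: this
# self-reference is the constructive core of Kleene's second recursion
# theorem, by which a program can operate on its own full description.
s = 's = %r\nprint(s %% s)'
src = s % s  # the complete source text of the two-line program

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(src)  # run the program and capture what it prints

assert buf.getvalue().rstrip("\n") == src  # it reproduced itself
```

The same trick generalizes: any computable transformation of program text has a program that behaves as that transformation applied to its own description, which is what the paper's self-simulation argument builds on.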

Economics - General Economics, 91A6, 91A10, 91A20, 91A28, and J.4

Abstract

It is known that a player in a noncooperative game can benefit by publicly restricting his possible moves before play begins. We show that, more generally, a player may benefit by publicly committing to pay an external party an amount that is contingent on the game's outcome. We explore what happens when external parties -- whom we call ``game miners'' -- discover this fact and seek to profit from it by entering an outcome-contingent contract with the players. We analyze various structured bargaining games between miners and players for determining such an outcome-contingent contract. These bargaining games include playing the players off against one another, as well as allowing the players to pay the miner(s) for exclusivity and first-mover advantage. We establish restrictions on the strategic settings in which a game miner can profit, along with bounds on the game miner's profit. We also find that game miners can lead to both efficient and inefficient equilibria. Comment: 25 pages, 1 figure

Physics - History and Philosophy of Physics and Condensed Matter - Statistical Mechanics

Abstract

The epistemic arrow of time is the fact that our knowledge of the past seems to be both of a different kind and more detailed than our knowledge of the future. As with the other arrows of time, it has often been speculated that the epistemic arrow arises due to the second law of thermodynamics. In this paper we investigate the epistemic arrow of time using a fully formal framework. We begin by defining a memory system as any physical system whose present state can provide information about the state of the external world at some time other than the present. We then identify two types of memory systems in our universe, along with an important special case of the first type, which we distinguish as a third type of memory system. We show that two of these types of memory system are time-symmetric, able to provide knowledge about both the past and the future. However, the third type of memory system exploits the second law of thermodynamics in every instance of it that we find in our universe. The result is that in our universe, this type of memory system only ever provides information about the past. Finally, we argue that human memory is of this third type, completing the argument. Our analysis is indebted to prior work in Wolpert 1992, but expands and improves upon this work in several respects. Comment: 24 pages

Journal for General Philosophy of Science; Sep2023, Vol. 54 Issue 3, p421-432, 12p

Subjects

STATISTICAL correlation, STATISTICAL physics, PHYSICS experiments, GENERALIZATION, and MATHEMATICAL induction

Abstract

The important recent book by Schurz (2019) appreciates that the no-free-lunch (NFL) theorems have major implications for the problem of (meta-)induction. Here I review the NFL theorems, emphasizing that they concern not only the case of a uniform prior: they prove that there are "as many priors" (loosely speaking) for which any induction algorithm A out-generalizes some induction algorithm B as vice versa. Importantly though, in addition to the NFL theorems, there are many free lunch theorems. In particular, the NFL theorems can only be used to compare the expected performance of an induction algorithm A, considered in isolation, with the expected performance of an induction algorithm B, considered in isolation. There is a rich set of free lunches which instead concern the statistical correlations among the generalization errors of induction algorithms. As I describe, the meta-induction algorithms that Schurz advocates as a "solution to Hume's problem" are simply examples of such a free lunch based on correlations among the generalization errors of induction algorithms. I end by pointing out that the prior that Schurz advocates, which is uniform over bit frequencies rather than bit patterns, is contradicted by thousands of experiments in statistical physics and by the great success of the maximum entropy procedure in inductive inference.
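The NFL property reviewed in this abstract can be checked directly in the smallest possible setting. The sketch below (an illustrative toy, with a domain, target class, and pair of learners chosen by me, not taken from the book or the paper) enumerates all eight binary target functions on a three-point domain, trains two opposite learners on two of the points, and confirms that their off-training-set errors, averaged uniformly over targets, are identical.

```python
from itertools import product

def majority(train_labels):
    # Predict the more common label seen in training (ties -> 1).
    return int(sum(train_labels) * 2 >= len(train_labels))

def anti_majority(train_labels):
    # Predict the opposite of the majority learner.
    return 1 - majority(train_labels)

def avg_ots_error(learner):
    # Average off-training-set error over all 2^3 target functions,
    # training on inputs 0 and 1 and testing on unseen input 2.
    errs = []
    for f in product([0, 1], repeat=3):
        pred = learner([f[0], f[1]])
        errs.append(int(pred != f[2]))
    return sum(errs) / len(errs)

print(avg_ots_error(majority), avg_ots_error(anti_majority))  # 0.5 0.5
```

Under the uniform prior the test label is independent of the training labels, so every learner averages to chance; the free lunches discussed above arise only when one instead compares the *correlations* among learners' errors.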

Tasnim, Farita, Freitas, Nahuel, and Wolpert, David H.

Subjects

Condensed Matter - Statistical Mechanics and Computer Science - Information Theory

Abstract

In many complex systems, whether biological or artificial, the thermodynamic costs of communication among their components are large. These systems also tend to split information transmitted between any two components across multiple channels. A common hypothesis is that such inverse multiplexing strategies reduce total thermodynamic costs. So far, however, there have been no physics-based results supporting this hypothesis. This gap existed partially because we have lacked a theoretical framework that addresses the interplay of thermodynamics and information in off-equilibrium systems. Here we present the first study that rigorously combines such a framework, stochastic thermodynamics, with Shannon information theory. We develop a minimal model that captures the fundamental features common to a wide variety of communication systems, and study the relationship between the entropy production of the communication process and the channel capacity, the canonical measure of the communication capability of a channel. In contrast to what is assumed in previous works not based on first principles, we show that the entropy production is not always a convex and monotonically increasing function of the channel capacity. However, those two properties are recovered for sufficiently high channel capacity. These results clarify when and how to split a single communication stream across multiple channels. Comment: 15 pages, 3 figures
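For concreteness about the ``canonical measure'' this abstract refers to: for the textbook binary symmetric channel with flip probability p, the capacity is C(p) = 1 - H2(p) bits per use, where H2 is the binary entropy. The sketch below is standard information theory, not code or a model from the paper.

```python
import math

def binary_entropy(p):
    # H2(p) = -p log2 p - (1-p) log2 (1-p), with H2(0) = H2(1) = 0.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    # Capacity of a binary symmetric channel with flip probability p.
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.0))  # 1.0  (noiseless channel: one full bit per use)
print(bsc_capacity(0.5))  # 0.0  (pure noise: the channel carries nothing)
```

The paper's question is then how the entropy production of a physical implementation scales with this quantity, and when splitting one stream across several lower-capacity channels is thermodynamically cheaper.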

Many dynamical systems consist of multiple, co-evolving subsystems (i.e., they have multiple degrees of freedom). Often, the dynamics of one or more of these subsystems will not directly depend on the state of some other subsystems, resulting in a network of dependencies governing the dynamics. How does this dependency network affect the full system’s thermodynamics? Prior studies on the stochastic thermodynamics of multipartite processes have addressed this question by assuming that, in addition to the constraints of the dependency network, only one subsystem is allowed to change state at a time. However, in many real systems, such as chemical reaction networks or electronic circuits, multiple subsystems can—or must—change state together. Here, we investigate the thermodynamics of such composite processes, in which multiple subsystems are allowed to change state simultaneously. We first present new, strictly positive lower bounds on entropy production in composite processes. We then present thermodynamic uncertainty relations for information flows in composite processes. We end with strengthened speed limits for composite processes.

In many complex systems, whether biological or artificial, the thermodynamic costs of communication among their components are large. These systems also tend to split information transmitted between any two components across multiple channels. A common hypothesis is that such inverse multiplexing strategies reduce total thermodynamic costs. So far, however, there have been no physics-based results supporting this hypothesis. This gap existed partially because we have lacked a theoretical framework that addresses the interplay of thermodynamics and information in off-equilibrium systems at any spatiotemporal scale. Here we present the first study that rigorously combines such a framework, stochastic thermodynamics, with Shannon information theory. We develop a minimal model that captures the fundamental features common to a wide variety of communication systems. We find that the thermodynamic cost in this model is a convex function of the channel capacity, the canonical measure of the communication capability of a channel. We also find that this function is not always monotonic, in contrast to previous results not derived from first principles physics. These results clarify when and how to split a single communication stream across multiple channels. In particular, we present Pareto fronts that reveal the trade-off between thermodynamic costs and channel capacity when inverse multiplexing. Due to the generality of our model, our findings could help explain empirical observations of how thermodynamic costs of information transmission make inverse multiplexing energetically favorable in many real-world communication systems. Comment: 15 pages, 3 figures

Stochastic thermodynamics is formulated under the assumption of perfect knowledge of all thermodynamic parameters. However, in any real-world experiment, there is non-zero uncertainty about the precise values of temperatures, chemical potentials, energy spectra, etc. Here we investigate how this uncertainty modifies the theorems of stochastic thermodynamics. We consider two scenarios. In the first, which we call the \emph{effective} scenario, we fix the (unknown, randomly generated) experimental apparatus and then repeatedly observe (stochastic) trajectories of the system for that fixed apparatus. In contrast, in the second, \emph{phenomenological} scenario, the (unknown) apparatus is re-generated for each trajectory. We derive expressions for thermodynamic quantities in both scenarios. We also discuss the physical interpretation of entropy production (EP) in the effective scenario, derive the effective mismatch cost, and provide a numerical analysis of the effective thermodynamics of a quantum dot implementing bit erasure with uncertain temperature. We then analyze the protocol for moving between two state distributions that maximizes effective work extraction. Next, we investigate the effective thermodynamic value of information, focusing on the case where there is a delay between the initialization of the system and the start of the protocol. Finally, we derive the detailed and integrated fluctuation theorems (FTs) for the phenomenological EP. In particular, we show how the phenomenological FTs account for the fact that the longer a trajectory runs, the more information it provides concerning the precise experimental apparatus, and therefore the less EP it generates. Comment: 27 pages, 4 figures

Mathematics - Logic and Physics - History and Philosophy of Physics

Abstract

We introduce a framework that can be used to model both mathematics and human reasoning about mathematics. This framework involves {stochastic mathematical systems} (SMSs), which are stochastic processes that generate pairs of questions and associated answers (with no explicit referents). We use the SMS framework to define normative conditions for mathematical reasoning, by defining a ``calibration'' relation between a pair of SMSs. The first SMS is the human reasoner, and the second is an ``oracle'' SMS that can be interpreted as deciding whether the question-answer pairs of the reasoner SMS are valid. To ground this construction, we take the answers to questions given by this oracle to be the answers that would be given by an SMS representing the entire mathematical community in the infinite long run of the process of asking and answering questions. We then introduce a slight extension of SMSs to allow us to model both the physical universe and human reasoning about the physical universe. We then define a slightly different calibration relation appropriate for the case of scientific reasoning. In this case the first SMS represents a human scientist predicting the outcome of future experiments, while the second SMS represents the physical universe in which the scientist is embedded, with the question-answer pairs of that SMS being specifications of the experiments that will occur and the outcomes of those experiments, respectively. Next we derive conditions justifying two important patterns of inference in both mathematical and scientific reasoning: i) the practice of increasing one's degree of belief in a claim as one observes increasingly many lines of evidence for that claim, and ii) abduction, the practice of inferring a claim's probability of being correct from its explanatory power with respect to some other claim that is already taken to hold for independent reasons. Comment: 43 pages of text, 6 pages of references, 11 pages of appendices

Real-world computers have operational constraints that cause nonzero entropy production (EP). In particular, almost all real-world computers are ``periodic'', iteratively undergoing the same physical process; and ``local'', in that subsystems evolve whilst physically decoupled from the rest of the computer. These constraints are so universal because decomposing a complex computation into small, iterative calculations is what makes computers so powerful. We first derive the nonzero EP caused by the locality and periodicity constraints for deterministic finite automata (DFA), a foundational system of computer science theory. We then relate this minimal EP to the computational characteristics of the DFA. We thus divide the languages recognised by DFA into two classes: those that can be recognised with zero EP, and those that necessarily have non-zero EP. We also demonstrate the thermodynamic advantages of implementing a DFA with a physical process that is agnostic about the inputs that it processes.
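A minimal sketch of the kind of system this abstract analyses (the particular automaton and language are illustrative choices of mine, not taken from the paper): a DFA is just a start state, a set of accepting states, and a transition table, iterated once per input symbol. The one below recognises binary strings containing an even number of 1s.

```python
def make_dfa(start, accepting, delta):
    # Build a recogniser from a DFA's start state, accepting-state set,
    # and transition table delta: (state, symbol) -> next state.
    def accepts(word):
        state = start
        for symbol in word:  # one periodic, local update per symbol
            state = delta[(state, symbol)]
        return state in accepting
    return accepts

# Two states suffice to track the parity of the number of 1s seen.
even_ones = make_dfa(
    start="even",
    accepting={"even"},
    delta={
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    },
)

print(even_ones("1001"))  # True  (two 1s)
print(even_ones("111"))   # False (three 1s)
```

The "periodic" constraint in the abstract corresponds to the fact that the same transition table is applied at every step, whatever the input symbol turns out to be.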

Physics - History and Philosophy of Physics and Computer Science - Computation and Language

Abstract

In this essay I will consider a sequence of questions. The first questions concern the biological function of intelligence in general, and cognitive prostheses of human intelligence in particular. These will lead into questions concerning human language, perhaps the most important cognitive prosthesis humanity has ever developed. While it is traditional to rhapsodize about the cognitive power encapsulated in human language, I will emphasize how horribly limited human language is - and therefore how limited our cognitive abilities are, despite their being augmented with language. This will lead to questions of whether human mathematics, being ultimately formulated in terms of human language, is also deeply limited. I will then combine these questions to pose a partial, sort-of, sideways answer to the guiding concern of this essay: what can we ever discern about all that we cannot even conceive of? Comment: 39 pages, 10 pages of which are references

The past two decades have seen a revolution in statistical physics, generalizing it to apply to systems of arbitrary size, evolving while arbitrarily far from equilibrium. Many of these new results are based on analyzing the dynamics of the entropy of a system that is evolving according to a Markov process. These results comprise a sub-field called ``stochastic thermodynamics''. Some of the most powerful results in stochastic thermodynamics were traditionally concerned with single, monolithic systems, evolving by themselves, ignoring any internal structure of those systems. In this chapter I review how in complex systems, composed of many interacting constituent systems, it is possible to substantially strengthen many of these traditional results of stochastic thermodynamics. This is done by ``mixing and matching'' those traditional results, to each apply to only a subset of the interacting systems, thereby producing a more powerful result at the level of the aggregate, complex system. Comment: 17 pages text, 31 pages appendices, 1 figure, to appear in "Encyclopedia of Entropy across the Disciplines"

Philosophical Transactions of the Royal Society A: Mathematical, Physical & Engineering Sciences. 7/11/2022, Vol. 380 Issue 2227, p1-20. 20p.

Subjects

SECOND law of thermodynamics, FEEDBACK control systems, PSYCHOLOGICAL feedback, and SOCIOTECHNICAL systems

Abstract

The second law of thermodynamics can be formulated as a restriction on the evolution of the entropy of any system undergoing Markovian dynamics. Here I show that this form of the second law is strengthened for multi-dimensional, complex systems, coupled to multiple thermodynamic reservoirs, if we have a set of a priori constraints restricting how the dynamics of each coordinate can depend on the other coordinates. As an example, this strengthened second law (SSL) applies to complex systems composed of multiple physically separated, co-evolving subsystems, each identified as a coordinate of the overall system. In this example, the constraints concern how the dynamics of some subsystems are allowed to depend on the states of the other subsystems. Importantly, the SSL applies to such complex systems even if some of their subsystems can change state simultaneously, which is prohibited in a multipartite process. The SSL also strengthens previously derived bounds on how much work can be extracted from a system using feedback control, if the system is multi-dimensional. Importantly, the SSL does not require local detailed balance. So it potentially applies to complex systems ranging from interacting economic agents to co-evolving biological species. This article is part of the theme issue 'Emergent phenomena in complex physical and socio-technical systems: from cells to societies'.

In this essay I will consider a sequence of questions, ending with one about the breadth and depth of the epistemic limitations of our science and mathematics. I will then suggest a possible way to circumvent such limitations. I begin by considering questions about the biological function of intelligence. This will lead into questions concerning human language, perhaps the most important cognitive prosthesis we have ever developed. While it is traditional to rhapsodize about the perceptual power provided by human language, I will emphasize how horribly limited - and therefore limiting - it is. This will lead to questions of whether human mathematics, being so deeply grounded in our language, is also deeply limited. I will then combine all of this into a partial, sort-of, sideways answer to the guiding question of this essay: what can we ever discern about all that we cannot even conceive of? Comment: 30 pages, 9 pages of references

Wolpert, David H., Price, Michael H., Crabtree, Stefani A., Kohler, Timothy A., Jost, Jürgen, Evans, James, Stadler, Peter F., Shimao, Hajime, and Laubichler, Manfred D.

Historical processes manifest remarkable diversity. Nevertheless, scholars have long attempted to identify patterns and categorize historical actors and influences with some success. A stochastic process framework provides a structured approach for the analysis of large historical datasets that allows for detection of sometimes surprising patterns, identification of relevant causal actors both endogenous and exogenous to the process, and comparison between different historical cases. The combination of data, analytical tools and the organizing theoretical framework of stochastic processes complements traditional narrative approaches in history and archaeology. Comment: 20 pages, 4 figures