Tuesday, June 26, 2007

George Molnar and the Powers That Be

George Molnar’s Powers: A Study in Metaphysics appeared in 2003, with a paperback version following recently. The book presents a realist theory of causal metaphysics founded on a detailed ontological treatment of dispositional properties, or powers. Molnar’s work was brought to my attention last fall by an e-mail correspondent, to whom I’m grateful. I plan to present some notes and thoughts about the book over a couple of posts.

The book is a posthumous publication, Molnar having died in 1999. Some brief biographical information is provided in the introduction by Stephen Mumford (see also this webpage [UPDATE: 8 March 2011 - this was a link to a page about Molnar, now gone]), as well as in a preface by D. M. Armstrong. Born in Budapest, Molnar escaped the Nazis with his family and settled in Australia. He became a philosopher and published a handful of metaphysical papers early in his career. Then leftist political activities led him to leave his university post and exit formal academic philosophy for a couple of decades; he returned to the subject just a few years before his death, and in those years this book took shape.

According to Mumford, who prepared the manuscript for publication, the chapters which present the main theory were largely complete; the manuscript lacked an introductory chapter, and only fragments existed of the intended final chapters on applications of the theory of powers to various metaphysical problems. Mumford has provided a helpful introduction, and edited the final fragments into a condensed last chapter.

Molnar’s theory is a realist account of dispositional properties as causal powers. This realism about dispositional properties and causality stands in contrast to work in the Humean tradition, which would eliminate dispositional properties and reduce apparent causal power to mere correlation. A traditional strategy is to employ a conditional analysis: rather than ascribe the dispositional property of solubility to X, one simply notes that if X is placed in water, then it will dissolve. Another way to approach eliminating powers is to claim they can be reduced to categorical micro-physical properties (although many would view charge, mass, spin, and the like as paradigm dispositional properties). Molnar will defend dispositional properties as real and ineliminable causal powers of objects.

Molnar’s Ontological Categories

A. Tropes. Molnar wants to present a full ontology of powers, so he must answer the question: what kind of properties are they? His answer is that properties are tropes. He thinks that nominalists are right to distrust the idea that properties as universals are real, but err in rejecting all realism about properties. Realists are right in their realism, but wrong about universals, which he takes to be inconsistent with naturalism. There are sections in Ch. 1 discussing the characteristics of tropes in great detail, which I will pass over for now. I should note that in addition to powers, Molnar will find a need for non-power properties as well, leading to a property dualism.

B. Objects. There is a classic problem with tropes, however, which is explaining how they bunch up in coherent ways. Attempts to posit ways to bundle tropes together without adding something new to the mix are unsuccessful (see my old post with a link to work on this topic by Dr. Bill Vallicella). Molnar bites the bullet and admits objects as an additional ontological category. Powers are powers of objects. (Later in the book, though, he will admit ungrounded powers as well).

C. Relations. In Molnar’s assessment, objects are separable from their location in space-time. This leads him to add relations as an additional irreducible ontological category.

Given these ingredients, at least one ontological category Molnar will not need is “states of affairs” (or facts or situations, etc.), which he criticizes in section 2.3.

Still, I think that one can be more economical yet with regard to the size of the ontological zoo. I’ll say more later after a discussion of powers, but I think an event ontology can improve on an object-oriented ontology.

Monday, June 11, 2007

Incremental Blog Improvements

Since Blogger supports labels now, I created a bunch (which are on the sidebar) and went back and labeled most of the old posts. I'm also uploading a photo of my smiling face to stick onto the profile page.


Friday, June 08, 2007

Jenkins on Modal Knowledge

Dr. Carrie Jenkins (homepage; blog; also a TAR contributor) posted a thought-provoking draft paper on a topic I’m interested in – the question of how we acquire modal knowledge. My notes on this paper follow below.

In previous papers, Dr. Jenkins has presented a strategy intended to show how abstract (and seemingly a priori) truths can be real and yet our knowledge of them can be grounded in a way friendly to an empiricist (arithmetical truths were used as a paradigm case). To give my massively oversimplified take on this, Step 1 of her process involves our forming concepts which are seeded by and grounded in sensory input. These concepts take the pieces of world-data and extend them into maps of how the world hangs together. In Step 2, we examine and manipulate these concepts in order to extend our knowledge to what we normally think of as abstract truths. This Step 2 seems to have some rationalist tint to it (where did we get this concept examination and extrapolation faculty?), but the whole package is meant to bring the process closer to an earthly explanation, and seems plausible.

But can this analysis be extended to truths regarding possibility and necessity? After all, as Jenkins discusses early in the paper, it certainly seems as if our sensory input is restricted to empirical truth values: the modal status of these truth values appears to have no detectable impact on this input. To put this in terms of the two steps above: when it comes to modal knowledge, Step 1 encompasses some mystery as well. What is it about our knowledge of the natural world which gives us the pieces to form a valid conceptual map of possibility and necessity?

She connects this problem with the issues surrounding the modal rationalist position that conceivability implies possibility. Of special concern for this paper is the basic question of why we should think that conceivability has anything to do with possibility in the first place. For conceivability to be a guide to modality, “our powers of conceiving would have to be attuned to modal fact.” (p.10)

She notes this wouldn’t be a problem for an anti-realist about modal truths, who can simply say they are mind-dependent artifacts of our concepts. But we want to look at the situation facing realists regarding mind-independent modal truths. How can we explain our knowledge of such truths, including the idea that conceivability implies possibility, without invoking a special faculty of rational intuition?

What follows next is a careful review of the steps in her analytical program. She explains her take on what concepts are and how conceptual truths and falsehoods can be derived from an examination of concepts. She discusses how concepts can be grounded empirically, so they accurately represent aspects of the world. Her stance is that if concepts are grounded in this way, then examination of them can lead to conceptual knowledge.

OK, now back to the key question for this paper: how can modal conceptual knowledge be derived from empirically grounded concepts? The distinguishing step seems to be her suggestion that in gaining empirical knowledge we gain not just atomic empirical facts, but information on the structure (or structural relations) of the actual world, and this structure can ground concepts that can be examined to gain modal knowledge. The easiest examples to work with to illuminate this idea are certain instances of necessary truths.

She uses the example “all vixens are female” to walk through the process by which the concepts (of vixen and female) are grounded empirically, and the relations between the concepts are also accurate reflections of the relations between the real features of the world represented by the concepts. The result is that when we examine the concepts we find that we cannot conceive of “all vixens are female” as false; this directs us toward belief in the necessity of the proposition. Because the way we arrived at the belief was grounded in the right way, it qualifies as knowledge. (Question: is possibility more difficult -- is it just that everything which isn't determined to be necessary is thus contingent?)

She notes that this epistemological story would need a full metaphysical theory to fill out the question of exactly why the accurate understanding of actual world structural relations leads to knowledge of modal truth, and this goes beyond the scope of the paper. But her epistemological strategy should be compatible with a number of theories.*

The paper continues with discussions of further implications and responses to possible objections. Then, as part of the concluding section, Jenkins returns to the problem raised in the beginning – that it certainly seems as though the empirical truth values can convey no sensory knowledge of modality. She says that despite its initial plausibility, there is no good reason to believe this statement is true, and this becomes clear when one considers the structure of the world, in addition to just atomic facts.

I liked the paper a lot. The richness and detail of discussion throughout (not captured in this summary) built a strong cumulative case for the argument; and the successful application of the previously developed framework to this question of modal truth speaks well for the robustness of Dr. Jenkins’ research program.

*In my case I had this idea that if I posit an ontology where the concrete world is constructed from events which are actualized possibilities, then our knowledge of these events gives us direct acquaintance with possibilities in everything that we learn (I further speculated in this post that modal knowledge gained in this way could actually precede and be constitutive of other abstract knowledge.)

Friday, May 18, 2007

Quantum Gravity and Gunk

Something is bothering me at the boundary of physics and metaphysics. It seems very likely that a successful theory of quantum gravity will entail that our actual universe is finite. This follows from two considerations. First, in the new theory, the singularities of general relativity will be banished, and the universe will be seen to be grained at the Planck scale. Second, it seems to me that the observable universe can be identified with the actual universe: in what sense should we consider a putative region of the universe beyond the reach of any possible causal contact to be actual? So, it follows that the actual universe is finite.

Now in reading recent metaphysical papers by Ross Cameron and Jonathan Schaffer (see posts here and here), I was introduced to the argument for the conceivability of “gunk”. Gunk is stuff every part of which has proper parts -- that is, it is infinitely divisible. Is a world made of gunk conceivable? It seems so. Since I have embraced the general stance that conceivability implies possibility, I would have to concede that if the actual world is finite, this is a contingent rather than necessary fact about the world.

For some reason, this just rubs me the wrong way. I don’t like thinking that something as fundamental as the conclusion that our world is finite in extent is just a contingent fact. But given that we are (famously) adept at conceiving infinities, and the strength of my opinion regarding the modal rationalist link between conceivability and possibility, I’m stuck.

The only strategy which I think might work is as follows. I could assert that the conceivability of infinity is grounded by the whole space of possible worlds, and its application to a single possible world is a mistake. The gunky world would itself have to comprise all possible worlds by virtue of its infinite extent. It would itself necessarily constitute the entire modal space, so it couldn’t also be one of the constituents of modal space. Individual possible worlds themselves would be necessarily finite in this scheme.

Tuesday, May 15, 2007

In the Beginning was the Qubit

So, how did this party get started? In Programming the Universe (see also prior posts on this topic), Seth Lloyd would like to retell the cosmological story with qubits instead of elementary particles. However, the section of the book (chapter 3) where he does this doesn’t really add much to the standard account. He interprets fluctuations in quantum fields as superpositions of bits whose possible outcomes "0" and "1" represent low and high energy density. The collapse (or decoherence, following Lloyd’s preferred interpretation) of these superpositions creates pockets of high density which can then be the target of gravitational attraction. If you take out the references to bits, his story seems to be the standard cosmological model. If the universe is really a quantum computer, then matter-energy fields (and space-time) would be derived from qubits.
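Lloyd's picture of a density fluctuation as a superposed bit can be caricatured in a few lines of code. This is a toy of my own, not from the book: a single qubit in an unequal superposition, with Born-rule sampling standing in for collapse or decoherence, and the amplitudes 0.8 and 0.6 chosen just for illustration.

```python
import random

def measure(amp0, amp1, rng):
    """Collapse a qubit with amplitudes (amp0, amp1) via Born-rule sampling:
    outcome 0 ("low energy density") with probability |amp0|**2,
    outcome 1 ("high energy density") otherwise."""
    p0 = abs(amp0) ** 2
    return 0 if rng.random() < p0 else 1

# A made-up unequal superposition standing in for a density fluctuation.
amp0, amp1 = 0.8, 0.6          # |amp0|**2 + |amp1|**2 = 1
rng = random.Random(42)        # seeded for reproducibility
outcomes = [measure(amp0, amp1, rng) for _ in range(10_000)]
frac_high = sum(outcomes) / len(outcomes)   # approaches |amp1|**2 = 0.36
```

On repeated trials the fraction of high-density outcomes tracks the squared amplitude, which is all the toy is meant to show; the real story involves fields, not a lone qubit.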

In other words, the real innovation would come if the computational model helps point us toward a theory of quantum gravity. And here, Lloyd does have some ideas. The book has just a few pages on this, but more detail is found in his paper, “A theory of quantum gravity based on quantum computation”. Some impressions from this paper are below (with the caveat that as usual I can’t understand large portions of it).


The idea is that the metric structure of space-time and the behavior of quantum matter fields “are derived from and arise out of an underlying quantum computer” (p.2). One starts with the fact that a quantum computer can be thought of as a universal theory for discrete quantum mechanics. Quantum computers represent a causal network (=computational history) of interactions – actually superpositions of such networks. These can be represented as a graph, similar to those in causal set theory. Now, for the matter side of things, note that at each vertex of the graph (=logic gate), qubits can be transformed or not. When they are transformed, this is a scattering event. Each computation is a superposition of different computational histories, one for each pattern of scattering events. The events are the matter.
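A minimal data-structure sketch of my own (not Lloyd's formalism) may help fix the picture of a single computational history: a directed acyclic graph whose vertices are logic gates, with the gates that actually transform their qubits flagged as scattering events. The gate names and layout here are invented for illustration.

```python
# One computational history as a DAG of logic gates.  Each vertex lists
# its causal inputs; "scatters" marks whether the gate transforms its
# qubits.  On Lloyd's reading, the scattering events are the matter.
history = {
    "g1": {"inputs": [],           "scatters": True},
    "g2": {"inputs": ["g1"],       "scatters": False},  # identity gate
    "g3": {"inputs": ["g1"],       "scatters": True},
    "g4": {"inputs": ["g2", "g3"], "scatters": True},
}

# The "matter content" of this history is just its scattering events.
matter_events = [g for g, v in history.items() if v["scatters"]]
```

A full computation would be a superposition over many such graphs, one per pattern of scattering events; this sketch shows only a single branch.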

On the gravity side of things, the superpositions of these computational histories will be seen to correspond to a fluctuation of space-time geometry. Lloyd’s strategy is to “embed the computational graph in a space-time manifold by mapping [the computational graph] C into R4 via an embedding mapping E. (pp.6-7)”. He says that if you do this, then general covariance will follow from the fact that the informational flow through the network is independent of the way the computation is embedded in space-time. The next step (which seems to be the key part of the paper) makes some additional assumptions so that the geometries derived from the computation explicitly obey the Einstein equations (in their discrete Regge calculus form).

Now I can’t follow all the steps here, but what I think he is doing amounts to a demonstration of how a quantum computation could be consistent with the emergence of general relativistic space-time, rather than showing that it would actually do so as a matter of course. He ends up being at least partially circular in invoking our knowledge of the Einstein equations to achieve his explicit results (if someone would like to correct me on this, please do). In contrast, Fotini Markopoulou’s ambition (see here and here) is to show that the emergence of space-time is a general consequence of an underlying quantum micro-theory (likewise Olaf Dreyer).

The paper finishes with some ideas on how such a theory would impact a variety of topics in cosmology. For instance, singularities correspond to bits entering or leaving the universe, and black holes do lose information; the model can handle different stages of cosmological evolution; and so on. This is intriguing stuff, and I’ll be watching to see whether these ideas are developed further.

Something which intrigues me is how one is supposed to think about this new proposed atom of the universe, the qubit. A practical quantum computer uses properties of familiar particles (spin of an electron or polarization of a photon) as qubits. But if these particles (as well as space-time itself) are derived from these postulated elementary qubits, what are they? Is the superposed atomic qubit just a pure possibility of existence?

[UPDATE: 25 May, 2007. My comments in the first paragraph of this post are a bit unfair, since later in the book (Ch.8, p.196) Lloyd revisits the story of the history of the universe, incorporating some of the ideas from his sections on quantum gravity and complexity. In that discussion, the computation does indeed have priority over matter and gravity.]

Friday, May 04, 2007

Notre Dame Phil. Review of Strawson

For those interested, Leopold Stubenberg has a well-written summary of the recent special edition of the Journal of Consciousness Studies featuring Galen Strawson's panpsychism papers and 17 commentaries. (Hat tip - A brood comb's "power-blogroll"). My posts on this topic are here.

Monday, April 30, 2007

Physical Systems Process Information: So What?

Seth Lloyd’s book (see prior post) has a nice passage in a chapter subsection entitled “So What?” (p. 168). If the universe can indeed be viewed as a quantum computer, why should we care? He poses this further question: “Do we really need a whole new paradigm for thinking about how the universe operates?” Lloyd says (and it would seem difficult to disagree) that the dominant paradigm of the age of science has been that of universe as mechanism. He proposes a new paradigm: “I suggest thinking about the world not simply as a machine, but as a machine that processes information (p.169 – emphasis original).” In my opinion, however, Lloyd’s discussion, while often suggestive, doesn't really answer the "so what" question. Actually, he underplays how radical and interesting a notion this new paradigm really could be.

Unfortunately, in the section quoted from above, Lloyd doesn’t follow through in offering a philosophically compelling interpretation of this new paradigm. He goes on to discuss how the view might better (technically) account for complexity and how it could help on the quest for a theory of quantum gravity – both topics of subsequent sections. Other statements of this sort sprinkled throughout the book are neutral in tone and vague in terms of what they really mean. Here’s the typical quote: “All physical systems register information, and when they evolve dynamically in time, they transform and process that information. (Prologue, p. xi.)”.

I became frustrated at this: What does it really mean to say physical systems process information? In my own (perhaps uninformed) view of classical computing, the only true information processors are the human beings who provide input, program, and interpret the output. The semantics of information processing are provided by humans exclusively; the rest is syntax. This issue is discussed in one subsection of Lloyd’s book, entitled “Meaning” (p.24), where Lloyd relates being asked by a student: “But doesn’t information have to mean something?” The response: “You’re right that when we think of information we normally associate it with meaning. But the meaning of ‘meaning’ is not clear.” In the rest of the section (written presumably after some reflection on this), he fails to improve on this answer. He discusses how bits can represent information, and then says “the interpreter must provide the meaning.” Note there is nothing innovative or even quantum mechanical about this discussion.

Here’s the unstated radical interpretation of Lloyd’s theory: If physical interactions ubiquitously can be described in terms of information processing, this implies that something we think belongs uniquely to human (and some animal) agents is also a feature of more elementary physical systems: that is, possession of semantic properties, or intentionality. If one is unwilling to take this step, that’s fine, but then there is no important difference between the new and the old paradigm when it comes to interpreting how human life and mind can fit into the picture of an otherwise lifeless mechanistic universe.

Postscript:
It’s not a coincidence that Lloyd’s approach to the measurement problem of QM is conservative. He believes the decoherent-histories approach is practical and useful enough to de-emphasize worries about foundational interpretation.

Tuesday, April 24, 2007

Living and Computing in Lloyd's Universe

I recently read Seth Lloyd’s Programming the Universe. This is a thought-provoking (if a bit meandering) book which explains why we should envision the universe as a quantum computer and how doing so may illuminate our understanding of some difficult questions (it is out in paperback – page references below are to this edition). In addition it offers a useful summary of quantum computing for the general reader, along with discussions of cosmology, thermodynamics and introductory quantum mechanics (all with a computing “gloss”). In this post and one or two to follow, I’ll discuss a couple of Lloyd’s ideas. (For a general review, the NYT’s is here).

As a layperson who had read explanatory books and articles about quantum physics for many years before I ever heard about quantum computers, the first theme the book hammered home for me was that quantum computing in an important sense just is quantum physics. A classical computer can be instantiated in a variety of physical set-ups; a quantum computer is itself a quantum system. While you can try to model a quantum system on a classical computer, you will quickly overwhelm its computational resources. So, quantum computing, in addition to its potential for practical acceleration of computing power generally, gives us a useful and appropriate logical framework to analyze the physics of our world.
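The resource blow-up is easy to make concrete. Here is a sketch of my own, assuming the conventional 16 bytes per complex amplitude: the classical memory needed for an n-qubit state vector doubles with every added qubit.

```python
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    """Classical memory needed to store all 2**n complex amplitudes of an
    n-qubit state (assuming 16 bytes per complex double)."""
    return (2 ** n_qubits) * bytes_per_amplitude

print(state_vector_bytes(10))   # 16 KB -- trivial
print(state_vector_bytes(30))   # ~17 GB -- a large workstation
print(state_vector_bytes(50))   # ~18 petabytes -- hopeless
```

Somewhere around 50 qubits the brute-force classical description outgrows any conceivable machine, which is the sense in which a quantum system "overwhelms" classical computational resources.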

The next step is to explore the implications of the ability to perform this kind of “quantum simulation”. Here’s a thumbnail sketch of how the simulation is done (p.149): “Every part of the quantum system to be simulated is mapped onto a collection of qubits in the quantum computer, and interactions between these parts become a sequence of quantum logic operations.” In fact: “…quantum computers could function as universal quantum simulators, whose dynamics could be the analog of any desired physical dynamics. (p.151)” At this point, Lloyd makes the conceptual case that, logically, there is no reason to distinguish between what’s happening in the simulation and the original system.
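The idea that "interactions become a sequence of quantum logic operations" can be illustrated with a toy of my own (not Lloyd's code): a single qubit whose continuous rotation is approximated by a long run of small rotation gates, in the spirit of Trotterized simulation. Everything here is stdlib Python with hand-rolled 2x2 matrices.

```python
import math

def apply_gate(gate, state):
    """Multiply a 2x2 gate (tuple of row tuples) into a qubit state vector."""
    a, b = state
    return (gate[0][0] * a + gate[0][1] * b,
            gate[1][0] * a + gate[1][1] * b)

def rx(theta):
    """Rotation about the X axis by angle theta -- one small 'logic operation'."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return ((c, -1j * s), (-1j * s, c))

# Simulate a total X-rotation of pi as a sequence of 1000 tiny gate steps,
# the way a quantum simulator breaks continuous dynamics into discrete ops.
state = (1 + 0j, 0j)            # start in |0>
steps = 1000
for _ in range(steps):
    state = apply_gate(rx(math.pi / steps), state)

p1 = abs(state[1]) ** 2         # probability of finding |1> at the end
norm = abs(state[0]) ** 2 + abs(state[1]) ** 2
```

After a total rotation of pi the qubit has flipped to |1> (p1 is essentially 1) and the norm stays 1, showing that the gate sequence reproduces the continuous unitary dynamics it discretizes.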

Now, the step which motivates the book title: while we can’t do it yet, in principle the universe (the accessible part, anyway) is finite in extent, and hypothetically could be simulated in a quantum computer. But, following the point above, since the computer has the same number of qubits as the universe, and since the operations on the qubits simulate the universe’s dynamics, we can say: “Such a quantum computation would constitute a complete description of nature, and so would be indistinguishable from nature. Thus, at bottom, the universe can be thought of as performing a quantum computation. (p.154, emphasis original).”

So what does it mean? What can this view do for us? I think there are two possible answers, one concrete and one more intangible. First, ideas from quantum computing may help in the quest for a theory of quantum gravity. Second, it may offer an improved paradigm for interpreting and understanding the physical world. I’ll follow up on these in future posts.

Monday, April 16, 2007

Geometrogenesis

This is a very cool new word. The context of its coining is the exploration of a new genre of background independent quantum gravity theories. The term appears in three recent papers posted on arXiv. Geometrogenesis refers to the emergence of space-time geometry (and matter simultaneously) from a pre-geometric micro-theory of interacting quantum systems.

It looks like the term first appeared in “Quantum Graphity”, a paper by Tomasz Konopka, Fotini Markopoulou, and Lee Smolin. Here, the authors created a model intended as a demonstration of how such a theory could proceed. In the model, degrees of freedom lie on a graph which in a disordered high temperature state can only be described in quantum mechanical terms. The system transitions to an orderly lattice structure at low temperatures.

Markopoulou then added two more (mostly overlapping) papers which step back and survey how theories featuring geometrogenesis fit into the taxonomy of quantum gravity theories and how they differ from other so-called background independent theories like loop quantum gravity.

In the paper “New Directions in Background Independent Quantum Gravity,” Markopoulou describes the “traditional” path to background independent quantum theories of gravity (e.g. LQG) as ones which create microscopic geometric degrees of freedom and then consider quantum superposition or path integrals of these geometries. One challenge for such an approach is that the quest for finding classical dynamical space-time in the low energy limit is made difficult by the fact that the starting point is a timeless, not a dynamical theory. (Note though that Causal Dynamical Triangulations is an approach, discussed in this prior post, which has had some success in getting at least the right large scale dimensionality to emerge from a micro-geometric starting point).

Here is what Markopoulou says in section 1.6.1 of the paper (p.18) about the geometrogenesis picture:
“It is a factor of about twenty orders of magnitude from the physics of the Planck scale described by the microscopic theory to the standard subatomic physics. By analogy with all other physical systems we know, it is reasonable to expect that physics at the two scales decouples to a good approximation. We can expect at least one phase transition interpolating between the microscopic BI phase and the familiar one in which we see dynamical geometry. We shall use the word geometrogenesis for this phase transition.”


She goes on to credit Olaf Dreyer (see this paper, for instance) and quantum computational theorist Seth Lloyd (see here) for advocating this concept of emergence with regard to dynamical space-time.

There are no distances or metrics in the micro-theory; distance is recovered as emerging from the relations among the quantum sub-systems. She also notes that it is a feature of this idea that these emergent excitations of the microscopic degrees of freedom define not only geometry but the structure of matter at the same time. Matter and gravity are unified in the pre-geometric phase. The ambition of this approach is highlighted by Markopoulou’s saying that the approach “provides a path towards explaining gravity rather than just quantizing it (emphasis original).”

She discusses some of the challenges the approach faces. One is that the introduction of dynamics in the micro-theory reintroduces time in a theory that is supposed to be background independent (I personally think if local time and causality exist in the micro-theory, that’s OK). Second and more important: can such a theory show that geometry will emerge, or just that it could emerge? In other words, will we need to posit a fine-tuning mechanism to have a geometric phase? She thinks some of the early approaches offer the hope that the geometric phase is a generic consequence of the theory.

In the paper, she then goes on to describe a specific approach she’s been working on, which invokes the quantum computing concept of noiseless sub-systems to drive emergence. I have a prior post about this work, so I’ll leave off discussing it here.

I don’t have any right to have an opinion, but I find a lot of intuitive appeal in this approach to quantum gravity. The ground level of reality consists of elementary quantum systems linked in a causal network; it is a natural consequence of this reality that our world emerges at the large scale.

Wednesday, March 21, 2007

Merriam's Quantum Relativity

Paul Merriam posted a paper called Quantum Relativity: Physical Laws Must be Invariant Over Quantum Systems in which he puts forth a conceptual strategy for understanding how a relational interpretation addresses the foundational issues of quantum mechanics. Please see this prior post for more background. What follows is a summary and attempted interpretation of what I found to be key aspects of the paper. The usual caveats are in place: my summaries may be not only incomplete (including omission of formalisms) but also misleading due to errors in interpretation. Please read the paper to judge.

The paper starts with a section which discusses why decoherence does not solve the foundational issues of QM. Since I believe this is generally acknowledged (see this recent blog post from Matt Leifer; an old blog post of mine is here), I’ll just focus on the most important part of this discussion. Recall that one of the perceived shortcomings of the relational interpretation of QM revolves around the question of how two or more interacting systems come to “choose” the same basis. Merriam says that decoherence has a “change of basis” problem of its own.

To see this, Merriam returns to the “Wigner’s friend” framework and replaces "Wigner" with the "environment" to create a decoherence version of the scenario. Relative to the environment E, the experimenter (called A) and the system he or she is measuring (S) are in superposition and evolve according to the Schrödinger picture. Decoherence would lead to the selection of relatively stable “classical” appearances of the observable which is the basis of the measurement. But suppose A decides to measure a different observable of S (a change of basis). Decoherence takes place over a period of time (the decoherence time); this time depends on many factors, but the change of basis is a problem for the interval between zero and the decoherence time. (Decoherence is not measurement.)
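The role the decoherence time plays here can be made concrete with a toy model of my own (not Merriam's formalism): the standard exponential suppression of the off-diagonal density-matrix elements, with a basis change early in that decay landing squarely in the problematic window.

```python
import math

def coherence(t, t_d):
    """Magnitude of an off-diagonal density-matrix element under
    environmental decoherence: decays as exp(-t / t_d), where t_d is
    the decoherence time (a standard idealization)."""
    return math.exp(-t / t_d)

t_d = 1.0                       # decoherence time in arbitrary units
early = coherence(0.1 * t_d, t_d)   # ~0.90: superposition still alive;
                                    # a basis change here falls in the
                                    # window between 0 and t_d
late = coherence(10 * t_d, t_d)     # ~4.5e-5: effectively classical in
                                    # the environmentally selected basis
```

The point of the toy is just that "classicality" is only in place once t greatly exceeds t_d; for any measurement reconfigured before then, decoherence has not yet done its work.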

Next Merriam discusses (repeating the arguments of his older paper) the issues highlighted by the Wigner’s friend setup, arguing again that the quantum state describes a system relative to another system. Quantum mechanics is an intransitive theory.

The next section is titled “Quantum Relativity”. So having acknowledged the perspectivist nature of QM, what’s the next step? When considering two quantum systems: “The essential point of this paper is that since both systems physically exist they are both valid coordinate frames from which the laws of physics must hold. Quantum mechanics is as valid in S as it is in A.” If A describes S in terms of a superposition across some measurement basis, then S will describe A as starting out in a corresponding superposition. When A observes (measures) S to be in some eigenstate, “S must also observe A to be in some corresponding eigenstate…”

The key point is brought out by the word “must” here and in the title of the paper. The conceptual hurdle we are jumping here is as follows: if QM is valid from the point of view of all “quantum systems” (including everything from electrons to physicists), then when they interact they necessarily select the consistent basis for interaction. The basis problem is solved by asserting that basis choices must match if QM is to be valid from all points of view.

Merriam believes this conceptual leap has consequences analogous to special relativity. The next passage (see p. 6) looks at the formalism of the Schrödinger equation from A’s and S’s perspectives and wonders how they can be consistent if the mass is so different in the two cases. But he notes that the values for length or distance between the two quantum observations do not have to have the same numerical values in both systems. If distance is scaled to the relationship of the masses, then it is possible to create a transformation from the superposition of S as described by A to that of A as described by S. There can be a group of such transformations for any number of systems. Merriam derives a transformation constant in analogy to the role the speed of light c plays in relativistic transformations.

Merriam also speculates that one could extend the idea to include gravity by taking the equivalence of gravitational force and acceleration to be relative to the local quantum reference system. He suggests the shape a quantum version of Einstein’s equation would take. I will skip for now further discussion of this idea, and of a section on how gauge invariance might be impacted, since I think the key concept is in place with the analogue to special relativity.

Key to special relativity is the postulate that physical laws valid from one reference frame should be form-invariant when translated to another frame. To review, we assume that QM gives a valid physical description from the point of view of a system, and each quantum system forms a physically valid coordinate frame. Note that systems only share a reference frame when they interact. We should be able to translate the state of a system S which is in superposition relative to system A to the state of A relative to S. Again, this only works if we stipulate that if an interaction takes place, the “basis choice” is necessarily consistent from both perspectives.