# Fletcher: Reduction and Causal Set Theory’s Hauptvermutung

2013-09-27 13:51:49: Fletcher notes that previous work on quantum gravity has focused on reduction and emergence of properties as opposed to intertheoretic reduction.  He’d like to help correct this omission.  His topic here is causal set theory.  He’d like to show that the “Hauptvermutung” or primary conjecture of causal set theory has, as yet, no rigorous formulation, but that paying attention to observables may help show the way to such a formulation.

Sam Fletcher
A brief amendment: previous *philosophical* work on quantum gravity. Of course physicists have investigated intertheoretic reduction, albeit in mostly a heuristic way.

2013-09-27 14:01:31: The theorem that inspires CST is due to Malament, and shows that causally isomorphic spacetimes must be isometric up to a conformal factor.  The causal relations uniquely determine a spacetime up to local volume information.  The idea of CST is that, if spacetime were composed of discrete parts, we could recover the volume as well by “counting” the parts composing any region.

The Hauptvermutung: If we have two faithful or “uniform” embeddings of the same causal set into relativistic spacetimes, with a given density, then the two spacetimes are approximately isometric above the volume scale of 1/(the density).  But what is it for an embedding to be “uniform”?

Normally this is defined in terms of the spacetime, but of course the causal sets are fundamental (not the spacetime), so uniformity must fundamentally mean something like “with high probability.”

But what does “high probability” mean?  This is where observables can help.  Uniformity will be an instrumentally useful standard relative to the observables we’re hoping to predict.

Sam Fletcher

I’d like to clarify the connection between “uniformity” and “arising with high probability.”

Causal set theorists typically insist on understanding embeddings probabilistically, so they take “uniform” to mean that the expectation value of the number of embedded points in a region is proportional to the volume of that region (as determined by the spacetime metric). In particular, they require the embedding to be generated by a Poisson process.

But if one takes the causal set to be more fundamental, then it would be backwards to understand it as *arising* from a uniform embedding. Rather, one must use the inverse, inferential process: the embedded causal set *could have arisen* “with high probability” from a Poisson process.
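The Poisson-process notion of uniformity can be made concrete with a small sketch (my own illustration, not from the talk; all function names are hypothetical): sprinkle points into a finite block of 1+1-dimensional Minkowski space, with the number of points Poisson-distributed with mean density × volume, and read the causal order off the lightcone structure.

```python
import math
import random

def poisson_sample(mean, rng):
    """Sample from a Poisson distribution via Knuth's multiplication method."""
    threshold = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def sprinkle(density, t_max, x_max, rng):
    """Poisson 'sprinkling' into a t_max-by-x_max block of 1+1 Minkowski space:
    the point count is Poisson with mean density * volume, positions uniform."""
    n = poisson_sample(density * t_max * x_max, rng)
    return [(rng.uniform(0, t_max), rng.uniform(0, x_max)) for _ in range(n)]

def causal_order(points):
    """Recover the causal relation: (t1, x1) precedes (t2, x2) iff the
    separation is causal and future-directed, i.e. t2 - t1 >= |x2 - x1|."""
    return {(i, j)
            for i, (t1, x1) in enumerate(points)
            for j, (t2, x2) in enumerate(points)
            if i != j and t2 - t1 >= abs(x2 - x1)}

rng = random.Random(0)
points = sprinkle(density=10.0, t_max=4.0, x_max=4.0, rng=rng)
# The expected count is density * volume = 160; the sample fluctuates around it.
print(len(points))
```

Because the points are placed uniformly, the expected number falling in any subregion is the density times that subregion’s volume, which is exactly the “uniformity” condition described above.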

2013-09-27 14:03:55: A question from the audience: are we only asking about the connection between an “eigenstate” of a causal set and a classical spacetime?  Fletcher agrees that for present purposes, this is the question he’s interested in.  The concern is that perhaps the only workable way to approximate a spacetime is with a superposition of causal sets.  Fletcher agrees that this is an important problem but would like to focus on a more tractable one.

2013-09-27 14:12:02: Perhaps the requirement that an embedding exist is too strong.  There are many small causal sets that cannot be embedded into a relativistic spacetime.  Small perturbations of a set should not change whether we count it as embeddable, but this will not hold if we require exact embeddings.  So we look instead for embeddings of a coarse-grained causal set, where a coarse-graining is a causal set that could have arisen with high probability from a Bernoulli deletion.
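A Bernoulli deletion can be sketched in a few lines (again my own illustration, with hypothetical names): keep each element independently with some probability, and restrict the causal order to the survivors. Because the restriction of a transitive relation to a subset is still transitive, the result is again a causal set.

```python
import random

def bernoulli_coarse_grain(elements, relation, keep_prob, rng):
    """Coarse-grain a causal set by Bernoulli deletion: keep each element
    independently with probability keep_prob, then restrict the causal
    order (given as a set of ordered pairs) to the surviving elements."""
    kept = [e for e in elements if rng.random() < keep_prob]
    kept_set = set(kept)
    sub_relation = {(a, b) for (a, b) in relation
                    if a in kept_set and b in kept_set}
    return kept, sub_relation

# Example: a 4-chain a < b < c < d, with the relation given as its full
# transitive order.
elements = ["a", "b", "c", "d"]
relation = {("a", "b"), ("a", "c"), ("a", "d"),
            ("b", "c"), ("b", "d"), ("c", "d")}
kept, sub = bernoulli_coarse_grain(elements, relation,
                                   keep_prob=0.5, rng=random.Random(1))
print(kept, sub)
```

The inferential reading in the talk then runs in the other direction: a causal set counts as a coarse-graining of another if it *could have arisen* with high probability from such a deletion.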

It’s not clear that deletion can be understood as a sort of “averaging” of a causal set.  One doesn’t normally do statistical mechanics by ignoring some atoms entirely; rather we pay attention to larger-scale variables.

A better option may be to use a Bernoulli process to identify (merge) adjacent elements of the set, but this is not easy.

Deleting too many points makes the task of embedding too easy.  So the embedding density should be set by the observables we’re interested in, and how many points can be deleted while still approximating them accurately.

2013-09-27 14:29:52: We also need a way of measuring differences in causal structure (like a “metric” on how causally different spacetimes are).  The most venerable existing proposals have a number of problems.  In particular, they only work for spacetimes defined on the same manifold.

A newer proposal due to Bombelli: Compare isometry classes of spacetimes according to the probability of getting the causal set if n points are selected at random from one of the spacetimes.  As a calculational problem, this is extremely hard.  Moreover, it only works for finite spacetimes.  There hasn’t been much progress with this problem.

Another proposal depends on the notion of Hausdorff distance between two subsets of a metric space.  Gromov generalized this to a distance between metric spaces, corresponding to the minimum possible Hausdorff distance when the two spaces are embedded in any larger space.  But can this be extended to Lorentzian manifolds?
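For intuition, the Hausdorff distance that Gromov’s construction minimizes over embeddings is easy to compute for finite subsets of a fixed metric space. The sketch below is my illustration only: Gromov-Hausdorff distance proper would additionally minimize over all isometric embeddings into a common space, and the Lorentzian generalization is precisely what is at issue.

```python
def hausdorff_distance(A, B, d):
    """Hausdorff distance between finite subsets A and B of a metric space
    with metric d: the larger of the two 'directed' distances, each the
    worst-case distance from a point of one set to the nearest point of
    the other."""
    forward = max(min(d(a, b) for b in B) for a in A)
    backward = max(min(d(a, b) for a in A) for b in B)
    return max(forward, backward)

# Example on the real line with the usual metric: the sets {0, 1} and
# {0, 1.5} differ by 0.5 (the point 1 is 0.5 from the nearest point 1.5).
d = lambda x, y: abs(x - y)
print(hausdorff_distance([0.0, 1.0], [0.0, 1.5], d))  # 0.5
```

A Lorentzian analogue faces an immediate obstacle: the Lorentzian “distance” function is not a metric in this sense (it vanishes on spacelike-related pairs and violates the triangle inequality), so the definition cannot simply be copied over.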

A Lorentzian definition of Gromov-Hausdorff distance is proposed, but it is not clear how well it would secure approximate agreement on observables for “nearby” spacetimes.

2013-09-27 14:31:56: Fletcher: The Hauptvermutung seems not to offer a derivation of GR from CST, but it would provide an explanation of some sort for the usefulness of continuum spacetime as an approximation to the causal set-theoretic reality.