Thinking About Thought

Piero Scaruffi

(Copyright © 1998 Piero Scaruffi)


The New Physics: The Ubiquitous Asymmetry

(Galileo, Newton, Hamilton, Maxwell, Clausius, Carnot, Boltzmann, Poincare', Murray Gell-Mann, Prigogine, Einstein, Lorentz, Minkowski, Riemann, Planck, Broglie, Heisenberg, Schrodinger, Born, Casimir, Bohr, Bell, Bohm, Price, Von Neumann, Penrose, Lockwood, Deutsch, Hawking, Zurek, Anglin, Dirac, Pauli, Weinberg, Kaluza, Schwarz, Gross, Witten, Montonen, Hooft, Freund, Kaku, Bondi, Davies, Milne, Feynman, Wheeler, Schwarzschild, Godel, Kerr, Tipler, Thorne, Gold, Ricci, Weyl, Strominger, Bekenstein, Guth, Linde, Mach)

The Classical World: Utopia

Since we started with the assumption that our Physics is inadequate to explain at least one natural phenomenon, consciousness, and therefore cannot be "right" (or, at least, complete), it is worth taking a quick look at what Physics has to say about the universe that our consciousness inhabits.

Our view of the world we live in has undergone a dramatic change over the course of this century. Quantum Theory and Relativity Theory have changed the very essence of Physics, painting in front of us a completely different picture of how things happen and why they happen.

Let's first recapitulate the key concepts of classical Physics. Galileo laid them down between the end of the Sixteenth century and the beginning of the Seventeenth. First of all, a body in free motion does not need any force to continue moving. Second, if a force is applied, then what changes is the acceleration, not directly the velocity (velocity changes only as a consequence of the acceleration). Third, all bodies fall with the same acceleration. A century later, Newton expressed these findings in the elegant form of differential calculus and immersed them in the elegant setting of Euclid's geometry. Three fundamental laws explain all of nature (at least, all that was known of nature at the time). The first one states that the acceleration of a body due to a force is inversely proportional to the body's "inertial" mass. The second one states that the gravitational attraction that a body is subject to is proportional to its "gravitational" mass. The third one indirectly states the conservation of momentum: to every action there is always an equal and opposite reaction.

They are mostly a rehashing of Galileo's ideas, but they state the exact mathematical relationships and assign numerical values to constants. They lent themselves to formal calculations because they were based on calculus and on geometry, both formal systems that allowed for exact deduction. By applying Newton's laws, one can derive the dynamic equation that mathematically describes the motion of a system: given the position and velocity at one time, the equations can determine the position and velocity at any later time. Newton's world was a deterministic machine, whose state at any time was a direct consequence of its state at a previous time. Two conservation laws were particularly effective in constraining the motion of systems: the conservation of momentum (momentum being velocity times mass) and the conservation of energy. No physical event can alter the overall value of, say, the energy: energy can change form, but ultimately it will always be there in the same amount.

In the Nineteenth century, the Irish mathematician William Hamilton realized what Newton had only implied: that velocity, as well as position, determines the state of a system. He also realized that the key quantity is the overall energy of the system. By combining these intuitions, Hamilton replaced Newton's dynamic equation with two equations that derived from just one quantity (the Hamiltonian function, a measure of the total energy of the system), that replaced the single second-order equation (acceleration being a second derivative of position) with first-order ones, and that were symmetrical (once velocity was replaced by momentum). The bottom line was that position and momentum played the same role and therefore the state of the system could be viewed as described by six coordinates, the three coordinates of position plus the three coordinates of momentum. At every point in time one could compute the set of six coordinates and the sequence of such sets would be the history of the system in the world. One could then visualize the evolution of the system in a six-dimensional space, the "phase" space.
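As a concrete illustration, here is a minimal sketch of Hamilton's picture for the simplest possible system, a one-dimensional harmonic oscillator (the mass, spring constant and time step below are arbitrary illustrative values): the state is a point (q, p) in phase space, and the two first-order equations move that point around while the Hamiltonian (the total energy) stays constant.

```python
# Hamilton's two first-order equations for a 1-D harmonic oscillator.
# H(q, p) = p**2 / (2*m) + k * q**2 / 2   (the Hamiltonian: total energy)
#   dq/dt = +dH/dp = p / m
#   dp/dt = -dH/dq = -k * q

m, k = 1.0, 1.0          # illustrative mass and spring constant
q, p = 1.0, 0.0          # initial position and momentum: a point in phase space
dt = 0.001               # small time step

for _ in range(10_000):  # trace the system's history through phase space
    q += dt * (p / m)    # dq/dt =  dH/dp
    p -= dt * (k * q)    # dp/dt = -dH/dq

energy = p**2 / (2 * m) + k * q**2 / 2
print(q, p, energy)      # the energy remains (approximately) 0.5, its initial value
```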

In the Nineteenth century two phenomena posed increasing problems for the Newtonian picture: gases and electromagnetism. Gases had been studied as collections of particles but, a gas being made of many minuscule particles in very fast motion and in continuous interaction, this model soon proved to be a gross approximation. The classical approach was quickly abandoned in favor of a stochastic approach, whereby what matters is the average behavior of a particle and all quantities that matter (from temperature to heat) are statistical quantities.

In the meantime, growing evidence was accumulating that electric bodies radiated invisible waves of energy through space, thereby creating electromagnetic fields that could interact with each other, and that light itself was but a particular case of an electromagnetic field. In the 1860s the British physicist James Clerk Maxwell expressed the properties of electromagnetic fields in a set of equations. These equations resemble the Hamiltonian equations in that they deal with first-order derivatives of the electric and magnetic intensities. Given the distribution of electric and magnetic charges at a time, Maxwell's equations can determine the distribution at any later time. The difference is that electric and magnetic intensities refer to waves, whereas position and momentum refer to particles. The number of coordinates needed to determine a wave is infinite, not six...

By then, it was already clear that Science was faced with a dilemma, one which was bound to become the theme of the rest of the century: there are electromagnetic forces that hold together particles in objects and there are gravitational forces that hold together objects in the universe, and these two forces are both inverse square forces (the intensity of the force is inversely proportional to the square of the distance), but the two quantities they act upon (electric charge and mass) behave in a completely different way, thereby leading to two completely different descriptions of the universe.

Another catch hidden in all of these equations was that the beautiful and imposing architecture of Physics could not distinguish the past from the future, a distinction that is obvious to all of us. All of Physics' equations were symmetrical in time. There is nothing in Newton's laws, in Hamilton's laws, in Maxwell's laws or even in Einstein's laws that can discriminate past from future. Physics was reversible in time, something that goes against our perception of the absolute and (alas) irrevocable flow of time.

Entropy: The Curse of Irreversibility

The single biggest change in scientific thinking may have nothing to do with Relativity and Quantum theories: it may well be the discovery that some processes are not symmetric in time. Before the discovery of the second law of Thermodynamics, all laws were symmetric in time, and change could always be bi-directional. Any formula had an equal sign, which meant that one could switch the two sides at will. We could always replay the history of the universe backwards. Entropy changed all that.

Entropy was "invented" around 1850 by the German physicist Rudolf Clausius in the process of revising the laws proposed by the French engineer Sadi Carnot, that would become the foundations of Thermodynamics. The first law of Thermodynamics is basically the law of conservation of energy: energy can never be created or destroyed, it can only be transformed. The second law states that any transformation has an energetic cost: this "cost" in transforming energy Clausius called "entropy". Natural processes generate entropy. Entropy explains why heat flows spontaneously from hot to cold bodies, but the opposite never occurs: energy can be lost in entropy, not viceversa.

Clausius summarized the situation like this: the energy of the universe is constant, the entropy of the universe is increasing.

In the 1870s, the Austrian physicist Ludwig Boltzmann tried to deduce entropy from the motion of gas particles, i.e. from dynamic laws which are reversible in nature. Boltzmann ended up with a statistical definition of entropy to characterize the fact that many different microscopic states of a system result in the same macroscopic state: the entropy of a macrostate is proportional to the logarithm of the number of its microstates. It is not very intuitive how this definition of entropy relates to the original one, but it does. Basically, entropy's usefulness is that it turns out to be a measure of disorder in a system.
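A toy sketch of Boltzmann's counting (using coins as stand-in two-state "particles", with purely illustrative numbers): a macrostate is the total number of heads, a microstate is the full list of individual outcomes, and the entropy of a macrostate grows with the logarithm of the number of microstates compatible with it.

```python
from math import comb, log

k = 1.380649e-23   # Boltzmann's constant (joules per kelvin)
N = 100            # number of two-state "particles" (illustrative)

for heads in (0, 25, 50):
    W = comb(N, heads)    # number of microstates compatible with this macrostate
    S = k * log(W)        # Boltzmann's entropy: proportional to the logarithm of W
    print(heads, W, S)

# The most "disordered" macrostate (50 heads out of 100) has by far the most
# microstates, hence the highest entropy -- and it is also the most probable one.
```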

The second law of Thermodynamics is an inequality: it states that entropy can never decrease. Indirectly, this law states that transformation processes cannot be run backward, cannot be "undone". Young people can age, but old people cannot rejuvenate. Buildings don't improve over the years, they decay. Scrambled eggs cannot be unscrambled and dissolved sugar cubes cannot be recomposed. The universe must evolve in the direction of higher and higher entropy. Some things are irreversible.

The universe as a whole is proceeding towards its unavoidable fate: the "heat death", i.e. the state of maximum entropy, in which no heat flow is possible, which means that temperature is constant everywhere, which means that there is no energy available to produce more heat, which means that all energy in the universe is in the form of heat. (The only escape from the heat death would be if the energy in the universe were infinite).

Scientists were (and still are) puzzled by the fact that irreversibility (the law of entropy) had been deduced from reversibility (basically, Newton's laws). It is weird that irreversibility should arise from the behavior of molecules which, if taken individually, obey physical laws that are reversible. We can keep track of the motion of each single particle in a gas, and then undo it. But we cannot undo the macroscopic consequence of the motion of thousands of such particles in a gas.

The trick is that Boltzmann assumed that a gas (a discrete set of interacting molecules) can be considered as a continuum of points and, on top of that, that the particles can be considered independent of each other: if these arbitrary assumptions are dropped, no rigorous proof for the irreversibility of natural processes exists. The French mathematician Jules Henri Poincare', for example, proved just about the opposite: that every closed system must eventually return arbitrarily close to its initial state. Poincare' proved eternal recurrence where Thermodynamics had just proved eternal doom.

Boltzmann tried to prove that entropy (and therefore irreversibility) is an illusion, that matter at the microscopic level is fundamentally reversible.

Later, several scientists interpreted entropy as a measure of ignorance about the microscopic state of a system, for example as a measure of the amount of information needed to specify it. Murray Gell-Mann recently summarized these arguments when he gave his explanation for the drift of the universe towards disorder. The reason that nature prefers disorder over order is that there are many more states of disorder than of order, therefore it is more probable that a system ends up in a state of disorder. In other words, the probability of disorder is much higher than the probability of spontaneous order, and that's why disorder happens more often than order.

It took the Belgian (but Russian-born) chemist and Nobel laureate Ilya Prigogine, in the 1970s, to provide a more credible explanation for the origin of irreversibility. He observed some inherent time asymmetry in chaotic processes at the microscopic level, which would cause entropy at the macroscopic level. He reached the intriguing conclusion that irreversibility originates from randomness which is inherent in nature.

An Accelerated World

Science has long been obsessed with acceleration. Galileo and Newton went down in history for managing to express that simple concept of acceleration. After them Physics assumed that an object is defined by its position, its velocity (i.e., the rate at which its position changes) and its acceleration (i.e., the rate at which its velocity changes). The question is: why stop there? Why don't we need the rate at which an object changes its acceleration, and so forth? Position is a space coordinate. Velocity is the first derivative with respect to time of a space coordinate. Acceleration is the second derivative with respect to time of a space coordinate. Why do we only need two orders of derivatives to identify an object, and not three or four or twenty-one?

Because the main force we have to deal with is gravity, and it only causes acceleration. We don't know any force that causes a change of acceleration, therefore we are not interested in higher orders of derivatives. To be precise, forces are defined as things that cause acceleration, and only acceleration (as in Newton's famous equation "F=ma"). We don't even have a word for things that would cause a third derivative with respect to time of a space coordinate.

As a matter of fact, Newton explained acceleration by introducing gravity. In a sense Newton found more than just a law of Physics, he explained a millennia-old obsession: the reason mankind had been so interested in acceleration is that there is a force called gravity that drives the whole world. If gravity did not exist, we would probably never have bothered to study acceleration. Car manufacturers would just tell customers how long it takes for their car to reach such and such a speed. Acceleration would not even have a name.

Relativity: The Primacy of Light

The Special Theory of Relativity was born (in 1905) out of Albert Einstein's belief that the laws of nature must be uniform, whether they describe the motion of bodies or the motion of electrons. Therefore, Newton's equations for the dynamics of bodies and Maxwell's equations for the dynamics of electromagnetic waves had to be unified in one set of equations. In addition, they must be the same in all frames of reference that are "inertial", i.e. whose relative speed is constant. Galileo had shown this to be true for Newton's mechanics, and Einstein wanted it to be true for Maxwell's electromagnetism as well. In order to do that, one must modify Newton's equations, as the Dutch physicist Hendrik Lorentz had already pointed out in 1892. The implications of this unification are momentous.

Relativity conceives all motions as "relative" to something. Newton's absolute motion, as the Austrian (Moravian-born) physicist Ernst Mach had pointed out over and over, is an oxymoron. Motion is always measured relative to something. At best, one can single out a privileged frame of reference by using the stars as a meta-frame of reference. But even this privileged frame of reference (the "inertial" one) is still measured relative to something, i.e. to the stars. There is no frame of reference which is at rest, there is no "absolute" frame of reference. While this is what gave Relativity its name, much more "relativity" was hidden in the theory.

In Relativity, space and time are simply different dimensions of the same space-time continuum (as originally proposed by the German mathematician Hermann Minkowski).

All quantities are redefined in space-time and must have four dimensions. For example, energy is no longer a simple (mono-dimensional) value, and momentum is no longer a three-dimensional quantity: energy and momentum are one space-time quantity which has four dimensions. Which part of this quantity is energy and which part is momentum depends on the observer: different observers see different things depending on their state of motion, because, based on their state of motion, a four-dimensional quantity gets divided in different ways into an energy component and a momentum component. All quantities are decomposed into a time component and a space component, but how that occurs depends on the observer's state of motion.

This phenomenon is similar to looking at a building from one perspective or another: what we perceive as depth, width or height, depends on where we are looking from. An observer situated somewhere else will have a different perspective and measure different depth, width and height. The same idea holds in space-time, except that now time is also one of the quantities that change with "perspective" and the motion of the observer (rather than her position) determines what the "perspective" is. This accounts for bizarre distortions of space and time, as already noted by Lorentz before Einstein: as speed increases, lengths contract and time slows down. This phenomenon is negligible at slow speeds, but becomes very visible at speeds close to the speed of light.
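A quick numerical sketch of the size of the effect (the speeds are chosen arbitrarily): the "Lorentz factor" computed below says how much a moving meter stick contracts and how much a moving clock slows down, as seen by an observer at rest.

```python
from math import sqrt

c = 299_792_458.0                 # speed of light (m/s)

def gamma(v):
    """Lorentz factor for a relative speed v."""
    return 1.0 / sqrt(1.0 - (v / c) ** 2)

for v in (30.0, 0.5 * c, 0.99 * c):   # a car, half the speed of light, near light speed
    g = gamma(v)
    print(v, 1.0 / g, g)              # a 1 m rod appears 1/g meters long,
                                      # a 1 s tick appears to last g seconds
```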

A further implication is that "now" becomes a meaningless concept: one observer's "now" is not another observer's "now". Two events may be simultaneous for one observer, while they may occur at different times for another observer: again, their perspective in space-time determines what they see.

Even the very concept of the flow of time is questionable. There appears to be a fixed space-time, and the past determines the future. Actually, there seems to be no difference between past and future: again, it is just a matter of perspective.

Mass and energy are not exempted from "relativity". The mass and the energy of an object increase as the object speeds up. This violates the traditional principle of conservation, which held that nothing can be destroyed or created, but Einstein proved that mass and energy can transform into each other according to his famous formula (a particle at rest has an energy equal to its mass times the speed of light squared), and a very tiny piece of matter can release huge amounts of energy. Scientists were already familiar with a phenomenon in which mass seemed to disappear and correspondingly energy seemed to appear: radioactivity, discovered in 1896. But Einstein's conclusion that all matter is energy was far more far-reaching.
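As a rough numerical sketch of the famous formula, the rest energy locked in a single gram of matter is enormous:

```python
c = 299_792_458.0    # speed of light (m/s)
m = 0.001            # one gram of matter, in kilograms

E = m * c ** 2       # Einstein's E = m * c^2
print(E)             # about 9e13 joules, roughly the energy released by a large bomb
```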

Light has a privileged status in Relativity Theory. The reason is that, according to Einstein, the speed of light is always the same, no matter what. If one runs at the same speed as a train, one sees the train as standing still. On the contrary, if one could run at the speed of light, one would still see light moving at the speed of light. Most of Relativity's bizarre properties are actually consequences of this postulate. Einstein had to adopt Lorentz's transformations of coordinates, which leave the speed of light constant in all frames of reference, regardless of the speed the frame is moving at; but in order to achieve this result he had to postulate that moving bodies contract and moving clocks slow down by an amount that depends on their speed.

If all this sounds unrealistic, remember that according to traditional Physics the bomb dropped on Hiroshima should have simply bounced, whereas according to Einstein's Relativity it had to explode and generate a lot of energy. That bomb remains the most remarkable proof of Einstein's Relativity. Nothing in Quantum Theory can match this kind of proof.

Life On A World Line

The speed of light is finite and one of Relativity's fundamental principles is that nothing can travel faster than light. As a consequence, an object located at a specific point at a specific time will never be able to reach space-time regions of the universe that would require traveling faster than light.

The "light cone" of a space-time point is the set of all points that can be reached by all possible light rays passing through that point. Because the speed of light is finite, that four-dimensional region has the shape of a cone (if the axis for time is perpendicular to the axes for the three spatial dimensions). The light cone represents the potential future of the point: these are all the points that can be reached in the future traveling at the speed of light or slower. By projecting the cone backwards, one gets the light cone for the past. The actual past of the point is contained in the past light cone and the actual future of the point is contained in the future light cone. What is outside the two cones is unreachable to that point. And, viceversa, no event located outside the light cone can influence the future of that point. The "event horizon" of an observer is a space-time surface that divides space-time into regions which can communicate with the observer and regions which cannot.

The "world line" is the spatio-temporal path that an object is actually traveling through space-time. That line is always contained inside the light cone.

Besides the traditional quantity of time, Relativity Theory introduces another type of time. "Proper" time is the space-time distance between two points on a world line, because that distance turns out to be the time experienced by an observer traveling along that world line.

Relativity erased the concept of an absolute Time, but in doing so it established an even stronger type of determinism. It feels like our lives are rigidly determined and our task in this universe is simply to cruise on our world line. There is no provision in Relativity for free will.

General Relativity: Gravity Talks

Einstein's Relativity Theory is ultimately about the nature of gravitation, which is the force holding together the universe. Relativity explains gravitation in terms of curved space-time, i.e. in terms of geometry.

The fundamental principle of this theory is actually quite simple: Galileo Galilei had already observed that all bodies fall with the same acceleration under the effect of gravitation, which Newton explained by postulating the equivalence of the gravitational and inertial masses of a body. Einstein simply extended this principle of "equivalence": because of this equivalence, gravity is canceled in a frame of reference which is free falling (i.e., falling with the same acceleration), just like the speed of an object is canceled in a frame of reference which is moving at the same speed. Einstein generalized this phenomenon as: the laws of nature in an accelerating frame are equivalent to the laws in a gravitational field.

Einstein's idea was to regard free falls as natural motions, as straight lines in space-time. The only way to achieve this was to assume that the effect of a gravitational field is to produce a curvature of space-time: the straight line becomes a "geodesic", the shortest route between two points on a warped surface (if the surface is flat, then the geodesic is a straight line). Bodies not subject to forces other than a gravitational field move along geodesics of space-time.

The curvature of space-time is measured by a "curvature tensor" originally introduced in 1854 by the German mathematician Bernhard Riemann. Riemann's geometry comprises classical Euclidean geometry as a special case, but it is much more general. A specific consequence of Riemann's geometry is that "force" becomes an effect of the geometry of space. A "force" is simply the manifestation of a distortion in the geometry of space. Wherever there is a distortion, a moving object feels a "force" affecting its motion. Riemann's geometry is based on the notion of a "metric (or curvature) tensor", which expresses the curvature of space. On a two-dimensional surface each point is described by three numbers. In a four-dimensional world, it takes ten numbers at each point. This is the metric tensor. Euclid's geometry corresponds to one of the infinitely many possible metric tensors (the one that represents zero curvature).
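The counting behind those numbers can be sketched in one line: the metric tensor is symmetric, and a symmetric n-by-n array has n(n+1)/2 independent entries.

```python
def metric_components(n):
    """Independent components of a symmetric metric tensor in n dimensions."""
    return n * (n + 1) // 2

print(metric_components(2))   # 3  -- a two-dimensional surface
print(metric_components(4))   # 10 -- four-dimensional space-time
```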

With his 1916 field equations, Einstein made the connection with the physical world: he related the curvature of space-time caused by an object to the energy and momentum of the object (precisely, he related the curvature tensor to the "energy-momentum tensor"). Einstein therefore introduced two innovative ideas: the first is that we should consider space and time together (three spatial dimensions and one time dimension), not as separate; the second is that what causes the warps in this space-time (i.e., what alters the metric from Euclid's geometry) is mass. A mass does not directly cause gravitational effects: a mass first deforms space-time, and that warping affects the motion of other objects, which therefore indirectly feel the "gravitational force" of that mass.

Summarizing: the dynamics of matter is determined by the geometry of space-time, and that geometry is in turn determined by the distribution of matter. Space-time acts like an intermediary device that relays the existence of matter to other matter.

Incidentally, this implies that massless things are also affected by gravitation. This includes light itself: a light beam is bent by a gravitational field. Light beams follow geodesics, which may be bent by a space-time warp.

Special Relativity asked that the laws of nature be the same in all inertial frames, which implied that they had to be invariant with respect to the Lorentz transformations. As a consequence, Einstein had to accept that clocks slow down and bodies contract. With General Relativity he wanted the laws of nature to be the same in all frames, inertial or not (his field equations basically removed the need for inertial frames). This implies that the laws of nature must be "covariant" (basically, must have the same form) with respect to a generic transformation of coordinates. That turned out to imply a further erosion of the concept of Time: clocks slow down just for being in the wrong place, i.e. in a gravitational field.

While apparent paradoxes (such as the twins paradox) have been widely publicized, Relativity Theory has been amazingly accurate in its predictions and so far no serious blow has been dealt to its foundations. While ordinary people may be reluctant to think of curved spaces and time dilations, all these phenomena have been corroborated over and over by countless experiments.

Quantum Theory: Enter Uncertainty

The fundamental assumption of Quantum Theory is that any field of force manifests itself in the form of discrete particles (or "quanta"). Forces are manifestations of exchanges of discrete amounts of energy. This was the fundamental discovery made by the German physicist Max Planck in 1900. For example, electromagnetic waves of a given frequency carry an energy which is an integer multiple of a fundamental quantum of energy, proportional to the "Planck constant".
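A minimal numerical sketch (using the frequency of green light as an arbitrary example): radiation of frequency f can only carry a whole number of quanta, each of energy h times f, and nothing in between.

```python
h = 6.62607015e-34   # Planck's constant (joule-seconds)
f = 5.0e14           # frequency of green light, in hertz (illustrative)

quantum = h * f      # the energy of a single quantum (photon) at this frequency
for n in range(1, 4):
    print(n, n * quantum)   # allowed energies: 1, 2, 3, ... quanta
```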

An implication, as outlined by the French physicist Louis de Broglie in 1923 (after Einstein had made the same assumption regarding light), is that waves and particles are dual aspects of the same phenomenon: every particle behaves like a wave, and every wave can be associated with a particle. One can talk of energy and mass, or one can talk of frequency and wavelength. The two descriptions are equivalent.

The character of this relationship was defined in the mid-1920s by Werner Heisenberg in Germany and Erwin Schrodinger in Austria. In 1926 Max Born realized the implications of the wave-particle duality: the wave associated to a particle turns out to be a wave of probabilities, in order to account for the alternative possibilities that open up for the future of a particle. The state of a particle is described by a "wave function" which summarizes (and superposes) all the alternatives and their probabilities. Schrodinger's equation describes how this wave function evolves in time, and is therefore the quantum equivalent of Hamilton's equations. But at every point in time the wave function describes a set of possibilities, not just one actuality. The particle's current state is actually to be thought of as a "superposition" of all those alternatives that are made possible by its wavelike behavior.
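A toy sketch of such a superposition (the two "places" and the amplitudes are invented for illustration): the wave function assigns a complex amplitude to each alternative, and the squared magnitudes of the amplitudes behave as probabilities that must add up to one.

```python
# A particle that can be found "left" or "right", with complex amplitudes.
amplitudes = {"left": 0.6 + 0.0j, "right": 0.0 + 0.8j}   # illustrative values

probabilities = {place: abs(a) ** 2 for place, a in amplitudes.items()}
print(probabilities)                 # about {'left': 0.36, 'right': 0.64}
print(sum(probabilities.values()))   # about 1.0: the state is normalized
```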

In classical Physics, a quantity (such as the position or the mass) is both an attribute of the state of the system and an observable (a quantity that can be measured by an observer). Quantum Theory makes a sharp distinction between states and observables. If the system is in a given state, an observable can assume a range of values (so-called "eigenvalues"), each one with a given probability. The evolution over time of a system can be viewed as due (according to Heisenberg) to the time evolution of the observables or (according to Schrodinger) to the time evolution of the states.

An observer can measure at the same time only observables which are compatible. If the observables are not compatible, they stand in a relation of mutual indeterminacy: the more accurate the measurement of one, the less accurate the measurement of the other. Position and momentum are, for example, incompatible. Heisenberg's famous "uncertainty principle" states that there is a limit to the precision with which we can measure, at the same time, the momentum and the position of a particle. If one measures the momentum exactly, then one cannot measure the position at all, and vice versa. This is actually a direct consequence of Einstein's equation that related the wavelength and the momentum (or the frequency and the energy) of a light wave: if coordinates (wavelength) and momentum are related, they are no longer independent quantities. Einstein never believed in this principle, but he was indirectly the one who discovered it.
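A minimal numerical sketch of the bound (the chosen position uncertainty is arbitrary): if an electron's position is pinned down to the size of an atom, its velocity cannot be known to better than a few hundred kilometers per second.

```python
hbar = 1.054571817e-34           # reduced Planck constant (joule-seconds)
m_electron = 9.1093837015e-31    # electron mass (kg)

delta_x = 1.0e-10                # position uncertainty: about one atom (meters)
delta_p = hbar / (2 * delta_x)   # smallest momentum uncertainty allowed by the principle
delta_v = delta_p / m_electron   # corresponding velocity uncertainty

print(delta_p, delta_v)          # roughly 5e-25 kg m/s and 6e5 m/s
```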

Ultimately, the world is a physical system in which a set of compatible observables is defined, and whose state is defined by a sum of eigenstates of such observables.

The degree of uncertainty is proportional to the Planck constant. This implies that there is a limit to how small a physical system can be, because, below a scale derived from the Planck constant (the "Planck length"), the physical laws of Quantum Theory stop working altogether. The Planck scale (10^-33 cm, 10^-43 sec) is the scale at which space-time is no longer a continuum but becomes a grid of events separated by the Planck distance. Planck and Heisenberg showed that at that scale the vacuum of empty space is actually "full" of all sorts of subtle events, and in 1948 the Dutch physicist Hendrik Casimir even showed how this all-pervading zero-point energy could be measured (the effect is now known as the "Casimir force"). This was the culmination of the eccentricities of Quantum Theory: that the vacuum is not empty.
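A short sketch of where the Planck scale comes from: combining the constant of gravity, the constant of quantum theory and the speed of light yields a unique length and a unique time.

```python
from math import sqrt

G = 6.67430e-11          # gravitational constant (m^3 kg^-1 s^-2)
hbar = 1.054571817e-34   # reduced Planck constant (joule-seconds)
c = 299_792_458.0        # speed of light (m/s)

planck_length = sqrt(hbar * G / c ** 3)   # about 1.6e-35 m (i.e. about 1.6e-33 cm)
planck_time = planck_length / c           # about 5.4e-44 s
print(planck_length, planck_time)
```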

The Power of Constants

At this point we can note that all the revolutionary and controversial results of these new theories arose from the values of two constants. Quantum Mechanics was a direct consequence of Planck's constant: were that constant zero, there would be no uncertainty. Relativity Theory was a direct consequence of the speed of light being constant in all frames of reference: were the speed of light infinite, there would be no time dilation and no contraction of lengths.

These two constants were determined, indirectly, by studying two minor phenomena that were still unsolved at the end of the Nineteenth century: the ether and the black body radiation.

The presence of the ether could not be detected by measuring the speed of light through it; so Einstein assumed that the speed of light is always the same.

The black body does not radiate light with all possible values of energy but only with some values of energy, those that are integer multiples of a certain unit of energy; so Planck assumed that energy exchanges must only occur in discrete packets.

These two universal constants alone revealed a whole new picture of our universe.

Quantum Reality: Fuzzy or Incomplete?

Many conflicting interpretations of Quantum Theory were offered from the beginning.

The Danish physicist Niels Bohr, the man who in 1913 had explained the structure of the atom and whose principle of complementarity was basically an extension of Heisenberg's principle, claimed that only phenomena (what appears to our senses, whether an object or the measurement of an instrument) are real, in the human sense of the word: particles that cannot be seen belong to a different kind of reality, which, circularly, cannot be perceived by humans; and the wave function is therefore not a real thing. Reality is unknowable because it is inherently indeterminate.

Werner Heisenberg believed that the world "is" made of possibility waves and not particles: particles are not real, they are merely "potentialities", something in between ideas and actualities. Reality is due to quantum discontinuities: classical evolution of the Schrodinger equation builds up "propensities", then quantum discontinuities (the collapse of the wave function) select one of those propensities. Every time this happens, reality changes. Therefore reality "is" the sequence of such quantum discontinuities. According to Heisenberg, only waves and events really exist.

Albert Einstein was so unhappy with the uncertainty principle that he accepted Quantum Mechanics only as an incomplete description of the universe. He thought that Quantum Mechanics had neglected some "hidden variables". Einstein's famous argument was that, using Quantum Physics itself, one can prove that, under some circumstances, two particles are always connected: once we measure the position of the first one, we instantaneously determine the position of the other one, even if it has traveled to the other end of the universe. Since no information can travel faster than light, it is impossible for the second particle to react instantaneously to a measurement that occurs so far from it. The only explanation for this paradox is that the second particle must have properties which are not described by Quantum Mechanics. According to Einstein, Quantum Physics provides a fuzzy picture of a sharp reality, whereas for Bohr it provides a complete picture of a fuzzy reality.

Einstein was proven wrong in 1964 by John Bell's famous theorem, which basically ruled out "local hidden variables", precisely the type that Einstein invoked. There are objective, nonlocal connections in the universe. In other words, two particles, once they have interacted, will keep interacting forever (their wave functions get entangled forever), thereby apparently breaking the law of locality (that two objects can interact only if they touch each other or if their interaction is mediated by some other object). Two measurements can be related instantaneously even if they are located in regions too far apart for a light signal to travel between them. Non-locality is a fact of nature.

The American physicist David Bohm believes that both waves and particles exist. He postulated a preferred rest frame (which would define the "instantaneous now") and an instantaneous extra force that depends on the probability distribution. This extra force (the "pilot wave") determines the behavior of particles. Any attempt to measure the properties of a particle alters the pilot wave and therefore the properties themselves.

Bohm also thought of the universe as one indivisible whole. This would be consistent with John Bell's proof. The universe would be, in a sense, just one huge wave of possibilities, all particles connected forever.

The American physicist Alwyn Scott has recently resuscitated Einstein's hypothesis. Scott argues in favor of an interpretation of Quantum Theory as an approximation to a not yet discovered nonlinear theory. The new theory must be nonlinear because it is the only way to remove Heisenberg's uncertainty principle, which descends from the linearity of Schrodinger's equation.

Again inspired by Einstein, the Australian philosopher Huw Price thinks that backward causation (that the future can influence the past), or advanced action, is a legitimate option. Price believes that our theories are time-asymmetric because we are conditioned by folk concepts of causality. Physical theories are built starting with the assumption that the future cannot influence the past, and therefore it is no surprise that they prescribe that the future cannot influence the past. If we remove our preconceptions about causality, then we can redraw Quantum Physics. Then it turns out that Einstein was right with his hypothesis of hidden variables, and that Quantum Physics provides an incomplete description of the universe. A complete Quantum Physics would not assign any critical role to the observer.

The Measurement Problem

The biggest problem with Quantum Theory is how the observed world (the world we know, made of well-defined objects) emerges from the quantum world (a world of mere possibilities and uncertainties, thanks to Heisenberg's principle).

When analyzing the evolution of a system, the mathematician John Von Neumann (the same one who invented the computer) distinguished between processes of the first and second kinds. The latter processes occur in isolated systems, on which no measurements can be carried out, and they closely resemble classical, deterministic evolution of a physical system. The former processes occur when a measurement is carried out and they are indeterministic (or at least probabilistic): when an observable is measured, the state of the system suddenly jumps to an unpredictable state (or "eigenstate") associated with the measured eigenvalue of the observable. Unlike classical Physics, in which the new state can be determined from the prior state of the system, Quantum Theory can only specify the probabilities of moving into any of the observable's eigenstates. In quantum lingo, a measurement causes a "collapse of the wave function", after which the observable assumes a specific value.
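A toy sketch of such a jump (the state and its amplitudes are invented for illustration): the outcome of the measurement is drawn at random with probabilities given by the squared amplitudes, and the state then collapses onto the corresponding eigenstate.

```python
import random

def measure(amplitudes):
    """amplitudes: dict mapping each eigenvalue to its (normalized) amplitude."""
    eigenvalues = list(amplitudes)
    weights = [abs(amplitudes[v]) ** 2 for v in eigenvalues]   # the probabilities
    outcome = random.choices(eigenvalues, weights=weights)[0]  # the indeterministic jump
    collapsed = {v: (1.0 if v == outcome else 0.0) for v in eigenvalues}
    return outcome, collapsed   # after the jump, the measured value has probability 1

state = {"spin up": 0.6, "spin down": 0.8}   # illustrative superposition
print(measure(state))
```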

Von Neumann pointed out that measurement of a system consists in a process of interactions between the instrument and the system, whereby the states of the instrument become dependent on the states of the system. There is a chain of interactions that leads from the system to the observer's consciousness. For example, a part of the instrument is linked to the system, another part of the instrument is linked to the previous part, and so forth until the interaction reaches the observer's eye; then an interaction occurs between the eye and the brain and finally the chain arrives at the observer's consciousness. Eventually, states of the observer's consciousness are made dependent on states of the system, and the observer "knows" what the value of the observable is. Somewhere along this process the collapse has occurred, otherwise the end result of the chain would be that the observer's consciousness would exhibit the same probabilistic behavior as the observable: if the observer reads one value on the instrument, it means that the wave function has collapsed somewhere between the system and the observer's consciousness. Somewhere along the chain, a continuous process of the second kind must give rise to a discontinuous process of the first kind. Von Neumann concluded that it was not important at what point along the chain the process of the first kind occurred.

The problem is that Quantum Theory does not prescribe or describe when and how this happens. The flow of time is mysteriously altered by measurements: a system evolves in a smooth and deterministic fashion until a measurement is performed, then it jumps more or less randomly into an eigenstate of the measured observable, from where it resumes its smooth evolution until the next measurement. Time seems to behave in an awkwardly capricious way.

As Bohr pointed out, a measurement also introduces irreversibility in nature: a collapse cannot be undone. Once we measure a quantity, a discontinuity is introduced at that point in time in the evolution of the wave function. If, after a while, we proceeded backwards in time, we would reach the same point from the future with a wave function which could collapse in any of the legal ways, only one of which is the one that originated the future we are coming from. It is very unlikely that we would retrace the same past.

The Collapse of the Wavefunction: The Meaning of Reality

Quantum Theory is really about waves of possibilities. A particle is described by a wave function as being in many possible places at the same time. When the particle is observed, its wave function "collapses" into a state with definite attributes, including the location it occupies, but such attributes cannot be foreseen until the collapse actually occurs. In other words, the observer can only observe a quantum system after having interfered with it.

Von Neumann highlighted an inconsistency in the standard interpretation of Quantum Theory: the objects to be observed are treated as quantum objects (or waves), while the objects that observe (the instruments) are classical objects, with a shape, a position and no wave. The range of uncertainty of a particle is measured by Max Planck's constant. Because Planck's constant is so small, big objects have a well-defined position, shape and everything else. The features of small objects such as particles are instead highly uncertain. Therefore, large objects are granted an immunity from quantum laws that is based only on their size.

Recently, Roger Penrose, inspired by work initiated by Karolyhazy in the 1960s, has invoked gravity to justify that special immunity: in the case of large objects, the space-time curvature affects the system's wave function, causing it to collapse spontaneously into one of the possibilities. Precisely, Penrose believes that different space-time curvatures cannot overlap, because each curvature implies a metric and only one metric can be the metric of the universe at a certain point at a certain time. If two systems engage in some interaction, Nature must choose which metric prevails. Therefore, he concludes, the coupling of a field with a gravitational field of some strength must cause the wave function of the system to collapse. This kind of self-collapse is called "objective" reduction to distinguish it from the traditional reduction of Quantum Theory which is caused by environmental interaction (such as a measurement). Self-collapse happens to everything, but the mass of the system determines how quickly it occurs: large bodies self-collapse very quickly, whereas elementary particles would not self-collapse for millions or even billions of years. That is why the collapse of wave functions for elementary particles in practice occurs only when caused by environmental interaction.

In practice, the collapse of the wave, which is the fundamental way in which Quantum Theory can relate to our perceptions, is still a puzzle, a mathematical accident that still has no definite explanation.

It is not clear to anybody whether this "collapse" corresponds to an actual change in the state of the particle, or whether it just represents a change in the observer's amount of knowledge or what. It is not even clear if "observation" is the only operation that can cause the collapse. And whether it has to be "human" (as in "conscious") observation: does a cat collapse the wave of a particle? Does a rock?

What attributes must an object possess to collapse a wave? Is it something that only humans have? If not, what is the smallest object that can collapse a wave? Can another particle collapse the wave of a particle? (In which case the problem wouldn't exist because each particle's wave would be collapsed by the surrounding particles).

What is the measuring apparatus in Quantum Physics? Is it the platform that supports the experiment? Is it the pushing of a button? Is it a lens in the microscope? Is it the light beam that reaches the eye of the observer? Is it the eye of the observer? Is it the visual process in the mind?

It is also a mystery how Nature knows which of the two systems is the measurement system and which one is the measured system: the one that collapses is the measured one, but the two systems are just systems, and it is not clear how Nature can discriminate the measuring one from the measured one and let only the latter collapse.

If a wave collapses (i.e., a particle assumes well-defined attributes) only when observed by a conscious being, then Quantum Theory seems to specify a privileged role for the mind: the mind enters the world through the gap in Heisenberg's uncertainty principle. Indeed, the mind "must" exist for the universe to exist, otherwise nobody would be there to observe it and therefore the world would only be possibilities that never turn into actualities. Reality is just the content of consciousness, as the American physicist Eugene Wigner pointed out. Of course, mind must therefore be an entity that lies outside the realm of Quantum Theory and of Physics in general. The mind must be something special, that does not truly belong to "this" world.

Wigner observed that Schrodinger's equation is linear, but would stop being linear if its object were the very consciousness that collapses the wave. Therefore, Schrodinger's equation would result in a non-linear algorithm that may justify the mind's privileged status.

Quantum theoretical effects could be considered negligible if they only affected particles. Unfortunately, Erwin Schrodinger, with his famous cat experiment, established that Heisenberg's uncertainty affects big objects too. Basically, Schrodinger devised a situation in which a quantum phenomenon causes the cat to die or stay alive, but since any quantum phenomenon is uncertain the cat's life is also uncertain: until we look at the cat, the cat is neither alive nor dead, but simply a wave of possibilities itself. Since no Quantum Theory scientist has volunteered to take the cat's place in Schrodinger's experiment, we don't know for sure what would happen. (One can accuse quantum theorists of being charlatans, but not of being stupid).

The Multiverse: The Quest for Certainty

The traditional (or "Copenhagen") interpretation of Quantum Mechanics seems to be trapped in its unwavering faith in uncertainty. Others have looked for ways out of uncertainty.

One possibility is to deny that the wave function collapses at all. Instead of admitting a random choice of one of many possibilities for the future, one can subscribe to all of the possibilities at the same time. In other words, the probabilistic nature of Quantum Mechanics allows the universe to unfold in an infinite number of ways.

Hugh Everett's "many-universe" interpretation of Quantum Mechanics, originally put forward in 1957, states, basically, that if something physically can happen, it does: in some universe. Physical reality consists of a collection of universes: the "multiverse". We exist in one copy for each universe and observe all possible outcomes of a situation. For a particle there is no wave of possibilities: each possibility is an actuality in one universe. (Alternatively, one can say that there is one observer for each possible outcome of a measurement).

Each measurement splits the universe in many universes (or, as Michael Lockwood puts it, each measurement splits the observer). Biographies form a branching structure, and one which depends on how often they are observed. No reduction ever occurs.

The Israeli-born British physicist David Deutsch, too, believes that there really is an infinity of parallel and interacting universes. And the British physicist Stephen Hawking is trying to write down the wave function of the universe, which would actually describe an infinite set of possible universes. Basically, he looks at the universe as if it were one big particle. Just like the wave function of a particle describes an infinite set of possible particles, the wave function of the universe describes an infinite set of possible universes.

In the multiverse, Quantum Theory is deterministic and the role of the observer is vastly reduced (we really don't need an observer anymore, since every possible outcome is realized in some universe anyway). Quantum Theory looks more like classical theory, except for the multiplication of universes.

Einselection: Darwinian Collapse

One man who has been studying the problem of how classical Physics emerges from Quantum Physics (how objects that behave deterministically emerge from particles that behave probabilistically, how coherent states of Quantum Mechanics become classical ones) is the Polish-born physicist Wojciech Zurek. In the 1990s, experiments were performed to show the progressive evolution of a system from quantum to classical behavior. The goal is to observe the progressive collapse of the wave function, the progressive disappearance of quantum weirdness, and the progressive emergence of reality from probability.

Zurek thinks that the environment destroys quantum coherence. The environment includes anything that may interact with the quantum system, from a single photon to a microscope. The environment causes "decoherence" and decoherence causes the selection (or "einselection") of which possibilities will become reality. In a sense, the classical state that results from a quantum state is the one that best "fits" the environment. Zurek has developed a technique, the "predictability sieve", that can be applied to a quantum field to study the mysterious boundary between the quantum and the classical worlds.

In America, James Anglin, a close associate of Zurek, is studying the evolution of "open quantum systems" far from equilibrium, studies that resemble Prigogine's work on open classical systems.

This line of research is, indirectly, establishing intriguing similarities between the emergence of classical systems from quantum systems and the emergence of living systems from non-living systems.

The Physics of Elementary Particles: Close Encounters with Matter

Quantum Theory redrew the picture of nature and started a race to discover the ultimate constituents of matter. This program culminated in the formulation of the theories of Quantum Electrodynamics (virtually invented by the British physicist Paul Dirac in 1928 when he published his equation for the electron in an electromagnetic field, which combined Quantum Mechanics and Special Relativity) and Quantum Chromodynamics (virtually invented by the American physicist Murray Gell-Mann in 1963 when he hypothesized the breakdown of the nucleus into quarks).

It follows from Dirac's equation that for every particle there is a corresponding anti-particle which has the same mass and opposite electric charge, and, generally speaking, behaves like the particle moving backwards in space and time.

Forces are mediated by discrete packets of energy, commonly represented as virtual particles or "quanta". The quantum of the electromagnetic field (e.g., of light) is the photon: any electromagnetic phenomenon involves the exchange of a number of photons between the particles taking part in it. Photons exchange energy in discrete units proportional to the Planck constant, a very small value, but nonetheless a discrete one.

Other forces are defined by other quanta: the weak force by the W particle, gravitation by the graviton and the nuclear force by gluons.

Particles can, first of all, be divided according to a principle first formulated (in 1925) by the Austrian physicist Wolfgang Pauli: some particles (the "fermions", named after the Italian physicist Enrico Fermi) never occupy the same state at the same time, whereas other particles (the "bosons", named after the Indian physicist Satyendra Bose) do. It turns out (not too surprisingly) that fermions make up the matter of the universe, while bosons are the virtual particles that glue the fermions together. Bosons therefore represent the forces that act on fermions. They are the quanta of interaction. An interaction is always implemented via the exchange of bosons between fermions.

Three forces that act on elementary particles have been identified: the electromagnetic, the "weak" and the "strong" forces.

The electron is one of six leptons (the others being the muon, the tauon and their three neutrinos). The neutron and the proton (the particles that make up the nuclei of atoms) are not elementary: they are made of quarks, of which there are 18 varieties.

The electromagnetic force between leptons is generated by the virtual exchange of massless particles called "photons". The "strong" force between quarks is generated by the virtual exchange of "gluons". Quarks come in six "flavors" and three "colors". Gluons are sensitive to color, not to flavor. The strong force between protons and neutrons is a direct consequence of the color force.

Leptons do not have color, but have flavor (for example, the electron and its neutrino have different flavors). The "weak" force is actually the flavor force between leptons. W+ and W- are the quanta of this flavor force.

Unification: In Search of Symmetry

Since the electric charge also varies with flavor, it can be considered a flavor force as well. Along these lines, in 1967 Steven Weinberg and Abdus Salam unified the weak and the electromagnetic forces into one flavor force (incidentally founding "Quantum Flavor Dynamics", the analogue of "Quantum Chromodynamics"), and discovered a third flavor force, mediated by the Z quanta. The unified flavor force therefore admits four quanta: the photon, the W- boson, the W+ boson and the Z boson. These quanta behave like the duals of gluons: they are sensitive to flavor, not to color. All quanta are described by the so-called "Yang-Mills field", which is a generalization of the Maxwell field (Maxwell's theory becomes a particular case of Quantum Flavor Dynamics: "Quantum Electrodynamics").

Therefore, the world is made of six leptons, six quarks, four bosons for leptons and eight gluons for quarks.

Alternatively, leptons and quarks can also be combined in three families of fermions: one comprising the electron, its neutrino and two flavors of quarks ("up" and "down"); one comprising the muon, its neutrino and two flavors of quarks ("strange" and "charmed"); and one comprising the tauon, its neutrino and two flavors of quarks ("bottom" and "top"). Plus the three corresponding families of anti-particles. Eight particles per family (each flavor of quark counts as three particles). The grand total is 48 fermions. The bosons are twelve: eight gluons, the photon and the three bosons for the weak interaction. Sixty particles overall.
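The tally itself is simple arithmetic:

```python
families = 3
leptons_per_family = 2           # e.g. the electron and its neutrino
quark_flavors_per_family = 2     # e.g. "up" and "down"
colors = 3

fermions_per_family = leptons_per_family + quark_flavors_per_family * colors   # 8
fermions = families * fermions_per_family * 2   # doubled for anti-particles -> 48

bosons = 8 + 1 + 3               # gluons + photon + weak bosons -> 12
print(fermions, bosons, fermions + bosons)      # 48 12 60
```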

The profusion of particles is simply comic. Quantum Mechanics has always led to this consequence: in order to explain matter, a multitude of hitherto unknown entities is first postulated and then "observed" (actually, verified to be consistent with the theory). More and more entities are necessary to explain all the phenomena that occur in the laboratory. When the theory becomes a self-parody, a new scheme is proposed whereby those entities can be decomposed into smaller units. So physicists are already, silently, seeking evidence that leptons and quarks are not really elementary, but made of a smaller number of particles. It is easy to predict that they will eventually break the quark and the electron, and start all over again.

Several other characteristics look bizarre. For example, the three families of fermions are very similar: what need did Nature have to create three almost identical families of particles?
The spins of these particles are totally arbitrary. Fermions have spin 1/2 and bosons have integral spin. Why?
The whole set of equations for these particles has 19 arbitrary constants. Why?
Gluons are fundamentally different from photons: photons are intermediaries of the electromagnetic force but do not themselves carry an electric charge, whereas gluons are intermediaries of the color force that do themselves carry a color (and therefore interact among themselves). Why?
Also, because color comes in three varieties, there are many gluons, while there is only one photon. As a result, the color force behaves in a fundamentally different way from the electromagnetic force. In particular, it does not weaken with distance. That is what confines quarks inside protons and neutrons. Why?
Also, the symmetry of the electroweak force (whereby the photon and the bosons get transformed among themselves) is not exact as it is in the case of Relativity (where time and space coordinates transform among themselves): the photon is massless, whereas the weak bosons have masses. Only at extremely high temperatures is the symmetry exact. At lower temperatures a spontaneous breakdown of symmetry occurs.

This seems to be a general caprice of nature. At different temperatures symmetry breaks down: ferromagnetism, isotropic liquids, the electroweak force... A change in temperature can create new properties for matter: it creates magnetism for metals, it creates orientation for a crystal, it creates masses for bosons.

The fundamental forces exhibit striking similarities when their bosons are massless. The three families of particles, in particular, acquire identical properties. This leads scientists to believe that in a remote past the "natural" state of bosons was massless. How did they acquire the mass we observe today in our world? And why do they all have different masses? The Higgs mechanism gives fermions and bosons a mass. Naturally it requires bosons of its own, the Higgs bosons (particles of spin 0).

Each interaction exhibits a form of symmetry, but unfortunately they are all different, as exemplified by the fact that quarks cannot turn into leptons. In the case of the weak force, particles (e.g., the electron and its neutrino) can be interchanged, while leaving the overall equations unchanged, according to a transformation called SU(2), meaning that one particle can be exchanged for another one. For the strong force (i.e., the quarks) the symmetrical transformation is SU(3), meaning that three particles can be shuffled around. For the electromagnetic force, it is U(1), meaning that only the electric and magnetic components of the field can be exchanged for each other. Any attempt to find a symmetry of a higher order results in the creation of new particles. SU(5), for example, entails the existence of 24 bosons... but it does allow quarks and leptons to mutate into each other (five at a time), albeit at terribly high temperatures.

Finally, Quantum Theory does not incorporate gravity. Since gravity is an interaction (albeit only visible among large bodies), it does require its own quantum of interaction, the so-called "graviton" (a boson of spin 2). Once gravity is "quantized", one can compute the probability of a particle interacting with the gravitational field: the result is... infinite.

The difficulty of quantizing gravity is due to its self-referential (i.e., nonlinear) nature: gravity alters the geometry of space and time, and that alteration in turn affects the behavior of gravity. Recently, Abhay Ashtekar has proposed the "loop-space model", based on the 1985 theory of Amitabha Sen, that splits time and space into two distinct entities subject to quantum uncertainty (analogous to momentum and position). The solutions of Einstein's equations would then be quantum states that resemble "loops".

The truth is that Quantum Theory has reached an impasse. There seems to be no way that Relativity can be modified to fit Quantum Mechanics.

Superstring Theory: Higher Dimensions

Countless approaches have been proposed to integrate the quantum and the relativistic views of the world. Those theories are obviously very different and the excuse that they operate at different "granularity" levels of nature is not very credible. Physicists have been looking for a theory that explains both, a theory of which both would be special cases. Unfortunately, applying Quantum Theory to Relativity Theory has proved unrealistic.

A different route to merging Quantum Theory and Relativity Theory is to start with Relativity and see if Quantum Theory can be found as a special case of Einstein's equations.

In 1919 the German physicist Theodor Kaluza had tried to unify gravitation and electromagnetism by introducing a fifth dimension: by rewriting Einstein's field equations in five dimensions, Kaluza obtained a theory that contained both Einstein's General Relativity and Maxwell's theory of Electromagnetism. Kaluza thought that light's privileged status came from the fact that light is a curling of the fourth spatial dimension. The mathematician Oskar Klein explained how the fifth dimension could be curled up in a loop the size of the Planck length. The universe could have five dimensions, except that one is not infinite but closed in on itself. In the 1960s, the American physicist Bryce DeWitt and others proved that a Kaluza theory in higher dimensions is even more intriguing: when the fifth and higher dimensions are curled up, the theory yields the Yang-Mills fields required by Quantum Mechanics.

It was this approach that in 1974 led the American physicist John Schwarz to formulate Superstring Theory as a theory of all interactions. His early studies had been triggered by a formula discovered in 1968 by the Italian physicist Gabriele Veneziano and its interpretation as a vibrating string by the Japanese physicist Yoichiro Nambu. Schwarz quickly realized that both the standard model of elementary particles and General Relativity's theory of gravitation were implied by Superstring Theory.

Superstring Theory views particles as one-dimensional entities (or "strings") rather than points: tiny loops of the magnitude of the Planck length (the shortest length that Quantum Physics can deal with). In other words, particles are simply resonances (or modes of vibration) of tiny strings. Each vibrational mode has a fixed energy, which translates into a mass, a charge and so forth, and therefore the illusion of a particle. All matter consists of these tiny vibrating strings.

The behavior of our universe is largely defined by three universal constants: the speed of light, the Planck constant and the gravitational constant. The "Planck mass" is a combination of those three magic numbers and is the mass (or energy) at which the superstring effects would be visible. Unfortunately, this is much higher than the mass of any of the known particles. Such energies were available only in the early stages of the universe and for a fraction of a second. The particles that have been observed in the laboratory are only those that require small energies. A full appreciation of Superstring Theory would require enormous energies. Basically, Superstring Theory is the first scientific theory that states the practical impossibility of being verified experimentally (at least during the lifetime of its inventors).
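As a rough illustration of that combination (a sketch with approximate SI values; the numbers are not meant to be precise):

```python
import math

# Approximate values of the three universal constants mentioned above (SI units).
hbar = 1.055e-34   # reduced Planck constant, J*s
c    = 2.998e8     # speed of light, m/s
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

planck_mass = math.sqrt(hbar * c / G)    # about 2.2e-8 kg
proton_mass = 1.673e-27                  # kg, for comparison

# The Planck mass is roughly ten quintillion times the mass of a proton,
# which is why no accelerator can probe superstring effects directly.
print(planck_mass, planck_mass / proton_mass)
```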

Furthermore, the superstring equations yield many approximate solutions, each one providing a list of massless particles. This can be interpreted as allowing a number of different universes: ours is one particular solution, and that solution yields the particles we are accustomed to. Even the number of dimensions would be an effect of that particular solution.

There is, potentially, an infinite number of particles. Before the symmetry breaks down, each fermion has a boson partner, and each boson a fermion partner, of exactly the same mass. So a "photino" is postulated for the photon and a "selectron" for the electron.

Space-time must have ten dimensions. Six of them are curled up in minuscule tubes that are negligible for most purposes. Matter originated when those six dimensions of space collapsed into superstrings. Ultimately, elementary particles are compactified hyper-dimensional space.

Einstein's dream was to explain matter-energy the same way he explained gravity: as fluctuations in the geometry of space-time. The "heterotic" variation of Superstring Theory, advanced by the American physicist David Gross and others in the 1980s, does just that: particles emerge from geometry, just like gravity and the other forces of nature. The heterotic string is a closed string that vibrates (at the same time) clockwise in a ten-dimensional space and counterclockwise in a 26-dimensional space (16 dimensions of which are compactified).

Einstein's General Theory of Relativity is implied by Superstring Theory, to the point that another American physicist, Edward Witten, has written that Relativity Theory was discovered first by mere accident. Incidentally, the same Witten, in 1985, provided the most complete "string field theory".

In the meantime Superstring Theory has progressed towards a peculiar form of duality. In 1977 a Finnish and a British physicist, Claus Montonen and David Olive, proposed that there may exist a dual Physics which deals with "solitons" instead of "particles". In that Physics, magnetic monopoles are the elementary units, and particles emerge as solitons, knots in fields that cannot be smoothed out (in our conventional Physics, magnetic monopoles are solitons of particles). Each particle corresponds to a soliton, and vice versa. They proved that it would not matter which Physics one chooses to follow: all results would automatically apply to the dual one.

In particular, one could think of solitons as aggregates of quarks (as originally done in 1974 by the Dutch physicist Gerard 't Hooft). Then a theory of solitons can be built on top of a theory of quarks, or a theory of quarks can be built on top of a theory of solitons.

In 1996 the American physicist Andrew Strominger even found a connection between black holes and strings: if the original mass of the black hole was made of strings, the Hawking radiation would ultimately drain the black hole and leave a thing of zero size, i.e. a particle. Since a particle is ultimately a string, the cycle could theoretically resume: black holes decaying into strings and strings decaying into black holes.

Superstring Theory is the only scientific theory of all time that requires the universe to have a specific number of dimensions: but why ten?

Physicists like Peter Freund, a Romanian native, and Michio Kaku have observed that the laws of nature become simpler in higher dimensions. The perceptual system of humans can only grasp three dimensions, but at that level the world looks terribly complicated. The moment we move to a fourth dimension, we can unify phenomena that looked very different. As we keep moving up to higher and higher dimensions, we can unify more and more theories. This is precisely how Einstein unified Mechanics and Electromagnetism (by introducing a fourth dimension), how quantum scientists unified electromagnetism with the weak and strong nuclear forces and how particle physicists are now trying to unify these forces with gravity.

Still: why ten?

Are there more phenomena around that we still have to discover and that, once unified with the existing scientific theories, will yield even more dimensions? Are these dimensions just artifices of the Mathematics that has been employed in the calculations, or are they real dimensions that may have been accessible in older times?

And why one-dimensional strings, and not multi-dimensional objects? Paul Dirac, way back in 1962, had thought that the electron could be a bubble, which is a membrane closed in on itself.

Quantum Gravity

Penrose also agrees that the right approach to the integration of Quantum Theory and Relativity Theory is not to be concerned about the effects of the former on the latter but vice versa.

Penrose (like everyone else) is puzzled by the two different, and incompatible, quantum interpretations of the world. One is due to Schrodinger's equation, which describes how a wave function evolves in time. This interpretation is deterministic and provides a continuous history of the world. The other is due to the collapse of the wave function in the face of a measurement, which entails determining probabilities of possible outcomes from the squared moduli of amplitudes in the wave function ("state-vector reduction"). This interpretation is probabilistic and provides a discontinuous history of the world, because the system suddenly jumps into a new state. We can use Schrodinger's equation to determine what is happening at any point in time; but, the moment we try to actually measure a quantity, we must resort to state-vector reduction in order to know what has happened.
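As a minimal illustration of that second rule (a sketch with made-up amplitudes for a two-state system; the numbers are arbitrary):

```python
# Two possible outcomes, with invented complex amplitudes (chosen normalized).
amplitudes = [0.6 + 0.0j, 0.0 + 0.8j]

# Schrodinger's equation evolves the amplitudes deterministically;
# a measurement "reduces" the state, with probabilities equal to the
# squared moduli of the amplitudes.
probabilities = [abs(a) ** 2 for a in amplitudes]

print(probabilities)        # [0.36, 0.64]
print(sum(probabilities))   # ~1.0
```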

Penrose postulates that these two incompatible views must be reconciled at a higher level of abstraction by a new theory, and such a theory must be based on Relativity Theory. Such a theory, which he calls "quantum gravity", would also rid Physics of the numerous infinities that plague it today. It should also be time-asymmetrical, predicting a privileged direction in time, just like the second law of Thermodynamics does. Finally, in order to preserve free will, it would contain a non-algorithmic element, which means that the future would not be computable from the present. Penrose even believes that Quantum Gravity will explain consciousness.

The Trail of Asymmetry

Somehow asymmetry seems to play a protagonist role in the history of our universe and our life. Current cosmological models speculate that the four fundamental forces of nature arose when symmetry broke down after the very high temperatures of the early universe began to cool down. Today, we live in a universe that is the child of that momentous split. Without that "broken symmetry" there would be no electrical force and no nuclear force, and our universe would be vastly impoverished in natural phenomena.

Scientists have also speculated at length about the asymmetry between matter and antimatter: if one is the mirror image of the other and no known physical process shows a preference for either, why is it that in our universe protons and electrons (matter) overwhelmingly prevail over positrons and antiprotons (antimatter)?

Most physical laws can be reversed in time, at least on paper. But most processes will not reverse. Time presents another asymmetry, the "arrow of time", which always points in the same direction, no matter what is allowed by Mathematics. The universe, history and life all proceed forward and never backwards.

Possibly related to it is the other great asymmetry: entropy. A lump of sugar which is dissolved in a cup of coffee cannot become a lump of sugar again. Left to themselves, buildings collapse, they do not improve. Most artifacts require periodic maintenance, otherwise they would decay. Disorder is continuously accumulated. Some processes are irreversible.

It turns out that entropy is a key factor in enabling life (and, of course, in ending it). Living organisms maintain themselves far from equilibrium and entropy plays a role in it.

Moreover, in 1848 the French biologist Louis Pasteur discovered that amino acids (which make up proteins, which make up living organisms) exhibit another singular asymmetry: for every amino acid there exists in nature its mirror image, but life on Earth uses only one form of the amino acids (the left-handed ones). Pasteur's mystery is still unexplained (Pasteur thought that somehow that "was" the definition of life). Later, biologists would discover that bodies only use right-handed sugars, thereby confirming that homochirality (the property of being single-handed) is an essential property of life.

Finally, an asymmetry presents itself even in the seat of thinking itself, the human brain. The two cerebral hemispheres are rather symmetric in all species except ours. Other mammals do not show preferences for grasping food with one or the other paw. We do. Most of us are right-handed and those who are not are left-handed. Asymmetry seems to be a fundamental feature of our brain. The left hemisphere is primarily used for language, and the interplay between the two hemispheres seems to be important for consciousness.

It may turn out to be a mere coincidence, but the most conscious creatures of our planet also have the most asymmetric brains.

Was there also a unified brain at the origin of thinking, whose symmetry broke down later on in the evolutionary path?

A Fuzzy World

Modern physics relies heavily on Quantum Mechanics. Quantum Mechanics relies heavily on the theory of probabilities. At the time, probabilities just happened to fit well in the model.

Quantum Mechanics was built on probabilities because the theory of probabilities is what was available in those times. Quantum Mechanics was built that way not because Nature is that way, but because the mathematical tools available at the time were that way; just like Newton used Euclid's geometry because that is what Geometry could provide at the time.

Boltzmann's stochastic theories had shown that the behavior of gases (which are large aggregates of molecules) could be predicted by a dynamics which ignored the precise behavior of individuals, and took into account only the average behavior. In retrospect, Boltzmann's influence on Quantum Mechanics was enormous. His simplification was tempting: forget about the individual, focus on the population.

Quantum Mechanics therefore prescribed a "population" approach to Nature: take so many electrons, and some will do something and some will do something else. No prescription is possible about a single electron. Quantum phenomena specify not what a single particle does, but what a set of particles does. Out of so many particles that hit a target, a few will pass through, a few will bounce back. And this can be expressed probabilistically.

Today, alternatives to probabilities do exist. In particular, Fuzzy Logic can represent uncertainty in a more natural way (things are not black or white, but both black and white, to some extent). Fuzzy Logic is largely equivalent to Probability Theory, but it differs in that it describes single individuals, not populations.

On paper, Quantum Mechanics could thus be rewritten with Fuzzy Logic (instead of probabilities) without altering any of its conclusions. What would change is the interpretation: instead of a theory about "sets of individuals" (or populations) it would become a theory about "fuzzy individuals". In a Fuzzy Logic scenario, a specific particle hitting a potential barrier would both go through and bounce back. To some extent. It is not that out of a population some individuals do this and some individuals do that; a specific individual is both doing this and doing that. The world would still behave in a rather bizarre way, but somehow we would be able to make statements about individuals. Crucially, this approach would allow Physics to return to being a science of individual objects, not of populations of objects.
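A minimal sketch of the contrast (the fractions and membership degrees below are invented for illustration):

```python
# Probabilistic reading: out of a population of particles hitting a barrier,
# a fraction goes through and the rest bounces back.
population = 1000
p_transmit = 0.3
expected_through = population * p_transmit                 # ~300 particles out of 1000

# Fuzzy reading: ONE particle is, to some degree, both transmitted and reflected.
single_particle = {"transmitted": 0.3, "reflected": 0.7}   # membership degrees

print(expected_through)
print(single_particle["transmitted"], single_particle["reflected"])
```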

The uncertainty principle could change quite dramatically: instead of stating that we can never observe all the parameters of a particle with absolute certainty, it could state that we can observe all the parameters of a particle with absolute certainty, but that certainty would not be exact. When I say that mine is a good book, I am being very certain. I am not being exact (what does "good" mean? How good is good? Etc.).

The fact that a single particle can be in different, mutually exclusive states at the same time has broad implications on the way our mind categorizes "mutually exclusive" states; not on what Nature actually does. Nature never constrained things to be either small or big. Our mind did. Any scientific theory we develop is first and foremost a "discourse" on Nature; i.e., a representation in our mind of what Nature is and does.

Some of the limits we see in Nature (i.e., the fact that something is either big or small) are limits of our mind; and conversely some of the perfections that we see in Nature are perfections of our mind (i.e., the fact that there is a color white, or that something is cold, or that a stone is round, while in Nature no object is fully white, cold or round). Fuzzy Logic is probably a better compromise between our mind and Nature, because it allows us to express the fact that things are not just zero or one, white or black, cold or warm, round or square; they are "in between", both white and black, both cold and warm, both...

Time: When?

On closer inspection, the main subject of Relativity and Quantum theories may well be Time. Most of the bizarre implications of those theories are things that either happen "in time" or are caused by Time.

Relativity turned Time into one of several dimensions, mildly different from the others but basically very similar to the others. This clearly contrasts with our perception of Time as being utterly distinct from space. Hawking, for example, thinks that originally Time was just a fourth spatial dimension, then gradually turned into a different type of dimension and, at the Big Bang, it became Time as we know it today.

The mathematician Hermann Bondi has argued that the roles of Time are utterly different in a deterministic and in an indeterministic universe. Whereas in a deterministic universe Time is a mere coordinate, in a universe characterized by indeterminacy, such as one governed by Quantum Theory, the passage of time transforms probabilities into actualities, possibility into reality. If Time did not flow, nothing would ever be. Things would be trapped in the limbo of wave functions.

The Australian physicist Paul Davies claims exactly the opposite: Time is rather meaningless in the context of a quantum model of the universe, because a general quantum state of the universe has no well-defined time. Like Hawking, he holds that Time may not have existed before the Big Bang, and may have originated afterwards by mere accident.

Time: What?

The subject of Time has puzzled and fascinated philosophers since the dawn of consciousness. What is Time made of? What is the matter of Time? Is Time a human invention?

There is no doubt that physical Time does not reflect psychological Time. Time, as we know it, is subjective and relative. There is a feeling to the flow of time that no equation of Physics can reproduce. Somehow, the riddle of Time reminds us of the riddle of consciousness: we know what it is, we can feel it very clearly, but we cannot express it, and we don't know where it comes from.

If you think that there is absolute time, think again. Yes, all clocks display the same time. But what makes you think that what they display is Time? As an example, let's go back to the age when clocks had not been invented yet. Time was defined by the motion of the sun. People knew that a day is a day because the sun takes a day to turn around the Earth (that's what they thought). And a day was a day everywhere on the Earth, even among people who had never communicated to each other. Is that absolute Time?

What would happen if the Sun all of a sudden slowed down? People all over the planet would still think that a day is a day. Their unit of measurement would be different. They would be measuring something else, without knowing it. What would happen today if a galactic wave made all clocks slow down? We would still think that ten seconds are ten seconds. But the "new" ten seconds would not be what ten seconds used to be. So clocks do not measure Time, they just measure themselves. We take a motion that is the same all over the planet and use that to define something that we never really found in nature: Time.

At the least, we can say that measurement of Time is not innate: we need a clock to tell "how long it took".

Unfortunately, human civilization is founded on Time. Science, the Arts and technology are based on the concept of Time. What we have is two flavors of Time: psychological time, which is a concrete quantity that the brain creates and associates to each memory; and physical time, an abstract quantity that is used in scientific formulas for the purpose of describing properties of matter.

The latter was largely an invention of Isaac Newton, who built his laws of nature on the assumption of an absolute, universal, linear, continuous Time. Past is past for everybody, and future is future for everybody.

Einstein explained that somebody's past may be somebody else's present or even future, and thereby proved that time is not absolute and not universal. Any partitioning of space-time into space and time is perfectly legal. The only requirement on the time component is that events can be ordered in time. Time is pretty much reduced to a convention to order events, and one way of ordering is as good as any other way.

In the meantime, the second law of Thermodynamics had for the first time established formally the arrow of time that we are very familiar with, the flowing from past to future and not vice versa.

Time: Where?

Once the very essence of Time had been doubted, scientists began to doubt even its existence.

The British physicists Arthur Milne and Paul Dirac are two of the scientists who have wondered if the shaky character of modern Physics may be due to the fact that there are two different types of time and that we tend to confuse them. Both maintained that atomic time and astronomical time may be out of sync. In other words, the speeds of planets slowly change all the time in terms of atomic time, although they remain the same in terms of astronomical time. A day on Earth is a day regardless of the speed of the Earth, but it may be lasting less and less according to an atomic clock. In particular, the age of the universe may have been vastly exaggerated because it is measured in astronomical time and astronomical processes were greatly speeded up in the early stages of the universe.

Not to leave anything untried, the American physicist Richard Feynman even argued in favor of matter traveling backwards in time: an electron that turns into a positron (its anti-particle) is simply an electron that travels back in time. His teacher John Wheeler even argued that maybe all electrons are just one electron, bouncing back and forth in time; and so, perhaps, are all other particles. There is only one instance of each particle. That would also explain why all electrons are identical: they are all the same particle.

Time: Why?

In classical and quantum Physics, equations are invariant with respect to time inversion. Future and past are equivalent. Time is only slightly different from space. Time is therefore a mere geometrical parameter. Because of this, Physics offers a static view of the universe. The second law of Thermodynamics made official what was already obvious: that many phenomena are not reversible, that time is not merely a coordinate in space-time.

In the 1970's Prigogine showed, using Boltzmann's theorem and thermodynamic concepts, that irreversibility is the manifestation at the macroscopic level of randomness at the microscopic level.

Prigogine then attempted a microscopic formulation of the irreversibility of the laws of nature. He associates macroscopic entropy with a microscopic entropy operator. Time too becomes an operator, no longer a mere parameter. Once both time and entropy have become operators, Physics has been turned upside down: instead of having a basic theory expressed in terms of wave functions (i.e., of individual trajectories), he obtains a basic theory in terms of distribution functions (i.e., bundles of trajectories). Time depends on the distribution and therefore becomes itself a stochastic quantity, just like entropy, an average over individual times. As a consequence, just like entropy cannot be reversed, time cannot: the future cannot be predicted from the past anymore.

Traditionally, physical space is geometrical, while biological space (the space in which biological form develops) is functional (for example, physical space is invariant with respect to rotations and translations, biological space is not). Prigogine's Time aims at unifying physical and biological phenomena.

Black Holes and Wormholes: Gateways to Other Universes and Time Travel

Shortly after Einstein published his gravitational field equation, the German physicist Karl Schwarzschild found a solution that determines the gravitational field for any object, given its mass and its size. That solution goes to infinity for a specific ratio between mass and size: basically, if the object is dense enough (lots of mass in a tiny size), its gravitational attraction becomes inescapable. Nothing, not even light, can escape this object, which was therefore named a "black hole" (by John Wheeler). And everything that gets near it is doomed to fall into it, and be trapped in it forever. All information about the matter that fell into it is also lost forever: a black hole may have been generated by any initial configuration of matter, but there is no record of which one it was. Even worse, in the 1970's Stephen Hawking proved that black holes evaporate, therefore information is not only trapped inside the black hole, it truly disappears forever.
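A rough sketch of the threshold involved, the Schwarzschild radius (computed with approximate SI values, for illustration only):

```python
# Schwarzschild radius r = 2*G*M/c^2: an object of mass M squeezed within this
# radius becomes a black hole.
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c ** 2

sun_mass   = 1.989e30   # kg
earth_mass = 5.972e24   # kg

print(schwarzschild_radius(sun_mass))    # ~2950 m: the Sun squeezed into ~3 km
print(schwarzschild_radius(earth_mass))  # ~0.009 m: the Earth squeezed into ~9 mm
```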

The disappearance of matter, energy and information in a black hole has puzzled physicists since the beginning, as it obviously violates the strongest principle of conservation that our Physics is built upon. It also highlights the contradictions between Quantum Theory and Relativity Theory: the former guarantees that information is never lost, the latter predicts that it will be lost in a black hole.

Einstein himself realized that black holes implied the existence of a "bridge" between our universe and a mirror universe which is hidden inside the black hole, and in which Time runs backwards. It was the Austrian mathematician Kurt Godel, the same individual who had just single-handedly shattered the edifice of Mathematics, who, in 1949, pointed out how Einstein's equations applied to a rotating universe implied that space-time can curve to the point that a particle will return to a previous point in time; in other words, "wormholes" exist connecting two different points in time of the same universe. In the 1950s, John Wheeler speculated that two points in space can be connected through several different routes, because of the existence of spatial wormholes. Such wormholes could act like shortcuts, so that travel between the two points can occur even faster than the speed of light.

The New Zealand mathematician Roy Kerr in 1963 and the American physicist Frank Tipler in 1974 found other situations in which wormholes were admissible. In the U.S., Kip Thorne even designed a time machine capable of exploiting such time wormholes. In the U.K., Stephen Hawking came up with the idea of wormholes connecting different universes altogether. Hawking's wave function allows the existence of an infinite set of universes, some more likely than others, and wormholes the size of the Planck length connect all these parallel universes with each other.

The History of the Universe

One of the consequences of General Relativity is that it prescribes the evolution of the universe. Several different futures are possible, depending on how some parameters are chosen. These cosmological models regard the universe as one system with macroscopic quantities. Since the discovery that the universe is expanding, the most popular models have been the ones that predict expansion of space-time from an initial singularity. Since a singularity is infinitely small, any cosmological model that wants to start from the very beginning must combine Relativity and Quantum Physics.

The story usually starts with an infinitely small universe (Roger Penrose and Stephen Hawking have proved that Relativity implies this), in which quantum fluctuations of the type predicted by Heisenberg's principle are not negligible, especially when the universe was smaller than the Planck length.

The fluctuations actually "created" the universe (space, time and matter) in a "Big Bang". Time slowly turned into space-time, giving rise to spatial dimensions. Space-time started expanding, the expansion that we still observe today. In a sense, there was no beginning of the universe: the "birth" of the universe is an illusion. There is no need to create the universe, because its creation is part of the universe itself. There is no real origin. The universe is self-contained, it does not require anything external to start it.

Then the universe expanded. If the mass of the universe is big enough (and this is still being debated, but most cosmologists seem to believe so), then at some point the expansion will peak and it will reverse: the universe will contract all the way back into another singularity (the "Big Crunch"). At that point the same initial argument holds, which is likely to start another universe. For example, John Wheeler claims that the universe oscillates back and forth between a Big Bang and a Big Crunch. Each time the universe re-starts with randomly assigned values of the physical constants and laws.

Both the beginning and the end are singularities, which means that the laws of Physics break down. The new universe can have no memory of the old universe, except for a higher entropy (assuming that at least that law is conserved through all these singularities), which implies a longer cycle of expansion and contraction (according to Richard Tolman's calculations).

Some scientists believe that they can remove the singularities. In particular, Hawking has proposed a model in which Time is unbounded but finite, and therefore it is not created in the Big Bang even if the universe today has a finite age. (According to Einstein, space is also finite yet unbounded). In his model, Time emerges gradually from space and there is no first moment.

The End of Entropy

Very few people are willing to take the second law of Thermodynamics as a primitive law of the universe. Explicitly or implicitly, we don't seem happy with this law that states an inequality. Somehow it must be a side effect of some other phenomenon.

Thomas Gold (among others) believes that the second law follows the direction of the universe: entropy increases when the universe expands, it decreases when the universe contracts (or, equivalently, when Time flows backwards). The second law would simply be an effect of the expansion or contraction. In that case the universe might be cyclic.

Roger Penrose has also investigated the mystery of entropy. A gravitational effect results in two dual phenomena: a change in shape and a change in volume of space-time. Consequently, Penrose separates the curvature tensor into two components: the Ricci tensor (named after the Italian mathematician Gregorio Ricci, who founded the theory of tensors) and the Weyl tensor (named after the German mathematician Hermann Weyl, a close associate of Einstein's). The Weyl tensor measures the change in shape and, in a sense, the gravitational field, whereas the Ricci tensor measures the change in volume and, in a sense, the density of matter. The Weyl tensor measures a "tidal" effect and the Ricci tensor measures an effect of volume reduction. The Ricci tensor is zero in empty space and infinite in a singularity. The Weyl tensor is zero in the initial singularity of the Big Bang, but infinite at the final singularity of the Big Crunch. Penrose has shown that entropy follows the Weyl tensor, and the Weyl tensor may hide the puzzling origin of the second law of Thermodynamics.

The Resurrection of Information

The curvature in the proximity of a black hole becomes extreme: all objects that get too close are doomed. There is a distance from the black hole which marks the last point where an object can still escape the fall: the set of those points defines the horizon of the black hole.

In 1974 Stephen Hawking discovered that black holes may evaporate and eventually vanish. The "Hawking radiation" that remains has lost all information about the black hole. This violates the assumption of determinism in the evolution of the universe, i.e. that, if we know the present, we can always derive the past, because the present universe contains all information about how the past universe was.

Only two options have been found to allow for the conservation of information. The first one is to allow for information to travel faster than light. That would allow it to escape the black hole. But it would violate the law of causality (that nothing can travel faster than light).

The second option is that a vanishing black hole may leave behind a remnant the size of the Planck length. Andrew Strominger has argued for this option. It calls for an infinite number of new particles, as each black hole is different and would decay into a different particle. Strominger believes that such particles are extreme warps of space-time, "cornucopions", that can store huge amounts of information even if they appear very small to an outside observer and their information would not be accessible.

After all, Stephen Hawking and Jacob Bekenstein have proved that the entropy of a black hole is proportional to its surface, which means that its entropy should decrease steadily as the black hole evaporates, which means that information must somehow increase, and not disappear...

Inflation: Before Time

The question is what was there before the Big Bang created our universe.
The "inflationary" model (originally proposed by Alan Guth in 1981) requires, in short, that at some point in its evolution the universe expanded exponentially while it was in a vacuum-like state containing some homogeneous classical fields but no particles. Then, after such "inflation", the vacuum-like state decayed into particles, which reached a thermodynamic equilibrium and caused the universe to become "hot".

Guth's model is based on the existence of scalar fields. A scalar field is one caused by a quantity which is purely numerical, such as temperature or household income. Gravitational and electromagnetic fields, in contrast, also point in a specific direction, and are therefore "vector" fields. Vector fields are perceived because they exert some force on the things they touch, but scalar fields are virtually invisible. Nonetheless, scalar fields play, for example, a fundamental role in unified theories of the weak, strong and electromagnetic interactions. Like all fields, scalar fields carry energy. Guth assumed that in the early stage of the universe a scalar field provided a lot of energy to empty space. This energy produced the expansion, which for a while occurred at a constant rate, thereby causing an exponential growth. This is the "inflation" implied in Guth's model.
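A minimal sketch of the distinction (the two fields below are invented toy functions, not physical ones):

```python
# A scalar field assigns a single number to each point in space...
def temperature(x, y, z):
    return 20.0 - 0.01 * (x**2 + y**2 + z**2)

# ...whereas a vector field assigns a magnitude and a direction to each point.
def wind(x, y, z):
    return (-y, x, 0.0)   # a simple swirl around the z axis

point = (1.0, 2.0, 0.5)
print(temperature(*point))   # one number: no direction to push anything
print(wind(*point))          # three components: a direction and a strength
```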

Guth's model solves a few historical problems of cosmology: the "primordial monopole" problem (grand unified theories predict the existence of magnetic monopoles), the "flatness" problem (why the universe is so flat, i.e. why the curvature of space is so small) and the "horizon" problem (how causally disconnected regions of the universe can have started their expansion simultaneously).

Natural Selection for Universes

Refining Guth's vision, in the 1980s the Russian physicist Andrei Linde came up with a "chaotic inflationary" model. Linde realized that Guth's inflation must litter the universe with bubbles, each one expanding like an independent universe, with its own Big Bang and its own Big Crunch.

Linde's model is "chaotic" because it assumes a chaotic initial distribution of the scalar field: instead of being uniform, the original scalar field was fluctuating wildly from point to point. Inflation therefore began in different points at different times and at different rates.

Regions of the universe that are separated by a distance greater than the Hubble radius (the inverse of the Hubble constant, times the speed of light) cannot be in any relation with the rest of the universe. They expand independently. Any such region is a separate mini-universe. In any such region the scalar field can give rise to new mini-universes.

One mini-universe produces many others. It is no longer necessary to assume that there is a "first" universe.

Each mini-universe is very homogeneous, but on a much larger scale the universe is extremely inhomogeneous. It is not necessary to assume that the universe was initially homogeneous or that all its causally disconnected parts started their expansion simultaneously.

One region of the inflationary universe gives rise to a multitude of new inflationary regions. In different regions, the properties of space-time and elementary particles may be utterly different. Natural laws may be different in each mini-universe.

The evolution of the universe as a whole has no end, and may have no beginning.
The "evolution" of mini-universes resembles that of any animal species. Each mini-universe leads to a number of mini-universes that are mutated versions of it, as their scalar fields is not necessarily the same. Each mini-universe is different, and mini-universes could be classified in a strict hierarchy based on a parent-child relationship.

This mechanism sort of "reproduces" mini-universes in a fashion similar to how life reproduces itself through a selection process. The combinatorial explosion of mini-universes can be viewed as meant to create mini-universes that are ever better at "surviving".

Each mini-universe "inherits" the laws of its prant mini-universe to an extent, just like living beings inherit behavior to an extent through genetic code. A "genome" is passed from praent universe to child universe, and that "genome" contains instructions about which laws should apply. Each genome only prescribes a piece of the set of laws governing the behavior of a universe. Some are random. Some are generated by "adaptation" to the environment of many coexisting universes.

At the same time, expansion means that information is being propagated like in a neural network through the hierarchy of expanding universes.

It may also be that a universe is not born just out of a parent universe, but of many parent universes. A region of the universe expands because of the effect of many other regions. This is similar to what happens with neural networks.

With a little imagination, the view of the chaotic inflationary theory can be interpreted in this way:
The expansion of a new region may be determined by many regions, not just one.
Each region somehow inherits its laws from those regions.
The laws in a region may change all the time, especially at the beginning.
The laws determine how successful a region is in its expansion.
Different expansion regions with different laws can communicate. They are likely to compete for survival.
Adaptation takes a toll on expansion regions. Regions die. Branches of regions become extinct.
Obviously, this scenario bears strong similarities with biological scenarios.

Another theory that presupposes evolving universes is Lee Smolin's. He thinks that black holes are the birthplaces of offspring universes. The constants and laws of Physics are randomly changed in the new universes, just like the genome of offspring is randomly mutated. Black holes guarantee reproduction and heredity. Universes that do not give rise to black holes cannot reproduce. There is therefore "natural selection" among Smolin's universes.
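A toy sketch of this kind of selection (purely an analogy: the "constant", the mutation step and the fecundity function are all invented):

```python
import random

# Each toy universe is reduced to a single "constant" g; universes that would
# produce more black holes leave more (slightly mutated) offspring universes.
def fecundity(universe):
    # Invented stand-in for "number of black holes produced".
    return max(0.0, 1.0 - abs(universe["g"] - 0.5))

def offspring(parent):
    child = dict(parent)
    child["g"] += random.gauss(0, 0.05)   # random mutation of the inherited "law"
    return child

population = [{"g": random.random()} for _ in range(20)]
for generation in range(50):
    parents = sorted(population, key=fecundity, reverse=True)[:10]   # selection
    population = [offspring(random.choice(parents)) for _ in range(20)]

print(sum(u["g"] for u in population) / len(population))   # drifts toward ~0.5
```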

Brains, Lives, Universes

Let's take a closer look at Life. We have organisms. Each organism is defined by its genome. An organism's genome does not vary during its lifetime. The genome of its offspring varies. The variation is the result of random processes. Each organism interacts with the environment and may or may not survive such interactions. Indirectly, interactions with the environment determine how genomes evolve over many generations.

Then we have neural networks. The behavior of each thinking organism is controlled by a neural network. The principle of a neural network is that of interacting with the environment, propagating the information received from the environment through its neurons and thereby generating behavior. Each neuron has influence over many neurons and what determines the behavior is the connections between neurons. A neural network changes continuously during the life of an organism, especially at the very beginning.

Within neural networks a selection process also applies. Connections survive or die depending on how useful they are. Connections are stronger or weaker depending on how useful they are. Usefulness is defined by interaction with the environment.
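A minimal sketch of that selection among connections (the update rule, the threshold and the "useful" set are invented for illustration):

```python
import random

# Connections start with random strengths; the "useful" ones (those the
# environment keeps rewarding) are strengthened, the others decay and die.
connections = {f"c{i}": random.uniform(0.1, 1.0) for i in range(8)}

def update(connections, useful):
    survivors = {}
    for name, strength in connections.items():
        strength = strength * 1.1 if name in useful else strength * 0.8
        if strength > 0.05:                  # connections that become too weak die
            survivors[name] = min(strength, 1.0)
    return survivors

for step in range(30):
    useful = {"c0", "c3"}                    # pretend only these are ever useful
    connections = update(connections, useful)

print(sorted(connections))                   # only the useful connections survive
```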

Genomes and neural networks are systems that have in common the principle of propagating information about the environment within themselves through a process of a) interaction with the environment, b) feedback from the environment, c) selection.

Neural networks, genetic algorithms and chaotic inflationary universes seem to obey very similar principles. They "expand" in order to:
propagate information within the individual, so as to determine behavior;
propagate information within the population, so as to determine evolution.

The Nature of the Laws of Nature

Even with the sophistication of Relativity Theory, our universe presents us with an uncomfortable degree of arbitrariness.

What is still not clear is why laws (e.g. Einstein's field equations) and constants (e.g., the Planck distance) are the way they are. Why is the universe the way it is?

Furthermore: why do properties of matter such as electrical charge and mass exert forces on other matter? Why do things interact at all?

The cosmological models presume that the physical laws we know today were already in effect at the very beginning, i.e. were born with the universe, and actually pre-existed it. The laws of Physics are simply regularities that we observe in nature. They allow us to explain what happened, and why it happened the way it happened. They also allow us to make predictions. Science is all about predictions. If we couldn't make predictions, any study of Nature would be pretty much useless. We can build bridges and radios because we can make predictions about how things will work.

Three aspects of the fundamental laws are especially puzzling.

The first has to do with the nature of the laws of Nature. How absolute are they? Some laws can be reduced to other laws. Newton's law of gravitation is but a special case of Einstein's. It was not properly a law of Nature; it was an effect of a law of nature that Newton did not know. These days, we are witnessing a quest for a unification theory, a theory that will explain all four known forces (the weak, the strong, the electromagnetic and the gravitational) as one megaforce: if the program succeeds, we will have proved that those four forces were effects, not causes. Is the second law of Thermodynamics a law indeed, or just the effect of something else?

After all, the laws as we study them today in textbooks are the product of a historical process of scientific discovery. Had history been different (had progress followed a different route), we might have come up with a description of the universe based on different laws, which would fit equally well (individually) all the phenomena we are aware of.

The second question is why they are mathematical formulas. Mathematics is a human invention, but it is amazing how well it describes the universe. True, Mathematics is more a discovery process than an invention process. But, even so, it is a discovery of facts that occur in the realm of mathematical ideas (theorems and the like). It is amazing that facts occurring in that abstract realm reflect so well facts that occur in the physical realm.

Most Mathematics that is employed today so effectively for describing physical phenomena was worked out decades and even centuries before by mathematicians interested only in abstract mathematical problems. The rule almost never fails: sooner or later a physical phenomenon will be discovered that perfectly matches a mathematical theory. It feels like the universe is a foreign movie, subtitled in mathematical language.

Even more intriguing is the fact that the world of Mathematics is accessible by the human mind. Our bodies have privileged access to physical space, our minds have privileged access to the notes that describe it. We get both treats. The body perceives physical reality through the senses, the mind perceives mathematical reality through reasoning.

The third question is whether they are truly eternal. Were they always the same? Will they always be the same?

Naturally, if the answer is negative, then we don't know anything.

It would seem more likely that they are part of the universe and therefore came to be precisely when the universe came to be. In that case it would therefore be impossible to compute a model of how the universe was born, because we don't know which laws (if any) were in place before the universe was born!

(We don't even know for sure whether the laws of Nature are the same in the whole universe. We don't even know if they have been the same all the time or if they have been changing over time).

We don't have a science of natural laws, one which studies where laws come from. Laws are assumed to transcend the universe, to exist beside and despite the existence of a universe. But that's a rather arbitrary conclusion (or, better, premise).

The Prodigy of Stability

Chaos is a fact of life in this universe. What is surprising is that we do not live in chaos. We live in almost absolute stability. The computer I am writing on right now is made of a few billion particles of all kinds that interact according to mechanical, gravitational, electric, magnetic, weak and strong forces.

The equations to describe just a tiny portion of this computer would take up all my life. Nonetheless, every morning I know exactly how to turn on my computer and every day I know exactly how to operate on it. And the "stability" of my computer will last for a long time, until it completely breaks down. My body exhibits the same kind of stability (for a few decades, at least), so much so that friends recognize me when they see me and every year the IRS can claim my tax returns (no quantum uncertainty there).

Stability is what we are built to care for. We care very little about the inner processes that lead to the formation of a tomato plant: we care for the tomatoes. We care very little for the microscopic processes that led a face to be what it is: we care for what "it looks like". At these levels stability is enormous. Shape, size, position are stable for a number of days, weeks, months, maybe years. Variations are minimal and slow. The more we get into the detail of what we were not built to deal with, the more confused (complex and chaotic) matter looks to us, with zillions and zillions of minuscule particles in permanent motion.

Science was originally built to explain the world at the "natural" level. Somehow scientists started digging into the structure of matter and reached for lower and lower levels. The laws of Physics got more and more complicated, less and less useful for the everyman.

Even more surprising, each level of granularity (and therefore complexity) seems largely independent of the lower and higher levels. Sociology doesn't really need Anatomy and Anatomy doesn't really need Chemistry and Chemistry doesn't really need Quantum Theory.

The smaller we get, the messier the universe becomes: incomprehensible, continuously changing, very unstable. We have grown used to thinking that this is the real universe, because the ultimate reduction is the ultimate truth.

The surprising thing is that at higher levels we only see stability. How does chaos turn into stability? We witness systems that can create stability, order, symmetry out of immense chaos.

One answer is that maybe it is only a matter of perception. Our body was built to perceive things at this level, and at that level things appear to be stable just because our senses have been built to perceive them stable. If our senses weren't able to make order out of chaos, we wouldn't be able to operate in our environment.

Another answer, of course, could be that all other levels are inherently false...

A Self-organizing Universe

The main property of neural networks is feedback: they learn by doing things. Memory and learning seem to go hand in hand. Neural networks are "self-organizing" objects: response to a stimulus affects, among other things, the internal state of the object. To understand the behavior of a neural network one does not need to analyze the constituents of a neural network; one only needs to analyze the "organization" of a neural network.

Physics assumes that matter has no memory and that the laws of Nature entail no feedback. Physics assumes that all objects in the universe are passive and that a response to a stimulus does not affect the internal state of the object: objects are non-organizing, the opposite of self-organizing objects. To understand the behavior of a physical object, one needs to analyze its constituents: the object is made of molecules, which are made of atoms, which are made of leptons and quarks, which are made of...

There is no end to this type of investigation, as history has proved. The behavior of matter still eludes physicists even if they have reached a level of detail that is millions of times finer-grained than the level at which we operate. There is no end to this type of investigation, because everything has constituents: there is no such thing as a fundamental constituent. Just like there is no such thing as a fundamental instant of time or point of space. We will always be able to split things apart with more powerful equipment. The equipment itself might be what creates constituents: atoms were "seen" with equipment that was not available before atoms were conceived.

In any case it is the essence itself of a "reductionist" (constituent-oriented) science that requires scientists to keep going down in levels of detail. No single particle, no matter how small, will ever explain its own behavior. One needs to look at its constituents to understand why it behaves the way it behaves. But then it will need to do the same thing for each new constituent. And so on forever. Over the last century, Physics has gotten trapped into this endless loop.

Could matter in general be analyzed in the same way that we analyze neural networks? Could matter be explained in terms of self-organizing systems? Neural networks remember and learn. There is evidence that other objects do so too: a piece of paper, if folded many times, will "remember" that it was folded and will learn to stay folded. Could we represent a piece of paper as a self-organizing system?

Nature exhibits a "hierarchy" of sort of self-organizing systems, from the atomic level to the biological level, from the cognitive level to the astronomical level. The "output" of one self-organizing system (e.g. the genome) seems to be a new self-organizing system (e.g. the mind). Can all self-organizing systems be deduced from one such system, the "mother" of all self-organizing systems?

We are witnessing a shift in the relative dominant roles of Physics and Biology. At first, ideas from the physical sciences were applied to Biology, in order to make Biology more "scientific". This led to quantifying and formalizing biological phenomena by introducing discussions of energy, entropy and so forth. Slowly, the debate shifted toward a unification of Physics and Biology, rather than a unidirectional import of ideas from Physics: biological phenomena just don't fit in the rigid deterministic model of Physics. It became progressively clear that biological phenomena cannot be reduced to Physics as we know it. And now we are moving steadily towards the idea that Physics has to be changed to cope with biological phenomena, that it has to absorb concepts that come from Biology.

In order to accommodate biological concepts such as selection and feedback, and to encompass neural and living systems, which evolve in a Darwinian fashion and whose behavior is described by nonlinear equations, Physics will need to adopt nonlinear equations and possibly an algorithm-oriented (rather than equation-oriented) approach.

Almost all of Physics is built on the idea that the solution to a problem is the shortest proof from the known premises. The use and abuse of logic has determined a way of thinking about nature that tends to draw the simplest conclusions given what is known (and what is not known) about the situation. For example, it was "intuitive" for scientists to think that the immune system creates antibodies based on the attacking virus. This is the simplest explanation, and the one that stems from logical thinking: a virus attacks the body, a virus is killed by the body; therefore the body must be able to build a "killer" for that virus. The disciplines of life constantly remind us of a different approach to scientific explanation: instead of solving a problem through logic, as in a mathematical proof, nature always chooses to let things solve themselves. In a sense, solutions are found by natural systems not via the shortest proof but thanks to redundancy. The immune system creates all sorts of antibodies. An invading virus will be tricked into "selecting" the one that kills it. There is no processor in the immune system that can analyze the invading virus, determine its chemical structure and build a counter-virus, as a mathematician would "intuitively" guess. The immune system has no ability to "reason" about the attacking virus. It doesn't even know whether some virus is attacking or not. It simply keeps producing antibodies all the time. If a virus attacks the body, the redundancy of antibodies will take care of it.
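A toy sketch of selection-by-redundancy (antibodies and viruses reduced to random strings; the matching rule is invented for illustration):

```python
import random

# The "immune system" keeps producing random antibodies, knowing nothing
# about any particular virus.
def random_antibody(length=6):
    return "".join(random.choice("AB") for _ in range(length))

repertoire = [random_antibody() for _ in range(500)]        # redundancy

def affinity(antibody, virus):
    # Invented matching rule: count the positions where the strings agree.
    return sum(a == v for a, v in zip(antibody, virus))

virus = "ABBABA"                                             # an arbitrary invader
best = max(repertoire, key=lambda ab: affinity(ab, virus))   # the virus "selects" it

print(best, affinity(best, virus))   # the best-matching antibody gets amplified
```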

This represents a fundamental shift of paradigm in thinking about Nature. For many centuries, humans have implicitly assumed that the universe must be behaving like a machine: actions follow logically from situations, the history of the universe is but one gigantic mathematical proof. It is possible that the larger-scale laws of nature resemble very little a mathematical proof. They might have more to do with randomness than with determinism.

The distinction between instruction and selection is fundamental. Physics has evolved around the concept of instruction: mathematical laws instruct matter how to behave. Selection entails a different mindset: things happen, more or less by accident, and some are "selected" to survive. The universe as it is may be the product of such selection, not of a logical chain of instructions.

Physics is meandering after the unified theory that would explain all forces. What seems more interesting is a unification of physical and biological laws. We are now looking for the ultimate theory of nature from whose principles the behavior of all (animate and inanimate) systems can be explained. Particles, waves and forces seem less and less interesting objects to study. Physics has been built on recurring "themes": planets revolve around the sun, electrons revolve around the nucleus; masses attract each other, charged particles attract each other. Still, Physics has not explained these recurring patterns of Nature. Biology is explaining its recurring patterns of evolution.

A new scenario may be emerging, one in which the world is mostly nonlinear; and that, in turn, implies that the world self-organizes. Self-organizing systems are systems in which very complex structures emerge from very simple rules; they are, in effect, an answer to the question of where regularity comes from. And self-organizing systems cannot be explained by simply analyzing their constituents, because the organization prevails: the whole is more than its parts.
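A classic toy example of such emergence, again only a sketch using the standard "elementary cellular automaton" construction rather than anything specific to this chapter, shows how a law small enough to fit in one byte generates structure far richer than the law itself:

def step(cells, rule=110):
    # Each new cell depends only on its three-cell neighborhood; the whole
    # "law" is the 8-bit lookup table encoded in the integer `rule`.
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right
        new.append((rule >> index) & 1)
    return new

row = [0] * 60
row[30] = 1                      # a single "on" cell as the seed
for _ in range(25):
    print("".join("#" if c else "." for c in row))
    row = step(row)

Running the sketch prints successive generations; the intricate triangular pattern that unfolds is nowhere written in the eight-entry rule table.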

The Universe as the Messenger

One pervasive property of the universe and everything that exists is communication. Things communicate all the time.

The Austrian physicist and philosopher Ernst Mach, held in high regard by Einstein, had a vision of the universe that proved influential on all of Twentieth-century Physics. Newton defined inertial systems as systems that are not subject to any force: they move at constant (or zero) speed. Systems that are accelerated are not inertial and, as if by magic, strange forces ("inertial forces") appear in them. Mach realized that all systems are subject to interactions with the rest of the universe, and redefined inertial systems as systems that are not accelerated relative to the fixed stars (basically, relative to the rest of the universe). The inertia of a body is due to its interaction with the rest of the matter in the universe.

Mach's principle implies that all things communicate with all other things all the time. This universe appears to be built on messages.

The dynamics of the universe is determined to a large extent by the messages that are exchanged between its parts (whether you look at the level of RNA, synapses or gravitation).

Things communicate; it is simply their nature to communicate. More: their interactions determine what happens next. Things communicate in order to happen. Life happens because of communication. We think because of communication.

If all is due to messages, a theory of the universe should decouple the message from the messenger.

Messages can be studied by defining their "languages". Maybe, just maybe, instead of sciences like Physics and Biology we should focus on the "language" of the universe.

The Science of Impossibility: The End of Utopia

It is intriguing that the three great scientific revolutions beyond classical Physics all involved introducing limits. Newton thought that signals could travel at infinite velocities, that position and momentum could be measured simultaneously with arbitrary precision, and that energy could be manipulated at will. Relativity told us that nothing can travel faster than the speed of light. Quantum Mechanics told us that position and momentum cannot both be measured with arbitrary precision at the same time. Thermodynamics told us that every manipulation of energy implies a loss of order. There are limits in our universe that did not exist in Newton's ideal universe.
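Stated in their standard textbook forms (as a reminder, not a derivation), the three limits read:

\[
v \le c, \qquad
\Delta x \, \Delta p \ \ge\ \frac{\hbar}{2}, \qquad
\Delta S_{\text{isolated}} \ \ge\ 0,
\]

that is: no signal outruns light, position and momentum cannot both be pinned down with arbitrary precision, and the entropy of an isolated system never decreases.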

These limits are as arbitrary as laws and constants. Why these and not others? Could they be just clues to a more general limit that constrains our universe? Could they simply be illusions, due to the way our universe is evolving?

Newton's world has also been shaken to its foundations by Darwin's revolution. Natural systems look different now: not monolithic artifacts of logic, but flexible and pragmatic side effects of randomness. Curiously, while Physics kept introducing limits, Biology has been telling us the opposite: biological systems can do pretty much anything, at random, and then the environment makes the selection. We have been evangelized to believe that nothing is forbidden in Nature, although a lot will be suppressed.

Once all these views are reconciled, Newton's Utopia may be replaced by a new Utopia, with simple laws and no constraints. But it's likely to look quite different from Newton's.

Where to, Albert?

Further Reading

Ashtekar Abhay: CONCEPTUAL PROBLEMS OF QUANTUM GRAVITY (Birkhauser, 1991)

Davies Paul: ABOUT TIME (Touchstone, 1995)

Deutsch David: THE FABRIC OF REALITY (Penguin, 1997)

Ferris Timothy: THE WHOLE SHEBANG (Simon And Schuster, 1997)

Flood Raymond & Lockwood Michael: THE NATURE OF TIME (Basil Blackwell, 1986)

Gell-Mann Murray: THE QUARK AND THE JAGUAR (W.H.Freeman, 1994)

Guth Alan: THE INFLATIONARY UNIVERSE (Helix, 1997)

Hawking Stephen: A BRIEF HISTORY OF TIME (Bantam, 1988)

Kaku Michio: HYPERSPACE (Oxford University Press, 1994)

Linde Andrei: PARTICLE PHYSICS AND INFLATIONARY COSMOLOGY (Harwood, 1990)

Linde Andrei: INFLATION AND QUANTUM COSMOLOGY (Academic Press, 1990)

Penrose Roger: THE EMPEROR'S NEW MIND (Oxford University Press, 1989)

Price Huw: TIME'S ARROW AND ARCHIMEDES' POINT (Oxford University Press, 1996)

Prigogine Ilya: FROM BEING TO BECOMING (W. H. Freeman, 1980)

Rees Martin: BEFORE THE BEGINNING (Simon And Schuster, 1996)

Scott Alwyn: STAIRWAY TO THE MIND (Copernicus, 1995)

Weinberg Steven: DREAMS OF A FINAL THEORY (Pantheon, 1993)


Permission is granted to download/print out/redistribute this file provided it is unaltered, including credits.


