New theories of the origin of the universe

The universe, according to theoretical physicists, did not originate in a Big Bang at all, but from the collapse of a four-dimensional star into a black hole, which ejected "debris". It was this debris that became the basis of our universe.

A team of physicists - Razieh Pourhasan, Niayesh Afshordi and Robert B. Mann - has come up with a completely new theory of the birth of our universe. For all its strangeness, this theory resolves many problem areas in the modern view of the universe.

The generally accepted account of the appearance of the Universe assigns the key role in this process to the Big Bang. This theory is consistent with the observed expansion of the Universe. However, it has some problem areas. For example, it is not entirely clear how the singularity produced a Universe with practically the same temperature in every region. Given the age of our Universe - about 13.8 billion years - there has not been enough time to reach the observed temperature equilibrium.

Many cosmologists argue that the expansion of the Universe must have occurred faster than the speed of light, but Afshordi points to the chaotic nature of the Big Bang, which makes it unclear how a region of any appreciable size could have ended up uniform in temperature.

The new model of the origin of the universe explains this mystery. In the new model, the three-dimensional universe floats like a membrane in a four-dimensional universe. In effect, our Universe is a brane: a physical object whose dimension is one less than the dimension of the space containing it.

In a four-dimensional universe there would, of course, be four-dimensional stars, capable of living through the same life cycle as three-dimensional stars in our universe. The most massive four-dimensional stars would explode as supernovae at the end of their lives and collapse into black holes.

A four-dimensional black hole would, like a three-dimensional one, have an event horizon - the boundary between the inside of the black hole and the outside. In a three-dimensional universe, this event horizon is a two-dimensional surface, while in a four-dimensional universe it is a three-dimensional hypersphere.

Thus, when a four-dimensional star explodes, the material left behind on the event horizon forms a three-dimensional brane - a universe similar to ours. However unusual this model is for the human imagination, it can answer the question of why the Universe has almost the same temperature everywhere: the four-dimensional universe that gave birth to our three-dimensional one existed far longer than 13.8 billion years.

For a person accustomed to picturing the Universe as a vast and infinite space, the new theory is not easy to accept. It is difficult to realize that our universe may be only a local disturbance - a "leaf on a pond" - of an immensely large, ancient four-dimensional black hole.

Looking at a work of art, a beautiful landscape or a child, a person always feels the harmony of being.

In scientific terms, this feeling, which tells us that everything in the universe is harmonious and interconnected, is called nonlocal coherence. According to Erwin Laszlo, in order to explain the presence of a significant number of particles in the Universe and the continuous, but by no means uniform and linear, evolution of everything that exists, we must recognize the presence of a factor that is neither matter nor energy.

The importance of this factor is now recognized not only in the social sciences and humanities, but also in physics and natural science. This factor is information - information as a real and effective agent that sets the parameters of the Universe at its birth and subsequently governs the evolution of its basic elements as they turn into complex systems.

And now, relying on the data of the new cosmology, we have finally come close to realizing the dream of every scientist - the creation of a holistic theory of everything.

Creating a holistic theory of everything

In the first chapter we will discuss the task of creating a theory of everything. A theory that deserves such a name must truly be a theory of everything - a holistic theory of everything that we observe, experience and encounter, be it physical objects, living things, social and environmental phenomena, or products of reason and consciousness. It is possible to create such a holistic theory of everything - and this will be shown in this and subsequent chapters.

There are many ways to comprehend the world: through our own ideas, mystical intuition, art and poetry, as well as through the belief systems of the world religions. Of the many methods available to us, one deserves special attention, since it is based on reproducible experience, strictly adheres to a methodology, and is open to criticism and reassessment. This is the path of science.

Science matters. It matters not only because it is a source of new technologies that change our lives and the world around us, but also because it gives us a reliable view of the world and us in this world.

But the view of the world through the prism of modern science is ambiguous. Until recently, science drew a fragmented picture of the world, divided among seemingly independent disciplines. It is hard for scientists to say what connects the physical Universe and the living world, the living world and the world of society, or the world of society with the spheres of mind and consciousness. Now the situation is changing; at the forefront of science, more and more researchers are striving for a more holistic, unified picture of the world. This applies above all to physicists, who are working on the creation of unified theories and grand unified theories. These theories link the fundamental fields and forces of nature into a single logical theoretical framework, suggesting that they have a common origin.

A particularly promising trend has emerged in quantum physics in recent years: the attempt to create a theory of everything. This project relies on string and superstring theories (so called because they treat elementary particles as vibrating filaments or strings). Developing theories of everything use sophisticated mathematics and multidimensional spaces in pursuit of one master equation that could explain all the laws of the universe.

Physical theories of everything

The theories of everything currently being developed by theoretical physicists are aimed at achieving what Einstein once called "reading the mind of God." He said that if we could combine all the laws of physical nature and create a coherent system of equations, we would be able to explain all the characteristics of the universe on the basis of these equations, which would be tantamount to reading the mind of God.

Einstein made his own attempt of this kind in the form of a unified field theory. Although he did not stop his efforts until his death in 1955, he did not find a simple and effective equation that could explain all physical phenomena in a logical and coherent way.

Einstein pursued his goal by treating all physical phenomena as the result of the interaction of fields. We now know that he failed because he did not take into account the fields and forces that operate at the microphysical level of reality. These fields (the weak and strong nuclear forces) are central to quantum mechanics, but not to the theory of relativity.

Today, most theoretical physicists take a different approach: they treat the quantum as the elementary unit - a discrete aspect of physical reality. But the physical nature of quanta has been revised: they are considered not separate matter-energy particles but vibrating one-dimensional threads - strings and superstrings. Physicists try to represent all the laws of physics as the vibration of superstrings in multidimensional space. They view each particle as a string that creates its own "music" along with all the other particles. On a cosmic level, entire stars and galaxies vibrate together, as do entire universes. The challenge for physicists is to create an equation that shows how one vibration relates to another, so that they can all be expressed in one super-equation. This equation would decipher the music that embodies the deepest and most fundamental harmony of the cosmos.

At the time of this writing, string theories of everything remain an ambition: no one has yet produced a super-equation that expresses the harmony of the physical universe in a formula as simple as Einstein's E = mc². In fact, there are so many problems in this area that more and more physicists suggest a new concept will be needed for progress. The equations of string theory require multiple dimensions: four-dimensional spacetime is not enough.

Initially, the theory required 12 dimensions to link all the vibrations into a single theory, but it is now believed that "only" 10 or 11 dimensions are enough, provided that the vibrations occur in a higher-dimensional "hyperspace". Moreover, string theory requires space and time for its strings, but cannot show how time and space themselves could have come about. And, finally, it is embarrassing that the theory has so many possible solutions - about 10^500 - that it becomes entirely unclear why our universe is the way it is (even though each solution leads to a different universe).

Physicists seeking to save string theory have put forward various hypotheses. For example, all possible universes coexist, although we live in only one of them. Or perhaps our Universe has many facets, but we perceive only the one familiar to us. These are some of the hypotheses put forward by theoretical physicists seeking to show that string theories have some degree of realism. But none of them is satisfactory, and some critics, including Peter Woit and Lee Smolin, are ready to bury string theory.

Smolin is one of the founders of the theory of loop quantum gravity, according to which space is a network of cells that connects all points. The theory explains how space and time came to be, and also explains "action at a distance," that is, the strange "relationship" that underlies a phenomenon known as nonlocality. We'll explore this phenomenon in detail in Chapter 3.

It is unknown whether physicists can create a working theory of everything. It is clear, however, that even if these efforts succeed, the result will not in itself be a true theory of everything. At best, physicists will create a physical theory of everything - not a theory of everything, but only a theory of all physical objects. A true theory of everything must include more than the mathematical formulas that express the phenomena studied in quantum physics. There is more in the universe than vibrating strings and the quantum events associated with them. Life, mind, culture and consciousness are part of the reality of the world, and a true theory of everything will take them into account as well.

Ken Wilber, who wrote A Theory of Everything, agrees. He speaks of a "holistic vision" embodied in a true theory of everything. However, he does not offer such a theory; he mainly discusses what it could be and describes it from the standpoint of the evolution of culture and consciousness in relation to his own theories. A scientifically grounded, holistic theory of everything has yet to be created.

Approaches to a true theory of everything

A true theory of everything can be created. While it goes beyond the string and superstring theories within which physicists try to develop their super-theory, it fits well within the framework of science itself. Indeed, the task of creating a true, integral theory of everything is simpler than the task of creating a physical theory of everything. As we have seen, physical theories of everything strive to reduce to a single formula all the laws that govern the interaction of particles and atoms, stars and galaxies - many complex entities with complex interactions. It is easier and wiser to look for the basic laws and processes that generate these entities and their interactions.

Computer modeling of complex structures shows that complexity is created, and can be explained, by basic and relatively simple initial conditions. As John von Neumann's theory of cellular automata has shown, it is enough to define the main components of the system and set the rules - algorithms - that govern their behavior (this is the basis of all computer models: developers tell the computer what to do at each stage of the modeling process, and the computer does the rest). A limited and surprisingly simple set of basic elements, driven by a small number of algorithms, can create seemingly incomprehensible complexity if the process is allowed to unfold over time. The set of rules that conveys information to the elements starts a process that orders and organizes the elements, which thus gain the ability to create increasingly complex structures and relationships.
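As a minimal sketch of this idea (Python; the rule number, grid size and number of steps are illustrative choices, not taken from the text), a one-dimensional cellular automaton needs only an eight-entry rule table, yet produces an intricate pattern from a single "on" cell:

    # Elementary cellular automaton: each cell looks at itself and its two
    # neighbours, and the rule number encodes the output for all 8 neighbourhoods.
    def step(cells, rule=110):
        n = len(cells)
        new = []
        for i in range(n):
            left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
            index = (left << 2) | (center << 1) | right   # neighbourhood as a 3-bit number
            new.append((rule >> index) & 1)               # look up the rule's output bit
        return new

    row = [0] * 40
    row[20] = 1                      # the simplest initial condition: one "on" cell
    for _ in range(20):
        print("".join("#" if c else "." for c in row))
        row = step(row)

Changing the rule number changes the emergent pattern, which illustrates the point of the paragraph above: the structure comes from the simple rules iterated over time, not from complexity built into the starting state.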

In trying to create a true, holistic theory of everything, we can follow a similar path. We can start with basic things - things that give rise to other things without being generated by them. Then we have to define the simplest set of rules according to which something more complex will be created. Basically, then we should be able to explain how each “thing” in the world came into being.

In addition to string and superstring theories, there are theories and concepts in the new physics that make this grand vision possible. Using discoveries at the cutting edge of particle and field theories, we can identify the foundation that generates everything without itself being generated. This foundation, as we will see, is a sea of virtual energy known as the quantum vacuum. We can also refer to the many rules (laws of nature) that tell us how the basic elements of reality - particles known as quanta - turn into complex things when interacting with their cosmic basis.

However, we have to add a new element to get a true holistic theory of everything. The currently known laws by which the existing objects of the world arise from the quantum vacuum are the laws of interaction based on the transfer and transformation of energy. These laws turned out to be enough to explain how real objects - in the form of particle-antiparticle pairs - are created in and out of a quantum vacuum. But they do not provide an explanation for why more particles than antiparticles were created in the Big Bang; and how, over billions of years, the surviving particles have combined into more and more complex structures: into galaxies and stars, atoms and molecules, and (on suitable planets) into macromolecules, cells, organisms, societies, ecological niches and whole biospheres.

To explain the presence of a significant number of particles in the Universe ("matter" as opposed to "antimatter") and the continuous, but by no means uniform and linear, evolution of everything that exists, we must recognize the presence of a factor that is neither matter nor energy. The importance of this factor is now recognized not only in the social and human sciences, but also in physics and the natural sciences. This factor is information - information as a real and effective agent that sets the parameters of the Universe at its birth and subsequently governs the evolution of its basic elements as they turn into complex systems.

Most of us understand information as data or what is known to a person. The physical and natural sciences are discovering that information goes far beyond the boundaries of the consciousness of an individual person and even all people combined.

Information is an inherent aspect of both physical and biological nature. The great physicist David Bohm described information as a process that acts on the recipient and "shapes" it. We will accept this concept.

Information is not a human product, not something we create when we write, count, speak and send messages. The sages of antiquity knew, and modern scientists are rediscovering, that information is present in the world independently of human will and actions and is a determining factor in the evolution of everything that fills the real world. The basis for creating a true theory of everything is the recognition that information is a fundamental factor in nature.

On mysteries and myths

Driving forces of the coming paradigm shift in science

We will begin our quest for a true, holistic theory of everything by looking at the factors that bring science closer to a paradigm shift. Key factors are the mysteries that emerge and accumulate in the course of scientific research: anomalies that the existing paradigm cannot explain. This pushes the scientific community to search for new approaches to anomalous phenomena. Such research efforts (we will call them "scientific myths") contain many ideas. Some of these ideas may contain key concepts that will lead scientists to a new paradigm - a paradigm that can clarify mysteries and anomalies and serve as the basis for a true holistic theory of everything.

Leading scientists strive to expand and deepen their understanding of the segment of reality under study. They understand more and more about the corresponding part or aspect of reality, but they cannot study it directly - they can grasp it only through concepts turned into hypotheses and theories. Concepts, hypotheses and theories are not infallible; they can be wrong. In fact, the hallmark of a truly scientific theory (according to the philosopher of science Sir Karl Popper) is falsifiability. Theories are refuted when the predictions made on their basis are not supported by observation. In that case the observations are anomalous, and the theory in question is either considered erroneous and rejected, or is in need of revision.

The refutation of theories is the engine of real scientific progress. When everything works, there can still be progress, but it is partial - a refinement of an existing theory to fit new observations. Real progress occurs when this is no longer possible. Sooner or later there comes a point when, instead of trying to revise existing theories, scientists prefer to look for a simpler theory with greater explanatory power. The path then opens for a fundamental renewal of theory: a paradigm shift.

A paradigm shift is triggered by the accumulation of observations that do not fit into accepted theories and cannot be made to fit by simple refinement of those theories. A new and more adequate scientific paradigm then has to emerge. The challenge is to find the fundamentally new concepts that will form its basis.

There are strict requirements for a scientific paradigm. The theory based on it should allow scientists to explain everything the previous theory could explain, as well as the anomalous observations. It should combine all the relevant facts into a simpler and at the same time more comprehensive concept. This is exactly what Einstein did at the turn of the 20th century, when he stopped looking for the reasons for the strange behavior of light within Newtonian physics and instead created a new concept of physical reality - the theory of relativity. As he himself said, a problem cannot be solved at the same level at which it arose. In an unexpectedly short time, the physics community abandoned the classical physics founded by Newton in favor of Einstein's revolutionary concept.

In the first decade of the 20th century, science experienced a paradigm shift. Now, in the first decade of the 21st century, mysteries and anomalies are piling up again, and the scientific community is facing the next paradigm shift - one as fundamental and revolutionary as the transition from Newton's mechanistic world to Einstein's relativistic universe.

The modern paradigm shift has been brewing for some time at the forefront of academia. Scientific revolutions are not instantaneous processes in which a new theory immediately takes its place. They can be rapid, as in the case of Einstein's theory, or more extended in time, such as the transition from classical Darwinian theory to the broader biological concepts of post-Darwinism.

Before a nascent revolution reaches its final result, the sciences in which anomalies exist go through a period of instability. Mainstream scholars defend existing theories, while freethinkers in advanced fields explore alternatives. The latter put forward new ideas that offer a different view of phenomena familiar to traditional scientists. For a while, the alternative concepts, existing at first as working hypotheses, seem if not fantastic then strange.

They sometimes resemble myths invented by imaginative researchers. However, they are not. The "myths" of serious scholars are based on careful logic; they combine what is already known about the segment of the world that a particular discipline explores with what is so far baffling. These are not ordinary myths, they are "scientific myths" - thoughtful hypotheses that are open to verification and, therefore, can be confirmed or disproved through observation and experiment.

Studying the anomalies that are found in observation and experiment, and inventing testable myths that can explain them, are central components of basic scientific research. If anomalies continue to exist despite the best efforts of scientists adhering to the old paradigm, and if one or another scientific myth put forward by free-thinking scientists offers a simpler and more logical explanation, a critical mass of scientists (mainly young ones) ceases to adhere to the old paradigm. This is how the paradigm shift begins. The concept, which until now has been a myth, is beginning to be regarded as a reliable scientific theory.

There are countless examples of both successful and failed myths in the history of science. Confirmed myths - now regarded as reliable, if not definitively true, scientific theories - include Charles Darwin's hypothesis that all living species descended from common ancestors, and Alan Guth and Andrei Linde's hypothesis that the universe underwent a super-rapid "inflation" immediately after its birth in the Big Bang. Failed myths (those that offered an inaccurate or inferior explanation of the relevant phenomena) include Hans Driesch's idea that the evolution of life follows a predetermined plan in a goal-directed process called entelechy, and Einstein's hypothesis that an additional physical factor, the cosmological constant, prevents the universe from collapsing under the force of gravity. (Interestingly, as we shall learn, some of these verdicts are now being questioned: it is possible that Guth and Linde's inflation theory will be replaced by the broader concept of a cyclic universe, and Einstein's cosmological constant may not have been wrong after all...)

Examples of modern scientific myths

Here are three working hypotheses - "scientific myths" - put forward by highly respected scientists. All three, while seemingly improbable, have attracted serious attention from the scientific community.

10^100 universes

In 1955, physicist Hugh Everett offered a startling explanation of the quantum world (which later became the basis for one of Michael Crichton's most popular novels, Timeline). Everett's parallel-universes hypothesis concerns a mysterious discovery in quantum physics: as long as a particle is not observed, measured, or influenced in any way, it is in a curious state that is a superposition of all possible states. However, as soon as a particle is observed, measured or acted upon, this superposition disappears: the particle is in a single state, like any "ordinary" object. Since the superposition is described by a complex wave function associated with the name of Erwin Schrödinger, the disappearance of the superposition is referred to as the collapse of the Schrödinger wave function.

The problem is that it is impossible to tell which of the many possible virtual states a particle will assume. The particle's choice seems to be indeterminate - completely independent of the conditions that trigger the collapse of the wave function. According to Everett's hypothesis, however, the indeterminacy of the collapse does not reflect the actual state of the world. There is no uncertainty here: every virtual state available to the particle is definite - each one is simply realized in a world of its own!

This is how collapse looks: when a quantum is measured, there are a number of possibilities, each of which is associated with an observer or measuring device. We perceive only one of the possibilities, in what appears to be a random selection. But, according to Everett, the choice is not random, because no choice occurs at all: all possible states of a quantum are realized every time it is measured or observed; they simply are not realized in the same world. The many possible quantum states are realized in just as many universes.

Suppose that when a quantum such as an electron is measured, there is a fifty percent probability that it will go up and the same chance that it will go down. Then we have not one Universe in which the quantum can go up or down with 50-50 probability, but two parallel ones. In one of the universes the electron actually moves up, and in the other it goes down. In each of these universes there is also an observer or measuring instrument. The two outcomes exist simultaneously in the two universes, as do the observers or measuring instruments.
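In the standard notation of quantum mechanics (a minimal worked example added here for illustration; the spin-up/spin-down labels are conventional, not from the text), the state before measurement is

    |\psi\rangle = \tfrac{1}{\sqrt{2}}\,|\uparrow\rangle + \tfrac{1}{\sqrt{2}}\,|\downarrow\rangle,
    \qquad P(\uparrow) = P(\downarrow) = \left|\tfrac{1}{\sqrt{2}}\right|^2 = 0.5 .

In the conventional reading, measurement collapses |ψ⟩ onto one of the two terms at random; in Everett's reading, both terms persist, each in its own branch of the universe, together with its own copy of the observer.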

Of course, when the numerous superposed states of a particle converge into one, there are usually not just two but a much larger number of possible virtual states that the particle can assume. Thus there must be a great many universes - perhaps about 10^100 - in each of which there are observers and measuring instruments.

Observer-Created Universe

If there are 10^100 or even 10^500 universes (and in most of them life could never have arisen), how is it that we live in a universe where complex forms of life exist? Could this be a simple coincidence? Many scientific myths address this question, including the anthropic cosmological principle, which holds that our observation of this universe is bound up with just such a happy coincidence. Recently Stephen Hawking of Cambridge and Thomas Hertog of CERN (the European Organization for Nuclear Research) offered a mathematically formulated answer. According to their theory of the observer-created universe, it is not that separate universes branch out in time and exist on their own (as string theory suggests); rather, all possible universes exist simultaneously in a state of superposition. Our existence in this universe selects, from all the paths leading to all other universes, the one path that leads precisely to such a universe as this; all other paths are excluded. Thus, in this theory, the causal chain of events is reversed: the present determines the past. This would be impossible if the universe had a definite initial state, since a definite history would follow from that state. But, Hawking and Hertog argue, the universe has no definite initial state, no starting point - such a boundary simply does not exist.

Holographic Universe

This scientific myth claims that the universe is a hologram (or at least can be treated as one). (In a hologram, which we will discuss in more detail a little later, a two-dimensional pattern creates a picture in three dimensions.) The claim is that all the information that makes up the universe resides on its periphery, a two-dimensional surface, and that this two-dimensional information appears within the universe in three dimensions. We see the Universe in three dimensions, even though what makes it what it is is a two-dimensional field of information. Why has this seemingly absurd idea become a topic of controversy and research?

The problem that the theory of the holographic universe eliminates belongs to thermodynamics. According to its firmly established second law, the level of chaos (entropy) can never decrease in a closed system. This means that chaos can never decrease in the Universe as a whole, because, considered in its entirety, the cosmos is a closed system (it has no outside and therefore nothing it could be open to). The fact that chaos cannot decrease means that order - which can be represented as information - cannot increase. According to quantum theory, the information that creates or maintains order must be conserved; it cannot become more or less.

But what happens to information when matter disappears into black holes? It might seem that black holes destroy the information contained in matter. This, however, flies in the face of quantum theory. To address this mystery, Stephen Hawking and Jacob Bekenstein, then at Princeton University, deduced that the entropy of a black hole is proportional to its surface area. There is far more room for order and information inside a black hole than on its surface. In one cubic centimeter, for example, there is room for about 10^99 Planck volumes, but only about 10^66 bits of information on the surface (a Planck volume is an almost incomprehensibly small region bounded by sides of 10^-35 meters). Leonard Susskind of Stanford and Gerard 't Hooft of Utrecht University suggested that the information inside the black hole is not lost - it is stored holographically on its surface.
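As a rough check of those figures (a back-of-envelope estimate added here, using the standard Planck length l_P ≈ 1.6×10^-35 m; it is not part of the original text):

    \frac{(10^{-2}\,\mathrm{m})^3}{l_P^{\,3}} \approx 2\times10^{98}\ \text{Planck volumes},
    \qquad
    \frac{6\,(10^{-2}\,\mathrm{m})^2}{l_P^{\,2}} \approx 2\times10^{66}\ \text{surface bits},

consistent with the ~10^99 and ~10^66 quoted above. The Bekenstein-Hawking entropy itself is usually written S = k_B A / (4 l_P^2), which is what "proportional to the surface area" means.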

Holograms found an unexpected mathematical use in 1998, when Juan Maldacena, then at Harvard University, was trying to apply string theory in the context of quantum gravity. Maldacena found that strings are easier to work with in 5D spaces than in 4D ones. (We perceive space in three dimensions: two along the surface and one vertical. A fourth dimension would lie perpendicular to these three, but it cannot be perceived. Mathematicians can add any number of dimensions, moving further and further from the perceived world.) The solution seemed obvious: suppose that the five-dimensional space inside the black hole is actually a hologram of the four-dimensional space on its surface. Then the comparatively easy calculations in five dimensions also describe the four-dimensional space.

Is this technique of reducing the number of dimensions suitable for the universe as a whole? As we have seen, string theorists are wrestling with many extra dimensions, finding that three-dimensional space is not enough for their task: linking the vibrations of the various strings in the universe into a single equation. The holographic principle could help, since the universe could then be thought of as a multidimensional hologram stored in fewer dimensions at its periphery.

The holographic principle could make string theory calculations easier, but it carries fantastic assumptions about the nature of the world. Even Gerard 't Hooft, who was one of the founders of this principle, no longer considers it indisputable. He said that in this context, holography is not a principle, but a problem. Perhaps, he suggested, quantum gravity could be deduced from a more fundamental principle that does not obey the laws of quantum mechanics.

In times of scientific revolution, when the existing paradigm is under pressure, new scientific myths are put forward, but not all of them find confirmation. Theorists became firmly convinced that, as Galileo said, "the book of nature is written in the language of mathematics," and forgot that not everything expressible in the language of mathematics exists in the book of nature. As a result, many mathematically formulated myths remain just myths. Others, however, carry the seeds of significant scientific progress. Initially, no one knows for certain which seeds will germinate and bear fruit. The field seethes in a state of creative chaos.

This is the state of affairs today in many scientific disciplines. Anomalous phenomena are multiplying in physical cosmology, quantum physics, evolutionary and quantum biology, and the new field of consciousness research. They create more and more uncertainty and force open-minded scientists to push the boundaries of accepted theories. While conservative researchers insist that only ideas published in well-known scientific journals and reproduced in textbooks can be considered scientific, cutting-edge researchers are looking for fundamentally new concepts, including those that were considered outside the scope of their disciplines just a few years ago.

More and more scientific disciplines describe the world in ever more incredible ways. Cosmology has added dark matter, dark energy and multidimensional spaces; quantum physics, particles that are instantaneously connected across space-time at deeper levels of reality; biology, living matter that displays the wholeness characteristic of quanta; and consciousness research, transpersonal connections independent of space and time. These are just a few of the ideas that are now regarded as valid scientific theories.

New elementary particles may never be detected; moreover, an alternative scenario makes it possible to solve the mass hierarchy problem. The research was published on arXiv.org; Lenta.ru describes it in more detail.

The theory is called Nnaturalness. It operates at energies on the order of the electroweak scale, after the separation of the electromagnetic and weak interactions - roughly 10^-32 to 10^-12 seconds after the Big Bang. At that time, according to the authors of the new concept, a hypothetical elementary particle - the reheaton - existed in the Universe, and its decay gave rise to the physics observed today.

As the Universe grew colder (the temperature of matter and radiation decreased) and flatter (the geometry of space approached the Euclidean), the reheaton decayed into many other particles. These formed groups of particles that barely interact with one another, almost identical in their particle content but differing in the mass of the Higgs boson, and therefore in their own masses.

The number of such groups of particles that, according to the scientists, exist in the modern Universe reaches several thousand trillion. The physics described by the Standard Model (SM), and the particles and interactions observed in experiments at the LHC, belong to one of these families. The new theory makes it possible to abandon supersymmetry, which experiments have so far failed to find, and solves the particle hierarchy problem.

In particular, if the mass of the Higgs boson formed as a result of the reheaton decay is small, then the masses of the remaining particles will be large, and vice versa. This is what solves the electroweak hierarchy problem, associated with the large gap between the experimentally observed masses of elementary particles and the energy scales of the early Universe. For example, the question of why an electron with a mass of about 0.5 megaelectronvolts is almost 200 times lighter than a muon with the same quantum numbers disappears by itself: in the Universe there are analogous sets of particles in which this difference is not so pronounced.
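For reference, the mass ratio mentioned above follows from the standard values (simple arithmetic added for illustration, not from the article):

    \frac{m_\mu}{m_e} \approx \frac{105.7\ \mathrm{MeV}}{0.511\ \mathrm{MeV}} \approx 207 .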

According to the new theory, the Higgs boson observed in experiments at the LHC is the lightest particle of this type formed by the decay of the reheaton. Heavier bosons belong to other groups of as yet undiscovered particles - analogues of the leptons (which do not participate in the strong interaction) and hadrons (which do) that have already been discovered and well studied.

The new theory does not rule out supersymmetry, but it makes it much less necessary. Supersymmetry implies (at least) doubling the number of known elementary particles through the existence of superpartners: for the photon a photino, for a quark a squark, for the Higgs a higgsino, and so on. The spin of a superpartner should differ from the spin of the original particle by one half.

Mathematically, a particle and a superparticle are combined into one system (supermultiplet); all quantum parameters and masses of particles and their partners coincide in exact supersymmetry. It is believed that supersymmetry is violated in nature, and therefore the mass of superpartners significantly exceeds the mass of their particles. To detect supersymmetric particles, powerful accelerators like the LHC were needed.
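A minimal illustration of that spin rule, using the superpartner examples listed above (standard spin assignments, added here for clarity):

    |s_{\text{particle}} - s_{\text{partner}}| = \tfrac{1}{2}:\quad
    \gamma\,(s=1) \leftrightarrow \tilde\gamma\,(s=\tfrac12),\quad
    q\,(s=\tfrac12) \leftrightarrow \tilde q\,(s=0),\quad
    h\,(s=0) \leftrightarrow \tilde h\,(s=\tfrac12).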

If supersymmetry or any new particles or interactions do exist, then, according to the authors of the new study, they can be discovered on a scale of ten teraelectronvolts. This is almost at the border of the LHC's capabilities, and if the proposed theory is correct, the discovery of new particles there is extremely unlikely.


A signal near 750 gigaelectronvolts, which could have indicated the decay of a heavy particle into two gamma photons, reported in 2015 and 2016 by scientists from the CMS (Compact Muon Solenoid) and ATLAS (A Toroidal LHC ApparatuS) collaborations working at the LHC, has been recognized as statistical noise. Since 2012, when the discovery of the Higgs boson at CERN was announced, no new fundamental particles predicted by extensions of the SM have been found.

Nima Arkani-Hamed, a Canadian and American scientist of Iranian origin, who proposed a new theory, received the Fundamental Physics Prize in 2012. The award was established in the same year by Russian businessman Yuri Milner.

The emergence of theories in which the need for supersymmetry disappears is therefore to be expected. "There are many theorists, including myself, who believe that now is a completely unique time when we are addressing important and systemic questions rather than the details of the next elementary particle," said the lead author of the new study, a physicist at Princeton University (USA).

His optimism is not shared by everyone. For example, physicist Matt Strassler of Harvard University considers the mathematical justification of the new theory contrived. Meanwhile, Paddy Fox of the Fermi National Accelerator Laboratory in Batavia (USA) believes the new theory can be tested within the next ten years: in his opinion, particles formed in the group containing a heavy Higgs boson should leave traces in the relic radiation - the cosmic microwave background predicted by the Big Bang theory.

Scientists from the University of Southampton have made significant breakthroughs in their attempts to unravel the mysteries of the structure of our universe. One of the latest achievements in theoretical physics is the holographic principle. According to it, our universe can be considered as a hologram, and we can formulate the laws of physics for such a holographic universe.

The latest work by Prof. Skenderis and Dr. Marco Caldarelli of the University of Southampton, Dr. Joan Camps of the University of Cambridge and Dr. Blaise Goutéraux of Nordita, the Nordic Institute for Theoretical Physics in Sweden, has been published in Physical Review D and is devoted to connecting negatively curved spacetime and flat spacetime. Drawing on the Gregory-Laflamme instability, the paper explains how certain types of black holes break up into smaller ones when disturbed, much as a thin stream of water breaks into droplets when you touch it with your finger. This phenomenon had previously been demonstrated in computer simulations, and the current work gives it a deeper theoretical foundation.

Spacetime is usually described as space in three dimensions with time acting as the fourth dimension, all four coming together to form a continuum - a state in which the four elements cannot be separated.

Flat spacetime and negatively curved spacetime describe a universe that is not compact, with space extending infinitely, at all times, in every direction. Gravitational forces, such as those generated by a star, are best described in flat spacetime. Negatively curved spacetime describes a universe filled with negative vacuum energy. The mathematics of holography is best understood in terms of the negatively curved spacetime model.

Professor Skenderis has developed a mathematical model that reveals striking similarities between flat spacetime and negatively curved spacetime, with the latter formulated in a negative number of dimensions, beyond our perception.

"According to holography, at a fundamental level the universe has one dimension fewer than we are used to in everyday life, and it obeys laws similar to those of electromagnetism," says Skenderis. "The idea is akin to an ordinary hologram, in which a three-dimensional image is encoded on a two-dimensional surface, as on a credit card - and now imagine the entire universe encoded in that way."
"Our research continues, and we hope to find more connections between flat spacetime, negatively curved spacetime and holography. Traditional theories of how our universe works each offer their own description of its very nature, but each of them breaks down at some point. Our ultimate goal is a new, combined understanding of the universe that works in every respect."
In October 2012, Professor Skenderis was named one of the twenty most outstanding scientists in the world. For addressing the question "Did space and time have a beginning?" he received an award of $175,000. Perhaps a holographic model of the universe will reveal what happened before the Big Bang?

For a correct understanding of the nature of our vacuum environment, of the emergence of matter from the matrix-vacuum medium, and of the nature of gravity in that medium, it is necessary to dwell in some detail on the evolution of our Universe. Part of what is described in this chapter has been published in scientific and popular journals; that material has been systematized here, and what is not yet known to science is filled in from the standpoint of this theory. Our Universe is currently in an expansion phase. This theory accepts only an expanding and contracting, i.e. non-stationary, Universe. A Universe that only ever expands, or that is stationary, is rejected here, because such a Universe excludes any development and leads to stagnation - to a single, solitary universe.

Naturally, a question may arise: why does this theory include a description of the evolution of the Einstein-Friedmann Universe? Because it describes a probable model of the particles of media of the first kind at different levels, giving a logical interpretation of the processes of their origin, their cycle of existence in space and time, and the regularities of their volumes and masses for each medium of the corresponding level. Particles of media of the first kind have variable volumes, i.e. they go through a cycle of expansion and contraction in time. But the media of the first kind themselves are eternal in time and infinite in volume; accommodating one another, they create the structure of eternally moving matter, eternal in time and infinite in volume. It therefore becomes necessary to describe the evolution of our Universe from the so-called "Big Bang" to the present time. In describing the evolution of the Universe, we will use what is currently known to science and hypothetically continue its development in space and time until it is completely compressed, i.e. until the next "Big Bang".

In this theory, it is assumed that our Universe is not the only one in nature, but is a particle of a medium of another level, i.e. of a medium of the first kind, which is likewise eternal in time and infinite in volume. According to the latest astrophysical data, our Universe has passed through about fifteen billion years of development. There are still many scientists who doubt whether the Universe is expanding; some believe that the Universe is not expanding and that there was no "Big Bang"; still others believe that the Universe neither expands nor contracts and has always been constant and unique in nature. It is therefore necessary to argue indirectly, within this theory, that the "Big Bang" in all likelihood did occur, that the Universe is currently expanding and will later contract, and that it is not the only one in nature. At present the Universe continues to expand with acceleration. After the "Big Bang", the newly formed elementary matter of the matrix-vacuum medium acquired an initial recession velocity comparable to the speed of light, namely one ninth of the speed of light, about 33,333 km/s.
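For reference, the quoted figure is simply the stated fraction of the speed of light (arithmetic added for illustration, not an endorsement of the claim):

    v = \frac{c}{9} \approx \frac{300\,000\ \mathrm{km/s}}{9} \approx 33\,333\ \mathrm{km/s}.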

Figure 9.1. The Universe in the phase of quasar formation: 1 - matrix vacuum medium; 2 - medium of elementary particles of matter; 3 - singular point; 4 - quasars; 5 - direction of scattering of matter in the Universe

At present, scientists using radio telescopes have managed to peer 15 billion light-years into the depths of the Universe. What is interesting to note is that the deeper we look into the abyss of the Universe, the higher the recession speed of the matter there. Scientists have seen objects of gigantic size whose recession speed is comparable to the speed of light. What is this phenomenon, and how should it be understood? In all likelihood, scientists were seeing the Universe's past - the days of the young Universe - and these gigantic objects, the so-called quasars, were young galaxies at the initial stage of their development (Fig. 9.1). Scientists saw the time when the substance of the matrix-vacuum medium appeared in the form of elementary particles of matter. All this suggests that the so-called "Big Bang" in all likelihood did occur.

In order to continue a hypothetical description of the development of our Universe, we must look at what surrounds us at the present time. Our Sun with its planets is an ordinary star, located in one of the spiral arms of the Galaxy, on its outskirts. There are many galaxies like ours in the Universe - not an infinite number, since our Universe is a particle of a medium of another level. The shapes and types of the galaxies that fill our Universe are very diverse. This diversity depends on many factors at the moment of their origin, at an early stage of their development; the main ones are the original masses and torques acquired by these objects. With the appearance of the elementary matter of the matrix-vacuum medium and its uneven density within the volume it occupies, numerous centers of gravity arise in the stressed vacuum medium. The vacuum medium pulls the elementary matter toward these centers of gravity, and primordial giant objects, the so-called quasars, begin to form.

Thus, the emergence of quasars is a natural phenomenon. How, then, starting from the first-born quasars, did the Universe acquire its present variety of forms and motions over 15 billion years of development? The primordial quasars, which arose naturally from the contradictory nature of the matrix vacuum, began to be gradually compressed by this medium. As they shrank, their volumes decreased; with decreasing volume the density of the elementary substance increased, and the temperature rose. Conditions arose for the formation of more complex particles from the particles of elementary matter. Particles with the mass of an electron were formed, and from these masses neutrons were formed. The volumes of the masses of electrons and neutrons are determined by the elasticity of the matrix-vacuum medium. The newly formed neutrons acquired a very strong structure. During this period the neutrons are in a state of oscillatory motion.

Under the ever-increasing onslaught of the vacuum medium, the neutron matter of the quasar gradually densifies and heats up, and the quasar's radius gradually decreases. As a result, the speed of the quasar's rotation around its imaginary axis increases. Despite the radiation from the quasar, which to some extent counteracts compression, the compression of these objects inexorably intensifies, and the quasar's matter rapidly approaches its gravitational radius. According to the theory of gravitation, the gravitational radius is the radius of the sphere on which the gravitational force created by the mass lying inside that sphere tends to infinity. This gravity cannot be overcome by any particles, not even by photons. Such objects are usually called Schwarzschild spheres or, equivalently, "black holes".

In 1916, the German astronomer Karl Schwarzschild found an exact solution of one of Albert Einstein's equations. From this solution the gravitational radius was determined to be 2GM/c², where M is the mass of the body, G is the gravitational constant, and c is the speed of light. Thus the Schwarzschild sphere entered the scientific world. According to this theory, the Schwarzschild sphere, or "black hole", consists of a medium of neutron matter of limiting density; within this sphere an infinitely large force of gravity dominates, along with extremely high density and temperature. At present, in certain circles of the scientific world, the opinion still prevails that in nature, besides space, there is also anti-space, and that the so-called "black holes", into which the matter of massive bodies of the Universe is pulled by gravity, are connected with anti-space.
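As a worked example of that formula (standard solar values, added here for illustration):

    r_g = \frac{2GM_\odot}{c^2}
        = \frac{2 \times 6.67\times10^{-11} \times 1.99\times10^{30}}{(3.0\times10^{8})^2}\ \mathrm{m}
        \approx 2.95\ \mathrm{km},

i.e. the Sun would have to be squeezed to a radius of about 3 km before it lay within its own Schwarzschild sphere.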

This is a false, idealistic trend in science. In nature there is one space, infinite in volume, eternal in time, densely filled with eternally moving matter. It is now necessary to recall the moment when quasars arose and the most important properties they acquired, i.e. their initial masses and torques. The masses of these objects did their work and drove the neutron matter of the quasar into the Schwarzschild sphere. Quasars that for some reason acquired no torque, or insufficient torque, temporarily stopped their development after entering the Schwarzschild sphere: they turned into the hidden matter of the Universe, i.e. into "black holes", which cannot be detected with conventional instruments. But those objects that managed to acquire sufficient torque continue to develop in space and time.

As they evolve in time, quasars are compressed by the surrounding vacuum. This compression reduces the volume of these objects, but their torques do not decrease; as a result, their speed of rotation around their imaginary axes increases. The compression continues without pause, and the peripheral speed of these objects progressively increases. A critical moment arrives when, under the influence of an unimaginably large centrifugal force, the quasar explodes. Neutron matter is ejected from the sphere of the quasar in the form of jets, which later turn into the spiral arms of a galaxy. This is what we see at present in most of the visible galaxies (Fig. 9.2).

Figure 9.2. The expanding Universe: 1 - infinite matrix vacuum medium; 2 - quasars; 3 - galactic formations

To date, in the course of the development of the neutron matter ejected from the galactic core, star clusters, individual stars, planetary systems, nebulae and other forms of matter have formed. Most of the matter in the Universe is in the so-called "black holes". These objects cannot be detected with ordinary instruments and are invisible to us, but scientists detect them indirectly. The neutron matter thrown out of the galactic nucleus by centrifugal force is unable to overcome the gravity of that nucleus; it remains its companion, dispersed across numerous orbits, continuing its further development as it revolves around the galactic nucleus. Thus new formations arose - galaxies. Figuratively speaking, they can be called the atoms of the Universe, resembling both planetary systems and the atoms of chemical matter.

Now let us mentally, hypothetically, trace the development of the neutron matter ejected from the galactic core by centrifugal force in the form of jets. This ejected neutron material was very dense and very hot. Once ejected from the galactic nucleus, it was freed from the monstrous internal pressure and the grip of infinitely strong gravity, and began to expand rapidly and cool. In the process of ejection from the galactic nucleus in the form of jets, most of the neutrons, in addition to their outward motion, also acquired rotational motion around their imaginary axes, i.e. spin. Naturally, this new form of motion acquired by the neutron began to generate a new form of matter - matter with chemical properties, in the form of atoms, from hydrogen to the heaviest elements of D.I. Mendeleev's periodic table.

After these processes of expansion and cooling, huge volumes of highly rarefied, cold gas and dust nebulae were formed. Then the reverse process began: the contraction of matter with chemical properties toward numerous centers of gravity. By the end of its scattering, the matter with chemical properties found itself in highly rarefied, cold gas and dust nebulae of unimaginably large volume. Numerous centers of gravity arose, just as they had for the particles of elementary matter of the matrix-vacuum medium. In the course of development in space and time, constellations, individual stars, planetary systems and other objects of the Galaxy formed from the stressed matter drawn toward these centers of gravity. The stars and other objects of the Galaxy that emerged differ greatly in mass, chemical composition and temperature. The stars that gathered large masses developed at an accelerated rate; stars like our Sun have a longer development time.

Other objects of the Galaxy, not having collected the appropriate amount of matter, develop even more slowly. Objects such as our Earth, which also failed to gain sufficient mass, could in their development only heat up and melt, retaining heat only in the planet's interior. Yet precisely because of this, such objects created optimal conditions for the emergence and development of a new form of matter - living matter. Other objects, like our eternal companion the Moon, did not even reach the warming-up stage in their development. According to the rough estimates of astronomers and physicists, our Sun originated about four billion years ago; consequently, the ejection of neutron matter from the galactic nucleus occurred much earlier. During this time, processes took place in the spiral arms of the Galaxy that brought it to its present form.

In stars that have gathered tens or more solar masses, the development process proceeds very quickly. In such objects, because of their large masses and the correspondingly large force of gravity, the conditions for thermonuclear reactions arise much earlier, and the resulting thermonuclear reactions proceed intensively. But as the star's light hydrogen, converted into helium by the thermonuclear reaction, runs low, the intensity of the reaction decreases, and when the hydrogen is exhausted it stops completely. As a result, the star's radiation also drops sharply and ceases to balance the gravitational forces seeking to compress this large star.

After that, the forces of gravity compress such a star into a white dwarf with a very high temperature and a high density of matter. Later, having spent the energy of the decay of heavy elements, the white dwarf, under the onslaught of ever-increasing gravitational forces, enters the Schwarzschild sphere. Thus matter with chemical properties turns into neutron matter, i.e. into the hidden matter of the Universe, and its further development is temporarily halted; it will resume toward the end of the expansion of the Universe. The processes that take place inside stars like our Sun begin with the gradual compression, by the surrounding matrix vacuum, of a cold, highly rarefied medium of gas and dust. As a result, the pressure and temperature inside the object rise. Since the compression proceeds continuously and with increasing force, conditions for thermonuclear reactions gradually arise inside the object. The energy released by this reaction begins to balance the forces of gravity, and the compression of the object stops. An enormous amount of energy is released by this reaction.

It should be noted, however, that not all the energy released in an object by the thermonuclear reaction goes into radiation into space. A significant part of it goes into making light elements heavier, from iron atoms up to the heaviest elements, since this process requires a large expenditure of energy. Afterwards the ambient vacuum, i.e. gravity, rapidly contracts the object into a white or red dwarf. Nuclear reactions then begin inside the star, i.e. reactions of decay of heavy elements down to iron atoms. When there is no longer any source of energy in the star, it turns into an iron star. The star gradually cools, loses luminosity, and will in the future be a dark and cold star. Its further development in space and time will depend entirely on the development in space and time of the Universe. Lacking sufficient mass, the iron star will not enter the Schwarzschild sphere. The changes in the scattering matter of the Universe that took place after the so-called "Big Bang" have been described in this theory up to the present moment. But the matter of the Universe continues to scatter.

The speed of the scattering substance increases with every second, and the changes in the substance continue. From the point of view of dialectical materialism, matter and its motion are neither created nor destructible. Accordingly, matter in the micro- and mega-worlds has an absolute speed, equal to the speed of light, and for this reason no material body in our vacuum environment can move above this speed. But a material body has not just one form of motion; it can have a number of others, for example translational, rotational, oscillatory, intra-atomic and several more. A material body therefore has a total speed, and this total speed likewise must not exceed the absolute speed.

Hence we can infer the changes that should occur in the scattering matter of the Universe. If the speed of the scattering matter of the Universe increases with every second, then the intra-atomic speed of motion increases in direct proportion, i.e. the speed of the electron's motion around the nucleus of the atom grows. The spins of the proton and the electron also increase. The speed of rotation of those material objects that have torques will increase as well, i.e. the nuclei of galaxies, stars, planets, "black holes" of neutron matter and other objects of the Universe. Let us describe, from the point of view of this theory, the decomposition of the substance with chemical properties. This disintegration occurs in stages. As the speed of the scattering matter of the Universe changes, the peripheral speeds of objects possessing torques increase, and under the action of the increased centrifugal force stars, planets and other objects of the Universe disintegrate into atoms.

The volume of the Universe becomes filled with a kind of gas consisting of various atoms moving randomly within it. The processes of decomposition of the substance with chemical properties continue. The spins of protons and electrons increase, and for this reason the repulsive moments between them increase. The vacuum environment ceases to balance these repulsive moments, and the atoms decay, i.e. electrons leave the atoms. A plasma arises from the substance with chemical properties, i.e. protons and electrons move randomly and separately in the volume of the Universe. After the disintegration of the substance with chemical properties, galactic nuclei, "black holes", neutrons, protons and electrons, owing to the further increase in the speed of the scattering substance of the Universe, begin in turn to disintegrate, or rather to burst, into particles of the elementary substance of the vacuum environment. Even before the end of the expansion, the volume of the Universe is filled with a kind of gas of elementary particles of the substance of the vacuum medium. These particles move chaotically in the volume of the Universe, and their speed increases with every second. Thus, even before the end of the expansion, there will be nothing in the Universe except this kind of gas (Fig. 9.3).

Figure 9.3. The maximally expanded Universe: 1 - the matrix vacuum environment; 2 - the sphere of the maximally expanded Universe; 3 - the singular point of the Universe, the moment of birth of the young Universe; 4 - the gaseous medium of elementary particles of the matrix vacuum medium

After all this, the substance of the Universe, i.e. this kind of gas, will stop for a moment; then, under the pressure of the response of the matrix vacuum environment, it will begin rapidly gaining speed, but in the opposite direction, towards the center of gravity of the Universe (Fig. 9.4).

Figure 9.4. The Universe in the initial phase of compression: 1 - the matrix vacuum environment; 2 - the matter of elementary particles falling to the center; 3 - the effect of the matrix vacuum environment of the Universe; 4 - the direction of fall of the elementary particles of matter; 5 - the expanding singular volume

The process of compression of the Universe and the process of decay of its substance in this theory are combined into one concept - the concept of the gravitational collapse of the Universe. Gravitational collapse is a catastrophically fast compression of massive bodies under the influence of gravitational forces. Let's describe the process of the gravitational collapse of the Universe in more detail.

The gravitational collapse of the universe

Modern science defines gravitational collapse as a catastrophically rapid compression of massive bodies under the influence of gravitational forces. A question may arise: why is it necessary in this theory to describe this process for the Universe? The same question arose at the beginning of the description of the evolution of the Einstein-Friedman Universe, i.e. the non-stationary Universe. In the first description a probable model of a particle of the first kind at different levels was proposed; according to this theory, our Universe is defined as a particle of the first-level environment and is a very massive body. The second description, i.e. the mechanism of the gravitational collapse of the Universe, is likewise necessary for a correct conception of the end of the cycle of the Universe's existence in space and time.

To summarize, the essence of the collapse of the Universe is the response of the matrix vacuum medium to its maximally expanded volume. The process of compression of the Universe by the surrounding vacuum is the process of restoring its full energy. Further, the gravitational collapse of the Universe is the reverse of the process of the emergence of matter in the matrix vacuum environment, i.e. of the substance of the new young Universe. Earlier, the changes in the matter of the Universe caused by the increase in the speed of its scattering matter were described: owing to this increase in speed, the substance of the Universe decays into elementary particles of the vacuum environment. This decay of matter, which had existed in different forms and states, took place long before the beginning of the compression of the Universe. While the Universe was still expanding, a kind of gas uniformly filled its entire expanding volume. This gas consisted of elementary particles of the matrix vacuum medium moving chaotically in this volume, i.e. in all directions. The speed of these particles increased with every second, and the resultant of all these chaotic movements was directed to the periphery of the expanding Universe.

At the moment the speed of the chaotic movement of the particles of this kind of gas falls to zero, all the matter of the Universe, in its entire volume, will stop for a moment, and from zero speed it will begin, throughout that volume, to rapidly gain speed in the opposite direction, i.e. towards the center of gravity of the Universe. At the moment compression begins, the matter falls along the radius. Some 1.5-2 seconds after the start, the decay of the particles of elementary matter, i.e. of the substance of the old Universe, takes place. In this process of the falling of the old Universe's matter throughout the whole volume, collisions of falling particles from diametrically opposite directions are inevitable. According to this theory, these particles of elementary substance contain in their structure particles of the matrix vacuum environment; they move in the vacuum environment at the speed of light, i.e. carry the utmost, maximum amount of motion. Upon collision, these particles generate the initial medium of a singular volume at the center of the contracting Universe, i.e. at the singular point. What is this medium? It is formed from excess particles of the matrix vacuum and ordinary vacuum particles. The excess particles move in this volume at light speed relative to the particles of the volume. The medium of the singular volume itself expands at the speed of light, and this expansion is directed towards the periphery of the contracting Universe.

Thus, the process of decay of matter of the old Universe includes two processes. The first process is the fall of the matter of the old Universe to the center of gravity with light speed. The second process is the expansion of the singular volume with the same speed of light towards the falling matter of the old Universe. These processes occur almost at the same time.

Figure 9.5. The new developing Universe in the space of the expanded singular volume: 1 - the matrix vacuum environment; 2 - the remnants of the substance of elementary particles falling to the center; 3 - gamma radiation; 4 - the singular volume of maximum mass; 5 - the radius of the maximally expanded Universe

The end of the process of the old Universe's matter falling into the medium of the singular volume marks the beginning of the emergence of matter in the new young Universe (Fig. 9.5). The emerging elementary particles of the matrix vacuum scatter chaotically from the surface of the singular volume with an initial speed of one-ninth the speed of light.

The process of falling matter of the old Universe and the expansion of the singular volume are directed towards each other at the speed of light and the paths of their movement must be equal. Based on these phenomena, it is possible to determine the total radius of the maximally expanded Universe. It will be equal to twice the path of the scattering newly formed matter with an initial scattering speed of 1/9 the speed of light. This is the answer to the question of why a description of the gravitational collapse of the Universe is needed.

After setting out in this theory the process of the emergence and development in space and time of our Universe, it is also necessary to describe its parameters. These main parameters include the following:

  1. Determine the acceleration of the scattering matter of the Universe per second.
  2. Determine the radius of the Universe at the moment of maximum expansion of its matter.
  3. Determine the time, in seconds, of the expansion of the Universe from the beginning to the end of the expansion.
  4. Determine the area of the sphere of the maximally expanded mass of the substance of the Universe in square kilometers.
  5. Determine the number of particles of the matrix vacuum environment that can be located on the area of the maximally expanded mass of the substance of the Universe, and its energy.
  6. Determine the mass of the Universe in tons.
  7. Determine the time remaining until the end of the expansion of the Universe.

Let us determine the acceleration of the scattering matter of the Universe, i.e. the increase in the scattering speed per second. To solve this question we will use results previously established by science: Albert Einstein, in the general theory of relativity, determined that the Universe is finite; Friedman showed that the Universe is currently expanding and will then contract; and science, with the help of radio telescopes, has penetrated the abyss of the Universe to a depth of fifteen billion light years. Based on these data, the questions posed can be answered.

It is known from kinematics:

S = V₀·t + a·t²/2,

where V₀ is the initial scattering speed of the substance of the Universe, which according to this theory equals one-ninth the speed of light, i.e. 33 333 km/s; S is the distance travelled, equal to the path of light over fifteen billion years, i.e. 141912·10¹⁸ km (this path equals the distance covered by the scattering matter of the Universe up to the present moment); t is the time, equal to 15·10⁹ years, i.e. 47304·10¹³ seconds.

Determine the acceleration:

a = 2(S - V₀·t)/t² = 2/5637296423700 km/s².
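As a quick aid to retracing this step, here is a minimal numeric sketch (Python, purely illustrative) that evaluates the kinematic relation above with the values just quoted: V₀ = c/9, t = 15·10⁹ years, and S taken as the light path over that time. It only shows what those inputs imply when substituted into the stated formula; the figure quoted in the text reflects the author's own rounding and conventions and need not coincide digit for digit.

```python
# Sketch of the kinematic step above, using the values quoted in the text.
c = 300_000.0                  # speed of light, km/s (rounded, as in the text)
V0 = c / 9                     # initial scattering speed, ~33,333 km/s
t = 15e9 * 365 * 24 * 3600     # 15 billion years in seconds, ~4.73e17 s
S = c * t                      # light path over that time, km, ~1.42e23 km

# From S = V0*t + a*t^2/2 the acceleration follows as a = 2*(S - V0*t)/t^2
a = 2 * (S - V0 * t) / t**2
print(f"t = {t:.3e} s,  S = {S:.3e} km,  a = {a:.3e} km/s^2")
```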

Let's calculate the time required for the complete expansion of the Universe:

S = V₀·t + a·t²/2.

When S = 0:

V₀·t + a·t²/2 = 0,

t = 29792813202 years.

Until the end of the expansion there remain:

t - 15·10⁹ = 14792813202 years.

We determine the value of the path of the scattering matter of the Universe from the beginning of the expansion to the end of the expansion.

In the equation

S = V₀·t + a·t²/2

the speed of the scattering matter at the end of the expansion is zero, whence

S = V₀²/(2a) = 15669313319741·10⁹ km.

As already indicated earlier, the moment when the increase in the mass of the singular volume ceases coincides with the moment of the end of the compression of the old Universe. That is, the time of existence of the singular volume almost coincides with the time of the scattering of matter:

S = V₀·t.

From the point of view of dialectical materialism, it follows that if an end occurs for one natural phenomenon, then this is the beginning of another natural phenomenon. The question naturally arises, where does the scattering of the newly formed matter of the new young Universe begin?

In this theory the acceleration, i.e. the increase in the speed of the scattering matter of the Universe, has been defined. The time of the maximum, complete expansion of the Universe, i.e. until the matter reaches zero velocity, has also been determined. The process of change in the scattering matter of the Universe has been described, and the physical process of the decay of the substance of the Universe has been proposed.

According to the calculation in this theory, the true radius of the maximally expanded Universe consists of two paths, i.e. the radius of the singular volume and the path of the scattering matter of the Universe (Fig. 9.5).

According to this theory, the substance of the matrix vacuum environment is formed from particles of the vacuum environment. Energy was expended on the formation of this substance. The mass of an electron is one of the forms of matter in the vacuum environment. To determine the parameters of the Universe, it is necessary to determine the smallest mass, i.e. mass of a particle of the matrix vacuum medium.

The mass of an electron is:

M_e = 9.1·10⁻³¹ kg.

In this theory, the electron consists of elementary particles of the substance of the matrix vacuum environment, i.e. elementary quanta of action:

M_el = h·n.

Based on this, it is possible to determine the number of extra particles of the matrix vacuum environment, which are included in the structure of the electron mass:

9.1·10⁻³¹ kg = 6.626·10⁻³⁴ J·s · n,

where n is the number of extra particles of the matrix vacuum environment included in the structure of the electron mass.

Let us cancel J·s and kg on the left- and right-hand sides of the equation, since the elementary mass of matter represents a momentum:

n = 9.1·10⁻³¹ / 6.626·10⁻³⁴ = 1373.
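For orientation, a one-line check of this division, using the same numerical values for the electron mass and Planck's constant as above (the treatment of units follows the text's own convention of cancelling kg against J·s):

```python
# Number of elementary quanta of action per electron mass, as computed in the text.
m_e = 9.1e-31    # electron mass, kg
h = 6.626e-34    # Planck constant, J*s
n = m_e / h      # the text's n; units are cancelled by the text's convention
print(round(n))  # ~1373
```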

Let us determine the number of particles of the matrix vacuum medium in one gram of mass.

M_el / 1373 = 1 g / k,

where k is the number of particles of the vacuum medium in one gram:

k = 1373 / M_el = 1.5·10³⁰.

The number of particles of the vacuum medium in the mass of one ton of substance:

m = k·10⁶ = 1.5·10³⁶.

This mass includes 1/9 of the impulses of the vacuum medium. Hence the number of elementary impulses in the mass of one ton of substance:

N = m / 9 = 1.7·10³⁵.
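The same counting per gram and per ton can be retraced with a short sketch; it assumes only the figures already stated (1373 quanta per electron, the electron mass expressed in grams) and the text's convention of dividing by nine to pass from particles to elementary impulses.

```python
# Particle counts per unit mass, following the steps in the text.
m_e_g = 9.1e-28             # electron mass in grams
n_per_electron = 1373       # quanta per electron mass, from the previous step

k = n_per_electron / m_e_g  # particles of the vacuum medium per gram, ~1.5e30
m = k * 1e6                 # per ton (10^6 g), ~1.5e36
N = m / 9                   # elementary impulses per ton, by the text's convention, ~1.7e35
print(f"k = {k:.1e}, m = {m:.1e}, N = {N:.1e}")
```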

The volume of the electron is

V_e = 4π·r³/3 = 91.0·10⁻³⁹ cm³,

where r is the classical radius of the electron.

Let us determine the volume of a particle of the matrix vacuum medium:

V_m.v. = V_e / 9π = 7.4·10⁻⁴² cm³.

Whence we find the radius and cross-sectional area of a particle of the matrix vacuum medium:

R_m.v. = (3·V_m.v. / 4π)^(1/3) = 1.2·10⁻¹⁴ cm,

S_m.v. = π·R_m.v.² = 4.5·10⁻³⁸ km².
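These two values follow directly from the particle volume quoted above; a small sketch, assuming only that volume, the usual sphere formulas, and the conversion 1 cm² = 10⁻¹⁰ km²:

```python
import math

# Radius and cross-section of a matrix-vacuum particle from its stated volume.
V_mv = 7.4e-42                                 # particle volume, cm^3 (value quoted above)
R_mv = (3 * V_mv / (4 * math.pi)) ** (1 / 3)   # radius, cm, ~1.2e-14 cm
S_km2 = math.pi * R_mv**2 * 1e-10              # cross-section, km^2 (1 cm^2 = 1e-10 km^2), ~4.5e-38
print(f"R = {R_mv:.1e} cm,  S = {S_km2:.1e} km^2")
```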

Consequently, to determine the amount of energy contained in an immeasurably large volume of the receiver, it is necessary to calculate the surface area of this receiver, i.e. the area of the maximally expanded Universe:

S_pl = 4π·R² = 12320636510·10³⁸ km².

Let us determine the number of particles of the matrix vacuum environment that can be located on the area of the sphere of the maximally expanded mass of the Universe's substance. For this, the area S_pl must be divided by the cross-sectional area of a particle of the matrix vacuum medium:

Z_v = S_pl / S_m.v. = 2.7·10⁸³.

According to this theory, for the formation of one elementary particle of the matrix vacuum medium, the energy of two elementary impulses is required. The energy of one elementary impulse goes to the formation of one particle of the elementary substance of the medium of the matrix vacuum, and the energy of the other elementary impulse gives this particle of matter a speed of movement in the vacuum medium equal to one-ninth the speed of light, i.e. 33 333 km / s.

Therefore, the formation of the entire mass of the substance of the Universe requires half the number of particles of the matrix vacuum medium that fill, in one layer, the sphere of its maximally expanded mass of matter:

K = Z_v / 2 = 1.35·10⁸³.

To determine one of the main parameters of the Universe, i.e. the mass in tons of the substance of the vacuum environment, it is necessary to divide half the number of its elementary impulses by the number of elementary impulses contained in one ton of the substance of the vacuum medium:

M = K / N = 0.8·10⁴⁸ tons.
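The last two steps are again simple arithmetic on the figures already quoted (Z_v from the sphere count above, N from the per-ton count earlier); a minimal sketch:

```python
# Mass of the Universe in tons, following the final steps of the text.
Z_v = 2.7e83   # particles covering the sphere of the maximally expanded Universe, from above
N = 1.7e35     # elementary impulses per ton of substance, from the earlier step

K = Z_v / 2    # half the particle count, per the pairing argument in the text, ~1.35e83
M = K / N      # mass in tons, ~0.8e48
print(f"K = {K:.2e},  M = {M:.2e} tons")
```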

The number of particles of the vacuum medium that fill, in one layer, the area of the sphere of the maximally expanded mass of the Universe's substance is, according to the receiver principle adopted in this theory, the number of elementary impulses that form the mass of matter and enter the structure of the Universe. This number of elementary impulses is the energy of the Universe created by its entire mass of matter. This energy is equal to the number of elementary impulses of the medium multiplied by the speed of light:

W = Z_v·c = 2.4·10⁶⁰ kg·m/s.

After the above, a question may arise. What is the nature of the expansion and contraction of our universe?

After determining the basic parameters of the Universe (radius, mass, expansion time and its energy), it is necessary to pay attention to the fact that the maximally expanded Universe, with its spreading matter, i.e. with its energy, performed work in the vacuum environment by forcibly pushing apart the particles of the matrix vacuum environment and compressing them into a volume equal to the volume of the entire matter of the Universe. As a result, this energy, determined by nature, was expended on this work. According to the principle of the "Big receiver" adopted in this theory and the natural elasticity of the vacuum environment, the process of expansion of the Universe can be formulated as follows.

At the moment the expansion ends, the particles of the expanded sphere of the Universe acquire repulsive moments equal to those of the particles of the vacuum medium that envelop this sphere. This is the reason the expansion of the Universe ends. But the volume of the enveloping shell of the vacuum medium is larger than that of the outer shell of the sphere of the Universe; this axiom requires no proof. In this theory, the particles of the matrix vacuum medium have an internal energy equal to 6.626·10⁻²⁷ erg·s, or the same amount of motion. The inequality in volumes also gives rise to an inequality in momenta between the sphere of the Universe and the vacuum environment. The equality of the repulsive moments between the particles of the maximally expanded sphere of the Universe and the particles of the matrix vacuum environment that envelop this sphere stopped the expansion of the Universe. This equality lasts only for an instant; then the substance of the Universe rapidly begins to gain speed in the opposite direction, i.e. towards the center of gravity of the Universe. The compression of matter is the response of the vacuum environment, and according to this theory the response of the matrix vacuum environment is equal to the absolute speed of light.