The Brain, Neuronal Communication and Energy Efficiency

Article for the “bio/mol/text” competition: The cellular processes that support the exchange of information between neurons require a great deal of energy. This high energy demand appears to have driven the evolutionary selection of the most efficient mechanisms for encoding and transmitting information. In this article, you will learn about the theoretical approach to studying brain energetics and its role in the study of pathologies, find out which neurons are closer to the optimum, why synapses sometimes benefit from not “firing,” and how they select only the information the neuron needs.

Origin of the approach

Since the mid-twentieth century, it has been known that the brain consumes a significant share of the energy resources of the entire organism: a quarter of all glucose and one-fifth of all oxygen in the case of the great apes [1–5]. This inspired William Levy and Robert Baxter of the Massachusetts Institute of Technology (USA) to conduct a theoretical analysis of the energy efficiency of information encoding in biological neural networks (Fig. 1) [6]. The study rests on the following hypothesis: since the brain's energy consumption is high, it benefits from neurons that work as efficiently as possible, transmitting only useful information while expending a minimum of energy.

This assumption turned out to be true: using a simple neural network model, the authors reproduced the experimentally measured values of several parameters [6]. In particular, the optimal impulse-generation frequency they calculated ranges from 6 to 43 impulses/s, almost exactly the range found in neurons at the base of the hippocampus. These neurons can be divided into two groups by impulse frequency: slow (~10 impulses/s) and fast (~40 impulses/s), and the first group considerably outnumbers the second [7]. A similar picture is seen in the cerebral cortex: slow pyramidal neurons (~4–9 impulses/s) are several times more numerous than fast inhibitory interneurons (>100 impulses/s) [8], [9]. Apparently, then, the brain “prefers” to employ as few fast, energy-hungry neurons as possible, so that they do not exhaust its resources [6], [9–11].


Figure 1. Two neurons. In one, the presynaptic protein synaptophysin is stained purple; the other neuron is completely stained with green fluorescent protein. The small bright specks are synaptic contacts between the neurons [12]; the inset shows one such “speck” at higher magnification. Groups of neurons connected by synapses are called neural networks [13], [14]. In the cerebral cortex, for example, pyramidal neurons and interneurons form extensive networks, and the coordinated, concerted work of these cells underlies our higher cognitive and other abilities. Similar networks, built from different types of neurons, are distributed throughout the brain, connected in specific ways, and organize the work of the entire organ.

Image source: embryologie.uni-goettingen.de

What are interneurons?

Neurons of the central nervous system are divided into excitatory (forming excitatory synapses) and inhibitory (forming inhibitory synapses). The latter are largely represented by interneurons, or intermediate neurons. In the cerebral cortex and hippocampus, they are responsible for the formation of the brain's gamma rhythms [15], which ensure the coordinated, synchronous work of other neurons. This is extremely important for motor functions, the perception of sensory information, and the formation of memory [9], [11].

Interneurons are distinguished by their ability to generate signals of much higher frequency than other neurons. They also contain more mitochondria, the main organelles of energy metabolism and the cell's ATP-production factories, as well as large amounts of cytochrome c oxidase and cytochrome c, proteins that are key to oxidative metabolism. Interneurons are thus extremely important and, at the same time, energy-hungry cells [8], [9], [11], [16].

The work of Levy and Baxter [6] develops the “economy of impulses” concept of Horace Barlow of the University of California (USA), who, incidentally, is a descendant of Charles Darwin [17]. According to this concept, as an organism develops, its neurons tend to work only with the most useful information, filtering out “extra” impulses and unnecessary, redundant signals. On its own, however, the concept gives unsatisfactory results, since it does not take into account the metabolic costs of neuronal activity [6]. Levy and Baxter's extended approach, which considers both factors, has proved more fruitful [6], [18–20]. Both the energy expenditure of neurons and the need to encode only useful information are important factors guiding brain evolution [6], [21–24]. Therefore, to better understand how the brain works, it is worth examining both characteristics: how much useful information a neuron transmits and how much energy it spends doing so.

Recently, this approach has found ample confirmation [10], [22], [24–26]. It has made it possible to take a fresh look at the structure of the brain at various levels of organization, from the molecular-biophysical [20], [26] to that of the whole organ [23]. It helps us understand the trade-offs between a neuron's function and its energy cost, and how pronounced they are.

How does this approach work?

Suppose we have a model of a neuron that describes its electrophysiological properties: the action potential (AP) and postsynaptic potentials (PSPs) (more on these terms below). We want to know whether it works efficiently or wastes an unreasonable amount of energy. To find out, we need to compute the values of the model parameters (for example, the density of channels in the membrane and the speed of their opening and closing) at which (a) the ratio of useful information to energy consumption is maximal, while (b) the transmitted signals retain realistic characteristics [6], [19].

Search for the optimum

In essence, this is an optimization problem: finding the maximum of a function and determining the parameters at which it is reached. In our case, the function is the ratio of the amount of useful information to the energy costs. The amount of useful information can be approximated using Shannon's formula, widely used in information theory [6], [18], [19]. There are two methods for calculating the energy costs, and both give plausible results [10], [27]. One, the “ion counting method,” is based on counting the number of Na+ ions that enter the neuron during a given signaling event (an AP or PSP; see the sidebar “What is an action potential?”), followed by conversion into the number of molecules of adenosine triphosphate (ATP), the main energy “currency” of cells [10]. The other describes the ionic currents through the membrane according to the laws of electrical circuits and calculates the power of the neuron's equivalent electrical circuit, which is then converted into ATP costs [17].
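
To make this concrete, here is a minimal sketch of such an optimization, not Levy and Baxter's actual model: the neuron fires independently in discrete time bins, the information per bin is the Shannon entropy of the fire/no-fire outcome, and the cost per bin is one unit at rest plus a surcharge per spike. The 10-ms bin width and the cost ratios are illustrative assumptions.

```python
import numpy as np

def binary_entropy(p):
    """Shannon information (bits) per time bin when the neuron fires with probability p."""
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def efficiency(p, r):
    """Useful information divided by energy cost; r is the spike-to-rest cost ratio."""
    return binary_entropy(p) / (1.0 + r * p)

p = np.linspace(1e-4, 0.5, 10_000)
for r in (10, 30, 100):                      # assumed cost ratios
    p_opt = p[np.argmax(efficiency(p, r))]
    # with 10-ms bins, firing probability per bin converts to impulses per second
    print(f"cost ratio {r:3d}: optimal p = {p_opt:.3f} (~{p_opt / 0.010:.0f} impulses/s)")
```

Even this caricature lands in the range of firing rates quoted above: the more expensive a spike is relative to rest, the lower the optimal rate.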

These “optimal” parameter values must then be compared with experimentally measured ones to see how much they differ. The overall pattern of differences indicates how optimized the neuron is as a whole: the better the real, experimentally measured values agree with the calculated ones, the closer the neuron is to the optimum and the more energy-efficiently it works. Comparing individual parameters, in turn, shows in which specific respect the neuron approaches the “ideal.”

Below, two processes at the heart of encoding and transmitting information in the brain are considered in the context of neuronal energy efficiency: the nerve impulse, or action potential, by which information can be sent to an “addressee” over some distance (from micrometers to a meter and a half), and synaptic transmission, which underlies the actual transfer of a signal from one neuron to another.

Action potential

An action potential (AP) is a signal that neurons send to one another. APs vary: fast and slow, small and large [28]. They are often organized into long sequences (like letters in words) or into short high-frequency “bursts” (Fig. 2).


Figure 2. Different types of neurons generate different signals. In the center is a longitudinal section of a mammalian brain; the boxes show different types of signals recorded by electrophysiological methods [15], [38]. a — Pyramidal neurons of the cerebral cortex can transmit both low-frequency signals (regular firing) and short explosive, or burst, signals (burst firing). b — Purkinje cells of the cerebellum display only burst activity, at a very high frequency. c — Relay neurons of the thalamus have two modes of activity: burst and tonic firing. d — Neurons of the medial habenula (MHb) of the epithalamus generate low-frequency tonic signals.

Figure adapted from [14].

This wide variety of signals stems from the huge number of combinations of different types of ion channels and synaptic contacts, as well as from the morphology of neurons [28], [29]. Since neuronal signaling is based on ionic currents, different APs can be expected to require different energy inputs [20], [27], [30].

What is an action potential?

  1. Membrane and ions. The plasma membrane of a neuron maintains an uneven distribution of substances between the cell and the extracellular environment (Fig. 3b) [31–33]. Among these substances are small ions, of which K+ and Na+ are the important ones for describing the AP. There are few Na+ ions inside the cell but many outside, so they constantly tend to enter the cell. K+ ions, in contrast, are abundant inside the cell and tend to leave it. The ions cannot do this on their own, because the membrane is impermeable to them. For ions to pass through the membrane, special proteins must open: membrane ion channels.

  2. Figure 3. Neuron, ion channels, and the action potential. a — Reconstruction of a chandelier cell of the rat cerebral cortex. The dendrites and body of the neuron are colored blue (the blue spot in the center), the axon red (in many types of neurons the axon branches far more than the dendrites [8], [11], [35]). Green and magenta arrows indicate the direction of information flow: the dendrites and body of the neuron receive it, and the axon sends it on to other neurons. b — The membrane of a neuron, like that of any other cell, contains ion channels. Green circles are Na+ ions, blue circles are K+ ions. c — Change in membrane potential during the generation of an action potential (AP) by a Purkinje neuron. Green area: Na channels are open, Na+ ions enter the neuron, and depolarization occurs. Blue area: K channels are open, K+ leaves, and repolarization occurs. The overlap of the green and blue regions corresponds to the period when Na+ entry and K+ exit occur simultaneously.

    Figures adapted from [34], [36], [37].

  3. Ion channels. The variety of channels is enormous [14], [36], [38], [39]. Some open in response to a change in membrane potential, others upon binding of a ligand (a neurotransmitter in a synapse, for example), still others in response to mechanical deformation of the membrane, and so on. Opening a channel involves a change in its structure that lets ions pass through it. Some channels conduct only a particular type of ion, while others show mixed conductivity. A key role in AP generation is played by channels that “sense” the membrane potential: voltage-gated ion channels. They open in response to changes in membrane potential. Among them, we are interested in voltage-gated sodium channels (Na channels), which pass only Na+ ions, and voltage-gated potassium channels (K channels), which pass only K+ ions.
  4. An AP is a relatively large-amplitude, step-like change in membrane potential.

  5. Ion current and the AP. The basis of the AP is the ionic current, the movement of ions through the ion channels of the membrane [38]. Since ions are charged, their flow changes the net charge inside and outside the neuron, which immediately changes the membrane potential. AP generation usually occurs in the initial segment of the axon, the part adjacent to the neuron's body [40], [14], where many Na channels are concentrated. If they open, a powerful current of Na+ ions flows into the axon and the membrane depolarizes: the membrane potential decreases in absolute value (Fig. 3c). The potential must then return to its original value, a process called repolarization, for which K+ ions are responsible. When the K channels open (shortly before the peak of the AP), K+ ions begin to leave the cell and repolarize the membrane. Depolarization and repolarization are the two main phases of the AP; there are several others, which are not considered here for lack of necessity. A detailed description of AP generation can be found in [14], [29], [38], [41], and brief accounts are also available in articles on Biomolecule [15], [42].
  6. The axon initial segment and AP initiation. What makes the Na channels open at the axon initial segment? Again, a change in membrane potential “arriving” along the dendrites of the neuron (Fig. 3a). These are the postsynaptic potentials (PSPs) that arise through synaptic transmission, a process explained in more detail in the main text.
  7. AP conduction. Na channels located nearby will not remain indifferent to an AP in the axon initial segment: they too will open in response to this change in membrane potential, which will likewise trigger an AP. The latter, in turn, will cause a similar “reaction” on the next section of the axon, farther and farther from the neuron's body, and so on. This is how an AP is conducted along the axon [14], [15], [38]. Eventually it reaches the presynaptic terminals (magenta arrows in Fig. 3a), where it can trigger synaptic transmission.
  8. Generating APs costs less energy than running synapses. How many molecules of adenosine triphosphate (ATP), the main energy “currency,” does an AP cost? By one estimate, in pyramidal neurons of the rat cerebral cortex, generating 4 APs per second accounts for about one-fifth of the neuron's total energy consumption; together with the other signaling processes, above all synaptic transmission, the share reaches about four-fifths. The situation is similar in the cerebellar cortex, which is responsible for motor functions: generating the output signal takes about 15% of the total energy, while roughly half goes to processing input information [25]. Thus, the AP is far from the most energy-intensive process: operating a synapse costs many times more [5], [19], [25]. This does not mean, however, that AP generation shows no features of energy efficiency; a rough version of the “ion count” behind such estimates is sketched below.
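
A back-of-the-envelope version of that count uses textbook constants (membrane capacitance ~1 µF/cm², AP amplitude ~100 mV, and the Na,K-ATPase stoichiometry of 3 Na+ extruded per ATP); the membrane area and the factor for extra Na+ entry are assumptions chosen for illustration.

```python
# Rough "ion counting" estimate of the ATP cost of one action potential.
C_M      = 1e-6       # membrane capacitance, F/cm^2 (typical value)
AREA     = 2e-4       # membrane area, cm^2 (~20,000 um^2; an assumption)
DV       = 0.1        # AP amplitude, V (~100 mV)
EXCESS   = 2          # extra Na+ entry due to Na/K current overlap (assumed)
E_CHARGE = 1.602e-19  # elementary charge, C

q_na  = C_M * AREA * DV * EXCESS  # total Na+ charge entering per AP, coulombs
n_na  = q_na / E_CHARGE           # number of Na+ ions
n_atp = n_na / 3                  # Na,K-ATPase pumps out 3 Na+ per ATP

print(f"Na+ ions per AP: {n_na:.1e}")   # ~2.5e8 ions
print(f"ATP per AP:      {n_atp:.1e}")  # ~8e7 ATP molecules
```

Estimates of this kind are order-of-magnitude rather than precise, and they are what underlie the percentages quoted above.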

Analysis of different types of neurons (Fig. 4) has shown that invertebrate neurons are not very energy efficient, while some vertebrate neurons are almost perfect [20]. According to this study, the most energy-efficient were the interneurons of the hippocampus, which is involved in memory formation and emotions, and the thalamocortical relay neurons, which carry the main flow of sensory information from the thalamus to the cerebral cortex.


Figure 4. Different neurons are efficient to different degrees. The figure compares the energy consumption of different types of neurons, calculated in models with both the initial (real) parameter values (black columns) and optimal values, at which the neuron performs its assigned function while spending a minimum of energy (gray columns). The most efficient of those shown are two types of vertebrate neurons: hippocampal interneurons (rat hippocampal interneuron, RHI) and thalamocortical neurons (mouse thalamocortical relay cell, MTCR), for which the energy consumption of the original model is closest to that of the optimized one. By contrast, invertebrate neurons are less efficient. Legend: SA (squid axon), squid giant axon; CA (crab axon), crab axon; MFS (mouse fast-spiking cortical interneuron), fast cortical interneuron of the mouse; BK (honeybee mushroom body Kenyon cell), Kenyon cell of the honeybee mushroom body.

Figure adapted from [20].

Why are they more efficient? Because the overlap between their Na and K currents is small. During AP generation, there is always a period when both currents flow simultaneously (Fig. 3c). Almost no net charge is transferred then, and the membrane potential barely changes, yet these currents still have to be “paid for” despite their “uselessness” during this period. Its duration therefore determines how much energy is wasted: the shorter the overlap, the more efficient the energy use, and the longer it lasts, the less efficient [20], [26], [30], [43]. In the two types of neurons mentioned above, fast ion channels make this period very short, and their APs turn out to be the most efficient [20].
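
The effect of the overlap can be illustrated with a toy calculation: model the Na+ and K+ currents as bell-shaped pulses and count whatever flows in both directions at the same time as wasted charge. The pulse shapes and the K-channel delays below are arbitrary assumptions; only the trend is meaningful.

```python
import numpy as np

t  = np.linspace(0.0, 3.0, 3000)  # time, ms
dt = t[1] - t[0]

def pulse(t_peak, width, amp):
    """Toy bell-shaped current time course (arbitrary units)."""
    return amp * np.exp(-((t - t_peak) / width) ** 2)

i_na = pulse(1.0, 0.2, 1.0)                      # inward Na+ current
for delay in (0.1, 0.3, 0.6):                    # assumed K-channel delays, ms
    i_k = pulse(1.0 + delay, 0.3, 0.7)           # outward K+ current
    q_na     = i_na.sum() * dt                   # total Na+ influx
    q_wasted = np.minimum(i_na, i_k).sum() * dt  # influx cancelled by simultaneous K+ efflux
    print(f"K delay {delay:.1f} ms: {q_wasted / q_na:.0%} of the Na+ influx is wasted")
```

The less the two currents coincide in time, the smaller the wasted fraction, which is the pattern reported for the most efficient vertebrate neurons [20].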

Incidentally, interneurons are much more active than most other neurons of the brain. At the same time, they are extremely important for the coordinated, synchronous work of the neurons with which they form small local networks [9], [16]. The high energy efficiency of interneuron APs is probably an adaptation to their high activity and to their role in coordinating the work of other neurons [20].

Synapse

A signal is transmitted from one neuron to another at a specialized contact between them, the synapse [12]. We will consider only chemical synapses (electrical ones also exist), since they are very common in the nervous system and are important for the regulation of cellular metabolism and nutrient delivery [5].

Most often, a chemical synapse forms between the axon terminal of one neuron and a dendrite of another. Its work resembles the handing over of a relay baton, whose role is played by a neurotransmitter, a chemical mediator of signal transmission [12], [42], [44–48].

At the presynaptic ending of the axon, the AP triggers the release of neurotransmitter into the extracellular environment, toward the receiving neuron. The latter is waiting for exactly this: in the membrane of its dendrites, receptors (ion channels of a particular type) bind the neurotransmitter, open, and let various ions pass through. This generates a small postsynaptic potential (PSP) on the dendrite membrane. It resembles an AP but is much smaller in amplitude and arises through the opening of different channels. Many of these small PSPs, each from its own synapse, “run” along the dendrite membrane to the neuron's body (green arrows in Fig. 3a) and reach the initial segment of the axon, where they open Na channels and “provoke” it into generating an AP.

Such synapses are called excitatory: they promote activation of the neuron and the generation of an AP. There are also inhibitory synapses, which, on the contrary, promote inhibition and prevent AP generation. Often a neuron carries both kinds. A certain ratio between inhibition and excitation is important for normal brain function and for the formation of the brain rhythms that accompany higher cognitive functions [49].

Oddly enough, neurotransmitter release at a synapse may not happen at all: it is a probabilistic process [18], [19]. Neurons save energy this way: synaptic transmission already accounts for about half of all their energy expenditure [25]. If synapses always fired, all the energy would go into keeping them running, leaving no resources for other processes. Moreover, it is precisely a low probability of release (20–40%) that corresponds to the highest energy efficiency of synapses, the point where the ratio of useful information to energy expended is maximal [18], [19]. “Failures” thus play an important role in the work of synapses and, accordingly, of the whole brain. And signal transmission does not suffer when some synapses stay silent, since there are usually many synapses between two neurons, and at least one of them will fire.
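
Here is a sketch of how such an optimum can arise, in the spirit of (but not reproducing) the models of [18], [19]: one neuron contacts another through N synapses, each releasing a vesicle with probability s when a spike arrives, and we maximize the mutual information between the spike and the number of released vesicles per unit of energy. The synapse count and the cost values are assumptions.

```python
import math
import numpy as np

N = 10               # synaptic contacts between the two neurons (assumed)
P_SPIKE = 0.5        # probability of a presynaptic AP in a time bin (assumed)
FIXED_COST = 1.0     # housekeeping cost per bin, arbitrary units (assumed)
RELEASE_COST = 1.0   # cost of one vesicle release cycle, same units (assumed)

def entropy(dist):
    p = np.asarray(dist, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def info_per_energy(s):
    """Mutual information between the presynaptic spike and the number of
    released vesicles, divided by the mean energy cost, at release probability s."""
    p_k_spike = [math.comb(N, k) * s**k * (1 - s)**(N - k) for k in range(N + 1)]
    p_k = [P_SPIKE * pk for pk in p_k_spike]  # marginal vesicle-count distribution
    p_k[0] += 1 - P_SPIKE                     # no spike -> certainly no release
    info = entropy(p_k) - P_SPIKE * entropy(p_k_spike)
    energy = FIXED_COST + P_SPIKE * N * s * RELEASE_COST
    return info / energy

s_grid = np.linspace(0.01, 1.0, 100)
s_opt = max(s_grid, key=info_per_energy)
print(f"optimal release probability: {s_opt:.2f}")
```

With these particular numbers the optimum lands near 0.2; changing the assumed costs shifts it, but it stays well below 1. In this model, fully reliable release is never worth its energy price.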

Another feature of synaptic transmission is the splitting of the overall information flow into separate components according to the modulation frequency of the incoming signal (roughly speaking, the frequency of the incoming APs) [50]. This is achieved by combining different receptors on the postsynaptic membrane [38], [50]. Some receptors activate very quickly, for example AMPA receptors (AMPA stands for α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid). If only such receptors are present on the postsynaptic neuron, it can faithfully follow a high-frequency signal (such as that in Fig. 2c). The most striking example is the neurons of the auditory system, which are involved in locating sound sources and in accurately recognizing short sounds such as clicks, which are widely represented in speech [12], [38], [51]. NMDA receptors (NMDA stands for N-methyl-D-aspartate) are slower. They allow neurons to select lower-frequency signals (Fig. 2d) and to perceive a high-frequency series of APs as a single whole, the so-called integration of synaptic signals [14]. There are even slower metabotropic receptors which, upon binding a neurotransmitter, pass the signal to a chain of intracellular “second messengers” that adjust a wide variety of cellular processes. G-protein-coupled receptors, for example, are widespread; depending on their type, they may regulate the number of channels in the membrane or directly modulate channel operation [14].
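
The filtering role of receptor kinetics can be caricatured with two exponentially decaying conductances, a fast one (AMPA-like, ~2 ms decay) and a slow one (NMDA-like, ~100 ms). The time constants are rough literature ballparks; everything else below is assumed.

```python
import numpy as np

DT = 0.1                                   # time step, ms
t  = np.arange(0.0, 500.0, DT)

def synaptic_conductance(spike_times, tau):
    """Sum of unit conductances decaying exponentially with time constant tau (ms)."""
    g = np.zeros_like(t)
    for ts in spike_times:
        idx = t >= ts
        g[idx] += np.exp(-(t[idx] - ts) / tau)
    return g

for rate in (10, 100):                     # input AP frequency, Hz
    spikes = np.arange(0.0, 500.0, 1000.0 / rate)
    for name, tau in (("fast (AMPA-like, tau=2 ms)  ", 2.0),
                      ("slow (NMDA-like, tau=100 ms)", 100.0)):
        g = synaptic_conductance(spikes, tau)[len(t) // 2:]  # steady-state half
        print(f"{rate:3d} Hz input, {name}: mean g = {g.mean():5.2f}, "
              f"fluctuation = {g.std() / g.mean():.2f}")
```

Driven by a 100-Hz train, the fast conductance swings strongly with every AP, preserving the timing of individual impulses, while the slow one builds up to a high, nearly constant level, integrating the train into a single sustained signal.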

Different combinations of fast AMPA receptors, slower NMDA receptors, and metabotropic receptors allow neurons to select and use the information most useful to their functioning [50]. “Useless” information is filtered out: the neuron simply does not “perceive” it and wastes no energy processing it. This is yet another aspect of the optimization of synaptic transmission between neurons.

What else?

The energy efficiency of brain cells has also been studied in relation to their morphology [35], [52–54]. Studies show that the branching of dendrites and axons is not chaotic but also saves energy [52], [54]. For example, an axon branches in such a way that the total length of the path traveled by its APs is minimal, and the energy spent on conducting APs along the axon is therefore minimal as well.
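
A minimal illustration of this wiring principle: place a single branch point so that the total axon length from the soma to two targets is as short as possible (a brute-force search for the optimal branch location). The coordinates are arbitrary assumptions.

```python
import numpy as np

# Soma at the origin; two synaptic targets (coordinates assumed, arbitrary units)
TARGETS = [(4.0, 3.0), (4.0, -3.0)]

xs = np.linspace(0.0, 5.0, 201)
ys = np.linspace(-4.0, 4.0, 161)
X, Y = np.meshgrid(xs, ys)

# Total wire length: soma -> branch point, plus branch point -> each target
L = np.hypot(X, Y)
for tx, ty in TARGETS:
    L += np.hypot(X - tx, Y - ty)

i = np.unravel_index(L.argmin(), L.shape)
naive = sum(np.hypot(tx, ty) for tx, ty in TARGETS)   # two separate wires
print(f"two separate branches from the soma: length {naive:.2f}")
print(f"optimal branch at ({X[i]:.2f}, {Y[i]:.2f}): length {L[i]:.2f}")
```

Even in this toy case, the optimal branch point trims the total wiring by about 8% compared with running two separate branches straight from the soma; real axonal trees face the same geometry on a much larger scale.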

A reduction in neuronal energy consumption is also achieved at a certain ratio of inhibitory to excitatory synapses [55]. This is directly relevant, for example, to cerebral ischemia (a pathological condition caused by impaired blood flow in the vessels). In this pathology, the most metabolically active neurons are the first to fail [9], [16]. In the cortex these are the inhibitory interneurons, which form inhibitory synapses on many pyramidal neurons [9], [16], [49]. When interneurons die, inhibition of the pyramidal neurons weakens, and their overall level of activity rises (excitatory synapses fire more often, APs are generated more often). Their energy consumption rises immediately as a result, which under ischemic conditions can lead to the death of these neurons too.

In the study of pathologies, attention focuses on synaptic transmission as the most energy-consuming process [19]. In Parkinson's [56], Huntington's [57], and Alzheimer's [58–61] diseases, for example, the functioning of mitochondria, which play the main role in ATP synthesis [62], [63], or their transport to synapses is disrupted. In Parkinson's disease, this may be associated with the dysfunction and death of the highly energy-consuming neurons of the substantia nigra, a structure important for regulating motor function and muscle tone. In Huntington's disease, the mutant protein huntingtin disrupts the delivery of new mitochondria to synapses, leading to “energy starvation” of the synapses, increased neuronal vulnerability, and excessive activation. All of this can further disrupt neuronal function, with subsequent atrophy of the striatum and cerebral cortex. In Alzheimer's disease, mitochondrial dysfunction (alongside a decrease in the number of synapses) arises from the deposition of amyloid plaques; their effect on mitochondria leads to oxidative stress and to apoptosis, the programmed death of neurons.

Once again about everything

At the end of the twentieth century, an approach to studying the brain emerged that simultaneously considers two important characteristics: how much useful information a neuron (or a neural network, or a synapse) encodes and transmits, and how much energy it spends doing so [6], [18], [19]. Their ratio serves as a kind of criterion of the energy efficiency of neurons, neural networks, and synapses.

Using this criterion in computational neuroscience has substantially advanced our knowledge of the role of various phenomena and processes [6], [18–20], [26], [30], [43], [55]. In particular, the low probability of neurotransmitter release at the synapse [18], [19], a certain balance between inhibition and excitation of a neuron [55], and the selection of only certain kinds of incoming information through a particular combination of receptors [50] all help to save valuable energy resources.

Moreover, simply determining the energy consumption of signaling processes (for example, AP generation and conduction, or synaptic transmission) makes it possible to find out which of them will suffer first when nutrient delivery is pathologically disrupted [10], [25], [56]. Since synapses require the most energy to operate, they are the first to fail in pathologies such as ischemia, Alzheimer's disease, and Huntington's disease [19], [25]. Similarly, determining the energy consumption of different types of neurons helps predict which of them will die before the others in a given pathology. In the same ischemia, for example, the cortical interneurons will fail first [9], [16]. Because of their intense metabolism, these same neurons are also the most vulnerable cells in aging, Alzheimer's disease, and schizophrenia [16].

Overall, this approach of identifying energetically efficient mechanisms of brain function is a powerful direction of development both for fundamental neuroscience and for its medical applications [5], [14], [16], [20], [26], [55], [64].
