
1.1.2: The Process of Science - Biology


Like geology, physics, and chemistry, biology is a science that gathers knowledge about the natural world. Specifically, biology is the study of life. The discoveries of biology are made by a community of researchers who work individually and together using agreed-on methods. In this sense, biology, like all sciences, is a social enterprise like politics or the arts.

The methods of science include careful observation, record keeping, logical and mathematical reasoning, experimentation, and submitting conclusions to the scrutiny of others. Science also requires considerable imagination and creativity; a well-designed experiment is commonly described as elegant, or beautiful. Like politics, science has considerable practical implications, and some science is dedicated to practical applications, such as the prevention of disease (Figure \(\PageIndex{2}\)). Other science proceeds largely motivated by curiosity. Whatever its goal, there is no doubt that science, including biology, has transformed human existence and will continue to do so.

The Nature of Science

Biology is a science, but what exactly is science? What does the study of biology share with other scientific disciplines? Science (from the Latin scientia, meaning "knowledge") can be defined as knowledge about the natural world.

Science is a very specific way of learning, or knowing, about the world. The history of the past 500 years demonstrates that science is a very powerful way of knowing about the world; it is largely responsible for the technological revolutions that have taken place during this time. There are, however, areas of knowledge and human experience to which the methods of science cannot be applied. These include such things as answering purely moral questions, aesthetic questions, or what can be generally categorized as spiritual questions. Science cannot investigate these areas because they are outside the realm of material phenomena, the phenomena of matter and energy, and cannot be observed and measured.

The scientific method is a method of research with defined steps that include experiments and careful observation. The steps of the scientific method will be examined in detail later, but one of the most important aspects of this method is the testing of hypotheses. A hypothesis is a suggested explanation for an event, which can be tested. Hypotheses, or tentative explanations, are generally produced within the context of a scientific theory. A scientific theory is a generally accepted, thoroughly tested and confirmed explanation for a set of observations or phenomena. Scientific theory is the foundation of scientific knowledge. In addition, in many scientific disciplines (less so in biology) there are scientific laws, often expressed in mathematical formulas, which describe how elements of nature will behave under certain specific conditions. Hypotheses do not evolve into theories, and theories into laws, as if each step represented an increase in certainty about the world. Hypotheses are the day-to-day material that scientists work with, and they are developed within the context of theories. Laws are concise descriptions of parts of the world that are amenable to formulaic or mathematical description.

Natural Sciences

What would you expect to see in a museum of natural sciences? Frogs? Plants? Dinosaur skeletons? Exhibits about how the brain functions? A planetarium? Gems and minerals? Or maybe all of the above? Science includes such diverse fields as astronomy, biology, computer sciences, geology, logic, physics, chemistry, and mathematics (Figure \(\PageIndex{3}\)). However, those fields of science related to the physical world and its phenomena and processes are considered natural sciences. Thus, a museum of natural sciences might contain any of the items listed above.

There is no complete agreement when it comes to defining what the natural sciences include. For some experts, the natural sciences are astronomy, biology, chemistry, earth science, and physics. Other scholars choose to divide natural sciences into life sciences, which study living things and include biology, and physical sciences, which study nonliving matter and include astronomy, physics, and chemistry. Some disciplines such as biophysics and biochemistry build on two sciences and are interdisciplinary.

Scientific Inquiry

One thing is common to all forms of science: an ultimate goal “to know.” Curiosity and inquiry are the driving forces for the development of science. Scientists seek to understand the world and the way it operates. Two methods of logical thinking are used: inductive reasoning and deductive reasoning.

Inductive reasoning is a form of logical thinking that uses related observations to arrive at a general conclusion. This type of reasoning is common in descriptive science. A life scientist such as a biologist makes observations and records them. These data can be qualitative (descriptive) or quantitative (consisting of numbers), and the raw data can be supplemented with drawings, pictures, photos, or videos. From many observations, the scientist can infer conclusions (inductions) based on evidence. Inductive reasoning involves formulating generalizations inferred from careful observation and the analysis of a large amount of data. Brain studies often work this way. Many brains are observed while people are doing a task. The part of the brain that lights up, indicating activity, is then demonstrated to be the part controlling the response to that task.

Deductive reasoning, or deduction, is the type of logic used in hypothesis-based science. In deductive reasoning, the pattern of thinking moves in the opposite direction from inductive reasoning: a general principle or law is used to forecast specific results. From those general principles, a scientist can extrapolate and predict the specific results that would be valid as long as the general principles are valid. For example, one prediction is that if the climate is becoming warmer in a region, the distribution of plants and animals should change. Comparisons have been made between distributions in the past and the present, and the many changes that have been found are consistent with a warming climate. Finding the change in distribution is evidence that the climate change conclusion is a valid one.

Both types of logical thinking are related to the two main pathways of scientific study: descriptive science and hypothesis-based science. Descriptive (or discovery) science aims to observe, explore, and discover, while hypothesis-based science begins with a specific question or problem and a potential answer or solution that can be tested. The boundary between these two forms of study is often blurred, because most scientific endeavors combine both approaches. Observations lead to questions, questions lead to forming a hypothesis as a possible answer to those questions, and then the hypothesis is tested. Thus, descriptive science and hypothesis-based science are in continuous dialogue.

Hypothesis Testing

Biologists study the living world by posing questions about it and seeking science-based responses. This approach is common to other sciences as well and is often referred to as the scientific method. The scientific method was used even in ancient times, but it was first documented by England’s Sir Francis Bacon (1561–1626) (Figure \(\PageIndex{4}\)), who set up inductive methods for scientific inquiry. The scientific method is not exclusively used by biologists but can be applied to almost anything as a logical problem-solving method.

The scientific process typically starts with an observation (often a problem to be solved) that leads to a question. Let’s think about a simple problem that starts with an observation and apply the scientific method to solve the problem. One Monday morning, a student arrives at class and quickly discovers that the classroom is too warm. That is an observation that also describes a problem: the classroom is too warm. The student then asks a question: “Why is the classroom so warm?”

Recall that a hypothesis is a suggested explanation that can be tested. To solve a problem, several hypotheses may be proposed. For example, one hypothesis might be, “The classroom is warm because no one turned on the air conditioning.” But there could be other responses to the question, and therefore other hypotheses may be proposed. A second hypothesis might be, “The classroom is warm because there is a power failure, and so the air conditioning doesn’t work.”

Once a hypothesis has been selected, a prediction may be made. A prediction is similar to a hypothesis but it typically has the format “If . . . then . . . .” For example, the prediction for the first hypothesis might be, “If the student turns on the air conditioning, then the classroom will no longer be too warm.”

A hypothesis must be testable to ensure that it is valid. For example, a hypothesis that depends on what a bear thinks is not testable, because it can never be known what a bear thinks. It should also be falsifiable, meaning that it can be disproven by experimental results. An example of an unfalsifiable hypothesis is “Botticelli’s Birth of Venus is beautiful.” There is no experiment that might show this statement to be false. To test a hypothesis, a researcher will conduct one or more experiments designed to eliminate one or more of the hypotheses. This is important: a hypothesis can be disproven, or eliminated, but it can never be proven. Science does not deal in proofs like mathematics. If an experiment fails to disprove a hypothesis, then we find support for that explanation, but this is not to say that a better explanation will not be found down the road, or that a more carefully designed experiment will later falsify the hypothesis.

Each experiment will have one or more variables and one or more controls. A variable is any part of the experiment that can vary or change during the experiment. A control is a part of the experiment that does not change. Look for the variables and controls in the example that follows. As a simple example, an experiment might be conducted to test the hypothesis that phosphate limits the growth of algae in freshwater ponds. A series of artificial ponds are filled with water, and half of them are treated by adding phosphate each week, while the other half are treated by adding a salt that is known not to be used by algae. The variable here is the phosphate (or lack of phosphate), the experimental or treatment cases are the ponds with added phosphate, and the control ponds are those with something inert added, such as the salt. Just adding something is also a control against the possibility that adding extra matter to the pond has an effect. If the treated ponds show greater growth of algae, then we have found support for our hypothesis. If they do not, then we reject our hypothesis. Be aware that rejecting one hypothesis does not determine whether or not the other hypotheses can be accepted; it simply eliminates one hypothesis that is not valid (Figure). Using the scientific method, the hypotheses that are inconsistent with experimental data are rejected.
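
To make the variable/control logic concrete, here is a minimal sketch in Python of how the pond comparison could be analyzed. The measurements are invented for illustration, and the two-sample t-test is one reasonable choice of comparison, not a method prescribed by the text:

    # Hypothetical data (illustrative only): algae growth in artificial ponds,
    # e.g., chlorophyll concentration in micrograms per liter.
    from scipy.stats import ttest_ind

    treated = [42.1, 39.8, 45.0, 41.3, 44.2]  # ponds with phosphate added weekly
    control = [21.5, 24.0, 19.8, 22.7, 20.9]  # ponds with an inert salt added

    # Two-sample t-test: does growth in treated ponds differ from the controls?
    t_stat, p_value = ttest_ind(treated, control)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    # Greater growth in the treated ponds, with a small p-value, would support
    # the hypothesis that phosphate limits algal growth in these ponds.

Note that the salt-treated ponds serve as the control group: they receive the same handling and the same act of adding matter, differing only in the variable under test.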

Example \(\PageIndex{1}\)

In the example below, the scientific method is used to solve an everyday problem. Which part in the example below is the hypothesis? Which is the prediction? Based on the results of the experiment, is the hypothesis supported? If it is not supported, propose some alternative hypotheses.

  1. My toaster doesn’t toast my bread.
  2. Why doesn’t my toaster work?
  3. There is something wrong with the electrical outlet.
  4. If something is wrong with the outlet, my coffeemaker also won’t work when plugged into it.
  5. I plug my coffeemaker into the outlet.
  6. My coffeemaker works.

Solution

The hypothesis is #3 (there is something wrong with the electrical outlet), and the prediction is #4 (if something is wrong with the outlet, then the coffeemaker also won’t work when plugged into the outlet). The original hypothesis is not supported, as the coffeemaker works when plugged into the outlet. Alternative hypotheses may include (1) the toaster might be broken or (2) the toaster wasn’t turned on.

In practice, the scientific method is not as rigid and structured as it might at first appear. Sometimes an experiment leads to conclusions that favor a change in approach; often, an experiment brings entirely new scientific questions to the puzzle. Many times, science does not operate in a linear fashion; instead, scientists continually draw inferences and make generalizations, finding patterns as their research proceeds. Scientific reasoning is more complex than the scientific method alone suggests.

Basic and Applied Science

The scientific community has been debating for the last few decades about the value of different types of science. Is it valuable to pursue science for the sake of simply gaining knowledge, or does scientific knowledge only have worth if we can apply it to solving a specific problem or bettering our lives? This question focuses on the differences between two types of science: basic science and applied science.

Basic science or “pure” science seeks to expand knowledge regardless of the short-term application of that knowledge. It is not focused on developing a product or a service of immediate public or commercial value. The immediate goal of basic science is knowledge for knowledge’s sake, though this does not mean that in the end it may not result in an application.

In contrast, applied science, or “technology,” aims to use science to solve real-world problems, making it possible, for example, to improve a crop yield, find a cure for a particular disease, or save animals threatened by a natural disaster. In applied science, the problem is usually defined for the researcher.

Some individuals may perceive applied science as “useful” and basic science as “useless.” A question these people might pose to a scientist advocating knowledge acquisition would be, “What for?” A careful look at the history of science, however, reveals that basic knowledge has resulted in many remarkable applications of great value. Many scientists think that a basic understanding of science is necessary before an application is developed; therefore, applied science relies on the results generated through basic science. Other scientists think that it is time to move on from basic science and instead to find solutions to actual problems. Both approaches are valid. It is true that there are problems that demand immediate attention; however, few solutions would be found without the help of the knowledge generated through basic science.

One example of how basic and applied science can work together to solve practical problems occurred after the discovery of DNA structure led to an understanding of the molecular mechanisms governing DNA replication. Strands of DNA, unique in every human, are found in our cells, where they provide the instructions necessary for life. During DNA replication, new copies of DNA are made, shortly before a cell divides to form new cells. Understanding the mechanisms of DNA replication enabled scientists to develop laboratory techniques that are now used to identify genetic diseases, pinpoint individuals who were at a crime scene, and determine paternity. Without basic science, it is unlikely that applied science would exist.

Another example of the link between basic and applied research is the Human Genome Project, a study in which each human chromosome was analyzed and mapped to determine the precise sequence of DNA subunits and the exact location of each gene. (The gene is the basic unit of heredity; an individual’s complete collection of genes is his or her genome.) Other organisms have also been studied as part of this project to gain a better understanding of human chromosomes. The Human Genome Project (Figure \(\PageIndex{6}\)) relied on basic research carried out with non-human organisms and, later, with the human genome. An important end goal eventually became using the data for applied research seeking cures for genetically related diseases.

While research efforts in both basic science and applied science are usually carefully planned, it is important to note that some discoveries are made by serendipity, that is, by means of a fortunate accident or a lucky surprise. Penicillin was discovered when biologist Alexander Fleming accidentally left a petri dish of Staphylococcus bacteria open. An unwanted mold grew, killing the bacteria. The mold turned out to be Penicillium, and a new antibiotic was discovered. Even in the highly organized world of science, luck—when combined with an observant, curious mind—can lead to unexpected breakthroughs.

Reporting Scientific Work

Whether scientific research is basic science or applied science, scientists must share their findings for other researchers to expand and build upon their discoveries. Communication and collaboration within and between subdisciplines of science are key to the advancement of knowledge in science. For this reason, an important aspect of a scientist’s work is disseminating results and communicating with peers. Scientists can share results by presenting them at a scientific meeting or conference, but this approach can reach only the limited few who are present. Instead, most scientists present their results in peer-reviewed articles that are published in scientific journals. Peer-reviewed articles are scientific papers that are reviewed, usually anonymously, by a scientist’s colleagues, or peers. These colleagues are qualified individuals, often experts in the same research area, who judge whether or not the scientist’s work is suitable for publication. The process of peer review helps to ensure that the research described in a scientific paper or grant proposal is original, significant, logical, and thorough. Grant proposals, which are requests for research funding, are also subject to peer review. Scientists publish their work so other scientists can reproduce their experiments under similar or different conditions to expand on the findings. The experimental results must be consistent with the findings of other scientists.

Many journals and much of the popular press do not use a peer-review system. A large number of online open-access journals, journals with articles available without cost, are now available, many of which use rigorous peer-review systems but some of which do not. Results of any studies published in these forums without peer review are not reliable and should not form the basis for other scientific work. In one exception, journals may allow a researcher to cite a personal communication from another researcher about unpublished results with the cited author’s permission.

Summary

Biology is the science that studies living organisms and their interactions with one another and their environments. Science attempts to describe and understand the nature of the universe in whole or in part. Science has many fields; those fields related to the physical world and its phenomena are considered natural sciences.

A hypothesis is a tentative explanation for an observation. A scientific theory is a well-tested and consistently verified explanation for a set of observations or phenomena. A scientific law is a description, often in the form of a mathematical formula, of the behavior of an aspect of nature under certain circumstances. Two types of logical reasoning are used in science. Inductive reasoning uses results to produce general scientific principles. Deductive reasoning is a form of logical thinking that predicts results by applying general principles. The common thread throughout scientific research is the use of the scientific method. Scientists present their results in peer-reviewed scientific papers published in scientific journals.

Science can be basic or applied. The main goal of basic science is to expand knowledge without any expectation of short-term practical application of that knowledge. The primary goal of applied research, however, is to solve practical problems.

Glossary

applied science
a form of science that solves real-world problems
basic science
science that seeks to expand knowledge regardless of the short-term application of that knowledge
control
a part of an experiment that does not change during the experiment
deductive reasoning
a form of logical thinking that uses a general statement to forecast specific results
descriptive science
a form of science that aims to observe, explore, and find things out
falsifiable
able to be disproven by experimental results
hypothesis
a suggested explanation for an event, which can be tested
hypothesis-based science
a form of science that begins with a specific explanation that is then tested
inductive reasoning
a form of logical thinking that uses related observations to arrive at a general conclusion
life science
a field of science, such as biology, that studies living things
natural science
a field of science that studies the physical world, its phenomena, and processes
peer-reviewed article
a scientific report that is reviewed by a scientist’s colleagues before publication
physical science
a field of science, such as astronomy, physics, and chemistry, that studies nonliving matter
science
knowledge that covers general truths or the operation of general laws, especially when acquired and tested by the scientific method
scientific law
a description, often in the form of a mathematical formula, for the behavior of some aspect of nature under certain specific conditions
scientific method
a method of research with defined steps that include experiments and careful observation
scientific theory
a thoroughly tested and confirmed explanation for observations or phenomena
variable
a part of an experiment that can vary or change

Self-assembly

Self-assembly in the classic sense can be defined as the spontaneous and reversible organization of molecular units into ordered structures by non-covalent interactions. The first property of a self-assembled system that this definition suggests is the spontaneity of the self-assembly process: the interactions responsible for the formation of the self-assembled system act on a strictly local level—in other words, the nanostructure builds itself.

Although self-assembly typically occurs between weakly-interacting species, this organization may be transferred into strongly-bound covalent systems. An example of this may be observed in the self-assembly of polyoxometalates. Evidence suggests that such molecules assemble via a dense-phase type mechanism whereby small oxometalate ions first assemble non-covalently in solution, followed by a condensation reaction that covalently binds the assembled units. [4] This process can be aided by the introduction of templating agents to control the formed species. [5] In such a way, highly organized covalent molecules may be formed in a specific manner.

A self-assembled nano-structure is an object that appears as a result of the ordering and aggregation of individual nano-scale objects, guided by some physical principle.

A particularly counter-intuitive example of a physical principle that can drive self-assembly is entropy maximization. Though entropy is conventionally associated with disorder, under suitable conditions [6] entropy can drive nano-scale objects to self-assemble into target structures in a controllable way. [7]

Another important class of self-assembly is field-directed assembly. An example of this is the phenomenon of electrostatic trapping. In this case an electric field is applied between two metallic nano-electrodes. The particles present in the environment are polarized by the applied electric field. Because of the dipole interaction with the electric field gradient, the particles are attracted to the gap between the electrodes. [8] Generalizations of this approach involving other types of fields, e.g., magnetic fields, capillary interactions for particles trapped at interfaces, and elastic interactions for particles suspended in liquid crystals, have also been reported.
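
The attraction toward the gap can be rationalised with a standard induced-dipole argument (a sketch in conventional notation, not taken from the cited work): a polarizable particle in a field \(\mathbf{E}\) acquires a dipole moment and an energy

\[
\mathbf{p} = \alpha \mathbf{E}, \qquad
U = -\tfrac{1}{2}\,\alpha\,|\mathbf{E}|^2, \qquad
\mathbf{F} = -\nabla U = \tfrac{\alpha}{2}\,\nabla |\mathbf{E}|^2 ,
\]

so for positive polarizability \(\alpha\) the force points up the gradient of \(|\mathbf{E}|^2\), that is, into the high-field region between the electrodes.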

Regardless of the mechanism driving self-assembly, people take self-assembly approaches to materials synthesis to avoid the problem of having to construct materials one building block at a time. Avoiding one-at-a-time approaches is important because the time required to place building blocks into a target structure one at a time is prohibitive for structures of macroscopic size.

Once materials of macroscopic size can be self-assembled, those materials can find use in many applications. For example, nano-structures such as nano-vacuum gaps are used for storing energy [9] and nuclear energy conversion. [10] Self-assembled tunable materials are promising candidates for large surface area electrodes in batteries and organic photovoltaic cells, as well as for microfluidic sensors and filters. [11]

Distinctive features

At this point, one may argue that any chemical reaction driving atoms and molecules to assemble into larger structures, such as precipitation, could fall into the category of self-assembly. However, there are at least three distinctive features that make self-assembly a distinct concept.

Order

First, the self-assembled structure must have a higher order than the isolated components, be it a shape or a particular task that the self-assembled entity may perform. This is generally not true in chemical reactions, where an ordered state may proceed towards a disordered state depending on thermodynamic parameters.

Interactions

The second important aspect of self-assembly is the predominant role of weak interactions (e.g., van der Waals, capillary, π–π, hydrogen bonds, or entropic forces) compared to more "traditional" covalent, ionic, or metallic bonds. These weak interactions are important in materials synthesis for two reasons.

First, weak interactions take a prominent place in materials, especially in biological systems. For instance, they determine the physical properties of liquids, the solubility of solids, and the organization of molecules in biological membranes. [12]

Second, in addition to the strength of the interactions, interactions with varying degrees of specificity can control self-assembly. Self-assembly mediated by DNA pairing interactions constitutes the highest-specificity interactions that have been used to drive self-assembly. [13] At the other extreme, the least specific interactions are possibly those provided by emergent forces that arise from entropy maximization. [14]

Building blocks

The third distinctive feature of self-assembly is that the building blocks are not only atoms and molecules, but span a wide range of nano- and mesoscopic structures, with different chemical compositions, functionalities, [15] and shapes. [16] Research into possible three-dimensional shapes of self-assembling micrites examines Platonic solids (regular polyhedra). The term ‘micrite’ was created by DARPA to refer to sub-millimeter sized microrobots, whose self-organizing abilities may be compared with those of slime mold. [17] [18] Recent examples of novel building blocks include polyhedra and patchy particles. [19] Examples also include microparticles with complex geometries, such as hemispheres, [20] dimers, [21] discs, [22] rods, molecules, [23] as well as multimers. These nanoscale building blocks can in turn be synthesized through conventional chemical routes or by other self-assembly strategies such as directional entropic forces. More recently, inverse design approaches have appeared where it is possible to fix a target self-assembled behavior, and determine an appropriate building block that will realize that behavior. [24]

Thermodynamics and kinetics

Self-assembly in microscopic systems usually starts from diffusion, followed by the nucleation of seeds, subsequent growth of the seeds, and ends at Ostwald ripening. The thermodynamic driving free energy can be either enthalpic or entropic or both. [25] In either the enthalpic or entropic case, self-assembly proceeds through the formation and breaking of bonds, [26] possibly with non-traditional forms of mediation. The kinetics of the self-assembly process are usually related to diffusion: the absorption/adsorption rate often follows a Langmuir adsorption model, which in the diffusion-controlled regime (relatively dilute solution) can be estimated from Fick's laws of diffusion. The desorption rate is determined by the bond strength of the surface molecules/atoms, with a thermal activation energy barrier. The growth rate is the competition between these two processes.
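
As a sketch of the rate balance described here (the notation is assumed for illustration, not drawn from the cited sources): with surface coverage \(\theta\), local concentration \(c\), and attachment and detachment rate constants \(k_a\) and \(k_d\), a Langmuir-type kinetic model reads

\[
\frac{d\theta}{dt} = k_a\, c\,(1-\theta) \;-\; k_d\,\theta,
\qquad k_d \propto e^{-E_a / k_B T},
\]

where the thermally activated desorption constant carries the activation energy barrier \(E_a\), and in the diffusion-controlled regime the concentration near the surface is set by Fick's laws, e.g. through the flux \(J = -D\,\partial c/\partial x\).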

Examples

Important examples of self-assembly in materials science include the formation of molecular crystals, colloids, lipid bilayers, phase-separated polymers, and self-assembled monolayers. [27] [28] The folding of polypeptide chains into proteins and the folding of nucleic acids into their functional forms are examples of self-assembled biological structures. Recently, a three-dimensional macroporous structure was prepared via self-assembly of a diphenylalanine derivative under cryoconditions; the obtained material can find application in regenerative medicine or drug delivery systems. [29] P. Chen et al. demonstrated a microscale self-assembly method using the air-liquid interface established by Faraday waves as a template. This self-assembly method can be used for the generation of diverse sets of symmetrical and periodic patterns from microscale materials such as hydrogels, cells, and cell spheroids. [30] Yasuga et al. demonstrated how fluid interfacial energy drives the emergence of three-dimensional periodic structures in micropillar scaffolds. [31] Myllymäki et al. demonstrated the formation of micelles that undergo a change in morphology to fibers and eventually to spheres, all controlled by solvent change. [32]

Properties

Self-assembly extends the scope of chemistry aiming at synthesizing products with order and functionality properties, extending chemical bonds to weak interactions and encompassing the self-assembly of nanoscale building blocks at all length scales. [33] In covalent synthesis and polymerization, the scientist links atoms together in any desired conformation, which does not necessarily have to be the energetically most favoured position; self-assembling molecules, on the other hand, adopt a structure at the thermodynamic minimum, finding the best combination of interactions between subunits but not forming covalent bonds between them. In self-assembling structures, the scientist must predict this minimum, not merely place the atoms in the location desired.

Another characteristic common to nearly all self-assembled systems is their thermodynamic stability. For self-assembly to take place without intervention of external forces, the process must lead to a lower Gibbs free energy; thus, self-assembled structures are thermodynamically more stable than the single, unassembled components. A direct consequence is the general tendency of self-assembled structures to be relatively free of defects. An example is the formation of two-dimensional superlattices composed of an orderly arrangement of micrometre-sized polymethylmethacrylate (PMMA) spheres, starting from a solution containing the microspheres, in which the solvent is allowed to evaporate slowly in suitable conditions. In this case, the driving force is capillary interaction, which originates from the deformation of the surface of a liquid caused by the presence of floating or submerged particles. [34]
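
In symbols, the condition stated here is simply

\[
\Delta G = \Delta H - T\,\Delta S < 0,
\]

so assembly can be driven by favourable interaction enthalpy (\(\Delta H < 0\)), by an entropy increase (\(\Delta S > 0\), as in the entropy-driven examples above), or by both.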

These two properties, weak interactions and thermodynamic stability, can be recalled to rationalise another property often found in self-assembled systems: their sensitivity to perturbations exerted by the external environment. Small fluctuations that alter thermodynamic variables can lead to marked changes in the structure and even compromise it, either during or after self-assembly. The weak nature of the interactions is responsible for the flexibility of the architecture and allows rearrangements of the structure in the direction determined by thermodynamics. If fluctuations bring the thermodynamic variables back to the starting condition, the structure is likely to return to its initial configuration. This leads us to identify one more property of self-assembly, which is generally not observed in materials synthesized by other techniques: reversibility.

Self-assembly is a process which is easily influenced by external parameters. This feature can make synthesis rather complex because of the need to control many free parameters. Yet self-assembly has the advantage that a large variety of shapes and functions on many length scales can be obtained. [35]

The fundamental condition needed for nanoscale building blocks to self-assemble into an ordered structure is the simultaneous presence of long-range repulsive and short-range attractive forces. [36]

By choosing precursors with suitable physicochemical properties, it is possible to exert fine control over the formation processes that produce complex structures. Clearly, the most important tool when it comes to designing a synthesis strategy for a material is knowledge of the chemistry of the building units. For example, it was demonstrated that it was possible to use diblock copolymers with different block reactivities in order to selectively embed maghemite nanoparticles and generate periodic materials with potential use as waveguides. [37]

In 2008 it was proposed that every self-assembly process presents a co-assembly, which makes the former term a misnomer. This thesis is built on the concept of mutual ordering of the self-assembling system and its environment. [38]

The most common examples of self-assembly at the macroscopic scale can be seen at interfaces between gases and liquids, where molecules can be confined at the nanoscale in the vertical direction and spread over long distances laterally. Examples of self-assembly at gas-liquid interfaces include breath-figures, self-assembled monolayers and Langmuir–Blodgett films, while crystallization of fullerene whiskers is an example of macroscopic self-assembly in between two liquids. [39] [40] Another remarkable example of macroscopic self-assembly is the formation of thin quasicrystals at an air-liquid interface, which can be built up not only by inorganic, but also by organic molecular units. [41] [42]

Self-assembly processes can also be observed in systems of macroscopic building blocks. These building blocks can be externally propelled [43] or self-propelled. [44] Since the 1950s, scientists have built self-assembly systems exhibiting centimeter-sized components ranging from passive mechanical parts to mobile robots. [45] For systems at this scale, the component design can be precisely controlled. For some systems, the components' interaction preferences are programmable. The self-assembly processes can be easily monitored and analyzed by the components themselves or by external observers. [46]

In April 2014, a 3D printed plastic was combined with a "smart material" that self-assembles in water, [47] resulting in "4D printing". [48]

People regularly use the terms "self-organization" and "self-assembly" interchangeably. As complex system science becomes more popular, though, there is a greater need to clearly distinguish the two mechanisms in order to understand their significance in physical and biological systems. Both processes explain how collective order develops from "dynamic small-scale interactions". [49] Self-organization is a non-equilibrium process, whereas self-assembly is a spontaneous process that leads toward equilibrium. Self-assembly requires components to remain essentially unchanged throughout the process. Besides the thermodynamic difference between the two, there is also a difference in formation. The first difference is that in self-assembly the components themselves "encode the global order of the whole", whereas in self-organization this initial encoding is not necessary. Another slight contrast is the minimum number of units needed to make an order: self-organization appears to have a minimum number of units, whereas self-assembly does not. The concepts may have particular application in connection with natural selection. [50] Eventually, these patterns may form one theory of pattern formation in nature. [51]


Imaginary meaning

But what exactly is the significance of the Fibonacci sequence? Other than being a neat teaching tool, it shows up in a few places in nature. However, it's not some secret code that governs the architecture of the universe, Devlin said.

It's true that the Fibonacci sequence is tightly connected to what's now known as the golden ratio (which is not even a true ratio because it's an irrational number). Simply put, the ratio of consecutive numbers in the sequence approaches the golden ratio, 1.6180339887498948482..., as the sequence goes to infinity. From there, mathematicians can calculate what's called the golden spiral, or a logarithmic spiral whose growth factor equals the golden ratio.
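
The convergence is easy to check numerically; a short Python sketch:

    # Ratios of consecutive Fibonacci numbers approach the golden ratio.
    from math import sqrt

    phi = (1 + sqrt(5)) / 2   # 1.6180339887...
    a, b = 1, 1               # F(1), F(2)
    for _ in range(12):
        a, b = b, a + b
        print(f"{b}/{a} = {b / a:.10f}")
    print(f"golden ratio = {phi:.10f}")

After a dozen steps the ratio already agrees with the golden ratio to four decimal places, and the agreement improves with every step.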

The golden ratio does seem to capture some types of plant growth, Devlin said. For instance, the spiral arrangement of leaves or petals on some plants follows the golden ratio. Pinecones exhibit a golden spiral, as do the seeds in a sunflower, according to "Phyllotaxis: A Systemic Study in Plant Morphogenesis" (Cambridge University Press, 1994). But there are just as many plants that do not follow this rule.

"It's not 'God's only rule' for growing things, let's put it that way," Devlin said.

And perhaps the most famous example of all, the seashell known as the nautilus, does not in fact grow new cells according to the Fibonacci sequence, he said.

When people start to draw connections to the human body, art and architecture, links to the Fibonacci sequence go from tenuous to downright fictional.

"It would take a large book to document all the misinformation about the golden ratio, much of which is simply the repetition of the same errors by different authors," George Markowsky, a mathematician who was then at the University of Maine, wrote in a 1992 paper in the College Mathematics Journal.

Much of this misinformation can be attributed to an 1855 book by the German psychologist Adolf Zeising. Zeising claimed the proportions of the human body were based on the golden ratio. The golden ratio sprouted "golden rectangles," "golden triangles" and all sorts of theories about where these iconic dimensions crop up. Since then, people have said the golden ratio can be found in the dimensions of the Pyramid at Giza, the Parthenon, Leonardo da Vinci's "Vitruvian Man" and a bevy of Renaissance buildings. Overarching claims about the ratio being "uniquely pleasing" to the human eye have been stated uncritically, Devlin said.

All these claims, when they're tested, are measurably false, Devlin said.

"We're good pattern recognizers. We can see a pattern regardless of whether it's there or not," Devlin said. "It's all just wishful thinking."


Hypothesis Testing in ANOVA

The null hypothesis can be thought of as the opposite of the "guess" the researcher made (in this example, the biologist thinks the plant height will be different for the fertilizers). So the null would be that there will be no difference among the groups of plants. Specifically, in more statistical language, the null for an ANOVA is that the means are the same. We state the null hypothesis as:

\(H_0 \colon \mu_1 = \mu_2 = \cdots = \mu_k\)

for k levels of an experimental treatment.

We state the alternative hypothesis verbally, as "not all of the means are equal." The reason we state the alternative hypothesis this way is that if the null is rejected, there are many possibilities.

For example, \(\mu_1 \ne \mu_2 = \cdots = \mu_k\) is one possibility, as is \(\mu_1 = \mu_2 \ne \mu_3 = \cdots = \mu_k\). Many people make the mistake of stating the alternative hypothesis as \(\mu_1 \ne \mu_2 \ne \cdots \ne \mu_k\), which says that every mean differs from every other mean. This is a possibility, but only one of many possibilities. To cover all alternative outcomes, we resort to a verbal statement of ‘not all equal’ and then follow up with mean comparisons to find out where differences among means exist. In our example, this means that fertilizer 1 may result in plants that are really tall, but fertilizers 2, 3 and the plants with no fertilizers don't differ from one another. A simpler way of thinking about this is that at least one mean is different from the others.
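
To make the fertilizer example concrete, here is a minimal sketch in Python. The plant heights are invented for illustration, and scipy's one-way ANOVA stands in for the hand computations developed in later lessons:

    # Hypothetical plant heights (cm) for k = 4 treatment levels:
    # three fertilizers plus an unfertilized control (data invented).
    from scipy.stats import f_oneway

    fert1   = [62.1, 65.3, 63.8, 66.0]   # noticeably taller by construction
    fert2   = [54.2, 52.8, 55.1, 53.5]
    fert3   = [53.9, 55.0, 52.4, 54.6]
    control = [53.1, 54.4, 52.9, 53.8]

    # One-way ANOVA of H0: mu1 = mu2 = mu3 = mu4.
    f_stat, p_value = f_oneway(fert1, fert2, fert3, control)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
    # Rejecting H0 says only that not all means are equal; follow-up mean
    # comparisons are needed to locate the differences (here, fertilizer 1).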

If we look at what can happen in a hypothesis test, we can construct the following contingency table:

                           \(H_0\) is true              \(H_0\) is false
  Reject \(H_0\)           Type I error (\(\alpha\))    correct decision
  Fail to reject \(H_0\)   correct decision             Type II error (\(\beta\))

\(\alpha\) = probability of a Type I error; \(\beta\) = probability of a Type II error

You should be familiar with Type I and Type II errors from your introductory course. It is important to note that we want to set \(\alpha\) before the experiment (a priori) because the Type I error is the more ‘grievous’ error to make. The typical value of \(\alpha\) is 0.05, establishing a 95% confidence level. For this course we will assume \(\alpha\) = 0.05.

Remember the importance of recognizing whether data are collected through experimental design or observational study.

For categorical treatment level means, we use an F statistic, named after R.A. Fisher. We will explore the mechanics of computing the F statistic beginning in Lesson 2. The F value we get from the data is labeled \(F_{\text{calculated}}\).

As with all other test statistics, a threshold (critical) value of F is established. This F value can be obtained from statistical tables and is referred to as \(F_{\text{critical}}\) or \(F_\alpha\). As a reminder, this critical value is the minimum value of the test statistic (in this case the F statistic) needed for us to be able to reject the null.
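
Rather than reading a table, the critical value can also be computed directly; a sketch, with degrees of freedom chosen to match the hypothetical fertilizer example above (\(k = 4\) groups, \(N = 16\) observations):

    # Critical F value for alpha = 0.05.
    from scipy.stats import f

    alpha = 0.05
    df_num = 4 - 1     # treatment degrees of freedom: k - 1
    df_den = 16 - 4    # error degrees of freedom: N - k
    f_crit = f.ppf(1 - alpha, df_num, df_den)
    print(f"F_critical = {f_crit:.3f}")   # about 3.49
    # Reject H0 when F_calculated exceeds F_critical.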

The F distribution, \(F_\alpha\), and the location of the acceptance and rejection regions are shown in the graph below:

