Uncanny valley

Hypothesized emotional response of subjects is plotted against anthropomorphism of a robot, according to Masahiro Mori's statements. The uncanny valley is the region of negative emotional response towards robots that seem "almost" human. Movement amplifies the emotional response.

The uncanny valley (Japanese: 不気味の谷, Hepburn: bukimi no tani) effect is a hypothesized psychological and aesthetic relation between an object's degree of resemblance to a human being and the emotional response to the object. Examples of the phenomenon exist in robotics, 3D computer animation, and lifelike dolls. The increasing prevalence of digital technologies (e.g., virtual reality, augmented reality, and photorealistic computer animation) has made the "valley" an increasingly common subject of discussion and citation, lending the construct greater perceived validity. The uncanny valley hypothesis predicts that an entity appearing almost human will risk eliciting eerie feelings in viewers.

Etymology

Robotics professor Masahiro Mori first introduced the concept in a 1970 essay titled Bukimi No Tani (不気味の谷), phrasing it as bukimi no tani genshō (不気味の谷現象, lit. 'uncanny valley phenomenon'). Bukimi no tani was translated literally as uncanny valley in the 1978 book Robots: Fact, Fiction, and Prediction, written by Jasia Reichardt. Over time, this translation created an unintended association of the concept with Ernst Jentsch's psychoanalytic concept of the uncanny, established in his 1906 essay On the Psychology of the Uncanny (German: Zur Psychologie des Unheimlichen) and later critiqued and extended in Sigmund Freud's 1919 essay The Uncanny (German: Das Unheimliche).

Hypothesis

In an experiment involving the human lookalike robot Repliee Q2, the uncovered robotic structure underneath Repliee, and the actual human who was the model for Repliee, the human lookalike elicited the greatest degree of mirror neuron activity.

Mori's original hypothesis states that as the appearance of a robot is made more human, some observers' emotional response to the robot becomes increasingly positive and empathetic, until it becomes almost human, at which point the response quickly turns to strong revulsion. However, as the robot's appearance continues to become less distinguishable from that of a human being, the emotional response becomes positive once again and approaches human-to-human empathy levels. When plotted on a graph, the reactions show a steep decrease followed by a steep increase (hence the "valley" in the name) in the region where anthropomorphism approaches reality.

This interval of repulsive response aroused by a robot with appearance and motion between a "somewhat human" and "fully human" entity is the uncanny valley effect. The name represents the idea that an almost human-looking robot seems overly "strange" to some human beings, produces a feeling of uncanniness, and thus fails to evoke the empathic response required for productive human–robot interaction.
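Mori described this relationship only qualitatively and gave no formula. The short Python sketch below uses an arbitrary, hypothetical function (not derived from any data) purely to reproduce the qualitative shape he described: affinity rising with human likeness, dipping sharply just short of full human likeness, and then recovering.

```python
# Illustrative sketch only: Mori gave no equation, so this curve is an arbitrary
# function chosen to reproduce the qualitative shape he described
# (rising affinity, a sharp dip near full human likeness, then recovery).
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0.0, 1.0, 500)   # 0 = fully mechanical, 1 = healthy human
# Hypothetical curve: a gentle rise plus a narrow negative "valley" near likeness ~ 0.85.
affinity = likeness - 1.6 * np.exp(-((likeness - 0.85) ** 2) / 0.004)

plt.plot(likeness, affinity)
plt.axhline(0, color="gray", linewidth=0.5)
plt.xlabel("Human likeness")
plt.ylabel("Affinity (emotional response)")
plt.title("Qualitative sketch of the hypothesized uncanny valley")
plt.show()
```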

Theoretical basis

A number of theories have been proposed to explain the cognitive mechanism causing the phenomenon:

  • Mate selection: Automatic, stimulus-driven appraisals of uncanny stimuli elicit aversion by activating an evolved cognitive mechanism for the avoidance of selecting mates with low fertility, poor hormonal health, or ineffective immune systems based on visible features of the face and body that are predictive of those traits.
  • Mortality salience: Viewing an "uncanny" robot elicits an innate fear of death and culturally supported defenses for coping with death's inevitability.... [P]artially disassembled androids...play on subconscious fears of reduction, replacement, and annihilation: (1) A mechanism with a human façade and a mechanical interior plays on our subconscious fear that we are all just soulless machines. (2) Androids in various states of mutilation, decapitation, or disassembly are reminiscent of a battlefield after a conflict and, as such, serve as a reminder of our mortality. (3) Since most androids are copies of actual people, they are doppelgängers and may elicit a fear of being replaced, on the job, in a relationship, and so on. (4) The jerkiness of an android's movements could be unsettling because it elicits a fear of losing bodily control.
  • Pathogen avoidance: Uncanny stimuli may activate a cognitive mechanism that originally evolved to motivate the avoidance of potential sources of pathogens by eliciting a disgust response. "The more human an organism looks, the stronger the aversion to its defects, because (1) defects indicate disease, (2) more human-looking organisms are more closely related to human beings genetically, and (3) the probability of contracting disease-causing bacteria, viruses, and other parasites increases with genetic similarity." The visual anomalies of androids, robots, and other animated human characters cause reactions of alarm and revulsion, similar to corpses and visibly diseased individuals.
  • Sorites paradoxes: Stimuli with human and nonhuman traits undermine our sense of human identity by linking qualitatively different categories, human and nonhuman, by a quantitative metric: degree of human likeness.
  • Violation of human norms: If an entity looks sufficiently nonhuman, its human characteristics are noticeable, generating empathy. However, if the entity looks almost human, it elicits our model of a human other and its detailed normative expectations. The nonhuman characteristics are noticeable, giving the human viewer a sense of strangeness. In other words, a robot which has an appearance in the uncanny valley range is not judged as a robot doing a passable job at pretending to be human, but instead as an abnormal human doing a bad job at seeming like a normal person. This has been associated with perceptual uncertainty and the theory of predictive coding.
  • Conflicting perceptual cues: The negative affect associated with uncanny stimuli is produced by the activation of conflicting cognitive representations. Perceptual tension occurs when an individual perceives conflicting cues to category membership, such as when a humanoid figure moves like a robot, or has other visible robot features. This cognitive conflict is experienced as psychological discomfort (i.e., "eeriness"), much like the discomfort that is experienced with cognitive dissonance. Several studies support this possibility. Mathur and Reichling found that the time subjects took to gauge a robot face's human or mechanical resemblance peaked for faces deepest in the uncanny valley, suggesting that perceptually classifying these faces as "human" or "robot" posed a greater cognitive challenge. However, they found that while perceptual confusion coincided with the uncanny valley, it did not mediate the effect of the uncanny valley on subjects' social and emotional reactions, suggesting that perceptual confusion may not be the mechanism behind the uncanny valley effect. Burleigh and colleagues demonstrated that faces at the midpoint between human and non-human stimuli produced a level of reported eeriness that diverged from an otherwise linear model relating human-likeness to affect. Yamada et al. found that cognitive difficulty was associated with negative affect at the midpoint of a morphed continuum (e.g., a series of stimuli morphing between a cartoon dog and a real dog). Ferrey et al. demonstrated that the midpoint between images on a continuum anchored by two stimulus categories produced a maximum of negative affect, and found this with both human and non-human entities. Schoenherr and Burleigh provide examples from history and culture that evidence an aversion to hybrid entities, such as the aversion to genetically modified organisms ("Frankenfoods"). Finally, Moore developed a Bayesian mathematical model that provides a quantitative account of perceptual conflict (a generic sketch of category-boundary uncertainty is given after this list). There has been some debate as to the precise mechanisms that are responsible. It has been argued that the effect is driven by categorization difficulty, configural processing, perceptual mismatch, frequency-based sensitization, and inhibitory devaluation.
  • Threat to humans' distinctiveness and identity: Negative reactions toward very humanlike robots can be related to the challenge that this kind of robot poses to the categorical human–nonhuman distinction. Kaplan stated that these new machines challenge human uniqueness, pushing for a redefinition of humanness. Ferrari, Paladino and Jetten found that an increase in the anthropomorphic appearance of a robot heightens the perceived threat to human distinctiveness and identity. The more a robot resembles a real person, the more it represents a challenge to our social identity as human beings.
  • Religious definition of human identity: The existence of artificial but humanlike entities is viewed by some as a threat to the concept of human identity. An example can be found in the theoretical framework of psychiatrist Irvin Yalom. Yalom explains that humans construct psychological defenses to avoid existential anxiety stemming from death. One of these defenses is 'specialness', the irrational belief that aging and death as central premises of life apply to all others but oneself. The experience of the very humanlike "living" robot can be so rich and compelling that it challenges humans' notions of "specialness" and existential defenses, eliciting existential anxiety. In folklore, the creation of human-like, but soulless, beings is often shown to be unwise, as with the golem in Judaism, whose absence of human empathy and spirit can lead to disaster, however good the intentions of its creator.
  • Uncanny valley of the mind or AI: Due to rapid advancements in the areas of artificial intelligence and affective computing, cognitive scientists have also suggested the possibility of an "uncanny valley of mind". Accordingly, people might experience strong feelings of aversion if they encounter highly advanced, emotion-sensitive technology. Among the possible explanations for this phenomenon, both a perceived loss of human uniqueness and expectations of immediate physical harm are discussed by contemporary research.
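As a generic illustration of the category-boundary uncertainty invoked by the conflicting-cues account above (and not a reproduction of Moore's actual Bayesian model), the following Python sketch assumes two hypothetical Gaussian categories, "robot" and "human", along a human-likeness axis, and shows that the entropy of the posterior classification peaks where the two categories conflict.

```python
# A generic illustration (not Moore's published model) of categorization
# uncertainty peaking at a category boundary: two hypothetical Gaussian
# categories ("robot" and "human") along a human-likeness axis.
import numpy as np

likeness = np.linspace(0.0, 1.0, 101)
# Hypothetical category likelihoods (equal priors assumed).
p_robot = np.exp(-((likeness - 0.2) ** 2) / (2 * 0.15 ** 2))
p_human = np.exp(-((likeness - 0.8) ** 2) / (2 * 0.15 ** 2))
posterior_human = p_human / (p_human + p_robot)

# Shannon entropy of the two-way posterior: 0 when classification is certain,
# maximal (1 bit) where the stimulus is ambiguous between the two categories.
eps = 1e-12
entropy = -(posterior_human * np.log2(posterior_human + eps)
            + (1 - posterior_human) * np.log2(1 - posterior_human + eps))
print("Likeness with peak uncertainty:", likeness[np.argmax(entropy)])  # ~0.5
```

On this toy account, stimuli near the boundary are maximally ambiguous, which is the kind of perceptual conflict the conflicting-cues theories associate with eeriness.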

Research

An empirically estimated uncanny valley for static robot face images.

A series of studies experimentally investigated whether uncanny valley effects exist for static images of robot faces. Mathur and Reichling used two complementary sets of stimuli spanning the range from very mechanical to very human-like: first, a sample of 80 robot face images objectively chosen from Internet searches, and second, a morphometrically and graphically controlled series of six faces. They asked subjects to explicitly rate the likability of each face. To measure trust toward each face, subjects completed an investment game that indirectly gauged how much money they were willing to "wager" on a robot's trustworthiness. Both stimulus sets showed a robust uncanny valley effect on explicitly rated likability and a more context-dependent uncanny valley on implicitly rated trust. Their exploratory analysis of one proposed mechanism, perceptual confusion at a category boundary, found that category confusion occurs in the uncanny valley but does not mediate the effect on social and emotional responses.
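The sketch below is a generic illustration of how a "valley" can be located in rating data, not Mathur and Reichling's published analysis: it generates hypothetical likability ratings over synthetic human-likeness scores, fits a cubic polynomial, and reports any interior local minimum of the fitted curve as the estimated valley.

```python
# A generic illustration (not the authors' published analysis) of locating a
# "valley": fit a cubic to likability as a function of human-likeness scores
# and report any interior local minimum of the fitted curve.
import numpy as np

rng = np.random.default_rng(0)
likeness = rng.uniform(0.0, 1.0, 80)            # hypothetical scores for 80 robot faces
true_shape = 8 * likeness**3 - 10.8 * likeness**2 + 3.36 * likeness  # rise, dip, recover
likability = true_shape + rng.normal(0.0, 0.15, likeness.size)       # noisy synthetic ratings

fit = np.poly1d(np.polyfit(likeness, likability, deg=3))
# Interior critical points with zero slope and positive curvature are local minima.
roots = fit.deriv().roots
valleys = [float(r.real) for r in roots
           if abs(r.imag) < 1e-9 and 0.0 < r.real < 1.0 and fit.deriv(2)(r.real) > 0]
print("Estimated valley location(s) on the likeness axis:", np.round(valleys, 2))
```

With real data, confidence in a valley would also require comparing such a fit against simpler monotonic models rather than assuming a cubic shape.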

One study conducted in 2009 examined the evolutionary mechanism behind the aversion associated with the uncanny valley. A group of five monkeys were shown three images: two different 3D monkey faces (realistic, unrealistic) and a real photo of a monkey's face. The monkeys' eye-gaze was used as a proxy for preference or aversion. Since the realistic 3D monkey face was looked at less than either the real photo or the unrealistic 3D monkey face, this was interpreted as an indication that the monkeys found the realistic 3D face aversive, or otherwise preferred the other two images. As the uncanny valley hypothesis would predict, more realism produced less positive reactions, and because the effect appeared in monkeys, the authors argued that it cannot be explained by human-specific cognitive processes or human culture alone; in other words, the aversive reaction to near-realism may be evolutionary in origin.

As of 2011, researchers at University of California, San Diego and California Institute for Telecommunications and Information Technology were measuring human brain activations related to the uncanny valley. In one study using fMRI, a group of cognitive scientists and roboticists found the biggest differences in brain responses for uncanny robots in parietal cortex, on both sides of the brain, specifically in the areas that connect the part of the brain's visual cortex that processes bodily movements with the section of the motor cortex thought to contain mirror neurons. The researchers say they saw, in essence, evidence of mismatch or perceptual conflict. The brain "lit up" when the human-like appearance of the android and its robotic motion "didn't compute". Ayşe Pınar Saygın, an assistant professor from UCSD, stated that "The brain doesn't seem selectively tuned to either biological appearance or biological motion per se. What it seems to be doing is looking for its expectations to be met – for appearance and motion to be congruent."

Viewer perception of facial expression and speech, and the uncanny valley in realistic, human-like characters intended for video games and movies, has been investigated by Tinwell et al. (2011). Tinwell et al. (2010) also consider how the uncanny may be exaggerated for antipathetic characters in survival horror games. Building on the body of work already performed in android science, this research aims to build a conceptual mapping of the uncanny valley using 3D characters generated in a real-time game engine, and to analyze how cross-modal factors of facial expression and speech can exaggerate the uncanny. Tinwell et al. (2011) have also introduced the notion of an "unscalable" uncanny wall, which suggests that a viewer's discernment in detecting imperfections in realism will keep pace with new technologies for simulating realism. A summary of Angela Tinwell's research on the uncanny valley, the psychological reasons behind it, and how designers may overcome the uncanny in human-like virtual characters is provided in her book The Uncanny Valley in Games and Animation (CRC Press).

Studies in 2015 and 2018 observed that autistic individuals were less affected by the uncanny valley, and autistic children not at all. The suspected causes were a reduced sensitivity to subtle facial changes and more limited visual experience resulting from diminished social motivation. Conversely, the social ostracism of autistic people may in part be caused by an uncanny valley effect operating in neurotypical society: efforts by autistic individuals to appear neurotypical may be perceived by neurotypical people as subtly atypical or "creepy". Outing or improved masking may help autistic individuals in such cases.

Design principles

A number of design principles have been proposed for avoiding the uncanny valley:

  • Design elements should match in human realism. A robot may look uncanny when human and nonhuman elements are mixed. For example, both a robot with a synthetic voice and a human being with a human voice have been found to be less eerie than a robot with a human voice or a human being with a synthetic voice. For a robot to give a more positive impression, its degree of human realism in appearance should also match its degree of human realism in behavior. If an animated character looks more human than its movement, this gives a negative impression. Human neuroimaging studies also indicate that matching appearance and motion kinematics is important.
  • Reducing conflict and uncertainty by matching appearance, behavior, and ability. In terms of performance, if a robot looks too appliance-like, people expect little from it; if it looks too human, people expect too much from it. A highly human-like appearance leads to an expectation that certain behaviors are present, such as humanlike motion dynamics. This likely operates at a sub-conscious level and may have a biological basis. Neuroscientists have noted "when the brain's expectations are not met, the brain...generates a 'prediction error'. As human-like artificial agents become more commonplace, perhaps our perceptual systems will be re-tuned to accommodate these new social partners. Or perhaps, we will decide 'it is not a good idea to make [robots] so clearly in our image after all'."
  • Human facial proportions and photorealistic texture should only be used together. A photorealistic human texture demands human facial proportions, or the computer generated character can result in the uncanny valley. Abnormal facial proportions, including those typically used by artists to enhance attractiveness (e.g., larger eyes), can look eerie with a photorealistic human texture.

Criticism

A number of criticisms have been raised concerning whether the uncanny valley exists as a unified phenomenon amenable to scientific scrutiny:

  • The uncanny valley effect is a heterogeneous group of phenomena. Phenomena considered as exhibiting the uncanny valley effect can be diverse, involve different sense modalities, and have multiple, possibly overlapping causes. People's cultural heritage may have a considerable influence on how androids are perceived with respect to the uncanny valley.
  • The uncanny valley effect may be generational. Younger generations, more used to computer-generated imagery (CGI), robots, and the like, may be less likely to be affected by this hypothesized issue.
  • The uncanny valley effect is simply a specific case of information processing such as categorization and frequency-based effects. In contrast to the assumption that the uncanny valley is based on a heterogeneous group of phenomena, recent arguments have suggested that uncanny valley-like phenomena simply represent the products of information processing such as categorization. Cheetham et al. have argued that the uncanny valley effect can be understood in terms of categorization processes, with a category boundary defining 'the valley'. Extending this argument, Burleigh and Schoenherr suggested that the effects associated with the uncanny valley can be divided into those attributable to the category boundary and individual exemplar frequency. Namely, the negative affective responses attributed to the uncanny valley were simply a result of the frequency of exposure, similar to the mere-exposure effect. By varying the frequency of training items, they were able to demonstrate a dissociation between cognitive uncertainty based on the category boundary and affective uncertainty based on the frequency of training exemplars. In a follow-up study, Schoenherr and Burleigh demonstrated that an instructional manipulation affected categorization accuracy but not ratings of negative affect. Thus, generational effects and cultural artifacts can be accounted for with basic information processing mechanisms. These and related findings have been used to argue that the uncanny valley is merely an artifact of having greater familiarity with members of human categories and does not reflect a unique phenomenon.
  • The uncanny valley effect occurs at any degree of human likeness. David Hanson has stated that uncanny entities may appear anywhere in a spectrum ranging from the abstract (e.g., MIT's robot Lazlo) to the perfectly human (e.g., cosmetically atypical people). Capgras delusion is a relatively rare condition in which the patient believes that people (or, in some cases, things) have been replaced with duplicates. These duplicates are accepted rationally as identical in physical properties, but the irrational belief is held that the "true" entity has been replaced with something else. Some people with Capgras delusion claim that the duplicate is a robot. Ellis and Lewis argue that the delusion arises from an intact system for overt recognition coupled with a damaged system for covert recognition, which results in conflict over an individual being identifiable but not familiar in any emotional sense. This supports the view that the uncanny valley effect could arise from issues of categorical perception that are particular to how the brain processes information.
  • Good design can avoid the uncanny valley effect. Hanson has also criticized Mori's hypothesis that entities having an almost human appearance will necessarily be evaluated negatively. He has shown that the uncanny valley effect could be eliminated by adding neotenous, cartoonish features to entities that had formerly caused it. This method incorporates the idea, widely used in cartoons, that humans find characteristics appealing when they are reminiscent of the young of our own (as well as many other) species.

Similar effects

If the uncanny valley effect is the result of general cognitive processes, there should be evidence in evolutionary history and cultural artifacts. An effect similar to the uncanny valley was noted by Charles Darwin in 1839:

The expression of this [Trigonocephalus] snake's face was hideous and fierce; the pupil consisted of a vertical slit in a mottled and coppery iris; the jaws were broad at the base, and the nose terminated in a triangular projection. I do not think I ever saw anything more ugly, excepting, perhaps, some of the vampire bats. I imagine this repulsive aspect originates from the features being placed in positions, with respect to each other, somewhat proportional to the human face; and thus we obtain a scale of hideousness.

— Charles Darwin, The Voyage of the Beagle

A similar "uncanny valley" effect could, according to the ethical-futurist writer Jamais Cascio, show up when humans begin modifying themselves with transhuman enhancements (cf. body modification), which aim to improve the abilities of the human body beyond what would normally be possible, be it eyesight, muscle strength, or cognition. So long as these enhancements remain within a perceived norm of human behavior, a negative reaction is unlikely, but once individuals supplant normal human variety, revulsion can be expected. However, according to this theory, once such technologies gain further distance from human norms, "transhuman" individuals would cease to be judged on human levels and instead be regarded as separate entities altogether (this point is what has been dubbed "posthuman"), and it is here that acceptance would rise once again out of the uncanny valley. Another example comes from "pageant retouching" photos, especially of children, which some find disturbingly doll-like.

In visual effects

A number of movies that use computer-generated imagery to show characters have been described by reviewers as giving a feeling of revulsion or "creepiness" as a result of the characters looking too realistic. Examples include the following:

  • According to roboticist Dario Floreano, the baby character Billy in Pixar's groundbreaking 1988 animated short movie Tin Toy provoked negative audience reactions, which first caused the movie industry to consider the concept of the uncanny valley seriously.
  • The 2001 movie Final Fantasy: The Spirits Within, the first photorealistic computer-animated feature movie, provoked negative reactions from some viewers due to its near-realistic yet imperfect visual depictions of human characters. The Guardian critic Peter Bradshaw stated that while the movie's animation is brilliant, the "solemnly realist human faces look shriekingly phoney precisely because they're almost there but not quite". Rolling Stone critic Peter Travers wrote of the movie, "At first it's fun to watch the characters, [...] But then you notice a coldness in the eyes, a mechanical quality in the movements".
  • Several reviewers of the 2004 animated movie The Polar Express termed its animation eerie. CNN.com reviewer Paul Clinton wrote, "Those human characters in the film come across as downright... well, creepy. So The Polar Express is at best disconcerting, and at worst, a wee bit horrifying". The term "eerie" was used by reviewers Kurt Loder and Manohla Dargis, among others. Newsday reviewer John Anderson called the movie's characters "creepy" and "dead-eyed", and wrote that "The Polar Express is a zombie train". Animation director Ward Jenkins wrote an online analysis describing how changes to the Polar Express characters' appearance, especially to their eyes and eyebrows, could have avoided what he considered a feeling of deadness in their faces.
  • In a review of the 2007 animated movie Beowulf, New York Times technology writer David Gallagher wrote that the movie failed the uncanny valley test, stating that the movie's villain, the monster Grendel, was "only slightly scarier" than the "closeups of our hero Beowulf's face... allowing viewers to admire every hair in his 3-D digital stubble".
  • Some reviewers of the 2009 animated film A Christmas Carol criticized its animation as creepy. Joe Neumaier of the New York Daily News said of the movie, "The motion-capture does no favors to co-stars [Gary] Oldman, Colin Firth and Robin Wright Penn, since, as in 'Polar Express,' the animated eyes never seem to focus. And for all the photorealism, when characters get wiggly-limbed and bouncy as in standard Disney cartoons, it's off-putting". Mary Elizabeth Williams of Salon.com wrote of the film, "In the center of the action is Jim Carrey -- or at least a dead-eyed, doll-like version of Carrey".
  • The 2011 animated movie Mars Needs Moms was widely criticized for being creepy and unnatural because of its style of animation. The movie was among the biggest box office bombs in history, which may have been due in part to audience revulsion. (Mars Needs Moms was produced by Robert Zemeckis's production company, ImageMovers, which had previously produced The Polar Express, Beowulf, and A Christmas Carol.)
  • Reviewers had mixed opinions regarding whether the 2011 animated movie The Adventures of Tintin: The Secret of the Unicorn was affected by the uncanny valley effect. Daniel D. Snyder of The Atlantic wrote, "Instead of trying to bring to life Herge's beautiful artwork, Spielberg and co. have opted to bring the movie into the 3D era using trendy motion-capture technique to recreate Tintin and his friends. Tintin's original face, while barebones, never suffered for a lack of expression. It's now outfitted with an alien and unfamiliar visage, his plastic skin dotted with pores and subtle wrinkles." He added, "In bringing them to life, Spielberg has made the characters dead." N.B. of The Economist termed elements of the animation "grotesque", writing, "Tintin, Captain Haddock and the others exist in settings that are almost photo-realistic, and nearly all of their features are those of flesh-and-blood people. And yet they still have the sausage fingers and distended noses of comic-strip characters. It's not so much 'The Secret of the Unicorn' as 'The Invasion of the Body Snatchers'". However, other reviewers felt that the movie avoided the uncanny valley effect despite its animated characters' realism. Critic Dana Stevens of Slate wrote, "With the possible exception of the title character, the animated cast of Tintin narrowly escapes entrapment in the so-called 'uncanny valley'". Wired magazine editor Kevin Kelly wrote of the movie, "we have passed beyond the uncanny valley into the plains of hyperreality".
  • In 2014, the titular protagonist of Bob the Builder received a redesign that was described by some as "creepy".
  • The French movie Animal Kingdom: Let's Go Ape, which uses motion capture, was criticized for apes that look creepy: one review noted their "weirdly humanoid figures" and "recognisably human faces".
  • The 2019 film The Lion King, a remake of the 1994 film that featured photo-realistic digital animals instead of the earlier movie's more traditional animation, divided critics about the effectiveness of its imagery. Ann Hornaday of The Washington Post wrote that the images were so realistic that "2019 might best be remembered as the summer we left the Uncanny Valley for good". However, other critics felt that the realism of the animals and setting rendered the scenes where the characters sing and dance disturbing and "weird".
  • The 2020 movie Sonic the Hedgehog was delayed for three months to make the title character's appearance less human-like and more cartoonish, after an extremely negative audience reaction to the movie's first trailer.
  • Multiple commentators cited the CGI half-human half-cat characters in the 2019 movie Cats as an example of the uncanny valley effect, first after the release of the trailer for the movie and then after the movie's actual release.
  • In the 2022 Disney animated movie Chip 'n Dale: Rescue Rangers, the uncanny valley is mentioned when the animated duo visits a place where several realistic CGI characters, including a Cats cameo from the 2019 movie, are inhabitants.
  • In the 2022 Disney+ series She-Hulk: Attorney at Law, the appearance of the main character, She-Hulk, who is depicted via CGI, was criticized by some reviewers as suffering from the uncanny valley effect, and negatively compared to the appearance of the Hulk in the same series.

Virtual actors

An increasingly common practice is to feature virtual actors in movies: CGI likenesses of real actors used because the original actor either looks too old for the part or is deceased. Sometimes a virtual actor is created with involvement from the original actor (who may contribute motion capture, audio, etc.), while at other times the actor has no involvement. Reviewers have often criticized the use of virtual actors for its uncanny valley effect, saying it adds an eerie feeling to the movie. Examples of virtual actors that have received such criticism include replicas of Arnold Schwarzenegger in Terminator Salvation (2009) and Terminator Genisys (2015), Jeff Bridges in Tron: Legacy (2010), Peter Cushing and Carrie Fisher in Rogue One (2016), and Will Smith in Gemini Man (2019).

The use of virtual actors is in contrast with digital de-aging, which can involve simply removing wrinkles from actors' faces. This practice has generally not been criticized for uncanny valley effects. One exception is the 2019 movie The Irishman, in which Robert De Niro, Al Pacino and Joe Pesci were all de-aged to try to make them look as much as 50 years younger: one reviewer wrote that the actors' "hunched and stiff" body language stood in marked contrast to their facial appearance, while another wrote that when De Niro's character was in his 30s, he looked like he was 50.

Deepfake software, which first began to be widely used during 2017, uses machine learning to graft one person's facial expressions onto another's appearance, thus providing an alternate approach to both creating virtual actors and digital de-aging. Various individuals have created web videos that use deepfake software to re-create some of the notable previous uses of virtual actors and de-aging in movies. Journalists have tended to praise these deepfake imitations, calling them "more naturalistic" and "objectively better" than the originals.
