Where Do Babies Come From? Induced Pluripotent Stem Cells…Sort of

Recent headlines tout that it may be possible for same-sex couples to have biological children thanks to stem cell technology. Scientists from the University of Cambridge and the Weizmann Institute found that it is possible to make primordial germ cells, the cells that eventually develop into egg and sperm (gametes), from induced pluripotent stem cells created from a donor’s skin cells.

The opportunity for same-sex couples to have biological children may take over the headlines, but it is not the only, or necessarily the primary, reason scientists are interested in this research. One thing scientists hope to do is use this technique to study age-related diseases. As we go through our lives, we accumulate epigenetic messages that tell genes when to turn on or off, when to make more cells, or when to stop making cells. Recent research shows that certain cancers and age-related diseases are likely due to these epigenetic factors going awry. These epigenetic factors are “reset” in germ cells, meaning scientists can start over and see how these factors develop at the cellular level. There is still much research that needs to be done in this area, including questions as to which epigenetic factors are passed on and which ones are actually reset in germ cells (see Nature’s recent issue on the results of the NIH’s Roadmap Epigenomics Project).

While the epigenetic research is interesting, the headlines emphasize the reproductive possibilities. This technique could be another option for infertile couples who want to have biological children, including same-sex couples who want children that are biologically related to both of them. However, what is not touted as loudly is that, for now, the technology only works for male same-sex couples.

The reason this technique can be used for two men but not two women has to do with how the cells are made. Primordial germ cells are made from skin cells taken from each person, which are first converted into induced pluripotent stem cells. Those stem cells are then converted into primordial germ cells by turning certain genetic factors on or off, the factors involved in converting pluripotent stem cells into particular cell types.

Female cells have XX chromosomes and male cells have XY chromosomes, but in order to make primordial germ cells that are precursors to sperm, researchers need a Y chromosome. Lead researcher Dr. Hanna pointed out that it is easier to take away a chromosome than to insert one. Women don’t have any Y chromosomes to contribute, meaning that making primordial germ cells from women “is a long way off.”

Finally, one of the more troubling factors in this research is that little has been said about the health and well-being of the children who would be produced from this technique. A central finding of the original research article in Cell is that SOX17 is a key factor in the process of making primordial germ cells and likely plays an important role in gene regulation. This was surprising to scientists because SOX17 does not play the same role in mouse development. This means that even though mice have been used in prior studies on creating primordial germ cells, they may not be a good model system for creating the subsequent sperm and egg cells. Often, before a procedure or a drug makes it to human trials, it is first tested in mice and then in primates. When it comes to human development, though, findings do not translate from animal models to humans as easily.

Once scientists are able to take the primordial cells and advance them to egg and sperm cells, they will be able to create an embryo. However, because the mouse models are different, this is a case in which we have no way of knowing whether these embryos or the children will be healthy until the experiment is actually done.

It is unclear from the interviews or the article whether the assumption is that “unhealthy” embryos will die off before they implant in the uterus, or how exactly researchers expect to test whether this technique works as another reproductive technology. If unhealthy embryos die, this poses an ethical problem for those who hold that embryos should be granted dignity in their own right. But even if one does not accept that embryos are accorded a certain level of dignity, what about babies and children? What do scientists plan to do if the children born from this procedure are unhealthy or deformed?

It is also worth mentioning that this technique would provide a source of eggs, which are needed for various other reproductive techniques and research endeavors. For example, the UK just passed legislation permitting what has been dubbed “three-parent IVF,” in which scientists transfer the nucleus from one woman’s egg into another woman’s enucleated egg, which is then used in IVF. The hope is to prevent mitochondrial disease, which is passed down genetically from mother to offspring. Obtaining skin cells from a donor is much less invasive and poses less risk to the donor than obtaining her eggs after inducing hyperovulation.

This research is rife with ethical concerns, most notably the fact that it amounts to human experimentation on people who did not have the opportunity to choose to be the product of experimentation, but must live with the consequences, if they live at all.

NFL and Prescription Drugs

The NFL is being sued by 1,300 former players over the way it distributed prescription pain medicines to get players back in the game. The former players claim that they were not informed of the side effects of potent painkillers such as Percodan, Percocet, Vicodin, and Toradol. Percodan, Percocet, and Vicodin are all opioid painkillers, while Toradol is a strong non-steroidal anti-inflammatory drug (NSAID).

Many of the former NFL players involved in the lawsuit played during the 1980s and 1990s, when practices for administering powerful painkillers, both opioids and NSAIDs, were cavalier; today such practices are, in theory, more regulated. The players state that “NFL medical staffs routinely violated federal and state laws in plying them with powerful narcotics to mask injuries on game days.” They also claim that medical staffs were negligent in withholding important information about the players’ medical conditions from them, such as markers for kidney disease or broken bones.

At issue are 1) whether doctors and trainers violated the law by illegally administering prescription drugs, and 2) whether players were adequately informed of the side effects of the drugs, as well as of any medical issues doctors found that might affect their decisions.

Doctors Behaving Badly?

In an attempt to investigate whether illegal practices were going on, the DEA paid unannounced visits to several professional teams in November 2014, questioning team doctors and trainers after games. The investigation was meant to ensure that doctors were prescribing and distributing drugs appropriately, handling controlled substances properly when crossing state lines, and licensed to practice in the state. Thus far, the DEA has not found evidence of illegal activities in its investigation.

However, an investigation by Vice Sports into how and where NFL doctors acquired such large amounts of prescription drugs shows that, at least in the past, they were likely obtaining drugs illegitimately. From 2009 to 2010, several NFL teams, as well as other professional and college sports teams, acquired large amounts of opioids and NSAIDs from a company called SportPharm, an illegal drug distributor operating behind the legitimate company RSF Pharmaceuticals. RSF Pharmaceuticals eventually shut down, and SportPharm was re-branded as a subsidiary of Champion Health Services, which is still in operation. Many teams would fill prescriptions in players’ names without the players knowing, so that the actual quantities would fly under the radar.

Informed Consent

The second issue has to do with players’ rights, and whether they were adequately informed of what drugs they were given, their medical options given their current medical situation, and the long-term side effects.

Many of the players received opiate drugs without being told about their addictive nature, and were often told to take them for longer, or in higher dosages, than the FDA recommends. Furthermore, many players were given prescription pain medicine without a doctor’s evaluation or monitoring. One former player reports that while playing for one team, an assistant trainer would pass out unlabeled manila envelopes of pain medicine to any player who raised his hand and said he needed them. Another former player said that envelopes of prescription pain medicine would be waiting in the players’ seats on the airplane.

Player testimonies from the class action lawsuit website show that many players were given powerful pain medicine instead of being told that they needed rest and recovery, or that the problem was actually much worse and required surgery. Several players said that NFL doctors knew of existing health issues but did not inform the players. Two players’ testimonies state that NFL doctors knew they had indicators of kidney problems but did not tell them. Both former players now have renal failure.

Another former player, Rex Hadnot, said in a Washington Post interview that he was given Toradol pills and/or injections once per week for nine years. He was never told that Toradol should not be administered for more than five days due to the risk of kidney damage, heart attack, and stroke. He said that sometimes he would receive both a shot and a pill on the same day, a much higher dosage than the FDA recommends.

The Mountain Climber Problem

Part of the problem with discerning the ethics of safety for football players is exemplified in what H. Tristram Engelhardt calls “the mountain climber problem.” In general, climbing a mountain is more dangerous than not climbing a mountain, but we do not consider it unethical to allow a mountain climber to scale a mountain if he so desires. Similarly, playing sports is inherently more dangerous than not playing sports, and football players take on additional risks by choosing to play. So what protections, if any, are football players owed?

There is a tension between restricting someone’s freedom and allowing them to put themselves in harm’s way. Typically, with the mountain climber problem, ethicists will say that it is unethical to allow additional harm to come to the person such that he or she could not accomplish the stated goal of climbing the mountain. For example, while mountain climbing is inherently dangerous, the climber should still use a harness and ropes. Likewise, while football is an inherently dangerous sport, one can enforce safety precautions to ensure that players are not injured in such a way that they can no longer play it. This is the motivation behind stricter rules to prevent concussions, better helmet design, and better padding.

The difference between the mountain climber and the football player is that collisions are part of the sport; pain is a given. The former players who are suing the NFL claim that their health was sacrificed in the name of sales. But other players criticize the lawsuit as nothing more than a money grab, arguing that the former players knew what they were risking by playing the sport.

Whatever the motivations behind the lawsuit or the NFL’s medical decisions, it is unethical to de-humanize athletes, even if they willingly chose to engage in de-humanizing activities. Take a non-football example: if a woman chooses to trade sex for money, she is willingly commodifying herself and ultimately engaging in a de-humanizing activity. While this may have been her free choice, it does not mean that if she goes to a doctor, the doctor is no longer ethically obligated to treat her with human dignity. In other words, even if she chooses to engage in activities that are de-humanizing, that does not mean it is okay for medical professionals to treat her as less than human.

In the case of football players, even if they choose short-term returns at the expense of long-term injury, they need to be given the opportunity to make an informed choice on the matter because, ultimately, they are the ones who have to live with the consequences.

In the latest issue of Salvo Magazine (Winter 2014), I cover the larger issue of prescription pain medicine addiction, what opiate drugs actually do to the brain, and how one becomes addicted. The former NFL players’ claims about the over-prescribing of prescription painkillers may be part of a larger national problem that saw a peak in opiate drug prescriptions during the years that many of the former players were active in the NFL.

Creepy Critters in Our Gut

What is the microbiome?

The microbiome is the collection of bacteria that reside within and on our bodies. Often these bacteria do more than just hang out with us: some fight off disease, while others cause it; still others help us digest foods or reject bad food. For this post, I am going to focus on the gut biome, the bacteria that live in our large and small intestines, because the gut has made for some interesting headlines lately.

The intestines host a plethora of bacteria that act symbiotically with us to help us digest and process foods. Scientists have been studying the gut biome for many years, but only recently has it garnered public attention. Several theories have lately suggested that the gut biome is responsible for everything from food allergies to autoimmune diseases to autism. Furthermore, new diet fads, fecal transplants, and probiotic supplements have emerged as a result of the gut biome hype, many of which are untested or make unsubstantiated claims. As is the case with pop-science trends, the microbiome is becoming the poster child for pseudo-scientific claims and grandiose promises.

What does the research show?

Let’s start with some facts, because the gut biome does affect our health and well-being. The National Institutes of Health is currently working on the Human Microbiome Project, which seeks to identify and characterize the bacteria (and fungi) associated with the human body. Similar to the Human Genome Project, the original plan was to characterize the microbiome of healthy individuals and then compare it to that of unhealthy individuals, in hopes of understanding the role the microbiome plays in disease. However, those goals may need to be adjusted.

The Human Microbiome Project studies have revealed two things: 1) no two human microbiomes are alike, and 2) the microbiome is dynamic. Because each person has a unique microbiome, there is no gold-standard “healthy” microbiome against which to compare “diseased” microbiomes. And because the gut biome changes with diet and environment, it is difficult to determine a particular signature for a person; its composition is just too dynamic.

Additionally, the microbiome’s composition (the types of bacteria that make up the biome) is different at different times depending on the individual’s diet and environment. This is especially true of the gut biome. There are hundreds of different species of bacteria that could potentially live in our digestive system, and those species may be present in different abundances at different times. Furthermore, studying two different parts of the same sample will sometimes show different results. This is a classic sampling problem. Imagine that you wanted to find the amount of lead in the soil of a field. Soil collected from the surface might give you a different lead concentration than soil taken from one foot underground, and you might get different results again from samples taken 100 feet apart. The gut biome has a similar problem: its composition differs depending on where in the digestive tract you retrieve the bacteria (e.g., from a fecal sample versus from the small intestine).
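To make the sampling problem concrete, here is a minimal sketch in Python, using entirely made-up numbers, of how sampling the same field in different places can yield noticeably different lead estimates. The model and values are hypothetical and purely illustrative; the point is simply that where you sample changes what you measure, which is exactly the issue with fecal versus small-intestine samples of the gut biome.

```python
import random

random.seed(42)

def lead_concentration(depth_ft, distance_ft):
    """Hypothetical model: lead (ppm) varies with depth and location, plus noise."""
    base = 50.0                         # pretend field-wide average, in ppm
    depth_effect = -10.0 * depth_ft     # in this made-up field, deeper soil holds less lead
    location_drift = 0.2 * distance_ft  # contamination is patchy across the field
    noise = random.gauss(0, 5)          # local variability and measurement error
    return base + depth_effect + location_drift + noise

# Three sampling choices give three different answers for the "same" field.
print(f"Surface sample: {lead_concentration(0, 0):.1f} ppm")
print(f"One foot down:  {lead_concentration(1, 0):.1f} ppm")
print(f"100 feet away:  {lead_concentration(0, 100):.1f} ppm")
```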

With these caveats, scientists have still observed some trends. For one, an individual’s gut biome changes after taking antibiotics. This makes sense because antibiotics are meant to kill bacteria. What is unclear is how long the changes persist and how this affects a person’s health.

Scientists also know that the gut biome plays a role in aiding the digestion of certain hard-to-digest foods, such as carbohydrates. Furthermore, they have found differences between the gut biomes of obese and non-obese people, and between people with digestive diseases, such as Crohn’s disease, and people without them. However, whether a different gut biome is the cause or the result of these conditions is unclear.

Healthy skepticism

There are several other correlations between the microbiome and physiological effects. The difficulty is discerning whether these are mere correlations or true causation. William Hanage has an excellent article in Nature, “Microbiology: Microbiome Science Needs a Healthy Dose of Skepticism,” in which he discusses five key questions to help discern the truth from the hype:

  1. Can experiments detect differences that matter?
  2. Does the study show causation or just correlation?
  3. What is the mechanism?
  4. How much do experiments really reflect reality?
  5. Could anything else explain the results?

Many studies show that the gut biome is very responsive to diet and environment, which means the differences we see in people with a certain disease (or condition) may be the gut responding to the disease rather than causing it.

The gut biome is a new area of research that may shed some light on digestive disorders and the effects of antibiotics on the body. However, Hanage cautions us not to fall into the same kind of indiscriminate, cure-all thinking that we’ve seen in other new areas of science such as the Human Genome Project, stem cell research, genetic engineering, and nanotechnology. He also reminds us not to blame the microbiome for all of our ills: “In pre-scientific times when something happened that people did not understand, they blamed it on spirits. We must resist the urge to transform our microbial passengers into modern-day phantoms.”

Are We Bored Yet? The Apple Watch and New Technologies

One of the plights of modernity and postmodernity is hyperboredom. This is not the kind of boredom that comes out of having nothing to do, but the kind of boredom that comes out of having too many options and no way to distinguish which one is better than the other. We are jolted out of this boredom when we encounter disruptive technologies. These are technologies that fundamentally change a particular market and have an impact on our culture.

According to Ian Bogost, Apple is a company that is as much in the business of shaking us out of our routine with disruptive technologies as it is in the business of manufacturing them. This may explain why people flock to Apple’s announcements (either virtually or in person) with a big-tent revival fervor, hoping to see what groundbreaking new technology Apple has in store for us. For a brief moment, the hyperboredom is replaced with anticipation and excitement over the possibility that the multitude of options will become passé, replaced by that one technology that supersedes them all.

Take, for example, Steve Jobs’ announcement in January 2007 of a little gadget called the iPhone. He knew the implications of this device and where it stood in the grand scheme of things (quoted from “How Apple Introduced the iPhone” in The Atlantic):

This is a day I’ve been looking forward to for two-and-a-half years. Every once in a while, a revolutionary product comes around that changes everything and Apple has been—well, first of all, one is very fortunate if you get to work on just one of these in your career—Apple has been very fortunate. It’s been able to introduce a few of these into the world. In 1984, we introduced the Macintosh. It didn’t just change Apple. It changed the whole computer industry. In 2001, we introduced the first iPod. It didn’t just change the way we all listen to music, it changed the entire music industry. Well, today, we’re introducing three revolutionary products of this class. The first one is a widescreen iPod with touch controls. The second is a revolutionary mobile phone. And the third is a breakthrough Internet communications device. An iPod, a phone, and an Internet communicator. These are not three separate devices. This is one device. And we are calling it iPhone. Today Apple is going to reinvent the phone. (emphasis added)

Since then, every time Apple unveils a new iPhone, people flock to stores in anxious anticipation, some going so far as to sleep outside the Apple Store’s doors in hopes of being the first to get the latest and best Apple has to offer. And it does not seem to be slowing down. Sales for the iPhone 6 and 6 Plus broke last year’s record, with ten million phones sold in an opening weekend that was strategically timed to ensure there will be visions of iPhones dancing in everyone’s head by December.

So with such excitement and Apple’s track record of disruptive technology, what happened with the Apple Watch?* Apple had not released a new device in four years, and this was to be the first new device after the death of Steve Jobs, proof that Apple is still changing markets. However, rather than being met with fanfare over groundbreaking technology, the Apple Watch was met with mixed reactions.

In his article “Future Ennui,” Ian Bogost says that the problem is not the technology itself but the burden that comes with it. We have become bored with the constant barrage of groundbreaking technologies. He compares Apple’s offerings to Google’s innovations:

Unlike its competitor Google, with its eyeglass wearables and delivery drones and autonomous cars, Apple’s products are reasonable and expected—prosaic even, despite their refined design. Google’s future is truly science fictional, whereas Apple’s is mostly foreseeable. You can imagine wearing Apple Watch, in no small part because you remember thinking that you could imagine carrying Apple’s iPhone—and then you did, and now you always do.

Bogost may be giving Google too much of a pass, though. Google Glass has sparked some controversy among those paranoid about being filmed by its wearers.

Perhaps the difference between Google’s innovations and Apple’s innovations can be compared to the difference between reading Isaac Asimov and Margaret Atwood. Asimov writes about robots and artificial intelligence, and even explores some of the ways that this technology can go awry, but Asimov’s stories do not come with the sense of prophetic inevitability that Atwood’s do. Atwood writes speculative fiction, not science fiction (see Atwood’s book In Other Worlds). Atwood’s stories, like her recent MaddAddam trilogy, are disconcerting because they are a little too plausible. Rather than describing something that may be fifty years away, her books describe a near future in which technologies already in place are simply ratcheted up. Similarly, while people will likely not be driving autonomous cars in the next ten years, it is much more likely that within the next two years they will be wearing technology that collects data on all of their bodily processes, purchases, and locations.

While the fervor over the iPhone 6 hit record levels, perhaps the mixed response to the Apple Watch signifies that we are tempering our enthusiasm for internet-in-our-pocket technologies. Clive Thompson, quoted in an opinion piece in the New York Times, says that our attitudes toward technology follow a predictable pattern: “We are so intoxicated by it, and then there’s a fairly predictable curve of us recognizing as a society that this is untenable, and we’re acting like freaks.”

Thompson is an optimistic writer on technology who believes that there are many benefits to the kinds of community interaction made possible by the internet. Rather than focusing on the doom and gloom of the here and now, Thompson takes a broader, historical perspective, reminding us, in an interview with The New Yorker, that we have been through this before:

We have a long track record of adapting to the challenges of new technologies and new media, and of figuring out self-control…More recently, we tamed our addition [sic] to talking incessantly on mobile phones. People forget this, but when mobile phones came along, in the nineties, people were so captivated by the idea that you could talk to someone else—anywhere—on the sidewalk, on a mountaintop—that they answered them every single time they rang. It took ten years, and a ton of quite useful scrutiny—and mockery of our own poor behavior—to pull back.

Indeed, studies on cell phone addiction, reports of parents neglecting their children, and state laws addressing car accident deaths caused by drivers who cannot pull away from their cell phones are all indications that we are becoming keenly aware that we might be acting like “freaks.”

While disruptive technologies may also disrupt our postmodern malaise, there does come a point when we become weary of the constant announcements of the next-big-tech. Bogost’s article is compelling because he touches on this very notion. Once there are too many next-big-tech options available, the hyperboredom of modernity and postmodernity sets in.

*The marketing team at Apple wisely opted not to go with “iWatch.”

Artificial Intelligence in the News

There’s been much ado about artificial intelligence lately. This has largely been prompted by a computer convincing some people that it is a 13-year-old boy, and by an article, written by a veritable who’s who of emerging-tech thinkers, warning of the risks of superintelligent machines.

Lol Humans

A chatbot named Eugene Goostman was able to convince 33% of the people who interacted with him for five minutes via chat that he was a human. This was touted as a clear instance of a computer passing the Turing Test, but it was met with some criticism, including this piece by Gary Marcus in The New Yorker.

Ironically, rather than showcasing advances in human ingenuity, the Eugene Goostman experiment reveals some of our less noble attributes. For one, in order to make computers sufficiently human-like, programmers needed to make the machines dumber. As Joshua Batson points out in his Wired commentary, prior winners of an annual Turing Test competition incorporated mistakes and silliness to convince the judges that the computer was a person. This calls into question the value of a test for artificial intelligence that requires a machine to be “dumbed down” in order to pass.
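As a purely illustrative sketch (the actual Goostman program has not been published, so nothing here reflects its real code), this is roughly what “dumbing down” a chatbot’s output could look like in Python: deliberately injecting typos and conversational deflections so that a canned reply reads as more human.

```python
import random

random.seed(0)

def dumb_down(reply, typo_rate=0.1):
    """Illustrative only: make a canned reply look more 'human' by sprinkling
    in typos and tacking on a deflection, the kind of trick past Turing Test
    entrants have reportedly leaned on."""
    chars = list(reply)
    for i, c in enumerate(chars):
        if c.isalpha() and random.random() < typo_rate:
            chars[i] = random.choice("abcdefghijklmnopqrstuvwxyz")
    deflections = ["...anyway, what do you do for fun?", "lol why do you ask?"]
    return "".join(chars) + " " + random.choice(deflections)

print(dumb_down("I live in Odessa and I have a pet guinea pig."))
```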

Secondly, the Turing Test, as it was presented in the media, could easily be one of those tests that the psychology department at your local university conducts on willing participants. The results of the Eugene Goostman test say more about the judges than they do about the machine. Taken from another perspective, 33% of the people tested were simply more gullible than the rest of the participants.

You Have to Want to Want It

Contrast this with Stephen Hawking’s warning in an Independent article, co-authored by Stuart Russell, Max Tegmark, and Frank Wilczek, that superintelligence may provide many benefits but could also lead to dire consequences if it cannot be controlled. Hawking and company write that “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.” Yes, one can imagine technology doing this, but the question is: can the technology imagine itself doing this?

Hawking assumes that we will have the capability to engineer creativity, or that creativity will somehow emerge from technology. However, we see from the examples of Eugene Goostman, IBM’s Watson, Google’s self-driving cars, and Siri that complex programming does not produce ingenuity. One could argue that the only way a machine would even muster up the “motivation” to do any of these things is if a human programmed that motivation into it.

An example from science fiction is Asimov’s three laws of robotics, the inviolable principles programmed into the hardwiring of every robot in Isaac Asimov’s fictional world. These laws provide the motivations behind the robots’ behavior, and while they lead to some ridiculous and troubling antics, they are not the same as a robot coming up with its own fundamental motivations. In Asimov’s I, Robot stories, the impetus behind each robot’s behavior harkens back to these pre-programmed (ostensibly by humans) laws.
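To make the point concrete, here is a toy sketch in Python (not anything from Asimov’s fiction or from real robotics) of what pre-programmed “motivation” amounts to: every proposed action is vetted against rules a human wrote down in advance, so the robot’s behavior always traces back to its programmers.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False
    disobeys_order: bool = False
    endangers_self: bool = False

def permitted(action):
    """Check a proposed action against hard-coded, human-written rules,
    loosely modeled on Asimov's three laws."""
    if action.harms_human:      # never injure a human being
        return False
    if action.disobeys_order:   # obey orders given by humans
        return False
    if action.endangers_self:   # protect your own existence
        return False
    return True

print(permitted(Action("fetch coffee")))                     # True
print(permitted(Action("shove a human", harms_human=True)))  # False
```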

This is not to dismiss the field of artificial intelligence; it is to call into question some of the assumptions behind the recent hype regarding the progress and the potential perils of AI. Technology is becoming increasingly powerful, with the potential both to help and to hurt. Hawking’s warning that “the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all” is worth heeding. However, the problem is not inherent in the machine but in the humans who make and wield this technology.

The Language of Morality

When it comes to making moral decisions, is it better to have emotional distance or to have compassion?

An ethics professor once told me that ethics is not about choosing between right and wrong because you should always choose what’s right. Ethics is typically about choosing between two wrongs. Certainly, if two people are approaching a decision from differing moral foundations, they may disagree on the “right” decision, but what the professor meant was that they are still trying to decide the “right” decision in a difficult situation in which there are “wrongs” that must be weighed on both sides of the issue. When people disagree, they are often prioritizing the wrongs differently.

Let’s take a typical example from ethics class: a train is running out of control; one track has one person tied to it, another track has five people tied to it, and you control the switch that will direct the train onto either of these two tracks. Which one do you pick?

More than merely macabre puzzle games, these improbable scenarios are meant to illuminate how one makes ethical decisions. Often, in the train example, people will kill the one person to save the five. The next step is to change the scenario slightly to see whether the same kind of utilitarian calculation applies. For example, the one person on the track is your father, and the five people on the other track are strangers. You still have control of the switch; what do you choose to do? Another way to change the scenario is to change the demographics of the victims. What if the one person is a child, while the five people are escaped prisoners? Or the one person is a woman and the five are men?

Thankfully, we rarely find ourselves in such troubling situations. However, what influences our answers to these questions becomes much more important when we move out of the realm of imaginary trains and into real situations. One example is medicine: bioethicists often debate how to determine who receives scarce medical resources.

Recall the train example, and whether your answers changed when we changed the demographics of the people tied to the rails. This is quite similar to the questions ethicists ask when deciding who should receive an organ from an organ donor. There are more people waiting for an organ transplant than there are available organs, so the ethicist must decide how to choose the person who lives in a fair and just way. This is a case of weighing the “wrongs,” because no matter what the ethicist chooses, someone will likely die.

People make decisions based on many factors, but an article in The Economist points to one unexpected factor: language. In the study it describes, some participants were asked two variations of the train scenario in their native language while others were asked the same scenarios in a foreign language, and the answers differed depending on the language used. Participants were not fluent in the foreign language, but were tested to ensure they could understand the questions. In one scenario, the train will hit five people lying on the track, and the only way to save them is to push a fat man onto the track in front of the train; you cannot jump in front of the train yourself. The other scenario is the switch and two tracks described above.

The switch scenario emotionally distances the person from the violent act. When given the switch scenario, most people opt to kill the one person in order to save five, whether the question is asked in their native language or in a foreign one. However, when the person must push the fat man onto the track, about 20% of people said they would do so when asked in their native language (the figure was similar for people fluent in the second language), but when asked in a foreign language, the percentage rose to 33%.

After several tests and analyses to account for factors such as cultural mores and randomness, the study confirmed that when people were asked about the fat man scenario in a foreign language, they were more likely to push him onto the track than when asked in their native language or in a language in which they were fluent. It seems that the foreign language provides emotional distance, much as the switch does.

We live in a globalized world in which many people communicate and make decisions in a language other than their own. Based on this study, it seems that language barriers also create an emotional distance that is not necessarily there when a person makes the decision in his or her native language. In medicine, this may have implications for patients who are asked to make decisions while living in a country where their language is not spoken. It may also have implications for patients unfamiliar with technical medical jargon, for whom hearing a problem described in jargon may be similar to hearing it in a foreign language, prompting a more emotionally distanced decision.