Creepy Critters in Our Gut

What is the microbiome?

The microbiome is the collection of bacteria that reside within and on our bodies. These bacteria often do more than just hang out with us: some fight off disease, while others cause it; still others help us digest food or reject spoiled food. While the “microbiome” refers to all of the bacteria on the body, for this post I am going to focus on the gut biome, the bacteria that live in our large and small intestines, because the gut has made for some interesting headlines lately.

The intestines host a plethora of bacteria that act symbiotically with us to help us digest and process food. Scientists have been studying the gut biome for many years, but only recently has it garnered public attention. Several recent theories have suggested that the gut biome is responsible for everything from food allergies to autoimmune diseases to autism. Furthermore, new diet fads, fecal transplants, and probiotic supplements have emerged as a result of the gut biome hype, many of them untested or making unsubstantiated claims. As is often the case with pop-science trends, the microbiome is becoming the poster child for pseudo-scientific claims and grandiose promises.

What does the research show?

Let’s start with some facts, because the gut biome does affect our health and well-being. The National Institutes of Health is currently working on the Human Microbiome Project, which seeks to identify and characterize the bacteria (and fungi) associated with the human body. Similar to the Human Genome Project, the original plan was to characterize the microbiome of healthy individuals and then compare it to that of unhealthy individuals, in hopes of understanding the role the microbiome plays in disease. However, those goals may need to be adjusted.

The Human Microbiome Project studies have revealed two things: 1) no two human microbiomes are alike, and 2) the microbiome is dynamic. Because each person has a unique microbiome, there is no gold-standard “healthy” microbiome against which to compare “diseased” microbiomes. Also, because the gut biome changes with diet and environment, it is difficult to determine a particular signature for a person. Its composition is just too dynamic.

Additionally, the microbiome’s composition (the types of bacteria that make up the biome) differs from one time to another depending on the individual’s diet and environment. This is especially true of the gut biome. There are hundreds of different species of bacteria that could potentially live in our digestive system, and those species may be present in different abundances at different times. Furthermore, studying two different parts of the same sample will sometimes give different results. This is a classic sampling problem. Imagine that you wanted to measure the amount of lead in the soil of a field. Soil collected at the surface might give you a different lead concentration than soil taken one foot underground, and samples taken 100 feet apart might differ yet again. The gut biome has a similar problem: its composition differs depending on where in the digestive tract you retrieve the bacteria (e.g., from a fecal sample or from the small intestine).
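
To make the sampling problem concrete, here is a toy simulation (a minimal sketch with made-up numbers, not data from any real soil survey). It assumes a hypothetical field where lead concentration drifts with position, and shows how two equally reasonable sampling strategies yield different estimates:

    # Toy illustration of the sampling problem (hypothetical numbers).
    # Lead concentration varies across the field, so the estimate
    # depends on where the samples are taken.
    import random

    random.seed(42)

    def lead_ppm(x_feet):
        """Hypothetical lead level that drifts across a 100-foot field."""
        return 50 + 0.4 * x_feet + random.gauss(0, 5)

    # Strategy A: ten samples clustered near one corner of the field.
    clustered = [lead_ppm(random.uniform(0, 10)) for _ in range(10)]

    # Strategy B: ten samples spread across the whole field.
    spread = [lead_ppm(random.uniform(0, 100)) for _ in range(10)]

    print(f"clustered estimate: {sum(clustered) / 10:.1f} ppm")
    print(f"spread-out estimate: {sum(spread) / 10:.1f} ppm")
    # The two estimates disagree even though it is the same field --
    # just as a fecal sample and a small-intestine sample can disagree.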

With these caveats, scientists have still observed some trends. For one, an individual’s gut biome changes after taking antibiotics. This makes sense because antibiotics are meant to kill bacteria. What is unclear is how long the changes persist and how this affects a person’s health.

Scientists also know that the gut biome plays a role in aiding the digestion of certain hard-to-digest foods, such as complex carbohydrates. Furthermore, they have found differences between the gut biomes of obese and non-obese people, and between people with and without digestive diseases such as Crohn’s disease. However, whether a different gut biome is the cause or the result of these conditions is unclear.

Healthy skepticism

There are several other correlations between the microbiome and physiological effects. The difficulty is determining whether these are merely correlations or evidence of causation. William Hanage has an excellent article in Nature, “Microbiology: Microbiome Science Needs a Healthy Dose of Skepticism,” in which he discusses five key questions to help discern the truth from the hype:

  1. Can experiments detect differences that matter?
  2. Does the study show causation or just correlation?
  3. What is the mechanism?
  4. How much do experiments really reflect reality?
  5. Could anything else explain the results?

Many studies show that the gut biome is very responsive to diet and environment, which means the differences we see in people with a certain disease (or condition) may be the gut responding to the disease rather than causing it.

The gut biome is a new area of research that may shed some light on digestive disorders and the effects of antibiotics on the body. However, Hanage cautions us not to fall into the same kind of indiscriminate, cure-all thinking that we’ve seen in other new areas of science, such as the Human Genome Project, stem cell research, genetic engineering, and nanotechnology. He also reminds us not to blame the microbiome for all of our ills: “In pre-scientific times when something happened that people did not understand, they blamed it on spirits. We must resist the urge to transform our microbial passengers into modern-day phantoms.”

Are We Bored Yet? The Apple Watch and New Technologies

One of the plights of modernity and postmodernity is hyperboredom. This is not the kind of boredom that comes out of having nothing to do, but the kind of boredom that comes out of having too many options and no way to distinguish which one is better than the other. We are jolted out of this boredom when we encounter disruptive technologies. These are technologies that fundamentally change a particular market and have an impact on our culture.

According to Ian Bogost, Apple is a company that is as much in the business of shaking us out of our routine with disruptive technologies as it is in the business of manufacturing them. This may explain why people flock to Apple’s announcements (either virtually or in person) with a big-tent revival fervor, in hopes of seeing what groundbreaking new technology Apple has in store for us. For a brief moment, the hyperboredom is replaced with anticipation and excitement over the possibility that the multitude of options will become passé, replaced by the one technology that supersedes them all.

Take, for example, Steve Jobs’ announcement in January 2007 of a little gadget called the iPhone. He knew the implications of this device and where it stood in the grand scheme of things (quoted from “How Apple Introduced the iPhone” in The Atlantic):

This is a day I’ve been looking forward to for two-and-a-half years. Every once in a while, a revolutionary product comes around that changes everything and Apple has been—well, first of all, one is very fortunate if you get to work on just one of these in your career—Apple has been very fortunate. It’s been able to introduce a few of these into the world. In 1984, we introduced the Macintosh. It didn’t just change Apple. It changed the whole computer industry. In 2001, we introduced the first iPod. It didn’t just change the way we all listen to music, it changed the entire music industry. Well, today, we’re introducing three revolutionary products of this class. The first one is a widescreen iPod with touch controls. The second is a revolutionary mobile phone. And the third is a breakthrough Internet communications device. An iPod, a phone, and an Internet communicator. These are not three separate devices. This is one device. And we are calling it iPhone. Today Apple is going to reinvent the phone. (emphasis added)

Since then, every time Apple unveils a new iPhone, people flock to stores in anxious anticipation, some going so far as to sleep outside the Apple Store’s doors in hopes of being the first to get the latest and best that Apple has to offer. And it does not seem to be slowing down. The iPhone 6 and 6 Plus broke the previous year’s record, selling ten million phones in their opening weekend, a launch strategically timed to ensure that there would be visions of iPhones dancing in everyone’s head by December.

So, with such excitement and Apple’s track record of disruptive technology, what happened with the Apple Watch?* Apple had not released a new category of device in four years, and this was to be the first new device since the death of Steve Jobs, proof that Apple was still changing markets. However, rather than being hailed as groundbreaking technology, the Apple Watch was met with mixed reactions.

In his article “Future Ennui,” Ian Bogost says that the problem is not the technology itself, but the burden that comes with it. We have become bored with the constant barrage of groundbreaking technologies. He compares Apple’s innovations to Google’s:

Unlike its competitor Google, with its eyeglass wearables and delivery drones and autonomous cars, Apple’s products are reasonable and expected—prosaic even, despite their refined design. Google’s future is truly science fictional, whereas Apple’s is mostly foreseeable. You can imagine wearing Apple Watch, in no small part because you remember thinking that you could imagine carrying Apple’s iPhone—and then you did, and now you always do.

Bogost may be giving Google too much of a pass, though. Google Glass has sparked controversy among those paranoid about being filmed by its wearers.

Perhaps the difference between Google’s innovations and Apple’s can be compared to the difference between reading Isaac Asimov and Margaret Atwood. Asimov writes about robots and artificial intelligence, and even explores some of the ways that this technology can go awry, but his stories do not come with the sense of prophetic inevitability that Atwood’s do. Atwood writes speculative fiction, not science fiction (see Atwood’s book In Other Worlds). Her stories, like the recent MaddAddam trilogy, are disconcerting because they are a little too plausible. Rather than describing something fifty years away, her books describe a near future in which technologies already in place are ratcheted up. Similarly, while people will likely not be driving autonomous cars in the next ten years, it is much more likely that within two years they will be wearing technology that collects data on all of their bodily processes, purchases, and locations.

While the fervor over the iPhone 6 hit record levels, perhaps the mixed response to the Apple Watch signifies that we are tempering our enthusiasm for internet-in-our-pocket technologies. Clive Thompson, quoted in an opinion piece in the New York Times, says that our attitudes toward technology follow a predictable pattern: “We are so intoxicated by it, and then there’s a fairly predictable curve of us recognizing as a society that this is untenable, and we’re acting like freaks.”

Thompson is an optimistic writer on technology who believes that there are many benefits to the kinds of community interaction the internet makes possible. Rather than focusing on the doom-and-gloom of the here-and-now, Thompson takes a broader, historical perspective, reminding us, in an interview with The New Yorker, that we have been through this before:

We have a long track record of adapting to the challenges of new technologies and new media, and of figuring out self-control…More recently, we tamed our addition [sic] to talking incessantly on mobile phones. People forget this, but when mobile phones came along, in the nineties, people were so captivated by the idea that you could talk to someone else—anywhere—on the sidewalk, on a mountaintop—that they answered them every single time they rang. It took ten years, and a ton of quite useful scrutiny—and mockery of our own poor behavior—to pull back.

Indeed, studies on cell phone addiction, reports of parents neglecting their children, and state laws addressing car accident deaths caused by people who cannot pull away from their cell phones are all indications that we are becoming keenly aware that we might be acting like “freaks.”

While disruptive technologies may also disrupt our postmodern malaise, there does come a point when we become weary of the constant announcements of the next-big-tech. Bogost’s article is compelling because he touches on this very notion. Once there are too many next-big-tech options available, the hyperboredom of modernity and postmodernity sets in.

*The marketing team at Apple wisely opted not to go with “iWatch.”

Artificial Intelligence in the News

There’s been much ado about artificial intelligence lately, largely prompted by a computer convincing some people that it is a 13-year-old boy and by an article, written by a veritable who’s who of emerging-tech thinkers, warning of the risks of superintelligent machines.

Lol Humans

A chatbot named Eugene Goostman was able to convince 33% of the people who interacted with him for five minutes via chat that he was human. This was touted as a clear instance of a computer passing the Turing Test, but it was met with some criticism, including this piece by Gary Marcus in The New Yorker.

Ironically, rather than showcasing advances in human ingenuity, the Eugene Goostman experiment reveals some of our less noble attributes. For one, in order to make computers sufficiently human-like, programmers needed to make the machines dumber. As Joshua Batson points out in his Wired commentary, prior winners of an annual Turing Test competition incorporated mistakes and silliness to convince the judges that the computer was a person. This calls into question the value of a test for artificial intelligence that requires a machine to be “dumbed down” in order to pass.

Secondly, the Turing Test, as it was presented in the media, could easily be one of those tests that the psychology department at your local university conducts on willing participants. The results of the Eugene Goostman test say more about the judges than they do about the machine. Taken from another perspective, 33% of the people tested were simply more gullible than the rest of the participants.

You Have to Want to Want It

Contrast this with Stephen Hawking’s warning in an Independent article, co-authored with Stuart Russell, Max Tegmark, and Frank Wilczek, that superintelligence may provide many benefits but could also lead to dire consequences if it cannot be controlled. Hawking and company write that “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.” Yes, one can imagine technology doing this, but the question is whether the technology can imagine itself doing this.

Hawking assumes that we will have the capability to engineer creativity, or that creativity will somehow emerge from technology. However, we see from the examples of Eugene Goostman, IBM’s Watson, Google’s self-driving cars, and Siri that complex programming does not produce ingenuity. One could argue that the only way a machine would even muster the “motivation” to do any of these things is if a human programmed that motivation into it.

An example from science fiction is Asimov’s Three Laws of Robotics, the inviolable principles programmed into the hardwiring of every robot in Isaac Asimov’s fictional world. These laws provide the motivations behind the robots’ behavior, and while they lead to some ridiculous and troubling antics, they are not the same as a robot coming up with its own fundamental motivations. In Asimov’s I, Robot stories, the impetus behind each robot’s behavior harkens back to these pre-programmed (ostensibly by humans) laws.

This is not to dismiss the field of artificial intelligence, but to call into question some of the assumptions behind the recent hype regarding the progress and potential peril of AI. Technology is becoming increasingly powerful, with the potential for both helping and hurting. Hawking’s warning that “the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all” is worth heeding. However, the problem is not inherent in the machine but in the humans who make and wield the technology.

The Language of Morality

When it comes to making moral decisions, is it better to have emotional distance or to have compassion?

An ethics professor once told me that ethics is not about choosing between right and wrong, because you should always choose what’s right; ethics is typically about choosing between two wrongs. Certainly, if two people approach a decision from differing moral foundations, they may disagree on the “right” decision. But what the professor meant was that both are still trying to reach the “right” decision in a difficult situation in which there are “wrongs” to be weighed on both sides of the issue. When people disagree, they are often prioritizing the wrongs differently.

Let’s take a typical example from ethics class: a runaway train is approaching a fork in the tracks. One track has one person tied to it, the other has five people tied to it, and you control the switch that will direct the train onto either track. Which one do you pick?

More than merely a macabre puzzle game, these improbable scenarios are meant to illuminate how one makes ethical decisions. In the train example, people will often kill the one person to save the five. The next step is to change the scenario slightly to see if the same kind of utilitarian calculation applies. For example, suppose the one person on the track is your father, and the five people on the other track are strangers. You still control the switch; what do you choose to do? Another way to change the scenario is to change the demographics of the victims. What if the one person is a child, while the five people are escaped prisoners? Or the one person is a woman and the five people are men?

Thankfully, we rarely find ourselves in such troubling situations. However, what influences our answers to these questions becomes much more important when we move out of the realm of imaginary trains and into real situations. One example is medicine: bioethicists often debate how to determine who receives scarce medical resources.

Take the train example, and consider whether your answers changed when the demographics of the people tied to the rails changed. This is quite similar to the questions ethicists ask when deciding who should receive an organ from an organ donor. There are more people waiting for organ transplants than there are available organs, so the ethicist must decide, in a fair and just way, how to choose the person who lives. This is a case of weighing the “wrongs,” because no matter what the ethicist chooses, someone will likely die.

People make decisions based on many factors, but an article in The Economist points to one unexpected factor. In the study it describes, some participants were asked two variations of the train scenario in their native language, while others were asked the same scenarios in a foreign language, and the answers differed between the groups. Participants were not fluent in the foreign language, but were tested to ensure they could understand the question. In one scenario, the train will hit five people lying on the track, and the only way to save them is to push a fat man onto the track in front of the train. You cannot jump in front of the train yourself; the only way to save them is to shove the fat man into the train’s path. The other scenario is the switch and two tracks described above.

The switch scenario emotionally distances the person from the violent act. When given the switch scenario, most people opt to kill the one person in order to save five, whether the question is asked in their native language or in a foreign one. However, in the scenario where the person must push the fat man onto the track, about 20% of people said they would push him when asked in their native language (the figure was the same for people asked in a language they spoke fluently), but when asked in a foreign language, the percentage rose to 33%.

After several tests and analyses to account for factors such as cultural mores and randomness, the study confirmed that people asked about the fat man scenario in a foreign language were more likely to push him onto the track than people asked in their native language or a language in which they were fluent. It seems that the foreign language provides emotional distance, much as the switch does.
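
As a back-of-the-envelope illustration of how one might check that such a gap is unlikely to be chance, here is a minimal two-proportion z-test sketch. The sample sizes are hypothetical (the article reports only the percentages), so treat this as a sketch of the method, not the study’s actual analysis:

    # Two-proportion z-test sketch; sample sizes below are assumptions,
    # since the article reports only the 20% and 33% figures.
    from math import sqrt, erf

    def two_proportion_z(x1, n1, x2, n2):
        """Return (z, two-sided p-value) for comparing two proportions."""
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)
        z = (p2 - p1) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        # Normal CDF via erf; two-sided p-value.
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # Hypothetically, 200 participants per condition:
    # 40/200 = 20% (native language) vs. 66/200 = 33% (foreign language).
    z, p = two_proportion_z(40, 200, 66, 200)
    print(f"z = {z:.2f}, p = {p:.4f}")  # small p: the gap is unlikely to be chance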

We live in a globalized world in which many people communicate and make decisions in a language other than their own. Based on this study, it seems that language barriers create an emotional distance that is not necessarily there when a person makes the decision in his or her native language. In medicine, this may have implications for patients asked to make decisions while living in a country where their native language is not spoken. It may also have implications for patients unfamiliar with technical medical jargon: hearing a problem couched in jargon may be similar to hearing it in a foreign language, prompting the patient to make a more emotionally distanced decision.

Signs in the Stars


“Where is the one who has been born King of the Jews? We saw his star in the east and have come to worship him.”

Thus was the question posed by the Magi upon their arrival in Jerusalem, presumably to Herod, the reigning king at the time.

What had they seen? Why did they come to Jerusalem? It makes sense that, if they were looking for the King of the Jews, they would go to Jerusalem. But how did they know that a king had been born? A king who would be “King of the Jews”? What did they see?

Fred Larson got interested in that question after setting up Christmas decorations on the lawn with his daughter, Marian. She’d wanted three wise men in the yard and then said, “Daddy, make a star!” What’s a Dad to do? He made a star.

Wondering
But that got him thinking: well… what was the star? When he came across a science article by a Ph.D. astronomer who took the position that the Bethlehem star had been a real astronomical event, he set out to investigate the puzzle.

He went to the book of Matthew, specifically chapter two, and, paying careful attention to every word, noted nine data points about the star, according to what Matthew had recorded:

  1. It indicated a birth.
  2. It had to do with the Jewish nation.
  3. It had to do with kingship.
  4. The magi saw the star in the east.
  5. They had come to worship the king.
  6. Herod asked the magi when the star appeared. This indicates that he hadn’t seen it or otherwise been made aware of it, implying that the star was not overly striking in the sky. It did not command attention, except to those who were looking with a certain wisdom and knew what to look for.
  7. It appeared at a specific time.
  8. It went ahead of the magi as they traveled to Bethlehem from Jerusalem.
  9. It stopped over Bethlehem.

Seeking
This was a considerable amount of data to work with, but it presented quite a puzzle. He bought an astronomy software package and started studying the sky. Because of the extreme precision of planetary motion, modern software allows us to see not just snapshots but simulated animations of the night sky from any point on the globe at any time in history.

He quickly ruled out a shooting star, a comet, and an exploding star or nova as explanations for the Bethlehem star because they didn’t fit the data recorded by Matthew. That still left another class of stars, however: the planets, which at that time were called “wandering stars.” The word ‘planet’ comes from the Greek verb for ‘to wander,’ and the planets were called that because they ‘wandered around’ in the sky against a backdrop of apparently fixed stars.

Might one of the planets have something to do with the star? Larson, an attorney skilled in analytical thinking, proceeded with this as his working hypothesis.

He zeroed in quickly on Jupiter, the largest planet, named after the highest god in the Roman pantheon, which has been known as the “King Planet” from ancient times. Magi watching the night sky from Babylon would have seen Jupiter rise in the east and then form a conjunction with a star called Regulus (which also means ‘king’) maybe 2-3 times in their lifetime. It would be a notable occurrence, but not an exceedingly rare event.

Larson pressed on. As planets wander across the sky, he discovered, they at times go into what astronomers call retrograde motion: they make what appears to be an about-face loop and then continue on their way. They aren’t really reversing or looping, but that is what the path looks like from Earth, because we are watching from a moving platform. He looked at Jupiter’s retrograde motion with respect to Regulus and discovered that on very rare occasions, Jupiter traces what appears to be a triple loop around Regulus. One of those conjunctions occurred in September of 3 BC, on Rosh Hashanah, the Jewish New Year.
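
For readers who want to try this kind of check themselves, here is a minimal sketch using the open-source Skyfield library and the long-span DE422 ephemeris (my assumptions; Larson used a commercial package). It flags nights when Jupiter’s geocentric ecliptic longitude decreases from one night to the next, which is the signature of retrograde motion:

    # Sketch: detect Jupiter's retrograde motion in the autumn of 3 BC.
    # Assumes the Skyfield library and the DE422 ephemeris (a large file,
    # but one that covers dates this far back).
    from skyfield.api import load

    ts = load.timescale()
    eph = load('de422.bsp')
    earth, jupiter = eph['earth'], eph['jupiter barycenter']

    # Astronomical year -2 corresponds to the historical year 3 BC.
    nights = ts.utc(-2, 9, range(1, 121))  # nightly samples from September on

    longitudes = []
    for t in nights:
        # Apparent geocentric ecliptic longitude of Jupiter.
        _, lon, _ = earth.at(t).observe(jupiter).apparent().ecliptic_latlon()
        longitudes.append(lon.degrees)

    # Longitude decreasing from one night to the next means retrograde.
    for i in range(1, len(longitudes)):
        if longitudes[i] < longitudes[i - 1]:
            year, month, day, *_ = nights[i].utc
            print(f'retrograde on {int(year)}-{int(month):02d}-{int(day):02d}')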

Now this is something to sit up and take notice of – the King planet forming a conjunction with the King star, even drawing a celestial “halo” around it. This event would have been exceedingly rare and would certainly have captured the attention of watchful stargazers. But would it really move them to mount their camels and take a 700-mile journey across the desert to Jerusalem?

Probably not, but there was still more going on. Babylonian astronomers were well familiar with the constellations of the zodiac – the same ones by which astrologers today make inane predictions. Larson “turned on” the constellations (meaning he had the software draw and label them on the screen) and watched the September 3 BC retrograde pattern Jupiter displayed with respect to Regulus against the backdrop of the constellations.

What he discovered was a remarkable display involving the constellations Virgo (the Virgin) and Leo (the Lion – the lion symbolizing the kingly Jewish tribe of Judah, from which the Messiah was to come). For someone studied in Jewish history and Messianic prophecies, the symbolism would have been stunning.

Finding
Larson asked still another question: what if this Rosh Hashanah celestial display was the announcement in the stars not of the birth of the Messiah, but of his conception? He ran the software forward nine months to see what the sky looked like then. What he found pretty much rocked his world, and I can’t do it justice in an ordinary written blog post. You’ll have to watch the presentation (and I highly recommend you do; you can get it from his website or Netflix) to see it all play out.

But I will leave you with this: Never be afraid to press the Scriptures and investigate the universe. You will find that the heavens indeed declare the glory of God, and all the Earth sings his praises.

And I will leave you with these: according to Starry Night astronomy software, three astronomical occurrences took place during the years 3–2 BC:

  1. In September 3 BC, during Rosh Hashanah, Jupiter “crowned” Regulus in the constellation Leo.
  2. In June, 2 BC, the king planet, Jupiter, and the mother planet, Venus, formed a conjunction, creating the brightest star anyone on earth would ever see.
  3. In December, 2 BC, Jupiter went into a small retrograde loop in the southern sky, meaning it would appear to be stopped over Bethlehem if you were looking at it from Jerusalem.


Choosing
Coincidences? Fabrications? Or the Lord of heaven and Earth announcing the invasion of the Jewish King in the stars?

You decide.

CO2: Elixir of Life

Elixir of Life?

Yes, “Elixir of Life” is the label two scientists apply to carbon dioxide. Despite the fact that the U.S. Environmental Protection Agency has declared it a dangerous air pollutant, the son-and-father team of Dr. Craig D. Idso and Dr. Sherwood B. Idso, in their book The Many Benefits of Atmospheric CO2 Enrichment, unabashedly say just the opposite:

“Atmospheric carbon dioxide is the elixir of life. It is the primary raw material out of which plants construct their tissues, which in turn are the materials out of which animals construct theirs. This knowledge is so well established, in fact, that we humans – and all the rest of the biosphere – are described in the most basic of terms as carbon-based lifeforms.”

Indeed. “Not only are increasing concentrations of atmospheric CO2 not dangerous to human, animal, or plant health,” writes Jay Lehr, science director of The Heartland Institute, in his review of the book, “they actually benefit earth’s many life forms, counteracting the deleterious effects of real air pollutants.”

The two scientists bring impressive credentials to bear on their admittedly non-conformist declaration.

Dr. Craig D. Idso is the founder and former President of the Center for the Study of Carbon Dioxide and Global Change and currently serves as Chairman of the Center’s board of directors. He earned his B.S. in Geography from Arizona State University, his M.S. in Agronomy from the University of Nebraska – Lincoln, and his Ph.D. in Geography from Arizona State University.

Dr. Sherwood B. Idso earned his Bachelor of Physics, Master of Science, and Doctor of Philosophy degrees from the University of Minnesota. From 1967 to 2001, he served as a Research Physicist with the U.S. Department of Agriculture’s Agricultural Research Service at the U.S. Water Conservation Laboratory in Phoenix, Arizona, and as an Adjunct Professor at Arizona State University in the Departments of Geology, Geography, Botany, and Microbiology. He is the author or co-author of over 500 scientific publications.

Unless you’re an avid environmental scientist, though, you may find The Many Benefits of Atmospheric CO2 Enrichment rather boring reading. It’s filled with charts, graphs, and summarized results of scientific studies. But the executive-summary version is fascinating.

In sum, the two scientists document 55 ways in which elevated atmospheric CO2 levels benefit the earth’s biosphere. For the reasonably scientific-minded not given to deciphering science journals for everyday reading, Jay Lehr handily summarized ten of them:

Air Pollution Stress on Plants—As the CO2 content of the air rises, most plants reduce their stomatal apertures, or openings through which they consume carbon dioxide, and thereby reduce the intake of harmful pollutants that might damage their tissue.

Diseases of Plants—Plant diseases are commonly reduced as a result of improved immune systems that result from increased CO2 in the surrounding environment. This has been proven by hundreds of plant studies.

Flowers—Most plants produce more and larger flowers at higher atmospheric CO2 concentrations.

Health Promotion—CO2 enrichment increases the quantity and potency of the many beneficial substances found in the tissue of our food crops, which therefore arrive at our dinner tables with more vitamin C and other antioxidants.

Medicinal Plants—Atmospheric CO2 increases the production of many health-promoting substances in medicinal plants, which have been shown to fight a wide variety of human maladies.

Nitrogen Fixation—Increasing CO2 concentration improves nitrogen fixation by soil bacteria, which leads to increased nitrogen availability in the soil for plants that normally need additional nitrogen provisions.

Photosynthesis—Additional atmospheric CO2 typically increases the photosynthesis rates of nearly all plants.

Soil Erosion—Increased CO2 enables all plants to extract more moisture from their surroundings; as a result, plants expand their root systems and significantly stabilize soil, thus protecting it from erosion.

Transpiration—Plants take in CO2 through open pores, called stomata, through which moisture also exits the plant (a process called transpiration). With increased CO2 in the air, plants do not need to keep these pores open as long to capture the CO2 they need, and thus less water is lost through evaporation.

Water Stress—When plants are growing under less-than-optimal soil water availability, higher atmospheric CO2 dramatically improves the plants’ chances for survival and healthy growth.

Cool, huh?

Spring is unfolding into summer. As a carbon-based lifeform, I invite you to join me in enjoying the richness of biological life and spreading the word about this wrongly maligned elixir of life.