Moral Law Argument 2.0

Objective Evidence for God

If God exists, there should be some confirming evidence we can perceive. Cosmological and design arguments for God rest upon physical realities. But the moral law argument for God's existence uses philosophy and intuition about human thoughts and relationships—nothing we can demonstrate in physical form.

Isaac Asimov's famous Three Laws of Robotics and our knowledge of computers can help us see that the ability to evaluate situations and distinguish right from wrong requires information and programming from an outside source. Moral laws must have an outside intelligent source, and a good candidate for that source is God.

Consider the standard moral law argument for the existence of God in its simplest three-element form: 

Premise 1: Every law has a lawgiver.

Premise 2: There is a moral law.

Conclusion: Therefore, there is a moral lawgiver.

(Of course, the term "law" here refers to rules of human behavior, not to physical laws like gravity.) The logic is valid, but it rests upon our intuitions about laws as guides to thinking and behavior. Modern knowledge about "thinking machines" can illustrate this.

Moral Laws Are Non-material Facts

As C. S. Lewis notes in The Abolition of Man, moral laws are ideas that human beings in every culture can and do know, understand, and act upon. For instance, nearly every known human society recognizes a moral proscription against killing other innocent human beings. Such a moral law stems from a "moral value," in this case the understanding that human life has intrinsic value. We commonly recognize this value and so try to deter or destroy external threats to human life.

Moral laws, and the moral values underlying them, are non-material ideas. That is, they are not physical materials or forces. They are uniquely found among human beings with minds. As Thomas Nagel writes in Mind and Cosmos, "Conscious subjects and their mental lives are inescapable components of reality not describable by the physical sciences." Moral ideas cannot be described or quantified in material terms.

Nevertheless, we can put the idea of moral laws into a concrete form by imagining what technology is required to enable a being to follow such laws. When moral laws are understood in technological terms, the concept of a moral lawgiver makes perfect sense. To see how, let us consider the "morality" of robots.

Asimov's Robot Morality

The late Isaac Asimov's science fiction classic I, Robot contains stories about robots built to employ sophisticated intelligence as they work serving humans. The designers of the robots installed the Three Laws of Robotics into their electronic brains:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.  

Because the Three Laws of Robotics define what is "right" and "wrong" robotic behavior, they are moral laws. They will guide the robots in planning and carrying out actions in the future. Underlying the Laws are the moral values they are designed to protect, in a specific order of precedence: human life preeminently, then human liberty and authority over the robot, and finally the robot's own worth.

The Three Laws themselves could not be mechanical devices; they are rules, which is to say, information. In a robot, information that directs behavior can exist only as software.
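To make that point concrete, here is a minimal sketch, written in Python purely for illustration, of how the Three Laws might be stored as software rather than built as hardware. The rule texts and the order of precedence come from Asimov; every name and structural choice in the sketch is an assumption made for this example, not a feature of any actual robot-control system.

# A minimal sketch (in Python, for illustration only) of the Three Laws
# stored as software: ordered information, not physical hardware. The rule
# texts and precedence come from Asimov; everything else is an assumption
# made for this example.

from dataclasses import dataclass

@dataclass(frozen=True)
class Law:
    precedence: int        # 1 = highest priority
    protected_value: str   # the moral value the law is designed to protect
    text: str              # the rule as stated

THREE_LAWS = [
    Law(1, "human life",
        "A robot may not injure a human being or, through inaction, "
        "allow a human being to come to harm."),
    Law(2, "human liberty and authority over the robot",
        "A robot must obey orders given it by human beings except where "
        "such orders would conflict with the First Law."),
    Law(3, "the robot's own worth",
        "A robot must protect its own existence as long as such protection "
        "does not conflict with the First or Second Law."),
]

# The Laws exist here purely as stored information, in the order of
# precedence their designers chose; that is, as software.
for law in THREE_LAWS:
    print(law.precedence, law.protected_value)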

In 1942, when Asimov first formulated the Laws, and into the 1950s, when they became well known through the I, Robot stories, the notion of intelligent machines requiring behavioral rules seemed mysterious and fantastic. Today, humans interact daily with and operate computers and other machines that contain intelligence. So we understand the distinction between hardware, the physical equipment manufactured using material elements and directed forces, and software, the information and sets of instructions stored in the hardware to direct its operations and enable it to perform calculations.

We also easily grasp how the rules of operation exist in physical devices like robots and computers: they are installed in the devices by programmers who design and write software specifically for them. In Asimov's stories, the Three Laws are software program elements installed by the robots' designers.

A Practical Impossibility

Writing software to implement Asimov's Laws in real robots, however, would be extremely difficult, if possible at all. That is because the software written for the robots' hardware would have to enable them to do all of the following (a bare-bones sketch of such a control loop appears after this list):

> acquire and process sensory data;

> identify and recognize the significance of that data;

> compare the data with previously stored information about situations and their likely future consequences;

> prepare possible action plans;

> consider the potential results of each plan;

> estimate how they would react to unexpected changes in conditions and new input;

> choose the plan most likely to accord with their directives and yield the optimal result; and

> carry out the chosen plan.
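To get a feel for what that list demands, consider the bare-bones sketch below, in Python and purely hypothetical, of the control loop such software would need. Every helper function in it is a trivial stub standing in for an engineering problem that is unsolved or enormously hard; none of the names refer to any real robotics library.

# Hypothetical skeleton (in Python, for illustration only) of the control
# loop described in the list above. Every helper below is a trivial stub
# standing in for an engineering problem that is unsolved or very hard;
# none of these names refer to any real robotics library.

import random

def interpret(raw_data):
    # identify and recognize the significance of the sensory data (stubbed)
    return {"observation": raw_data}

def recall_similar(situation):
    # compare with previously stored situations and their consequences (stubbed)
    return []

def generate_plans(situation, history):
    # prepare possible action plans (stubbed)
    return ["stand by", "move clear", "assist the human"]

def predict_outcome(plan, situation):
    # consider the potential results of the plan (stubbed as a random score)
    return random.random()

def estimate_robustness(plan, situation):
    # estimate how the plan fares under unexpected changes and new input (stubbed)
    return random.random()

def choose_plan(scored_plans):
    # choose the plan most likely to accord with the directives and
    # yield the optimal result (here, simply the highest combined score)
    return max(scored_plans, key=lambda item: item[1] + item[2])[0]

def control_step(raw_sensor_data):
    situation = interpret(raw_sensor_data)          # acquire and process sensory data
    history = recall_similar(situation)
    plans = generate_plans(situation, history)
    scored = [(p, predict_outcome(p, situation),
               estimate_robustness(p, situation)) for p in plans]
    return choose_plan(scored)                      # the chosen plan, ready to carry out

print(control_step("camera frame"))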

Computer scientist Lee McCauley once polled professors working in artificial intelligence (AI) on the feasibility of writing software that would give Asimov's Laws any real life in machines. In his 2007 article, "AI Armageddon and the Three Laws of Robotics," he reported the results: they said that writing such software would border on the impossible. A fatal problem occurs in the very wording of Asimov's laws, which, while humanly comprehensible, is too abstract and ambiguous to guide a robotic "mind." There are no single, clear definitions of such terms as "come to harm," "inaction," "orders," "protect existence," and "conflict with other laws" that a robot could neatly apply to all situations.

McCauley writes that programming moral laws into robots would entail the "encoding of the abstract concepts implied in the laws within the huge space of possible environments," which he considers an "insurmountable" task. In Asimov's fiction, it is assumed that the robots not only fully understand the Laws and how to apply them in any context, but also that they know everything the lawgiver (programmer) knows about the relevant universe of information and possibilities. But in real life this is hardly the case, says McCauley. Since "a word encountered by a robot as part of a [human's] command . . . may have a different meaning in different contexts," he writes,

a robot must use some internal judgment in order to disambiguate the term and then determine to what extent the Three Laws apply. As anyone [who] has studied natural language understanding could tell you, this is by no means a trivial task in the general case. The major underlying assumption is that the robot has an understanding of the universe from the perspective of the human giving the command. Such an assumption is barely justifiable between two humans, much less a human and a robot.

Asimov's Laws seem quite basic, almost simple, to humans who already work daily under rules, principles, and guidelines. But installing such basic moral laws into even the most "intelligent" computers or robots presents an overwhelming challenge to the smartest human designers using all available methods. If the task overwhelms human designers, it hardly seems possible that purposeless, undirected material forces such as operate in the context of Darwinian evolution could achieve it.

Morality Is Software

If we think of the human body as biological "hardware," we can see right away that without a brain directing and interacting with them, the hardware components (organs, limbs, eyes, ears, and so forth) can do nothing. With a brain connected to them, these components can do things, but only if the brain has been programmed with software enabling it to direct operations. An inert lump of "gray matter" would be useless. Hence, if the brain is the "computer" needed to operate a biological body, that computer must contain software: stored information and a set of instructions for operating the hardware. Otherwise, the hardware will either malfunction or not work at all.

There is no known physical way for such operating software to come into existence by purposeless, undirected means. There is no logical way that undirected and purposeless materials and forces can produce software that does have a purpose and does direct hardware to achieve future goals. Nowadays, we know that software is needed to decode and process information, to perform calculations, and to direct hardware. And we know that software can only be produced by intelligent designers.

The Three Laws of Robotics, which are moral laws governing the robots' behavior, are software elements that had to be programmed and installed into the robots' hardware. The giver of the Laws is an intelligent programmer.

For human beings, the analogy applies directly. Human biological hardware—i.e., the human body—contains no moral laws and harbors no moral values. Ideas about morals and values reside in the human mind. We can assume that the mind resides in the human brain. But it is the mind that contains procedures for receiving data, processing it, developing plans, and directing further thinking and action. Those procedures are software elements. Moral values and moral laws exist as data and procedures—that is, as software.

Software Needs a Giver

Understanding the necessity of software makes it easy to understand moral laws as necessarily springing from an intelligent designer. Morality software must be installed into any thinking system from an external source—there must be a "giver" of that software.

With this understanding, we can explain the conclusion of the moral law argument as follows: Moral laws and their underlying moral values exist as abstract ideas until they are written into software. To make or modify any software requires a programmer, i.e., an external intelligence that designs and installs the software to operate in the future (given whatever inputs and circumstances occur). Thus, a moral law for a human being, just as for a robot, must come from the "giver" of the software that implements the moral values and moral laws.

The Process of Preference

Where traditionally the moral law argument rested only upon abstract philosophy, it is now reinforced with the concrete reality of software. Consider Dr. Ravi Zacharias's philosophical explanation of the moral law argument in a 2011 speech to the C. S. Lewis Institute:

I have often been asked the question on the problem of Evil: How can a Good God allow such suffering and Evil in this world? And when we talk about Evil, we assume Good. When we talk about Good, we assume a Moral Law. When we talk about a Moral Law, we assume that there is a Moral Law Giver. You say, why do you have to assume that? . . . Because every time the problem of Evil is actually raised, it is raised either by a person or about a person, which means it attributes intrinsic worth to personhood. The intrinsic value of personhood is indispensable to the validity of the question, and naturalism cannot give you that bequest.

A crucial element of Dr. Zacharias's argument points to the concepts of Good and Evil as referring to persons and things that are considered to have value. When we give something value, we are preferring one state of facts (State B) over a different state of facts (State A). So, when we prefer State B over State A, we are comparing data stored in our minds about the two states and applying thinking procedures to generate a resulting conclusion. The stored data and the programming of those procedures can only exist as software.

How do we know that "giving value" is a software function? When we evaluate State A, for example, we compare that state of facts to other possible states or to remembered states, using stored information about the states of facts and stored criteria for evaluation. We do the same for State B. Next, to compare State A to State B, we use a computation procedure (an algorithm) that produces a result. That result is our evaluation.
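A short, hypothetical sketch can make this concrete. The criteria, weights, and example states below are invented for the illustration; the point is only that the preference is computed from stored data by a stored procedure, which is to say, by software.

# A hypothetical sketch of "giving value" as computation over stored data
# and a stored procedure. The criteria, weights, and example states are
# invented for this illustration; only the shape of the process matters.

STORED_CRITERIA = {            # stored evaluation criteria (data)
    "lives_preserved": 10.0,
    "promises_kept": 3.0,
    "property_damaged": -2.0,
}

def evaluate(state):
    # apply the stored criteria to a state of facts, producing a score
    return sum(weight * state.get(feature, 0)
               for feature, weight in STORED_CRITERIA.items())

def prefer(state_a, state_b):
    # the comparison procedure: an algorithm that yields the evaluation
    return "State B" if evaluate(state_b) > evaluate(state_a) else "State A"

state_a = {"lives_preserved": 1, "property_damaged": 4}
state_b = {"lives_preserved": 2, "promises_kept": 1}
print(prefer(state_a, state_b))   # prints "State B"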

All of the foregoing steps involve stored data and stored procedures. Both of these categories are information. For any robot or digital computer, such information is stored in coded form. When needed, the information is fetched from storage and decoded for computation.

Feeding the computation process is the code system, with its matching encoding and decoding features. Code system features exist in computers only after an intelligent programmer has created and installed them. Without the symbolic code and its encoder and decoder mechanisms, the information could not be encoded, stored, or decoded; i.e., the foundational elements needed for computation would not exist. At every stage of the process of "giving value" there is software: data and procedures.
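Even the humble step of storing and retrieving those evaluation criteria illustrates the point about code systems. In the hypothetical snippet below, a standard text encoding stands in for whatever coding scheme a real mind or machine might use; the essential fact is that the encoder and decoder must match and must exist before any computation can occur.

# A hypothetical illustration of the code-system point: stored information
# exists only in encoded form and is useless without a matching decoder.
# JSON and UTF-8 stand in here for whatever coding scheme a real system uses.

import json

criteria = {"lives_preserved": 10.0, "promises_kept": 3.0}

encoded = json.dumps(criteria).encode("utf-8")   # encode the data for storage (bytes)
# ...the bytes sit in storage; without the matching decoder they mean nothing...
decoded = json.loads(encoded.decode("utf-8"))    # the matching decoder recovers the data

assert decoded == criteria
print(decoded["lives_preserved"])                # the fetched, decoded value: 10.0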

A Bolstered Argument

Let's apply all of these facts to the robot example. Operating under Asimov's Three Laws, a robot must: observe the current state of facts (State A), calculate the values of other possible states, develop and analyze its own possible next actions, compare the imagined future results of the possible actions with each other, and choose one action. All of these computations take place in software. Since the Three Laws define robot morality, it is clear that robot morality resides in software.
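Tying the earlier sketches together, a brief hypothetical snippet can show how the Laws' order of precedence would govern the final choice among candidate actions. The actions and the predicted outcomes assigned to them are invented for the example.

# A hypothetical sketch of choosing an action under the Three Laws' order
# of precedence. The candidate actions and their predicted outcomes are
# invented for the example; booleans stand in for the robot's predictions.

candidate_actions = {
    # action: (harms_human, disobeys_order, endangers_self)
    "push the human aside": (True,  False, False),
    "ignore the order":     (False, True,  False),
    "shield the human":     (False, False, True),
    "stand idle":           (True,  False, False),   # inaction can also allow harm
}

def three_laws_key(predicted):
    harms_human, disobeys_order, endangers_self = predicted
    # lower tuples sort first: the First Law outranks the Second,
    # and the Second outranks the Third
    return (harms_human, disobeys_order, endangers_self)

chosen = min(candidate_actions, key=lambda a: three_laws_key(candidate_actions[a]))
print(chosen)   # prints "shield the human"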

Human morality must reside in software as well. Dr. Zacharias's concise formulation would gain by the addition of this paragraph describing the reality of software:

There is no physical hardware that can perform any of the evaluation computations entailed in judging good and evil without accessing stored data and executing stored procedures. Human moral analysis takes place by using data and procedures, that is, by using software. Software can only be made by a higher intelligence external to the hardware. Naturalism offers no method by which morality could be programmed into human beings. The making of moral law software requires a designer external to the material world. God is the most likely candidate to be that designer.

Moral laws are software. Therefore, moral laws come from an external intelligent source, which we call the moral lawgiver. These objective truths about software bolster the moral law argument in a way that was unavailable to thinkers in previous eras. Like the other theistic arguments grounded in observable reality, the moral law argument finds support in external evidence now widely understandable in the twenty-first century.

Richard W. Stevens, an appellate lawyer, holds degrees in both computer science and law, and has authored five books and numerous articles on various subjects, including legal topics, the Bill of Rights, and intelligent design.

This article originally appeared in Salvo, Issue #47, Winter 2018. Copyright © 2024 Salvo | www.salvomag.com | https://salvomag.com/article/salvo47/moral-law-argument-20
