
Artificial Intelligence 

            The term “artificial intelligence” (known colloquially as A.I.) evokes many images: robotic friends that converse as they make life convenient, pre-crime departments subduing future criminals, mechanical experts surpassing their masters in their chosen fields, computer uprisings by thinking machines, and so forth. In reality, A.I. represents a novel technological paradigm with its own constraints, misconceptions, and potentials, but very few formal approaches to its moral implications. As with every technology ever developed, A.I. carries as much constructive as destructive potential, and the actual future can be swayed in one direction or the other depending upon how scientists, policymakers, and consumers apply the tools at their disposal. One observation from the nascent field of nanoethics offers promise.

            Most importantly, A.I. must be understood as a technology, not through the mythos of science fiction, which is not yet fact. “Artificial intelligence” is defined as any computational technology meant to replicate human intelligence and cognitive processes in order to complete tasks which would otherwise require human critical thinking, oftentimes encompassing autonomous programming (Bali 1; de Saint Laurent 736). Toward this end, two other distinct processes are involved to ensure that a program can mimic human learning and adaptive capacities. Machine learning refers to algorithms designed to analyze statistical inputs and generate predictive outputs; i.e., a program which can learn with or without human supervision (Gandhi et al. 1403; Ergen 6). A subset of machine learning is deep learning, which is meant to replicate neurological processes through artificial neural networks (ANNs) with multiple layers of input/output nodes (Ergen 6; de Saint Laurent 737). Essential to machine learning at all levels is an iterative process in which an algorithm is fed raw data relevant to its function and “learns” to recognize patterns in the dataset that let it make predictions toward a targeted goal (Ergen 6). In short, a far cry from the sentient machines of “Star Wars” fame, A.I. is, at its most basic, a statistical engine producing predictions that would otherwise be overwhelmingly demanding in terms of human resources, time, and combined skill (e.g., whether a tree’s roots will endanger water lines, or when a seemingly healthy patient might have a heart attack).
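
To make the idea of a “statistical engine” concrete, what follows is a minimal sketch, written in Python with the scikit-learn library and entirely made-up stand-in data, of the kind of supervised machine learning described above: a small artificial neural network is fed labeled examples, iteratively “learns” a statistical pattern linking measurements to outcomes, and then predicts outcomes for cases it has never seen. It illustrates the mechanism only and does not correspond to any real clinical or diagnostic system.

  # A minimal sketch of supervised machine learning with a small artificial
  # neural network. All data here are random stand-ins for illustration.
  import numpy as np
  from sklearn.model_selection import train_test_split
  from sklearn.neural_network import MLPClassifier

  # Hypothetical "patient measurements" (rows = patients, columns = features)
  # and hypothetical recorded outcomes (1 = event occurred, 0 = it did not).
  X = np.random.rand(200, 3)
  y = (X[:, 1] + X[:, 2] > 1.0).astype(int)

  # Hold back a quarter of the cases so the model can later be checked on
  # data it has never seen.
  X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

  # An artificial neural network with two hidden layers of nodes.
  model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000)
  model.fit(X_train, y_train)   # the iterative "learning" step

  # The trained model now predicts outcomes for new, unseen cases.
  print("accuracy on unseen cases:", model.score(X_test, y_test))

The point of the sketch is simply that the “intelligence” involved is statistical: the program finds patterns in past data and extrapolates them, nothing more.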

            The potential of basic A.I. is seemingly unlimited. The 2016 victory of Google DeepMind’s AlphaGo over Korean Go champion Lee Sedol showcases the potential of A.I. to solve human problems faster than humans themselves (Bali 1). ANNs are being tested for their potential to resolve biological conundrums which have stumped human experts, such as predicting protein functions, while others have proven successful in image recognition and automated translation (de Saint Laurent 737; Fa et al. 2). Others have been used in the healthcare industry, processing patient data to diagnose, prognosticate, and prescribe treatments (Weng et al.; Lo-Ciganic et al.). Each case offers promise, and each the danger of abuse. Overreliance on A.I. conclusions could lead to misdiagnoses, patient misconceptions about their conditions, and machine prejudice based on demographic inputs. Similar worries stem from A.I. data corruption, faulty learning, or sabotage, any of which can easily go unnoticed in an otherwise autonomous program. This is especially salient given the ongoing problem that the autonomy of machine learning, even under supervision, can lead to predictions whose statistical justifications elude human experts, making mistakes all but impossible to ascertain (de Saint Laurent 737).

            The challenge issued to developers and users of A.I.: where to go from here?

            Bert Gordijn offers a pragmatic scheme whereby the “ethical desirability” of A.I. research and use may be delineated without recourse to Precautionary or Proactionary postures. Although his “balanced view” was developed to address nanoethics, it is not tied to any particular practice and can be reapplied to any field of research. He asks three simple yet critical questions meant to guide researchers and users as they refine the new technology:

 

  1. What are the goals sought by research into A.I.?

  2. Will the research in question actually contribute to the realization of these goals?

  3. Are the foreseeable ethical issues concomitant with research and development of A.I. surmountable, if not justifiable?

 

As it stands, A.I. research is a free-for-all: there is no consensus as to the actual goals of developing intelligence-like qualities in computer algorithms. What is the ultimate objective of A.I.? Is it further scientific development? Profit? Convenience? The creation of sentient machines to become fellow citizens, or to solve problems too complex for humans to solve (assuming they can be solved at all)? Are these objectives morally worthwhile, and are the risks greater than the rewards? Who will pay the moral price for a mistake?

            Unless these questions are answered and a consensus achieved, the world will not be ready for the implications of A.I., and those uninvolved in its development will likely pay the price. One must rise above myth and misconception, above utopianism and technophobia, and consider how this new technology will affect one’s own life and the lives of everyone one knows. Toward this end, one important question to complement the Gordijn inquiry might be this: Will A.I. help humanity attain something that would be utterly inaccessible otherwise?

This article was written by Bioethics Alumnus Andres Elvira
