What is Computer Ethics?

Excerpt: Stamatellos, Giannis (2007) Computer Ethics: A Global Perspective. Jones and Bartlett.

First we shape our tools, thereafter they shape us.

[Marshall McLuhan]



1.1 History and Definitions

Computer ethics as a field of enquiry originates from the work of MIT Professor Norbert Wiener, who first foresaw the revolutionary social and ethical consequences of information technology. In his books Cybernetics: or Control and Communication in the Animal and the Machine (1948) and The Human Use of Human Beings (first published in 1950), Wiener establishes the theoretical background for computer ethics, although he never used the term ‘computer ethics’. A decade later Donn Parker began to study unethical computerized activities and to collect various cases of computer crime.[1] Parker provides the first theoretical basis of computer ethics but, like Wiener, he never used the term. The term ‘computer ethics’ was first introduced by Walter Maner in the mid-70s, referring to the field of philosophical inquiry that deals with ethical problems aggravated, transformed or created by computer technology. In his book Starter Kit in Computer Ethics (1978) he develops a pedagogical framework for computer ethics for both teachers and students. The first complete presentation of the nature of computer ethics appears in the influential article of James H. Moor (1985), ‘What Is Computer Ethics?’. In this article Moor provides a first definition of computer ethics and a complete presentation of the ethical problems in the information society. According to Moor[2], computer ethics is to be defined as ‘the analysis of the nature and social impact of computer technology and the corresponding formulation and justification of policies for the ethical use of such technology’; he states that a typical problem in computer ethics arises because there is a policy vacuum about how computer technology should be used. Computers provide us with new capabilities and these in turn give us new choices for action. Often, either no policies for conduct in these situations exist or existing policies seem inadequate. 
A central task of computer ethics is to determine what we should do in such cases, i.e., to formulate policies to guide our actions. Of course, some ethical situations confront us as individuals and some as a society. Computer ethics includes consideration of both personal and social policies for the ethical use of computer technology.

Moor’s definition of computer ethics is the basis for modern IT scholarship and the many books, articles, magazines, organizations and university courses which have appeared in the field; it provides a focus for a discipline that oscillates between moral philosophy and computer technology.


1.2 Computer Ethics and Morality

Ethics is usually regarded as one of the main branches of philosophy along with metaphysics, logic and epistemology; it is generally defined as the systematic study of morality. This study involves questions of practical reasoning such as freedom, equality, duty, obligations and choice as well as the justification of judgments, rights and claims related to these terms. In general, ethics is related to the code or set of principles, standards, or rules that guide the actions of an individual within a particular social framework. Fundamentally, it is the field of philosophical study that is concerned with ‘moral judgement’, involving questions about human behaviour or conduct – how a person ought to act in a particular case and to what extent this action can be described as ‘right’ or ‘wrong’. The aim is to provide us with a rational basis for moral judgments, enable us to classify and compare different ethical theories and defend a particular position on an ethical issue.

Furthermore, ethics is divided into normative ethics and metaethics. Metaethics is the study of the nature and the basis of ethics; it is a philosophical discussion about moral concepts, practices and judgement outside ethical practice, dealing with problems concerning ethics, not with problems within ethics. It is concerned with the nature of moral obligation and duty, and of moral argument and ethical reasoning, within a wider theoretical context beyond ethical conduct and the bipolar decision-making process of ‘right’ and ‘wrong’ in moral judgement. Normative ethics, on the other hand, is the study of the norms, rules, values and standards that should guide our moral decisions — in other words, how we ought or ought not to act and behave and what we ought or ought not to do. The branch of normative ethics that deals with practical and everyday ethical issues — such as environmental ethics, business ethics, bioethics and medical ethics, and now computer ethics — is called applied ethics. It deals with the moral principles that help the individual to make the right decision about a specific ethical problem and on this ground to act responsibly.

Computer ethics is commonly defined as the systematic study of the ethical and social impact of computers in the information society. The ethical and social issues under discussion involve the acquisition, distribution, storage, processing and dissemination of digital data in information systems and how individuals and groups interact with these systems and data. According to Walter Maner[3], the ethical problems that arise from computer technology could take the following forms: (1) it may aggravate certain traditional ethical problems (e.g. new ways for the invasion of privacy); (2) it may transform familiar problems into analogous but unfamiliar ones (e.g. change the idea of ownership and intellectual property); (3) it may create completely new problems (e.g. computer viruses and hacking); and (4) it may help to resolve existing moral problems (e.g. crime and fraud). Within this framework, the central problems that are discussed in computer ethics are the issues of: (1) privacy and anonymity; (2) intellectual property; (3) computer crime; (4) security and control; (5) reliability of computers; (6) integrity of data; (7) freedom of information; (8) equality of access; (9) authenticity; and (10) the dependence of humans on machines. These issues are inter-related, underpinned by the special nature of digital information, and so their social impact on human life extends from politics, economics, health, science, education and entertainment to other special areas such as psychology, environment, culture and legislation.


1.3 Ethical Decision in Computing

Computer ethics deals with practical problems and focuses on the nature of moral action and responsibility: how do I know whether or not an action is morally right or wrong? On a practical basis, individuals can draw upon some personal or codified ethics: (1) the law (codifying moral principles, with negative consequences for disobedience); (2) codes of practice (guidelines for employees to work within); (3) professional ethics (ethical principles relevant to the members of a profession); (4) personal ethics (values that influence the actions of the individual, e.g. religious precepts). Beyond these, individuals can draw upon the ethical theories that emphasise the nature of moral action and provide the moral agent with a particular ethical principle on which to act. Fundamentally, every moral action involves two factors: (1) the person who acts, i.e. the moral agent with the accompanying responsibility, and the policies, rules and laws that are relevant in particular ethical decisions; and (2) those (including the moral agent) who are affected by the action. In computer ethics the latter includes individuals and groups that control, use, manage, design and develop information technologies and are affected by them. On this basis, with regard to the moral agent, an ethical action includes three related factors:

Intentions > Acts > Consequences

Each of these factors is the main emphasis in one of three groups of ethical theories relevant to computer ethics: virtue ethics (e.g. Aristotelian, for intentions and the character of the moral agent); deontological ethics (e.g. Kantian for intentions and acts); consequentialist ethics (e.g. utilitarian for consequences).

Virtue ethics has its roots in the philosophy of Aristotle (384-322 BC), who sets out his ethical theory in two influential works: the Nicomachean Ethics and the Eudemian Ethics. For Aristotle, ethics is a practical science that deals with the character and behaviour of the individual within the community. Aristotle bases his moral theory on a fundamental distinction between ‘means’ and ‘ends’. Whereas means are the acts which are done for the sake of something else, ends are activities which are done for their own sakes. According to Aristotle, there must be an ‘end for all means’, and the final end of all actions has to be happiness. Happiness (eudaimonia) is the activity of the soul in accordance with reason, the highest faculty of the human soul that is not shared either with animals or plants, and it is fundamentally related to moral virtue.

Moral virtue is a disposition concerning choice that is grounded on moderation; it is an intermediate state, a mean between two extreme states: deficiency and excess. For instance, the virtue of courage is a mean between the feelings of fear and rashness. Hence all choices of the moral agent have to be for actions between alternatives, based on the concept of the golden mean – a mean relative to the abilities of the moral agent. Based on this rationale, freedom is a presupposition for ethics. A voluntary action has to be internal (coming from the agent) rather than external (compelled by someone or something other than the agent). The moral agent becomes good and happy not only by choosing the right action but also by doing it in the right way. Thus, for an act to be virtuous a moral agent has to know that he or she acts virtuously and to choose the right action, i.e. one that is primarily performed for the happiness of the individual and the benefit of the society as a whole.

On the other hand, the central dictum of deontological ethics is that ‘every action is right or wrong in itself’, which means that our actions are right or wrong in themselves and not merely because they produce good or bad consequences for the individual or the society. The German philosopher Immanuel Kant (1724-1804) is the first thinker who systematically maintained the main lines of a deontological position. However, for Kant, as for Aristotle, freedom is a presupposition for ethics. Only a free moral agent is able to apply good will. Good will determines our choice of action in accordance with the commands of duty. Only good will is good in itself and an action is good only if it is done from a sense of duty, i.e. when it is based on good intentions which are rationally recognized by the moral agent independently of the consequences of the action or the preferences of the agent. A rational agent has to rely on the objective moral principles of a universal moral law and not on subjective moral principles or personal preferences. Kant states that an ethical system, in order to be effective, has to be universally true and valid for all rational agents. Thus, the moral law has to be based on a priori and unchanging moral principles independent of arbitrary personal beliefs, relative cultural customs and unpredictable circumstances.

In Kantian ethics the objective principles of the universal moral law have the form of commands or imperatives. All imperatives are either hypothetical (if you want A, do B) or categorical (do B, unconditionally). For Kant, only categorical imperatives can be moral imperatives, since they are the only ones which can reveal the absolute sense of duty and direct moral action towards the good (‘Act as if the maxim of your action were to become through your will a universal law of nature’). With categorical imperatives Kant aims to establish a universal law valid for all rational moral agents. The criterion in all imperatives is their universality. Since the criterion in all categorical imperatives is universally valid, it can be accepted as an absolute principle for moral action.

Finally, the emphasis of utilitarianism or consequentialism, in contrast to deontology, is on the consequences of moral action and not on the intentions or acts of the moral agent. The common element between the different consequentialist theories, including the various versions of utilitarianism, is that they judge the rightness or wrongness of an action according to its consequences rather than the intentions of the agent or any intrinsic rightness or wrongness of the act in itself. The fundamental imperative of utilitarianism is: ‘always act in the way that will produce the greatest overall amount of good in the world’. This imperative is known as the Principle of Utility.

On this basis, the aim of utilitarian ethics is to maximize good and minimize suffering for the greatest number of people. The philosophers who introduced this ethical theory were Jeremy Bentham (1748-1832) and John Stuart Mill (1806-1873). The theory was further developed in the works of G. E. Moore (1873-1958) and Kenneth Arrow (b. 1921). Bentham and Mill both understood the Principle of Utility as ‘the greatest happiness of the greatest number’ and both understood happiness as ‘pleasure, and the absence of pain’; Mill’s distinctive contribution was to distinguish between ‘higher’ and ‘lower’ pleasures. In a similar spirit, Moore proposed that we should strive to maximize ideal values such as freedom and justice, while Arrow suggests that what we should value is whatever satisfies the preferences of the moral agent.

A further controversial issue among the different utilitarian theories is whether the Principle of Utility should relate to the moral actions of the individual or take the form of a general moral rule or standard. On the one hand, according to act-utilitarianism the rightness or wrongness of each moral action depends on the utility it produces in respect of other possible alternatives; on the other hand, for rule-utilitarianism the moral actions of the individual are evaluated in accordance with a justified moral rule.

These theories provide us with a variety of ethical principles to guide our actions and decisions. They are available to the moral agent seeking answers to problems that remain unsolved by laws or other codified rules. However, in the case of computer ethics, modern philosophers and thinkers focus more on the deontological and the utilitarian ethical theories. These theoretical approaches emphasise moral action: deontological ethics the intentions of the moral agent, utilitarianism the consequences of the action. Together they offer a practical framework for decisions that affect both the actions of the moral agent, at a local level, and the development of the information society, at a global level.

[1] Cf. Parker (1968); (1979); (1990) passim.

[2] See Moor (1985), p. 266.

[3] Cf. Pecorino & Maner (1985), p. 1 ff.

Moor, James H. (1985) ‘What Is Computer Ethics?’ in Bynum, Terrell Ward, Computers and Ethics, Blackwell, pp. 266-75.

Parker, Donn (1968) ‘Rules of Ethics in Information Processing’ in Communications of the ACM, Vol. 11, pp. 198-201.

Parker, Donn, (1979) Ethical Conflicts in Computer Science and Technology. AFIPS Press.

Parker, Donn, S. Swope and B.N. Baker (1990) Ethical Conflicts in Information & Computer Science, Technology & Business, QED Information Sciences.

Pecorino, P. A., and Maner W. (1985) ‘The Philosopher as Teacher: Proposal for a course on Computer Ethics’ in Metaphilosophy 16, 4: 327-335.

Further readings

Baase S., (2003) A Gift of Fire: Social, legal, and ethical issues for computers and the Internet, second edition, Prentice-Hall.

Beekman G., (2003) Computer Confluence: Exploring Tomorrow’s Technology, fourth edition, Prentice-Hall.

Bynum, Terrell Ward (1998) Information Ethics: An Introduction. Blackwell Publishers.

Edgar, Stacey L. (1997) Morality and Machines: Perspectives on Computer Ethics, Jones and Bartlett Pub.

Ermann M. D., Williams M. B., and Gutierrez C., eds. (1990) Computers, Ethics and Society, Oxford: Oxford University Press.

Forester T., and Morrison P., (1994) Computer Ethics: Cautionary Tales and Ethical Dilemmas in Computing, Massachusetts: MIT Press

Johnson Deborah (2001) Computer Ethics, third edition, Prentice-Hall.

Rosenberg R., (1997) The Social Impact of Computers, 2nd edition, Academic Press.

Shrader-Frechette K., and Westra L. eds., (1997) Technology and Values, Rowman & Littlefield Pub.

Spinello R. and Tavani H., (2001) Readings in Cyberethics, Jones and Bartlett.

Stamatellos, Giannis. (2011) “Computer Ethics and Neoplatonic virtue: A Reconsideration of Cyber Ethics in the light of Plotinus’ Ethical Theory”. International Journal of Cyber Ethics in Education, 1 (1): 1-11.

Thompson B. W., ed. (1991) Controlling Technology: Contemporary Issues, Prometheus Books


