By Dean Anthony Gratton

Artificial Intelligence: Ethics & Moral Responsibility

Artificial intelligence and its assurance of ethical and moral responsibility starts with you. Let me explain…

As we already know, the hype, hysteria and concerns surrounding the accelerated development of this thing called artificial intelligence are growing ever louder. Yet there are those who think we should now mandate and govern a concept which, by the way, is not entirely understood.

The Rise in Ethical & Moral Awareness

When you think about AI ethics and moral responsibility, you would ordinarily consider the governance of computers, machines, or any other electronic device. Of course, it’s a natural reaction to believe that these devices need to be controlled, but it’s a tad more unsettling and unnerving than that. Artificial intelligence ethics and moral responsibility are not focused solely on a machine but, rather, on the engineers developing the technology that enables AI systems. After all, artificial intelligence is nothing more than clever programming and smart technology.

Artificial intelligence is assistive technology, and it is nothing more than clever programming and smart technology.

With the realization that it’s not necessarily a device that needs to be governed, how do we measure the ‘creator’s’ sense of ethics and morality? I realize, and so should we all, that when developing such technology, the most significant Achilles’ heel in AI’s design is its people: the innovators, architects, designers, and engineers, for they are at the heart of where all our Hollywood nightmares might come true. With this in mind, how can innovators protect their innovations from untrustworthy wrongdoers who would harm those completely unaware of their unsavory intentions?

Ethical Software

So, ethics and moral responsibility, along with terms such as ‘responsible AI’ and ‘AI bias’, all begin at the conception phase of any prospective development project. We have many technology companies, or the ‘tech giants’ as they’re called, around the world, collectively eager to develop new AI algorithms that are better than their competitors’ and always keen to make their respective technologies iteratively smarter. Likewise, how these algorithms collect, assess, and interpret data is governed by software, which follows the instructions of the programming code implemented by an engineer. The result, outcome or behavior is likewise imposed by the software, and its reaction is normally predictable.
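To illustrate the point, a conventional program’s behavior is fully prescribed by its code: given the same input, the same rule fires every time. A minimal sketch in Python (the function name and thresholds are purely illustrative, not from any real system):

```python
# A conventional, rule-based program: its behavior is entirely
# prescribed by the engineer, so its outcome is predictable.
def classify_temperature(celsius: float) -> str:
    # The thresholds are hard-coded by the programmer; the
    # program cannot revise them from experience.
    if celsius < 0:
        return "freezing"
    elif celsius < 25:
        return "moderate"
    else:
        return "hot"

print(classify_temperature(-5))   # freezing
print(classify_temperature(30))   # hot
```

However clever the rules look from the outside, every outcome here was decided in advance by a person, which is exactly where the ethical responsibility sits.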

With data, if we know the past and understand the present, can we predict the future?

But with machine and deep learning techniques and the ongoing development of Artificial Neural Networks (ANNs), behaviors and outcomes can vary depending on how the algorithm has been designed and implemented. These algorithms still present somewhat predictable results, since their behaviors must be prescribed in advance, although, in some instances with advanced deep learning and ANN development, an algorithm may derive new behaviors from the data it receives. All data is composed and compiled into new datasets and used for new experiences, allowing the system to correlate new and previous events. Such advanced algorithms can develop their own source code and begin to make predictions with datasets from previous experiences. The adaptability of such algorithms affords the AI-capable system an insight whereby it can rely on previous experiences, assess what is currently being presented to it and make a new ‘decision’ based on the new information it has received. In some instances, an algorithm might be capable of predicting a future event based on previous and current experiences, albeit only with good quality data.
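By contrast with rule-based software, a learning algorithm’s behavior comes from data rather than hard-coded rules. A minimal sketch of the idea, using plain least-squares line fitting on a toy dataset (all the numbers are invented for illustration, and real AI systems are vastly more sophisticated):

```python
# A minimal learning sketch: fit a straight line to past
# observations, then 'predict' a future value. The data are
# invented for illustration; real systems need far more, and
# far better-quality, data.
past_x = [1.0, 2.0, 3.0, 4.0]   # e.g. time steps already seen
past_y = [2.1, 4.0, 6.2, 7.9]   # e.g. measurements at those steps

n = len(past_x)
mean_x = sum(past_x) / n
mean_y = sum(past_y) / n

# The slope and intercept are learned from the data rather than
# prescribed in advance by the programmer.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(past_x, past_y))
         / sum((x - mean_x) ** 2 for x in past_x))
intercept = mean_y - slope * mean_x

def predict(x: float) -> float:
    return slope * x + intercept

# 'Predict the future' from past experience:
print(round(predict(5.0), 2))   # → 9.95
```

Feed it different past data and the same code produces different behavior, which is precisely why the quality of the data, and the intentions of the people who chose it, matter so much.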

Until next time…

Your company or government agency may choose to use an artificial intelligence ‘how to’, ‘guidebook’, or procedural policy, which describes the ‘dos and don’ts’ of ethical and moral software and hardware development—if they don’t, then ask, “Why not?” Such guides should become the fabric of a company-wide policy for existing and new employees to abide by. This internal bible aids those who are about to embark upon a new AI project. But how do we measure their adherence and compliance? After all, this applies not just to employees and management, but also to those who devised the policies in the first place. Along with a suitable guidebook, companies need to remain open and honest regarding their use of AI algorithms and inform their customers or consumers how their information is used. You must establish trust with your community of users and, perhaps, publish a guiding set of principles initially defined by the creators of the system(s).

As such, these bodies should lay bare the purpose and scope of the AI systems being used. Ultimately, we are responsible, and therefore accountable, for the technology we implement and deploy. Yes, the overall responsibility sits with you! Naturally, your government and its agencies, along with industry and businesses, will have varying perspectives on what constitutes ethics and morality, and how they relate to the company as a whole, its employees, and consumers. Nevertheless, safety, security, privacy, unfair bias and so on are just some of the considerations. Admittedly, this is a minefield, and it seems no one is right or wrong, since each follows its own philosophy.

So, this is where a “very ethical and morally aware” Dr G signs off.

An abridged version of a chapter from "Playing God with Artificial Intelligence."



A technology influencer, analyst & futurist

I dispel the rumours, gossip and hype surrounding new technology.
