The digital ecosystem shaping our world is vibrant and fast-moving. It has blurred the boundaries between what is physical and what is virtual. New business paradigms are being forged daily, turning challenges into opportunities and data points into foresight.


But that’s only half the story.

Is technology shaping societal norms? What are the philosophical implications of automation and artificial intelligence? Have we imbued technology with positive value-creation possibilities? Does technology facilitate truthful communication?

The above and other ethical and ontological questions were at the center of a dialogue I had with Google’s EMEA President Carlo d’Asaro Biondo. We were both hosted by the President of Malta, Her Excellency Marie Louise Coleiro Preca, on the occasion of the launch of the book ‘Connected World’ by Philip Larrey. The author holds the Chair of Logic and Epistemology at the Pontifical Lateran University in the Vatican. For years he has been following the philosophical implications of the rapid development of artificial intelligence and similar tools which are re-shaping our world.

In Connected World, Larrey explores the consequences of the new digital age through a series of conversations with thought-leaders including Sir Martin Sorrell, CEO of WPP; Eric Schmidt, Executive Chairman of Google’s parent company Alphabet; and Maurice Lévy, CEO of Publicis Groupe. Ranging from the death of privacy to the rise of artificial intelligence, Connected World asks the existential questions which will come to define our age. This was the springboard for the discussion Carlo and I had in Malta.

Carlo d'Asaro Biondo from Google and Gege Gatt

 

On the subject of technology neutrality

It is often argued that technology is neutral. This view proposes that the effect of technology is solely dependent on how we use such tools.

I argued against this thesis. Technology, like science, is a human endeavour guided by values and by conceptions of what is good or desirable to achieve. Technology is therefore made to serve a specific purpose and to achieve specific aims for a specific audience. When we examine technology we must review two dimensions: (i) the tangible invention with its intended goals, and (ii) the achieved uses in the intended market. The view that technology is neutral is plausible only in so far as it considers observed uses abstracted from intended goals. That approach is awkward because, ab initio, technology has intended functions (a kettle’s purpose is to boil water; its use as a hammer is neither apt nor desirable) which are connected to the realisation of its expected goals. Thus, form cannot be divorced from function.

The WannaCry cryptoworm was designed to infect computers and demand ransom payments. Its observed use followed its intended purpose: it infected some 200,000 computers across 150 countries and caused an estimated $4bn in economic losses. Assertions of neutrality may often be veiled attempts to escape responsibility for intended consequences, and should therefore be debated thoroughly.


H.E. The President of Malta with Carlo d’Asaro Biondo (Google President EMEA), Fr. Joe Borg (Moderator) and Dr. Gege Gatt (CEO ebo.ai)

Value-Sensitive Design

During the design phase of a technology, its social consequences are malleable; during use they are largely set in stone. Consequently, technologists must pay attention to ethical issues in the early stages of the product’s engineering lifecycle.

In software architecture we use many design frameworks that emphasise maintainability, cost control or efficiency. My proposal is to inject value-sensitive design principles into these architectural paradigms and to treat those values as litmus tests prior to delivery. Such value-sensitive propositions would drive engineers to design technology for inclusivity and for human well-being. A good example may be found in the 2017 Asilomar AI Principles, which propose specific value-sensitive guidelines for AI deployments.
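To make the idea of a pre-delivery litmus test concrete, here is a minimal sketch, in Python, of how a team might gate a release on a small set of value checks. The ReleaseCandidate fields, the chosen values and the check questions are illustrative assumptions, not a standard framework; any real gate would encode the values a given team has committed to.

```python
# A hypothetical "value-sensitive litmus test" run before delivery.
# The fields and checks below are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class ReleaseCandidate:
    name: str
    # Evidence flags a team might record during the engineering lifecycle.
    privacy_review_done: bool = False
    accessibility_audit_done: bool = False
    bias_evaluation_done: bool = False
    data_retention_documented: bool = False


# Each check maps a value (privacy, inclusivity, fairness) to a concrete question.
VALUE_CHECKS = {
    "privacy": lambda rc: rc.privacy_review_done and rc.data_retention_documented,
    "inclusivity": lambda rc: rc.accessibility_audit_done,
    "fairness": lambda rc: rc.bias_evaluation_done,
}


def litmus_test(rc: ReleaseCandidate) -> list[str]:
    """Return the values whose checks fail; an empty list means the gate passes."""
    return [value for value, check in VALUE_CHECKS.items() if not check(rc)]


if __name__ == "__main__":
    candidate = ReleaseCandidate(
        name="assistant-v1",
        privacy_review_done=True,
        accessibility_audit_done=False,
        bias_evaluation_done=True,
        data_retention_documented=True,
    )
    failures = litmus_test(candidate)
    if failures:
        print(f"Release blocked; unmet values: {', '.join(failures)}")
    else:
        print("All value checks passed; release may proceed.")
```

The point of the sketch is simply that values become testable artefacts in the delivery pipeline rather than aspirations left for after launch.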

Value Promotion in an Individualistic Era

Can technology promote values? Can the Internet transmit moral values in an age where individualism has created a digital cacophony of disparate self-centered voices?

I believe that individualism is not an effect of technology. Perhaps social technologies have been an aggravating factor, leading to a higher level of self-interest; however, technology has merely evidenced the deeply changing society we live in. Individualism as a moral stance implies that a person acts in his or her own interest without concern for societal requirements.

Individualism gained speed when we collectively moved away from belief systems and religions, when we allowed a xenophobic fear and distrust of strangers to set in, and when we expressed a lack of trust in government and in structures which have, for years, served us. This social distancing has led to fewer local, interpersonal relationships, and we have subsequently been drawn to the cult of ‘me’. As Derakhshan put it (Wired, 19/10/17): “…instead of a quest for knowledge we see a zest for instant approval from an audience, for which we are constantly (but unconsciously) performing”.

For technology to promote values we should focus our efforts on the design phase (see above) and also ensure that, at every stage of computing (from acquiring data to re-transmitting it), we implement and safeguard recognised values. These may include privacy principles relating to fairness, transparency or even social justice. A key value to be promoted in this regard (and perhaps safeguarded through policy) is informational self-determination: the capacity of the individual to determine the disclosure and use of his or her data. Similarly, the Internet can transmit values of social justice by ensuring that a larger number of citizens are given access to Government so as to further enrich it. The utilisation of this public resource does not erode the state; it legitimises it further by providing access to the distributed knowledge that it represents (more on this here).
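As a brief illustration of informational self-determination enforced at one stage of computing, the sketch below only allows a record to be re-transmitted if its subject has consented to that specific purpose. The ConsentRecord structure, the purpose labels and the retransmit function are hypothetical, offered only to show how such a value can be checked in code rather than left to policy documents.

```python
# A hypothetical consent check before re-transmitting personal data.
# Structures and purpose names are illustrative assumptions only.

from dataclasses import dataclass, field


@dataclass
class ConsentRecord:
    subject_id: str
    # Purposes the individual has explicitly agreed to, e.g. {"service-delivery"}.
    allowed_purposes: set = field(default_factory=set)


class ConsentError(Exception):
    """Raised when a requested use of data has not been consented to."""


def retransmit(record: dict, consent: ConsentRecord, purpose: str) -> dict:
    """Forward a data record only if its subject consented to this purpose."""
    if purpose not in consent.allowed_purposes:
        raise ConsentError(
            f"Subject {consent.subject_id} has not consented to '{purpose}'"
        )
    # In a real system this would hand the record to the downstream service.
    return record


if __name__ == "__main__":
    consent = ConsentRecord(subject_id="u-42", allowed_purposes={"service-delivery"})
    try:
        retransmit({"subject": "u-42", "email": "user@example.com"}, consent, "marketing")
    except ConsentError as err:
        print(f"Blocked: {err}")
```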

Ultimately, although technology is centered on abstract engineering functions, it is a central concern for us humans, and we have the responsibility to shape it positively to create a better world.


Discussing ‘Connected World’, a book by Philip Larrey.