The consequences of AI without trust

People buy from people they trust, they trust people they like, and they like people they connect with. Correspondingly, artificial intelligence (AI) must earn people’s trust and eliminate all kinds of bias if it is to sustain and scale.

According to Gartner, 79% of organizations are exploring or piloting AI projects, but just 21% say their AI initiatives are in production.

When an AI initiative fails, the commonly cited reasons are:

§ Inadequate technology selection

§ Poor vendor qualification

§ Insufficient people skills

§ Unavailability of good data

But beyond all these factors, there is one responsibility leaders and stakeholders must own to keep their AI initiatives alive: trust.

Even today, many consumers remain skeptical about AI and are unsettled by what they read and hear. A Pega study shows that only 25% of banking customers would trust a decision made by an AI system over one made by a person regarding their qualification for a bank loan.

So, beyond technology, leaders also worry about “artificial stupidity,” which can pose real dangers to AI investments if handled inadequately. However, there is no universal threshold for measuring people’s trust in AI; it depends on wide-ranging factors such as:

§ AI maturity

§ User type

§ Human acceptance

§ Technology awareness

§ Domain knowledge

§ Ethics

§ Importantly, the business segments where AI is deployed to help humans

So, as of today, the leading challenge for enterprise leaders is not just adopting AI, but adopting trustworthy AI.

Why should we trust AI?

In general, trust and trustworthiness are essential ingredients of everyday life that shape how we see, think, feel, accept, and act. For instance, we earnestly seek out trusted doctors for our treatments, medications, and vaccinations. The same applies to other elements of our lives, such as government, science, technology, and even the vehicle mechanic, but the degree of trust required fluctuates from one element to another: the trust we expect of a vehicle mechanic is nowhere near as critical as the trust we desperately place in our doctors.

Coming to AI, it is a one-of-a-kind, continuously evolving modern technology. Ideally, stakeholders should evaluate employees’ and customers’ trust in their AI, then establish and regularly re-establish that trust as the initiative scales; otherwise, the AI risks becoming ineffective through unintended human resistance. An ongoing example is how people view Tesla’s Level 5 self-driving autonomy: even though the Level 4 series earned people’s trust and wide acceptance, Level 5 technology will need to earn that trust all over again to succeed in the market.

What is the degree of trust in AI for success?

For enterprise businesses, there are three genres of AI:

1. AI for internal operations

2. AI for business-to-customer (B2C) operations

3. AI for business-to-business (B2B) operations

AI for internal operations covers IT infrastructure, the contact center, the IT service desk, application deployments, and other horizontal functions such as HR, finance, and legal. AI for B2C is anything and everything exposed to end customers, such as a chatbot. Finally, AI for B2B belongs to AI consultants, engineering firms, companies that sell AI as a product or service, and many more.

Enterprise leaders applying AI only to internal operations do not need to sweat much over enabling people’s trust in AI; there are several established behavioral change management procedures they can follow to achieve the expected results. Nor is a high degree of trust always necessary across all three segments, which is why some CIOs succeed with AI without deliberately cultivating people’s trust, while others fail and blame AI and its engineering practices. The real game is all about:

1. Selected business segments for AI

2. Business criticality

3. AI maturity

4. Involved human elements

5. Infrastructure and data

6. Established consumer trust for acceptance

In any spectrum of business, the sweet spot is derived from how stakeholders attend to AI transparency, AI performance, and outcomes. Bringing appropriate and effective AI is always important, but enabling trustworthy AI starts with good engineering and data practices and ends with ethical practices, especially at the touchpoints where augmented AI meets human elements. Whether the setting is sensitive customer-facing operations or internal processes, this groundwork plays a critical role in cultivating people’s trust in AI and in becoming successful with it.
