Artificial Intelligence: A Blessing or a Curse?


By José de la Rubia, CEO, Quanta

This article is by Featured Blogger José de la Rubia from his LinkedIn page. Republished with the author’s permission.

With Google, Amazon, Facebook, Microsoft, Apple, and other tech giants entering the field, people have offered their own interpretations of AI from their own perspectives, some of them wide of the mark.

There are two things companies, whether large or small, need to be careful of. The first is pursuing unrealistic products that require futuristic technologies. The second is misreading market demand, chasing a quasi-demand: products the company believes are necessary but that the market does not actually need.

So, what profound changes can we expect AI to bring to people’s lives in the short run? There is huge room for improvement, starting with decision support. Through mass data and cross-platform integration, the decision-making process can be optimized. For example, a doctor can only ever review a limited number of medical cases. For junior doctors in particular, the question is how they cope with an unprecedented case when doing image diagnosis of cancer. Information technology can gather and integrate similar cancer cases from across the country, or even around the globe, automatically matching the features of a new case against past ones to help doctors judge the nature and severity of the illness.
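To make the idea concrete, here is a minimal sketch of that kind of case matching. It assumes, purely for illustration, that each past case has already been reduced to a numeric feature vector (for instance, image-derived measurements); the function and feature names are hypothetical, not any particular system:

    import numpy as np

    def most_similar_cases(new_case, past_cases, k=5):
        """Return indices of the k past cases whose feature vectors
        are closest to the new case by cosine similarity."""
        past = np.asarray(past_cases, dtype=float)
        query = np.asarray(new_case, dtype=float)
        # Normalize rows so that dot products become cosine similarities.
        past = past / np.linalg.norm(past, axis=1, keepdims=True)
        query = query / np.linalg.norm(query)
        scores = past @ query
        return np.argsort(scores)[::-1][:k]

    # Hypothetical history: each row is one past case described by
    # image-derived features (e.g., lesion size, texture statistics).
    history = [[0.8, 0.1, 0.3], [0.2, 0.9, 0.5], [0.7, 0.2, 0.4]]
    print(most_similar_cases([0.75, 0.15, 0.35], history, k=2))

In a real system the retrieved cases would be shown to the doctor alongside their documented outcomes, leaving the final judgment to a human.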

Likewise, the massive power of artificial intelligence will give the technology a historic significance of its own.

But what stage has AI technology actually reached? Is the arrival of machine intelligence a blessing or a curse?

Fear of the unknown

AI has not drawn universally positive reviews, and investment in the industry has seen its ups and downs.

AI is commercialized much faster than other technologies. Freshly circulated research papers can become course assignments for students at top universities in Silicon Valley and worldwide, and may even be quickly developed into real-life products. This pace is rooted in the nature of the industry.

Everyone is acting quickly.

Up to now, the ultimate purpose of this academic sprawl has been to understand AI or to build AI machines, and ideally both. Traditionally, AI research was led by computer scientists; now disciplines such as psychology and neurology are becoming more and more involved in it.

Neurology and computational neuroscience have generated masses of research papers with divergent results. Only a few days ago, Professor Murray Shanahan, in an excellent article, speculated about non-human consciousness, looking for “candidates for membership of the space of possible minds.” Another recent article described how human brains manage to locate places, indicating that certain cells function as a kind of GPS for navigating geographic locations. These findings share common ground with AI research.

So much so that it is reasonable to ask a direct question: could machine learning one day leap past its reliance on large-scale data and be achieved instead through simulation of the human brain?

Intelligent apps and chatbots now face a huge search space and need human experience to narrow it systematically. For example, if you want to find a ping-pong ball on a basketball court with no information, all you can do is search blindly, inch by inch. But once you learn certain facts, such as where ping-pong balls are usually kept, where players tend to gather, and how far a ball can roll on the floor, you are far more likely to find it. Data gives you answers and solutions. Without data or information, it costs an enormous amount of time (or a great deal of computing power) to find a tiny ball in a huge space. We must find boundaries that narrow the search to the most likely areas.
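As a toy illustration of how prior knowledge shrinks a search space, here is a small sketch comparing a blind scan of the court with one that inspects the likely region first. The grid size, ball position, and prior are all made up for the example:

    import random

    WIDTH, HEIGHT = 100, 50  # the court modelled as a grid of cells

    def search(target, cell_order):
        """Count how many cells are inspected before the ball is found."""
        for steps, cell in enumerate(cell_order, start=1):
            if cell == target:
                return steps
        return len(cell_order)

    all_cells = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]
    ball = (37, 4)  # the ball came to rest near the table (a low row)

    # Blind search: with no information, one ordering is as good as any other.
    blind = random.sample(all_cells, len(all_cells))

    # Informed search: the ball rarely rolls far, so inspect the rows
    # nearest the table before the rest of the court.
    informed = sorted(all_cells, key=lambda cell: cell[1])

    print("blind steps:   ", search(ball, blind))
    print("informed steps:", search(ball, informed))

On average the blind scan inspects half the grid, while the informed ordering finds the ball within the first few rows; that gap is exactly what data and human experience buy us.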

And I don’t think this is conclusive evidence of a major breakthrough in AI. An outrageously expensive, sci-fi-looking gadget, say Google Glass or the Apple Watch, that everyone wants to wear at least once (but that is a hard sell to anyone who isn’t a fitness diehard obsessed with having a GPS chip on their wrist or glasses) is genuinely sophisticated technology. But can it plausibly be applied to other domains? Perhaps to some, but it is too early to say it represents a big leap for the entire industry.

Some believe that at some point in the future AI will surpass human beings in every kind of capability. In other words, a computing technology would establish itself by its capacity to outperform humans. Can you imagine, for a moment, that whether for searching and archiving or for everyday reference, libraries could be considered obsolete and easily replaced by a simple database?

Banking security is a different matter entirely. It has become impossible to rely on human labour for credit card fraud detection. The only workable approach is to design a system that tracks changes in the data to draw the contour of a model of normal behaviour, against which abnormal transactions can be flagged. In this sense, yes, machine capacity exceeded that of humans a long time ago. Human problem-solving capacity will be challenged on multiple fronts.
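Here is a minimal sketch of that idea, assuming a per-customer profile of normal spending and flagging anything far outside it. The threshold and the single feature (amount) are illustrative; real fraud systems learn far richer models over many signals:

    import statistics

    def build_profile(amounts):
        """Summarize past transactions as mean and standard deviation,
        a crude contour of a customer's normal behaviour."""
        return statistics.mean(amounts), statistics.stdev(amounts)

    def is_abnormal(amount, profile, threshold=3.0):
        """Flag a transaction lying more than `threshold` standard
        deviations away from the customer's usual spend."""
        mean, stdev = profile
        if stdev == 0:
            return amount != mean
        return abs(amount - mean) / stdev > threshold

    history = [12.5, 30.0, 22.4, 18.9, 25.0, 15.7, 28.3]  # past card spend
    profile = build_profile(history)

    for amount in (24.0, 950.0):
        print(amount, "abnormal" if is_abnormal(amount, profile) else "normal")

No human could run even this crude check across millions of transactions in real time, which is precisely why the machine won this ground long ago.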

We can sense fear of AI among the public. Believers say AI will see explosive growth in the coming decades, pushing past the limits of today’s technology to ultimately surpass human wisdom.

Transparency vs knowledge

As the European Union continues to clamp down on the perceived misuse of people’s digital information, analysts say that many Silicon Valley giants are responding to these privacy concerns by increasingly offering individuals and companies the ability to keep information close to home, whereas in the past data might have been stored solely in the United States.

But the source of fear is mostly a lack of knowledge.

To give an example that may sound somewhat off track: we all fear AIDS, yet doctors are not afraid of contact with AIDS patients, because they know the disease well and understand the worst-case scenario. Knowing nothing about a situation opens up a vast space for wild imagination, and that is the source of fear.

People worry about the free will of machines. The reality is far from what they imagine. The core problem is how human beings choose to deploy the technology. To take an extreme example, a robot can save or kill, depending on the order it receives; it follows its commander to the letter and never considers the purpose or consequences of its actions. By this line of thinking, a robot is no different from a weapon. The key is who uses it, and how.

Haste, and a lack of transparency too

That creates regulatory challenges. Autopilot technology is a good example. We are well aware that emerging technologies are destined to challenge industry norms and legislation; such is life. Google’s self-driving cars and Tesla’s technologies come with warnings. Risk prevention should be built into the development of technology, and we need time: new legislation and rules will phase in as human conduct changes.

The difficulty of Artificial Intelligence regulation

Regulation poses considerable challenges. The White House Office of Science and Technology Policy released a report on the issues of fairness and transparency, raising two distinct concerns:

  • The need to prevent automated systems from making decisions that discriminate against certain groups or individuals.
  • The need for transparency in AI systems, in the form of an explanation for any decision.

To quote: "Use of AI to make consequential decisions about people, often replacing decisions made by human-driven bureaucratic processes, leads to concerns about how to ensure justice, fairness, and accountability… Transparency concerns focus not only on the data and algorithms involved, but also on the potential to have some form of explanation for any AI-based determination. Yet AI experts have cautioned that there are inherent challenges in trying to understand and predict the behavior of advanced AI systems."

The European Union has been thinking along the same lines and, following an earlier directive, released a document in April 2016 with the intention of enforcing its rules by 2018. In the EU document, besides the usual points about what personal data can be collected, the same two concerns are raised and legal countermeasures proposed.

The risk is that attempting to regulate for fairness could effectively outlaw any fully automated system from making decisions about a person. Equally, the requirement of a “right to an explanation of the decision reached after algorithmic assessment” could lead to other unintended consequences.

The rationales for control are diverse, including concerns ranging from deindustrialization to dehumanization, as well as worries about the “fairness” of the algorithms behind AI systems. 

Policymakers must ensure they fully understand both the limits and the promise of the technologies they address. Many AI technologies pose little or no risk to safety, fair market competition, or consumer welfare. These applications should not be obstructed by an inappropriate regulatory scheme aimed at an entirely separate technology; they should be distinguished and exempted from regulation as appropriate.

Other AI technologies may warrant more regulatory consideration if they generate substantial risks to public welfare. Still, regulators should proceed cautiously. 

Originally published on LinkedIn