
The Promise and Peril of Artificial Intelligence, Data Analytics, and Algorithms

by
Noel Nevshehir, Automation Alley
March 31, 2022

Summary

AI, as opposed to the natural intelligence that powers the human brain, works like a jigsaw puzzle solver: it deploys pattern-recognizing algorithms that draw together seemingly disparate pieces of information to form a clearer picture of the underlying data.


To paraphrase Nobel laureate economist Ronald Coase, algorithms, like statistics, will confess to anything if you torture them long enough. His sardonic wit would be less compelling were it not for the fact that algorithms created at the intersection of artificial intelligence (AI) and Big Data are, in many ways, inherently biased. Weighted variables, groupthink, human preconceptions, and both conscious and unconscious manipulation are generally the culprits. Taken together, they serve as a gateway drug to misguided assumptions about our universe.
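
To see how easily data can be "tortured" into confessing, consider a minimal sketch in Python. It uses only synthetic noise and the standard library: test enough unrelated variables against an outcome, and one of them will eventually produce a seemingly strong correlation. Every number here is invented for illustration.

```python
# A minimal sketch of "torturing the data": test enough random
# variables against pure noise and one will eventually "confess"
# with a seemingly significant correlation. Synthetic data only.
import random
import statistics

random.seed(42)
n = 30          # observations per variable
trials = 200    # how many unrelated variables we try

outcome = [random.gauss(0, 1) for _ in range(n)]  # pure noise "outcome"

def correlation(xs, ys):
    """Pearson correlation coefficient, computed by hand."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Try many noise variables and keep the most "impressive" result.
best = max(
    (correlation([random.gauss(0, 1) for _ in range(n)], outcome)
     for _ in range(trials)),
    key=abs,
)
print(f"strongest 'finding' among {trials} noise variables: r = {best:.2f}")
# With this many tries, |r| near 0.5 on pure noise is routine:
# an apparently strong result with no underlying relationship.
```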

AI, as opposed to the natural intelligence that powers the human brain, works like a jigsaw puzzle solver: it deploys pattern-recognizing algorithms that draw together seemingly disparate pieces of information to form a clearer picture of the underlying data. It has come a long way since the concept was first introduced at Dartmouth College in 1956. Aided by neural networks and deep learning powered by supercomputers and sophisticated algorithms, AI today can crunch raw data into meaningful and actionable analyses. Machines now reveal hitherto veiled information, ushering in a new era of Enlightenment and a broader understanding of human reason and knowledge.

There is no such thing as too much information (TMI) when it comes to data capture, so long as we have the tools to make empirical sense of it all. But therein lies the challenge. AI, Big Data, IoT, and algorithms can be as flawed as the people who create them or, worse yet, influence them. They have already proved their dominion over humans in sometimes unflattering ways. For example, biometric software and digital assistants have demonstrated reckless bias against women and minorities, with the latter arbitrarily punished with longer sentences by our courts. The ethics, morals, and values of any technology, if such things exist, reflect those of the people who develop it, however innocent their premises may be. Simply stated, AI and its enablers mirror human behavior well in some situations but not so well in others. As we have already seen, the dystopian potential of algorithms cannot be overstated when nation-state actors deploy them as weapons of social engineering, propaganda, misinformation, and mass destruction. In Ukraine, as in most military conflicts, the first casualty of war is the truth.
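
One way such bias becomes measurable is by comparing error rates across demographic groups. The sketch below is a hypothetical audit with invented records and placeholder group names; it is not drawn from any real biometric system.

```python
# A hypothetical bias audit: given predictions and ground truth from
# a face-matching system, compare false match rates across groups.
# All records and group names below are invented for illustration.
from collections import defaultdict

# (group, predicted_match, true_match) -- synthetic audit records
records = [
    ("group_a", True, True), ("group_a", False, False),
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, True), ("group_b", True, False),
    ("group_b", True, False), ("group_b", False, False),
]

false_matches = defaultdict(int)
negatives = defaultdict(int)
for group, predicted, actual in records:
    if not actual:              # only true non-matches can yield false matches
        negatives[group] += 1
        if predicted:
            false_matches[group] += 1

for group in sorted(negatives):
    rate = false_matches[group] / negatives[group]
    print(f"{group}: false match rate = {rate:.0%}")
# Unequal rates across groups are the statistical signature of the
# bias described above, even when overall accuracy looks acceptable.
```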

Academia, government, and business and industry are affected by the limitations of AI in ways that were unimaginable 20 years ago. For example, manufacturers have fallen victim to algorithmic bias as they digitally design, engineer, and speed up the production of goods to beat their global competitors to market. While accelerating the pace of innovation and compressing product development cycles is critical to any company's success, it carries risks. Self-driving vehicles, navigating with millions of lines of computer code and multiple sensors, have been on the 'perpetual cusp' of being safe and road-ready. What can possibly go wrong when vehicles linked to the Internet of Things are exposed to sudden anomalies in road conditions or to cyberattacks on their onboard electronic systems (brakes and steering included)? To be sure, AI and its close cousin machine learning are akin to idiot savants that perform well at certain tasks but fall apart when faced with complex, unexpected inputs and other externalities. Machine-learning models are trained on events that have already happened; they cannot predict outcomes from data not yet statistically measured.
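
That last limitation is worth making concrete. The sketch below, using synthetic data and an ordinary least-squares line fit, shows how a model that looks accurate across its historical training range can fail badly on inputs it has never statistically measured.

```python
# A minimal sketch of the extrapolation problem: a model fit only on
# "events that have already happened" (inputs 0-10) looks accurate
# there, then fails on conditions outside its training data.
def world(x):
    # the real process: nearly linear at small x, but not at large x
    return x + 0.02 * x * x

# "Historical" training data: x from 0 to 10 only
xs = [i / 10 for i in range(101)]
ys = [world(x) for x in xs]

# Ordinary least-squares fit of a straight line y = a + b*x
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
a = my - b * mx

for x in (5, 10, 100):          # two in-range points, one far outside
    print(f"x={x:>3}: predicted {a + b * x:7.1f}, actual {world(x):7.1f}")
# In range the line is nearly perfect; at x=100 it is off by roughly
# 60%, because the curvature never appeared in the training data.
```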

In addition to AI's shortcomings in semi-autonomous vehicles, another example is perhaps more legendary. Recall how Boeing Co.'s 737 MAX airplanes superimposed 'smart' sensor technology on top of successive generations of 'dumb' machines. To bridge the cyber-physical divide, Boeing engineers placed layers of mechanical systems dating back to the 1960s under the control of modified algorithms and a tweaked Internet of Things. In this particular case, flawed data from angle-of-attack sensors fed the Maneuvering Characteristics Augmentation System (MCAS). Essentially, the software trimmed the 737 MAX into a dive with stability-control forces so strong that the pilots could not counteract them, resulting in two separate plane crashes and the loss of 346 lives in 2018 and 2019.
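
The failure pattern can be sketched, drastically simplified, in a few lines of Python. This is an illustration of single-sensor trust in a control loop, not Boeing's actual MCAS logic; every threshold and trim value below is an invented assumption.

```python
# A drastically simplified sketch of single-sensor trust -- NOT the
# real MCAS control law. A loop that believes one faulty sensor keeps
# re-applying nose-down trim until the correction exceeds what a
# pilot can offset. All numeric values are assumptions.
AOA_LIMIT_DEG = 15.0    # assumed stall-warning threshold
TRIM_STEP = 2.5         # assumed nose-down trim per activation
PILOT_AUTHORITY = 10.0  # assumed max trim a pilot can counteract

def faulty_sensor(true_aoa_deg):
    return true_aoa_deg + 20.0      # sensor stuck with a high bias

true_aoa = 3.0          # the aircraft's attitude is actually fine
total_trim = 0.0
for cycle in range(5):
    reading = faulty_sensor(true_aoa)
    if reading > AOA_LIMIT_DEG:     # system believes the nose is too high
        total_trim += TRIM_STEP     # ...so it trims nose-down again
    print(f"cycle {cycle}: sensor={reading:.1f} deg, "
          f"cumulative trim={total_trim:.1f}")

if total_trim > PILOT_AUTHORITY:
    print("cumulative trim now exceeds pilot authority")
# Cross-checking a second, disagreeing sensor, or capping cumulative
# trim, would break this loop before it became unrecoverable.
```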

Along with poor data inputs, another major contributing factor was that the MCAS software was never fully explained to pilots, leaving them unable to realize that their ill-fated planes' flight computers believed they were in an excessive nose-up attitude. Boeing's rush to make its jets more fuel efficient and cost effective to better compete with Airbus may also have contributed to this disaster. Tangentially, this calls into question Isaac Asimov's laws of robotics, which hold that robotic systems should not be programmed to override human decision making. Beyond integrating innovative software solutions aimed at augmenting legacy systems, gremlins can also lurk in physical equipment digitally designed from scratch. The crucial difference is that, from the outset, the latter is more adept at seamlessly embedding, integrating, and scaling up software as the product is being fabricated. This allows engineers to evaluate products in process with both the cyber and physical variables in mind.

Japan recently developed Fugaku, the world's fastest supercomputer, with a processing speed of 415.5 petaflops, or 415.5 quadrillion calculations per second. Comparatively speaking, the human brain is estimated to run at the equivalent of only 10-100 petaflops. Imagine if the data inputs absorbed by Fugaku are somehow faulty and serve only to hasten the output of critical yet flat-out erroneous information. We have already witnessed how quickly we rush to judgment over social media posts or the 24-hour news cycle without taking the time to fully chew, swallow, and digest what is fed to us. In addition, supercomputers clearly lack the human capacity to reason, perceive, sense, and express empathy and compassion toward others.
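
The raw comparison is easy to check with back-of-the-envelope arithmetic; note that the brain figure is a rough, commonly cited estimate, not a measurement.

```python
# Back-of-the-envelope arithmetic for the comparison above.
fugaku_flops = 415.5e15                # 415.5 petaflops
brain_low, brain_high = 10e15, 100e15  # rough estimate range for the brain

print(f"Fugaku vs. brain (low estimate):  {fugaku_flops / brain_low:.1f}x")
print(f"Fugaku vs. brain (high estimate): {fugaku_flops / brain_high:.1f}x")
# Roughly 4x to 40x raw throughput: speed that amplifies whatever
# the inputs contain, including errors.
```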

According to English biologist and anthropologist Thomas Huxley, "the great tragedy of science is the slaying of a beautiful hypothesis by an ugly fact." Computer programs and machine learning can study past procedures, learn from mistakes, adopt best practices, reduce error rates, and serve as tools for continuous improvement. Yet the information we hastily extract from techno-optimistic data analysts often collides with humbling reality and facts on the ground. Just imagine for a moment how reasonably intelligent people and critical thinkers can observe the same set of facts yet arrive at varied conclusions. Historically, these alternative realities have metastasized and cascaded downward to form 'foundational' knowledge that sacrifices rigor for impact and attention-grabbing headlines. On a positive note, although AI will never match human intuition, its trial-and-error approach to breakthrough innovation has already helped us mitigate bias and the unscientific use of scientific "discovery."

Noel Nevshehir, Automation Alley

Noel Nevshehir is director of Automation Alley’s International Business Services and Global Strategic Partnerships. In this role, Nevshehir is responsible for leading Automation Alley’s trade mission program and foreign direct investment efforts. He is also responsible for seeking out global strategic partners that align with Automation Alley’s Industry 4.0 mission.
