
These days, there is ample hype, not to mention plenty of articles, on digital transformation, artificial intelligence (AI), and machine learning (ML). So much so that even those of us in the midst of such discussions tend to get confused. The terms and their meanings keep changing, depending on which vendors' marketing and sales groups are presenting and the particular "flavor" of a given week. The takeaway for many is that this whole thing is brand new and cutting edge (even "edge" is a thing now). This creates some stress as we realize management is looking at implementing all this stuff in already chaotic, often failing, systems. We are in the middle of Alvin Toffler's "Future Shock," due to the rapidity of change in both society and business. Or are we?

Back in the 1990s, at the University of Illinois at Chicago (UIC), I taught about what we now call AI, ML, digital twins, and other technological advancements (including industrial statistics, now known as operations, reliability, and data science). Have things changed since then, and by how much? I've gone back to school on it.

Over the past year, I've recertified in ML online through several different university programs: Stanford University's advanced machine-learning coursework with Andrew Ng (a legend in the field); the University of Michigan's (UofM) School of Information digital-science courses; and the University of Virginia's Darden Business School coursework on "Leading the Modern-Day Business," which starts with "Digital Transformation." While these have been great exercises in re-orienting the academic side of my career, they also provided some very interesting insights.

Note: The above educational options are all available on the Coursera.org platform as self-paced courses.

Much of what we see as ML and AI has not changed much, if at all, since the concepts were originally floated in the 1970s, which, interestingly enough, was about the same time the novel Future Shock was released. The ideas of automation, neural networks, and even machine-learning techniques came into existence then, and we've been discussing them in the Institute of Electrical and Electronics Engineers (IEEE) and at universities and national labs ever since. However, limited computing power meant these concepts could only be run on very large systems. My first exposure was in the 1980s with my dad at Argonne National Laboratory, where he led the Dept. of Environmental Research office. To do even the most basic ML project, big disk sets about the size of a gallon paint can had to be loaded into a room-sized machine. Each of these held about 1,024 kbytes (one megabyte) of information. As systems became smaller and Central Processing Units (CPUs, or "cores") became more powerful and dense, we advanced much further in computer science.

Fun Fact: Nikola Tesla patented the first “electrical logic circuits” called “gates” or “switches” in 1903.

It wasn’t until after 2000, when multiple cores per processor were available in desktop and laptop computers (April 2006, Intel), that things really started moving forward. Prior to that time, each processor could run a single problem at a time through a single core. Large servers were required to run anything considered ML or AI before then.

The company I worked for in the mid-1990s, before entering the academic side at UIC, was TK Design. There, we were developing ML- and VRML (Virtual Reality Modeling Language)-based "digital twins," with our largest cost of business being the prototyping servers. Virtual Reality and Computer-Generated Imagery (CGI) continued making leaps and bounds through gaming and movies, which allowed for "Augmented Reality" (AR) and eventually made their way into smartphones. Even in the 1990s, TK Design was working on VRML systems that included controllable, animated "fly-throughs" of production equipment and the ability to train operators at a large food processor through shutter glasses and hand-held remotes.

A concept we couldn't complete before TK Design folded was the introduction of faults and the ability to pull up parts lists, repair instructions, and other reliability/maintenance tasks within the VRML model. Aside from the food-processing company that funded the work, the industry wasn't ready for those concepts, and TK lost out to a competing organization that received a military-based grant.

The ideas were floated out to industry through the latter '90s and early 2000s, with occasional technical or engineering articles, until after 2010, when the concepts began gaining traction. By 2015, with the skilled-workforce problem becoming a serious issue, organizations were looking for ways to fill the skills gap, and additional aspects of computer automation were seen as solutions. With internet-capable systems, wireless, continued miniaturization, and a whole lot of science fiction, the concepts of ML/AI started the distant rumble of an avalanche of new technology. While that technology was viewed with caution by business, it was considered a new frontier by academia, investors, and inventors. Major corporations such as Amazon, Microsoft, Google, Meta (Facebook), and others saw the ability to expand in these areas, adding to their horsepower. This gave birth to the new "data scientist" and "data engineer" professions, variations on the statistics and probability professions, as well as ML/AI programmers, wherein everything from numbers to images could be digitized and managed in devices that fit in the palm of your hand.

With the COVID-19-related shutdown of almost everything in March 2020, and the need to operate remotely, work site-unseen, and manage employees and services, the industrial landscape changed again. While we had been talking about remote sensing of equipment, digital transformation, and replacing personnel with data and automation prior to that period, the pandemic helped start the avalanche in earnest. (Note: Politics had been keeping many of the concepts in check prior to 2020, as NIST and other Federal organizations worked on frameworks to provide a schema for rolling out technology in critical infrastructure. Those checks and balances then had to be pushed to one side to make up for the loss of in-person human interaction and the move to on-line initiatives.) Although not happening at the pace presented in the news, the digital economy is, I believe, off to a roaring start.

So, what has happened with the advancement of ML/AI and other aspects of digitalization? It is the same as what I was teaching in the 1990s, which was the same as what researchers were doing in the 1980s and discovering in the 1970s. The base algorithms and methods have not significantly changed. What has changed is the computing power, along with the fact that people who did not have access to such tools in the past have them now. The result is new and fascinating ideas for applications.

There are some caveats, however. In the view of the previously cited Darden Business School digital-transformation coursework, digitalization can replace human decision-making; based on that perspective, everything can and must be digitalized for any company to survive. I expected the engineering/data-science academic side to take the same position and was pleasantly surprised when the very first lesson in both the Stanford and UofM training was that ML/AI should be explored only after other options have been evaluated. The primary difference between the business-school perspective and the technical side of digital transformation was that the Darden training (much like several other courses I've audited) treated ML/AI as implicit, meaning it could infer things exactly the same way a human would, whereas the data-science side stated that these tools are explicit, subject to programmer bias, and unable to "think" in the same way as a human. The positions were completely opposite of each other.

The generalized position of the concepts around "Industry 4.0" was the business-school concept: People and knowledge could be replaced with automation. However, the leading researchers in this area saw it a little differently. At a Vibration Institute meeting in Texas in August 2021, the keynote speaker discussed "Industry 5.0," in which the pendulum swings back toward the combined interaction between humans and automation. This helps fill the gap between automated decision-making and the ability of a human to manage those instances where the automation has not been trained to make decisions. Consider this question about automation as it relates to self-driving vehicles: In a situation where the choice is between hitting a pedestrian or injuring the vehicle's passengers by swerving off the road into the side of a building to avoid the pedestrian, who will make the decision on how to train the ML/AI?

Counter to popular science fiction, AI utilizing advanced neural networks (deep learning), let alone AI using other machine-learning techniques, is in each case a set of decision trees weighted by probability. When developing the tools for machine prognostics, the data scientist/engineer must identify features, manipulate the data, and then train the system to make similar decisions each time. That's correct: similar, not identical. You may have the exact same scenario, but other conditions may cause the probability to shift in a slightly different direction.
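To make that concrete, here is a minimal sketch, my own illustration rather than anything from the coursework or a specific product, of a probability-weighted decision tree for machine prognostics. The features, readings, thresholds, and labels are all hypothetical.

```python
# Minimal sketch (hypothetical data): a probability-weighted decision tree
# for machine prognostics, using scikit-learn.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical engineered features: [vibration RMS (mm/s), bearing temp (C)]
X_train = np.array([
    [1.2, 55], [1.4, 60], [1.1, 58],   # labeled healthy examples
    [4.8, 82], [5.3, 90], [4.5, 85],   # labeled bearing-fault examples
])
y_train = np.array([0, 0, 0, 1, 1, 1])  # 0 = healthy, 1 = fault

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X_train, y_train)

# The same reading always follows the same learned branches, but the output
# is a probability, not a certainty; changing conditions shift that probability.
new_reading = np.array([[3.9, 78]])
print(model.predict_proba(new_reading))  # [[P(healthy), P(fault)]]
```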

The coursework I'm currently reviewing involves the ethical dilemmas associated with the release of AI systems, in which the data scientist is warned about the potential to include bias in models. For instance, in the development and labeling of data for training models, the conscious or unconscious personal bias of the people involved in that work may be introduced into the software that will act on the data. In the dilemma question of a self-driving car, if the sensor has been trained on specific visual data, it may make different decisions based on the visible features of the pedestrian (sex, race, disability, etc.), as a direct result of the percentage of samples used in training. This has already occurred in résumé-sorting software and other ML tools used in human resources, and the cases presented in the course noted that those involved had no idea the bias existed.
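As a simple illustration of how sample proportions drive this kind of bias, here is a minimal sketch of my own, not taken from the coursework, that audits group representation and label rates in a hypothetical labeled training set before any model is trained. The group names and counts are made up for the example.

```python
# Minimal sketch (hypothetical data): audit group representation and label
# rates in training data before fitting a model.
import pandas as pd

df = pd.DataFrame({
    "group": ["A"] * 900 + ["B"] * 100,                       # 90/10 split
    "label": [1] * 540 + [0] * 360 + [1] * 20 + [0] * 80,     # labeled outcomes
})

# Share of samples per group: a heavily skewed split is an early warning that
# the model may behave differently for the under-represented group.
print(df["group"].value_counts(normalize=True))

# Positive-label rate per group: large gaps here can be learned as bias.
print(df.groupby("group")["label"].mean())
```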

When it comes to the technical side of the equation, the same bias and decisions will have an impact. Here's an example from the Northeast Blackout of 2003. One of the conditions that allowed the problem to escalate, to the point where poor tree maintenance took out high-voltage lines through temperature-driven conductor sag and caused other lines to overload and/or sag, had to do with software making a decision outside its training. The conditions resulted in the software not alerting operators (no decision), so an alarm that would have allowed them to stop the cascading failure never occurred.

In another famous case, software that monitored a single sensor on the nose of the Boeing 737 MAX would react to sensor failure by putting the aircraft into a dive, and the designers did not include a maximum angle-of-attack limit. In the well-documented crashes, the "black boxes" showed that the aircraft hit the ground in near-vertical, full-speed dives.

In other cases, such as with my personal vehicle, which has 18 cameras and additional sensors to identify everything from staying in lane to how fast I'm approaching another vehicle, the manual explicitly points out that you must keep the sensors and lenses clean. There's nothing fun about driving in winter weather on an empty highway and having your vehicle slam on the brakes because it thinks there is an object directly ahead.

When placing automation in plants, such as continuous vibration monitoring or thermal testing, the sensors have an impact as well. Failed batteries, sensors that technicians fail to reinstall after repairing or replacing components, sensors that fail outright, and other incidents all have an impact on digital-transformation success. Thus, understanding that the technology is not infallible is crucial. In one case, at a client site, an alarm was sent when the temperature of a motor bearing reportedly rose to 50,000 C, and the automation reacted to it; the number passed a threshold, after all. However, since the building we were standing in had not incinerated at that temperature, we determined that the sensor had failed.
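A simple guard against that kind of false alarm is a plausibility check on the reading before the automation acts. The sketch below is my own illustration, not from any vendor's system; the alarm threshold and plausible range are hypothetical values.

```python
# Minimal sketch (hypothetical limits): sanity-check a bearing-temperature
# reading before letting the automation raise an alarm on it.

ALARM_THRESHOLD_C = 95.0                  # temperature that should trigger an alarm
PHYSICALLY_PLAUSIBLE_C = (-40.0, 250.0)   # outside this range, suspect the sensor

def evaluate_reading(temp_c: float) -> str:
    """Classify a reading as normal, alarm, or suspected sensor fault."""
    low, high = PHYSICALLY_PLAUSIBLE_C
    if temp_c < low or temp_c > high:
        # A 50,000 C bearing reading means a failed sensor, not a fire.
        return "sensor fault: flag for maintenance, do not alarm on the value"
    if temp_c >= ALARM_THRESHOLD_C:
        return "alarm: bearing temperature high"
    return "normal"

print(evaluate_reading(50_000))   # -> sensor fault
print(evaluate_reading(98))       # -> alarm
print(evaluate_reading(62))       # -> normal
```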

In another case, company executives had allowed a third-party vendor that provided a specific technology to determine where that technology should go within the organization (and excluded the Reliability Engineering department from the process). Just prior to the roll-out of the program, we happened to be on-site in a meeting between management and Reliability and pointed out critical equipment that wasn't included in the roll-out. As it turned out, the technology was not effective on that type of equipment, so machines that would, and had, shut down the plant were excluded from the program, while equipment that was disposable and had no immediate impact was included.

We've also seen many cases where digital-transformation efforts have been highly successful. In one case, when a company was moving forward with expanding on "success," it identified a specific manufacturer of technology. I asked, "Why them and not the other company?" I was told that they did not know who the other company was, and I pointed out that half their equipment was from the company I'd asked about, while the company under consideration had continual problems. The vendor they were not aware of was working so well that its technology never came up in the morning meetings, where the primary conversations were about the technology that had nothing but problems.

Digital transformation is a significant effort and involves extraordinary resources. That's one reason why fewer than one-third of these initiatives are successful, depending on which study you review. Success also depends on the approach and selection of methodology. For instance, one AI/ML company stated that only 18% of CBM (condition-based monitoring) implementations were successful, implying that all predictive and condition-based efforts had a similarly low success rate. The study actually cited only ML/AI-based CBM applications, not CBM applications as a whole, which have a very high success rate. The statement was a defensive reaction to challenges about the capability of the technology and what it was claiming it could detect.

Successful digital-transformation programs involve the coordination of all stakeholders in a company. This alone is a challenge, as most companies want to limit the number of resources used in these types of efforts. In those cases, the effort becomes more costly after the fact, as opportunities are missed or incorrect steps are taken. Of course, bias plays a part, and most people decide to put their efforts into areas other than electric machines, which, of course, is a bad idea.TRR


ABOUT THE AUTHOR
Howard Penrose, Ph.D., CMRP, is Founder and President of Motor Doc LLC, Lombard, IL and, among other things, a Past Chair of the Society for Maintenance and Reliability Professionals, Atlanta (smrp.org). Email him at howard@motordoc.com, or info@motordoc.com, and/or visit motordoc.com.


Tags: reliability, availability, maintenance, RAM, digital transformation, automation, Machine Learning, ML, Artificial Intelligence, AI, Augmented Reality, AR