By Jack Cumming

Is there anyone left on the planet who hasn’t heard of the latest business fad, “Artificial Intelligence,” familiarly referred to as “AI”? In brief, it’s simply a rebranding of computer processes that have existed at least since the 1940s. What’s new is the increasing power and speed of computers.

Before ENIAC

Here’s an early example. During World War II, calculating-and-sorting machine processes were used to model the failure curves for mechanical parts, e.g., airplane components, that were critical to the war effort. ENIAC – the first “computer,” aka the Electronic Numerical Integrator and Computer – wasn’t yet in use. The results of those calculations were used to predict when parts would be needed and to forward-position replacements near where the war was being fought.
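In present-day terms, those calculations amounted to fitting a failure curve and projecting replacement demand from it. The sketch below is a loose modern paraphrase, not the wartime method; the Weibull parameters and fleet size are invented purely for illustration.

```python
# A loose modern paraphrase of the wartime calculation: given an assumed
# failure curve for a part, estimate how many replacements a fleet will need.
# The parameters and fleet size below are hypothetical.
import math

shape, scale_months = 1.5, 18.0   # assumed Weibull failure curve for one part type
fleet_size = 10_000               # parts currently in service, all assumed new

def fraction_failed_by(t_months: float) -> float:
    """Cumulative fraction of parts expected to fail by time t (Weibull CDF)."""
    return 1 - math.exp(-((t_months / scale_months) ** shape))

for month in (3, 6, 12):
    expected_failures = fleet_size * fraction_failed_by(month)
    print(f"By month {month}: pre-position roughly {expected_failures:,.0f} replacements")
```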

Those wartime analyses, conducted primarily by actuaries recruited for the purpose, were an early example of what later morphed into Operations Research and, most recently, into Data Science and its AI manifestation. So, harkening back to our title, what is “artificial” in AI? Hint: What’s most artificial is the implication that AI is intelligent.

Why the Buzz?

If AI is not “intelligent,” why is there so much buzz about it? Answer: modern machine speeds take human capabilities to a higher level than any unaided person can achieve. Of course, that’s true of any tool. An automobile is similar in advancing the pace of human activity. As a conveyance, an automobile is a tool that allows a person to cross the country in a little over 40 hours. That trek would take nearly four months on foot, and many pioneers made it. That’s a remarkable advance, and with the leapfrogging that each new tool makes possible, a passenger airplane now accomplishes the same feat in under 5 hours. Military jets are even faster.
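To put rough numbers behind that comparison, here is a back-of-the-envelope sketch; the coast-to-coast distance and the speeds are assumptions chosen only to illustrate the scale of the difference.

```python
# Back-of-the-envelope comparison of coast-to-coast travel times.
# Distance and speeds are rough, illustrative assumptions.

distance_miles = 2_800          # approximate coast-to-coast distance

walking_miles_per_day = 25      # a hard day's walk on a long trek
driving_mph = 70                # highway cruising speed
airliner_mph = 575              # typical jet cruise speed

walking_months = distance_miles / walking_miles_per_day / 30
driving_hours = distance_miles / driving_mph
flying_hours = distance_miles / airliner_mph

print(f"Walking: about {walking_months:.1f} months")   # roughly 3-4 months
print(f"Driving: about {driving_hours:.0f} hours")      # about 40 hours
print(f"Flying:  about {flying_hours:.1f} hours")       # under 5 hours
```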

Think about that for a moment, because that’s how technology has accelerated our lives and allowed us to be far more productive than was previously possible. Ponder the reality of a four-month odyssey reduced to 5 hours. The cost and speed breakthroughs in computer processing of algorithms are more astounding still. Keep that in mind as you consider the mystery of AI.

It’s Common Sense

Another thing to keep in mind is that the mechanics of improving predictions are not complicated. Take pricing as an example. Pricing is central to any business plan since it encompasses market acceptance (volume or occupancy), costs, and margin. In a senior living rental model, the facility price determines the monthly rent roll.

Initially, senior living rents are set at what is expected to cover costs and provide a fair return. Rents can’t be too high or there will be market resistance, and they can’t be too low or returns will be inadequate to justify the investment. So, we look for the Goldilocks sweet spot and start operations. If entrance fees are involved, they are no more than prepaid rents.

As soon as we begin taking in revenue, the business gets very interesting. That’s when we find out whether our pricing is working. In effect, the price or rent is a predictive model based on factors such as occupancy, expenses, and more. Refining the model begins with comparing actual results, component by component, with what was expected. That comparison then leads to action either to change results, e.g., by cutting costs, or to change pricing. That iterative, continuous process is as old as commercial activity.
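Here is a minimal sketch of that actual-versus-expected loop in code; the components and dollar figures are invented purely to illustrate the mechanics.

```python
# A minimal sketch of the actual-versus-expected loop described above.
# The components and dollar figures are hypothetical illustrations.

expected = {"occupancy_revenue": 500_000, "staffing_cost": 280_000, "maintenance_cost": 60_000}
actual   = {"occupancy_revenue": 470_000, "staffing_cost": 295_000, "maintenance_cost": 58_000}

# Compare results component by component.
for component in expected:
    variance = actual[component] - expected[component]
    pct = variance / expected[component] * 100
    print(f"{component}: variance {variance:+,} ({pct:+.1f}%)")

# A simple decision rule: if the margin falls short, cut costs or revisit pricing.
expected_margin = expected["occupancy_revenue"] - expected["staffing_cost"] - expected["maintenance_cost"]
actual_margin   = actual["occupancy_revenue"]   - actual["staffing_cost"]   - actual["maintenance_cost"]
if actual_margin < expected_margin:
    print("Margin below plan: adjust costs or pricing for the next cycle.")
```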

Predictive Pricing and AI Monitoring

AI, aka predictive analytics or data-science analysis, applies that same process to a much more complex pricing model, tracking myriad components down to a very fine transactional level while also considering exogenous factors. Yes, it’s complicated. Yes, it enables a truer picture of emerging corporate performance. What it’s not is intelligent. It’s only as smart as the people who build the pricing/financial model and the people who act, or fail to act, on the results as they emerge.
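To make that concrete, here is a minimal sketch of what such a predictive model might look like at its simplest: an occupancy forecast fit against a handful of internal and exogenous factors. The factors, figures, and proposed rent are all hypothetical.

```python
# A minimal sketch of "predictive analytics": fit a simple linear model of
# occupancy against a few internal and exogenous factors. All data are invented.
import numpy as np

# Rows: historical periods. Columns: monthly rent, local unemployment rate,
# competitor openings in the market (hypothetical factors for illustration).
factors = np.array([
    [4_200, 4.1, 0],
    [4_300, 4.3, 1],
    [4_400, 4.0, 1],
    [4_500, 4.6, 2],
    [4_600, 4.8, 2],
])
occupancy = np.array([0.94, 0.92, 0.91, 0.88, 0.86])  # observed occupancy rates

# Add an intercept column and fit by ordinary least squares.
X = np.column_stack([np.ones(len(factors)), factors])
coeffs, *_ = np.linalg.lstsq(X, occupancy, rcond=None)

# Predict occupancy for a proposed rent under assumed market conditions.
proposed = np.array([1, 4_700, 4.9, 2])
print(f"Predicted occupancy at $4,700 rent: {proposed @ coeffs:.2%}")
```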

Adverse AI

Any tool can be used for good or turned to harm. A hammer seats a nail or becomes a bludgeon. A tool is only as good as those who instruct it in its tasks. AI is no exception. Two examples show how AI can quickly turn into something ominous.

Example 1 – the IRS

Recently, my wife and I received a letter from the IRS. We have always been very careful to err on the side of overpaying our taxes for fear of the chill that comes when the government issues a demand. No one wants to receive the kind of letter that came our way.

The power imbalance between government and citizens is overwhelming and rightly something to be feared. The IRS letter proved the point. It accused us of underreporting income, and the proposed penalty was grossly out of proportion to the alleged wrong. It sent a chill through our household, and I worried about the toll on my wife. Imagine how such a letter would affect you.

We responded immediately, demonstrating that we had complied in every particular with what the tax code required. We had followed TurboTax guidance to the letter, double-checking every step and every entry, and confirming the more unusual entries – in our case a qualified charitable distribution from an IRA – by seeking out the IRS guidance on the IRS’s own website.

We couldn’t understand how such a devastating demand could have been generated until we received what appeared to be a computer-generated letter acknowledging receipt of our appeal. Never mind that the IRS doesn’t hold itself to the kind of response deadlines that it demands of citizens. IRS demands are relentless. That reply letter held a clue: the title of the “signatory” was “Operations Manager, AUR.”

Do you know what AUR means? We didn’t, but Google Search came to the rescue. It turns out that AUR is an IRS AI-type initiative, the Automated Underreporter program, thus “AUR.” Who knew? Most likely, the letter that caused us so much anxiety was churned out by IRS computers trolling for taxes. We imagine that no human, much less an expert, ever considered our case.

The IRS combs for taxpayer “transgressions,” no matter how innocent, that may produce revenue for the tax authorities. We’ve never had the IRS write about an overpayment, and there have been many. AI in the service of unaccountable agencies, as with the government, can be vicious. AI turns adverse when the humans directing it don’t rise to the ethical standards one might expect.

Example 2 – Automated Hiring

The second example is easier to explain: the increasing propensity to process job applications solely by computer, without human intervention. Human Resources may direct the AI screening by standards that are thought to be right, which means that the slightest technicality may deny an employer the services of an applicant who otherwise might have become a star.

AI is not “intelligent,” and nowhere is that more apparent than in this mechanistic screening of job applications. If you’ve ever applied to hundreds of potential employers without any success, you may have been shelved for something as simple as having served in the military and thus lacking the specific experience terminology listed in the job description. Or it may be that your college major doesn’t match the specifications, regardless of how much job-relevant knowledge you mastered on your own and might have brought to the job.
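A toy sketch of how such mechanical screening goes wrong: exact-phrase matching rejects equivalent experience described in different words. The required phrases and résumé snippets below are entirely hypothetical.

```python
# A toy illustration of mechanical keyword screening. The required phrases
# and the resumes are hypothetical; the point is that exact-phrase matching
# misses equivalent experience described in different words.

required_phrases = ["logistics management", "bachelor's in supply chain"]

applicants = {
    "civilian applicant": "Five years of logistics management, bachelor's in supply chain.",
    "veteran applicant":  "Eight years running battalion supply operations for the U.S. Army.",
}

for name, resume in applicants.items():
    text = resume.lower()
    missing = [phrase for phrase in required_phrases if phrase not in text]
    verdict = "advance to interview" if not missing else f"reject (missing: {', '.join(missing)})"
    print(f"{name}: {verdict}")
```

The veteran is screened out despite directly relevant experience, simply because the wording on the résumé doesn’t match the template.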

Fear the Ethics, not AI

The long and the short of it is that people need not fear that AI will take over like HAL in “2001: A Space Odyssey” or like the intelligent, grasping plant in “Little Shop of Horrors.” AI, when given wise instructions, will never self-actualize and utter demands like “Feed me,” unless that behavior is written into the script and programmed into the enabling network of computers.

AI is one of those magical words that are popular just now, crypto being another. They are words that few understand. The best rule of business is never to authorize an expenditure for something you don’t understand at least on some level. There’s no magic in what you don’t understand.

The most AI (as a buzzword for deep digitalization) can do is help humans get more work done in less time so that they can be paid more for a shorter workweek. That is, unless their employers decide to take the productivity gains for themselves. That’s another matter of ethics, one beyond the competence of any computer or machine.