By Jack Cumming

This is part 4 of Jack’s AI series. You can read the full series here: part 1, part 2, part 3.

AI is new, and novelty often seems magical. Novelty can evoke anticipation, but more often it triggers fear, and AI is no different. Many teachers regard student use of AI, usually in the form of OpenAI’s ChatGPT product, as cheating, while others challenge their students to critique AI’s output: to reveal its mistakes, to improve on its reasoning, and to find attributions that AI has omitted.

Does ChatGPT Cheat?

To declare that using AI is cheating is to conclude, without much reasoning, that it is not a learning tool. Of course, AI itself may cheat, and, from all descriptions of what it does, it cheats all the time. AI lacks original thinking. It takes what’s out there, euphemistically called data, and reuses it, generally without revealing the source.

This includes not giving credit for purloined styles as well as for data used with questionable selectivity. If AI mimics authors like Charles Dickens or E. B. White, that doesn’t mean it is superseding human creativity. It only means that it is optimized to use the best of human writing style. Even as those styles are reused, however, they may already be culturally and linguistically dated. Seldom do people use the word “nifty” now, though it may have been common among those with whom E. B. White consorted.

What ChatGPT Doesn’t Do

No one claims that ChatGPT’s AI is good at critical thinking, at weighing the worth of its input, or at logically reviewing the conclusions it outputs. Nor does it provide even the minimal critical-thinking component of annotating its sources so that users can discern for themselves whether a source is credible.

A note for teachers: if you allow students to use AI, then require them to think critically about its output. That’s a positive step in an era when widespread reliance on textbooks has diminished the critical use of multiple sources. If a flawed AI, like ChatGPT, ends up teaching critical thinking, that would be a good outcome.

Putting AI into perspective, beyond the immediacy of today’s conversational ChatGPT, suggests that the ethics for AI should be no different from the ethics of building on the work of others throughout history. We find what’s original by scrupulously crediting the work of those who have gone before. Scientists may overlap in their pursuit of a breakthrough and then claim originality while ignoring others with the same insights.

Still, the greatest scientists aren’t so insecure that they need that credit, though they are happy to credit others. The greatest just work to improve the human condition. AI, too, exists to improve the human condition unless dark forces use it nefariously, as some undoubtedly will. We fear the use of atomic weapons by forces committed to anti-human violence; we should equally fear their use of AI.

Pursue Justice, Not Legislation

The bottom line, therefore, is that we shouldn’t reinvent ethics for AI, but that we should perfect the ethics that have been handed down to us from the dawn of human consciousness. We have a legal system, but we aspire toward a justice system. Sometimes the officers of justice, judges and legislators, get caught up in their own egos and their own opinions and lose sight of the larger good.

We even have cases in which those given authority allow it to be corrupted for their own benefit. AI is another source of human empowerment that calls for good character, unflagging honesty, integrity, good faith, and fair dealing. It can be hard to constrain the human capacity for dissembling.

Examples of abuse abound. The U.S. Supreme Court, which should be a bastion of justice, declared money to be speech in the Citizens United case, and in the Dred Scott case the court declared that a man had no rights of citizenship because of his race. If we romanticize AI, we risk similar errors among those entrusted with safeguarding the highest values of our society. We need to bring our legal system into full congruence with justice. It’s that simple.

Unethical AI Conduct

It’s clear that the originators of the most visible public manifestations of artificial intelligence have played a bit cavalierly with the ethics of their undertaking. There was widespread early uncredited use of copyrighted material. That was wrong. There was widespread use of questionable, opinionated material without any attempt to distinguish what’s credible from what’s not. That, too, was wrong. There has been little attempt to ensure that what ChatGPT, specifically, produces is either accurate or relevant. That is also wrong.

These missteps are not an inevitable characteristic of AI; they are simply unethical practices by any standard. We don’t need to reinvent ethics to fit AI. AI needs to be reinvented to conform to what we know to be established ethical standards for humans.

In the meantime, those who are harmed by this palpable carelessness, if that’s all it is, should be compensated for the damages they incur because of these flaws and ethical lapses. That’s the stuff of long-established tort law, and no mandatory arbitration clause in an AI producer’s contract of adhesion, designed to ensnare the casual or not-so-casual user, should deprive users of due process of law to right the wrong.
