According to headlines and news commentators, life as we know it is about to change dramatically. AI would seem to be on the verge of displacing humans completely. Check these out:

  • Global AI experts sound the alarm in unique report – University of Cambridge
  • The AI Arms Race Is Changing Everything – Time Magazine
  • How to Survive the A.I. Revolution – Stanford University
  • The AI Revolution Is Happening Now – Forbes
  • The AI industrial revolution puts middle-class workers under threat this time – The Guardian

Perhaps most startling, Digital Trends published an article, GPT-5 could soon change the world in one incredible way, which claims that the next iteration of ChatGPT will achieve AGI, or ‘artificial general intelligence’.

Now hang on a minute!!!

First let’s get some terminology in place.

AI is the abbreviation for Artificial Intelligence. AI describes a form of intelligence demonstrated by machines. In other words, intelligence that is not human.

AI is therefore a very broad term that can describe everything from my Waze app telling me to change routes to avoid a traffic slowdown, to the self-aware robots of science fiction that rule a galaxy far, far away.

AGI, Artificial General Intelligence, sometimes referred to as General Artificial Intelligence, describes a form of intelligence that would be indistinguishable from human intelligence.

Augmented Intelligence?

My own personal preference for a term for the artificial forms of intelligence we see now is Augmented Intelligence. Humans have done an incredible job of designing software and hardware components that bring information to one’s fingertips in amazing ways. The availability of information is augmenting our human intelligence.

  • What time is it in Singapore?
  • Where is the closest pizza place that is open?
  • How old was Elvis when he died?
  • How many US dollars are there in 197 Euros?
  • What is the fastest way to drive home?

I no longer have to remember phone numbers, addresses, or carry a briefcase full of papers.

Research for a term paper? Google can find the latest and most relevant research papers.

ChatGPT is a huge step forward. It has taken the world by storm because it has moved information gathering and presentation to the next level.

Rather than serve up a list of available information sources, as search engines do, it uses deep machine learning to predict the most likely responses to your question. ChatGPT uses an LLM (Large Language Model) to determine the most likely order of words related to the question asked. Because of the scale of the GPT-3 model, a dataset of over 500 billion words processed through 175 billion parameters, the answers sound very human-like.
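
To make that concrete, here is a toy sketch in Python of the next-word mechanism. The mini-corpus, the bigram counting, and the greedy word choice are all invented simplifications; a real LLM learns billions of parameters over long contexts rather than counting adjacent word pairs, but the generation loop (pick a likely next word, append it, repeat) has the same basic shape.

```python
# Toy sketch of next-word prediction, the mechanism behind an LLM.
# Real models learn billions of parameters; this hand-made bigram
# counter is only meant to show the shape of the generation loop.
from collections import Counter

# A stand-in "training set" (invented; real models see billions of words).
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count how often each word follows each other word.
bigrams: dict[str, Counter] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, Counter())[nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

# Generate text by repeatedly appending the most likely next word.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))  # -> "the cat sat on the cat"
# Always taking the single most likely word soon repeats itself, which
# is one reason real systems add randomness (sampling) to the choice.
```

Even this toy shows why the output sounds human: every word appears because it commonly followed similar words in the training text.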

There is no question that the results are spectacular. Any piece of work prepared from the dataset on which ChatGPT was trained is bound to be thorough.

But there is a catch. And it’s a big one.

Yes, ChatGPT has been trained on a very large dataset. Yes, it can produce accurate and extensive responses. But it can also get things very wrong.

ChatGPT serves up the best answer to your question based on the way words have been used previously. After each word in its answer, it determines the next most likely word. The answer sounds human because the words in it have been used together that way before. The most common usages of those words are replicated in an order derived from the data on which the AI was trained.

Of course, this is a simplification of a very sophisticated and complex process. But the bottom line is that the AI will follow the most predictable path based on the usage of words in the context of the question. That path may sometimes go down a rabbit hole.

Let us keep in mind – humans can also be very wrong.

Just as a professional should not blindly trust the report of a junior analyst, one should not blindly trust that the AI will get it right.

That brings us to the heart of this article.

As helpful as ChatGPT and related LLMs can be, will they replace humans for some or all work?

To answer this question, we need to first understand work.

The Complexity of Work

The starting point is that work can be classified by complexity.

Entry-level, salaried work is procedural work. It is work that can typically be done on a day-to-day basis. The work is either documented in a procedure, or could be documented in a procedure. This is Level 1 work.

Think technical work: taxi driver, laborer, mechanic, baker, assembler, bookkeeper, bricklayer, carpenter, call center operator, receptionist, and so on. The work required by these jobs varies in complexity, and unskilled work such as day labor also fits in this category. The common denominator is that the work follows a pattern and can be documented in a procedure. The worker has a finite number of choices to successfully complete the work.

The number of choices can be large, but they are part of a decision tree. Think of that big old oak tree – it has many, many branches but there is a path to the end of each. There are a finite number of choices.
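
As a minimal sketch of what finite choices means in practice, here is Level 1 work expressed as a decision tree in code. The situations and actions are invented stand-ins for a real procedure manual; the point is only that every branch can be enumerated in advance.

```python
# Level 1 (procedural) work as a decision tree: a finite, pre-documented
# set of branches, each ending in a prescribed action.
# The situations and actions below are invented examples.

decision_tree = {
    "card declined": {
        "retry succeeds": "complete the sale",
        "retry fails": "ask for another payment method",
    },
    "item out of stock": {
        "substitute available": "offer the substitute",
        "no substitute": "offer a rain check",
    },
}

def handle(situation: str, outcome: str) -> str:
    """Walk the finite branches to reach the documented action."""
    return decision_tree[situation][outcome]

print(handle("card declined", "retry fails"))
# -> "ask for another payment method"
```

Level 2 work, by contrast, cannot be written out this way: the worker must first diagnose which tree, if any, even applies.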

The next level of work is not concerned with following a procedure. It is concerned with creating and continuously improving procedures. The choices available to the worker are not finite – they are infinite. This is Level 2 work.

Workers at this level must use their diagnostic capability to identify the information they will draw upon for decision-making, analyze this information, and use their judgment and creativity to find the best solution for their unique situation.

This work is carried out by managers of front-line workers and professionals: nurses, doctors, engineers, lawyers, management consultants, people-managers, teachers, architects, detectives, dentists.

The science of organization design describes up to eight levels of complexity of work, which can be found in the world’s largest, most complex organizations. For our purposes in this article we need only think about the first two levels, which demonstrate the difference between procedural work with finite choices and diagnostic work with infinite choices.

Understanding the science of organization design is important to understanding how complexity of work can be used to create layers inside organizations.

To date, AI has made inroads into the work of front-line workers, in other words, work with finite choices that can be documented into a procedure.

Many assembly lines that had been staffed with workers doing mindless, repetitive work have been replaced by robots.

Much of the highly proceduralized work of bank tellers has been replaced by automated tellers and online banking.

Many bookkeeping tasks have been automated by apps that manage receipts and documents, capturing the relevant information and preparing it for accounting systems.

Are LLMs such as ChatGPT about to now take over work carried out by professionals? This is the big question before us. To answer this, we need to think about the capability of humans to do work.

Human Capability

For work to be successfully completed, people need to have the capability to do that work.

As a management consultant, my work includes helping my clients put in place systems that will help hiring managers match the capability of people to the complexity of work.

The most common hiring mistake is made when the most successful front-line employee is promoted to manage the team. For example, the sales manager of a sales team leaves the organization. The common assumption is that the best salesperson should manage the team, because that ‘magic’ will transform the remaining team members into better salespeople.

That best salesperson doing Level 1 sales may have been ideally suited to following the sales procedures. They may not have the diagnostic capability to creatively solve problems that have an infinite array of possible solutions. If they do not have this diagnostic capability, they will not be successful as the manager of the team.

Managing a team of front-line workers is more complex than doing the front-line work.

Managing a function comprising several teams of front-line workers is more complex than managing one team of workers.

Managing a department comprising several functions is more complex than managing a function.

Managing a whole organization is even more complex.

This human capability – I call it problem-solving capability – describes how one processes information to successfully work at higher levels of complexity. The level at which one can successfully solve problems is attained through a maturation process that all humans go through.

Each of us at any point in our lives has the capability to solve problems at a certain level of complexity. To solve problems of a higher order of complexity, our brains must mature to be able to solve problems at that level.

We cannot be taught to successfully work at a higher level of complexity. Just as we cannot decide how tall we will be, we cannot simply decide to solve problems at a higher order of complexity. The clerk in the mailroom may mature over time to have the capability to run the company, and this does happen. But not all clerks become CEOs.

The best salesperson may not be successful as the team manager this year, but next year they may be. It is a matter of maturation. (And, of course, many other factors that we are not discussing in this article.)

The Limits of AI

What we have seen so far is that AI is very, very good at analyzing and ordering vast amounts of data. All of the data in an AI’s data set is available to it at the same time, and the AI can access and use that data in appropriate ways.

ChatGPT was trained on a dataset distilled from some 45 terabytes of raw text – over 500 billion words. The access to knowledge is amazing. Every time I use ChatGPT I am impressed by its ability to bring human-sounding text together in a helpful way.
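
For a rough sense of that scale, here is a back-of-envelope calculation; the figures of roughly 80,000 words per novel and 250 words per printed page are my own loose assumptions:

```python
# Back-of-envelope scale of a 500-billion-word training set.
# Assumptions: ~80,000 words per novel, ~250 words per printed page.
words = 500_000_000_000
print(f"{words / 80_000:,.0f} novels' worth of text")  # 6,250,000
print(f"{words / 250:,.0f} printed pages")             # 2,000,000,000
```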

ChatGPT is not new. The algorithms behind it have been around for a long time and have been implemented commercially in many forms. But GPT-3 is leading edge, and the newly released GPT-4 is another big step forward.

The ability of AIs like ChatGPT to gather and arrange information in increasingly human-like ways will continue to improve. The tool will therefore become even stronger as time goes on.

But can an AI be taught to solve problems at the higher order of complexity? Can an AI be trained to use a diagnostic capability?

My answer is no, not in the foreseeable future.

ChatGPT has been trained on a finite amount of data. Whether it was trained on 500 billion words or ten times that, it has been trained on information that currently exists.

It is impossible for ChatGPT to provide a response that has not already been thought of and written down. ChatGPT cannot provide new information. It can only package and present information in the ways that information has typically been thought of and used before.

By the very nature of the fact that the data it trains on needs to be identified and compiled, that data is at least a year old. ChatGPT, released in late 2022, was trained on data collected up to 2021.

So at this point, an LLM not only cannot provide a new way of thinking about things, it can only provide you with the latest thinking that is a year or two out of date.

AI and the Professions

What does this mean to professionals?

The essence of professional work cannot be replaced by AI. It cannot be replaced by an algorithm such as ChatGPT in any of its iterations because the algorithm cannot create knowledge. It cannot come up with anything new.

The creation of new knowledge is in the domain of humans. Humans can take existing knowledge and, using their diagnostic capability, come up with different and new ways of thinking about things.

ChatGPT is ground-breaking because it serves up information in easy-to-understand language, and presents (usually) the best way of doing something or thinking about something. It is existing knowledge that has been developed and used previously. But to the extent that this knowledge is new to us, it can be very helpful.

Stories abound about how ChatGPT:

  • wrote programming code in minutes that would take a programmer days.
  • created a branding solution and strategic plan for a new business concept.
  • wrote the code for a web site based on a sketch.
  • wrote a PhD thesis.

These are all great starting points. But we must remember that all of this information already existed. The questioner is being fed a compilation of the most likely responses to the question based on the data on which the model was trained.

So yes, this can be extremely helpful to the person asking the question. Instead of getting thousands of hits from Google that need to be reviewed and sorted, ChatGPT does the tedious work of sorting through the information for you and organizing it into human-sounding language.

This technology can also be helpful to the professions.

Those repetitive, procedure-driven parts of a profession can and should be automated so that professional services are more readily available and inexpensive.

For example, most wills are already available online. But the professional expertise of a lawyer is needed for complex estates where there is no straightforward solution.

AI diagnostics can help people understand their medical symptoms. Some can be self-treated, but some need the diagnostic expertise of a medical professional if the array of symptoms is not straightforward.

The CEO of an organization may be able to get some insights into performance issues, but complex problems can be addressed by a management consultant who can recommend solutions in that unique situation.

What does ChatGPT have to say?

It seems only fitting that ChatGPT should have a say in this article. Here is what ChatGPT answered when I asked this question:

Is ChatGPT capable of replacing the work that is carried out by professionals such as doctors, lawyers and management consultants? Please take into account the diagnostic capability required for root cause analysis vs automating work that can be documented in procedures.

“As an AI language model, I can provide information and insights on a wide range of topics, including those related to professions such as medicine, law, and management consulting. However, I cannot replace the work of professionals in these fields as they possess unique skills and expertise that cannot be replicated by an AI system.

“In the case of doctors, the diagnostic process requires not only knowledge of medical procedures but also the ability to analyze a patient’s symptoms, history, and test results to arrive at a diagnosis. This requires not only technical knowledge but also empathy and communication skills that are critical for building trust and understanding with patients.

“Similarly, lawyers and management consultants must use their expertise and judgement to identify and solve complex problems. This requires not only technical knowledge but also strong analytical and communication skills, as well as the ability to work collaboratively with clients and stakeholders.

“While some tasks within these professions can be automated and documented in procedures, such as administrative tasks like data entry and record-keeping, the more complex tasks require a human touch. For example, legal and management consulting services involve advising clients on strategic decisions and providing recommendations that take into account the unique circumstances and context of the client’s situation.

“Moreover, root cause analysis is a complex process that requires deep understanding of the problem and the underlying factors that contributed to it. While AI can provide data and insights, it cannot replicate the experience and expertise of a human professional in identifying the root causes of a problem and developing effective solutions.

“In summary, while AI systems like ChatGPT can provide assistance and insights, they cannot replace the work of professionals in fields such as medicine, law, and management consulting. These professions require a combination of technical knowledge, critical thinking, and human judgement that cannot be replicated by an AI system.”

The Dangers of AI

You have a problem. You describe it to ChatGPT and ask for a solution. The solution sounds pretty good. So you implement it. All good, right?

Maybe yes. But maybe not.

As a professional management consultant, I would probably be hard-pressed to improve on what ChatGPT says about the solution to a problem. This shouldn’t be a surprise. The answer should reflect the most common thinking that has been written down about that particular problem.

We all like to think that our problems are unique, but generally there are a lot of other people who have struggled through the same issues.

In most cases hundreds, or maybe thousands, of people have dealt with the same set of symptoms that have led to your problem. Their solutions are captured by the algorithm, which surfaces the most common solutions documented in the dataset of over 500 billion words on which ChatGPT trained.

The danger comes in what ChatGPT did not say.

The answer to the problem will reflect the typical response.

Will that typical response apply in your unique situation?

More to the point, have you correctly described the problem you are facing, or have you described the symptoms of a problem?

This is what root cause analysis is all about. Rather than treating the symptoms, the analysis needs to take into account what is happening beneath the surface to identify causal issues.

This is analogous to a person with a fever taking an aspirin. The aspirin will reduce the fever. But has the underlying cause been treated? If not, the fever will return.

If you go to a doctor because you are really unwell, do you want the doctor to give you the typical solution that was applied to most people a couple of years ago? Or do you want the doctor to listen to your description of the symptoms, ask questions to get below the surface, combine this information with your unique background and recent activities, and then provide you with a personal diagnosis uniquely suited to you?

It is exactly the same in organizations.

Too often, the symptoms are identified and treated, often at great expense, only to reappear a few weeks or months later.

For example, teams of people are often sent away on retreats to get to know each other in order to reduce interdepartmental tensions. But the lack of clarity in cross-functional accountability and authority will continue to cause workflow issues and the resulting conflict. The root cause is not that employees don’t get along. The root cause is that accountability, workflow systems, and handoffs are not set up properly. The team members may come back knowing each other better. Perhaps even liking each other. But the work conflict will still be there.

The Ethics of AI

We need to understand what these tools are and how they can be used. There are tremendous efficiencies that can be gained from the use of this technology.

On March 29, 2023, a group of over 1,000 artificial intelligence experts and industry executives published an open letter asking for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4.

This fear is not about AI overtaking human intelligence. It is about how this extremely powerful technology may be misused. The letter asks: “Should we let machines flood our information channels with propaganda and untruth?” and calls for investment in AI governance systems.

The next day, March 30, 2023, as I write this, the Center for AI and Digital Policy in the US asked the FTC to stop OpenAI, the developer of ChatGPT, from releasing new GPT models. It states that the models are “biased, deceptive and a risk to public safety”. It also seems concerned about malicious code and propaganda.

Going forward, we as a society need to think carefully about this good vs. bad argument.

Can I use ChatGPT to write a blog? Yes. Most people probably wouldn’t be aware.

Should I use ChatGPT to write a blog? Absolutely not. I earn my respect as a professional because of the value I add, not because I know how to operate an AI. My job is not to regurgitate the ‘what is’. My job is to figure out how to impact the ‘what will be’.

It is the same for other professions. Should a lawyer use ChatGPT to generate advice for a client? Should a medical doctor use ChatGPT to generate a prescription for a patient? Should a PhD student use it to generate the text for a thesis? Obviously not. These are ethical breaches.

The good or bad of the use of this tool comes down to the ethics of the person using the tool.

People are already using information in unethical ways. I have seen many cases where a politician’s interview has been edited to completely alter the meaning.  Software exists to put words into someone’s mouth that have never been said.

The increasing power of AI will give these bad actors more ability to do things they should not.

We should be worried about ethics in society. We should be able to trust people to tell the truth. We should expect people to do the honorable thing. So yes, we should invest in governance systems around the ethics of conduct. But this is a broader issue than the governance of AI.

In Conclusion

AI has not reached the state of Artificial General Intelligence, where machine-based algorithms can solve problems in a way indistinguishable from humans.

Nor is it likely to.

Can AI help professions be more effective?

Absolutely!

If I am suffering writer’s block, I happily use my AI tool for inspiration.

If I am lost, I am very happy that my phone can guide me home.

If I am looking for inspiration, ChatGPT can be a good partner for brainstorming.

It would be wonderful, when it becomes affordable, to train a chatbot on my firm’s body of knowledge to answer the typical questions that a visitor to my web site might have.
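
For what it’s worth, such a chatbot would probably not require training a model from scratch. A common pattern is retrieval: find the most relevant passage in the firm’s documents and hand it to an LLM as context. Below is a deliberately crude sketch of the retrieval half; the knowledge base is invented, and simple word overlap stands in for the semantic matching a production system would use.

```python
# Sketch of the retrieval step behind a chatbot answering from a firm's
# body of knowledge. The knowledge base is invented; word overlap is a
# crude stand-in for the embedding similarity real systems use.

knowledge_base = [
    "Our firm offers organization design reviews for mid-sized companies.",
    "Engagements typically begin with a two-week diagnostic phase.",
    "We invoice monthly; the first invoice is issued after kickoff.",
]

def score(question: str, passage: str) -> int:
    """Count the words the question and a passage share."""
    return len(set(question.lower().split()) & set(passage.lower().split()))

def retrieve(question: str) -> str:
    """Return the passage most relevant to the question.

    In a full system this passage, plus the question, would be sent to
    an LLM to phrase the answer; here we return the passage itself.
    """
    return max(knowledge_base, key=lambda p: score(question, p))

print(retrieve("How do engagements typically begin?"))
# -> "Engagements typically begin with a two-week diagnostic phase."
```

The appeal of this design is that the firm’s knowledge stays in plain documents that can be updated daily, rather than being baked into a model whose training data, as noted above, is a year or two out of date.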

AI and other advances in technology are enabling me to offer some services to clients at a fraction of the price of just a few short years ago. AI can be used in professions to automate certain aspects of services and many aspects of their back offices to provide greater value to clients.

These same advances in AI can be used to benefit society at large – partly through professional services being offered more efficiently, i.e. better and cheaper, but also directly, by making everyone’s life easier.

Is there room for harm?

Yes.

It falls to us, as professionals, to lead the way. We need to set the example by using our ethical frameworks and codes of conduct to guide us in how these tools can be used to improve society.

========================

Addendum:

  1. All images in this article were generated by ChatGPT’s cousin, DALL-E.
  2. Learn more about ChatGPT here: ChatGPT and the Generative Artificial Intelligence.
  3. Here is the open letter calling for a pause in AI Experiments: Pause Giant AI Experiments: An Open Letter
  4. Artificial Intelligence is defined in Wikipedia here.
  5. Here are some interesting facts about ChatGPT.