What is AI?
The term is confusing and means different things to different people. I would define it as the ability of an artificial system to exhibit human-like behavior. An “artificial system” here refers to a system created by humans or another system.
There is an old joke that most people claiming to do AI are actually just fitting a line to some data, a process also known as advanced business analytics.
Traditionally, analytics came in three types: descriptive, predictive, and prescriptive. Recently, with the emergence of very large AI models, a fourth type appeared: generative.
Generative AI is very different from other types of AI.
First, it is produced by non-task-specific AI models, so-called foundation models. These models represent a compressed snapshot of human knowledge that can be uncompressed into a sentence, an image, a computer program, or a molecule in response to a prompt.
Second, these general-purpose models can be tuned to perform a new task with just a little additional data. Often, these data can be provided directly with the prompt as context. Sometimes, additional training may be required, but as the models improve, in-context learning will likely be sufficient for most applications, which makes it possible to provide and consume AI as a service.
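To make this concrete, here is a minimal sketch of in-context learning using the OpenAI Python SDK: a handful of labeled examples placed directly in the prompt is enough to define a brand-new classification task, with no additional training. The model name and the sample reviews are illustrative placeholders, not part of any particular product.

```python
# In-context ("few-shot") learning: the task is defined entirely by examples
# in the prompt; no fine-tuning or additional training is involved.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "Classify each product review as positive or negative."},
    # Labeled examples supplied as context:
    {"role": "user", "content": "Review: The battery died after two days."},
    {"role": "assistant", "content": "negative"},
    {"role": "user", "content": "Review: Setup took five minutes and it just works."},
    {"role": "assistant", "content": "positive"},
    # The new, unlabeled input:
    {"role": "user", "content": "Review: Arrived broken and support never answered."},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat-completion model works
    messages=messages,
)
print(response.choices[0].message.content)  # the model's label for the new review
```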
Third, foundation models possess emergent abilities that were not programmed by their creators, such as chain-of-thought reasoning, theory of mind, emotional intelligence, and others. Models develop these human-like abilities while training to better compress human knowledge.
Experts agree that generative AI is a very, very big deal.
Because of their unique abilities, generative AI models will transform every sphere of human activity. For the first time, we have human-level intelligence that can be easily embedded in every app, process, and platform. To paraphrase Steve Jobs, it is the ultimate bicycle for the mind.
Consequently, businesses that ignore AI will be at a distinct competitive disadvantage.
Adopting AI is not without risks.
Generative AI models can “hallucinate” by uncompressing their version of human knowledge into false statements wrapped in very believable narratives. Some models develop moodiness, a truly human quality that can result in inappropriate output.
Due to their non-deterministic nature, these risks cannot be fully tested and controlled at development time. Instead, they must be effectively managed at runtime, most likely with the help of higher-level meta-models.
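What such runtime management might look like is still taking shape, but a minimal sketch of the idea is a second “reviewer” pass that inspects a model’s draft before it reaches the user. Everything below (the reviewer prompt, the APPROVE/REJECT convention, the model name) is an illustrative assumption rather than an established method, and a real system would layer several such controls.

```python
# A toy runtime check: a "reviewer" pass inspects the draft answer before it
# is returned. Illustrative only; the reviewer is itself a generative model
# and can be wrong, so this is a sketch of the pattern, not a guardrail.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"  # illustrative model name


def answer_with_review(question: str) -> str:
    # First pass: generate a draft answer.
    draft = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content

    # Second pass: ask a reviewer to approve or reject the draft.
    verdict = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": (
                "Review the answer below for unsupported claims or "
                "inappropriate content. Reply with APPROVE or REJECT "
                "and a one-sentence reason.\n\n"
                f"Question: {question}\n\nAnswer: {draft}"
            ),
        }],
    ).choices[0].message.content

    if verdict.strip().upper().startswith("APPROVE"):
        return draft
    return f"Answer withheld by runtime check: {verdict}"


print(answer_with_review("Who won the 2030 World Cup?"))
```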
Despite all the risks, given the transformative nature of generative AI and the speed of its development, a “wait and see” approach might seem prudent but could prove disastrous. The Blockbuster and BlackBerry stories are now part of business folklore. Change happens gradually, and then suddenly.
Fortunately, the barrier to adopting generative AI is lower than ever. Experiments are very inexpensive and can be plentiful. Get an OpenAI API key and get going.
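Getting going really can be this small, assuming the OpenAI Python SDK and an API key in your environment; the model name and the prompt below are placeholders, and other providers expose similar chat APIs.

```python
# pip install openai, then export OPENAI_API_KEY before running.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you have access to
    messages=[{"role": "user", "content": "Suggest three low-cost generative AI experiments for a small business."}],
)
print(reply.choices[0].message.content)
```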
Start a skunkworks project … or several.