Artificial Intelligence: The Hype, The Hopes and The Horizon
In 1958, the New York Times featured an article titled “Electronic ‘Brain’ Teaches Itself”, spotlighting Cornell University’s groundbreaking Perceptron project. The article dreamed big about the Perceptron, prophesying that it would become the first non-living mechanism capable of independently perceiving, recognizing, and identifying its surroundings without human intervention — all for the surprisingly low price of $100,000.
Fast forward to 1970: the autonomous robot "Shakey" made its debut navigating SRI's laboratories, earning acclaim from Life Magazine as the "first electronic person" for its trailblazing autonomy and decision-making capabilities. A little less than 50 years later, the humanoid robot Sophia legally claimed that title when it received Saudi Arabian citizenship, becoming the first robot in the world to be granted legal personhood.

While the optimism displayed by the New York Times and Life Magazine reporters may appear somewhat idealistic in retrospect, it is a telling example of the media's occasional tendency to enthusiastically exaggerate technological advancements. And nowhere has this enthusiasm been more evident than in the case of artificial intelligence, which, although it has existed in at least some rudimentary form since the 1950s, has attracted significantly more public attention in the last decade.

More than 70 years after the field's inception, this surge in enthusiasm for artificial intelligence can be attributed, in part, to the emergence of novel programs like ChatGPT and Midjourney, now accessible to the public. Classified as generative AI, these programs perform activities traditionally associated with human capabilities, such as writing text and creating images, and their arrival has translated into a surge of investment in companies that develop or employ this type of technology.
For example, following its public release, ChatGPT's popularity skyrocketed: it reached an estimated 100 million monthly active users, earning the distinction of being the fastest-growing consumer app in history. This unprecedented growth led to a significant uptick in investor interest in artificial intelligence, with OpenAI securing an additional $10 billion investment from Microsoft that placed the company's valuation at $29 billion. The appetite for artificial intelligence in all its forms rippled out to companies not traditionally categorized as 'AI' or 'technology' companies, with BuzzFeed's stock surging 150% in January 2023 after it announced that it would integrate "AI-inspired content" into its core business.
While generative AI is a relatively recent phenomenon, the reality is that we have all been engaging with AI, often unknowingly, for a long time. Our phones use it for facial recognition and predictive texting, YouTube uses it to suggest videos and music tailored to our preferences, and Google uses it to anticipate our search queries. Long before these capabilities were realized, hundreds of researchers and scientists dedicated their efforts to creating and developing the AI systems that made them possible.
The term "Artificial Intelligence" owes its origin to John McCarthy, a mathematics professor at Dartmouth College, who coined it while organizing a summer conference on AI in 1956, funded by a Rockefeller Foundation grant. While nothing else in McCarthy's resume suggested he might have had a successful career as a marketer, his choice of term has resonated and captured the public's imagination for nearly seven decades. McCarthy defined artificial intelligence as the ability of computer systems to "perform actions that if performed by humans would be considered intelligent," laying the groundwork for the development of this exciting field. If he had named it symbolic processing or analytical computation, you might not be reading this article right now. Not because the technology wouldn't exist, but because, without that evocative title, the technology might appear less thrilling, less daunting, and more accurately as what it truly is: the ongoing progress of technology through automation.

It is hard to overstate the significance of this conference, and of the simultaneous introduction of the term "Artificial Intelligence," in catalyzing the emergence of the field responsible for the creation and advancement of contemporary AI systems and technologies. Although various researchers were already engaged in related areas such as complexity theory, language simulation, neuron nets, abstraction of content from sensory inputs, the relationship of randomness to creative thinking, and learning machines, it was the unification of these efforts under the umbrella term "Artificial Intelligence" that enabled crucial early breakthroughs.
This conference, distinguished as the most extensive gathering on the subject at the time, established the groundwork for an ambitious vision that has profoundly influenced research and development across engineering, mathematics, computer science, psychology, and numerous other disciplines. Participants left the discussions with a conviction that ongoing advancements in electronic speed, capacity, and software programming would inevitably lead to a future where computers could match human intelligence — the only uncertainties were when and how this would occur.
So, if AI has been around for such a long time, why are people all of a sudden so excited about it?
The current AI excitement likely stems from tangible demonstrations in writing and artistic applications, which showcase AI's capabilities more visibly than its less conspicuous yet technically remarkable uses. While AI is already deeply ingrained in numerous fields, operating effectively in the background, it typically lacks the front-and-center presence of the recent technologies discussed earlier.
A seminal moment in capturing the public's imagination was the triumph of IBM's Deep Blue, which defeated the reigning world chess champion, Garry Kasparov, in a six-game match in 1997. The victory not only garnered widespread attention but also sparked debates about the sustainability of human supremacy over machines. As focus quickly pivoted from chess toward emerging challenges such as computer vision, self-driving cars, speech recognition, and natural language processing, public discourse evolved to center on apprehensions about the eventual replacement or domination of humans by AI.

Indeed, many discussions about artificial intelligence today are accompanied by a contemplation of the very essence of the intelligence embodied by the term, framed by machines' impressive abilities to solve logical and mathematical problems. However, evaluating intelligence through tasks like rapid calculation is methodologically flawed: a basic $1 calculator can outperform most humans at such tasks, and no reasonable observer would assert that this indicates superior intelligence in the calculator.
In fact, machines routinely excel at various tasks, often surpassing human capabilities without causing concern. Cars can outrun us, ATMs can count money faster and more accurately than we can, and cameras can see better in the dark. There is nothing new about our abilities being beaten by machines that we build; indeed, that is a large part of why we build machines in the first place. Humans have used tools and built machines for a very long time without typically attributing intelligence to them. We don't look at our phones and think they have grown smarter when we install a new app, but as machines' abilities grow, there is a temptation to see them as possessing actual intelligence.
The controversy surrounding artificial intelligence arises from the fear that machines will surpass our more intelligent (and complex) social and cognitive abilities, such as thinking and feeling. Calling it ‘Artificial Intelligence’ naturally puts the abilities of these technological systems at odds with our human ‘Natural Intelligence’ and, by doing so, creates a competitive divide that humans perceive as threatening.
In practical terms, prevalent fears in recent years center on the apprehension that AI could jeopardize livelihoods and diminish overall quality of life by rendering many activities obsolete. However, the notion that artificially intelligent robots will replace human jobs in a one-to-one fashion oversimplifies the nuanced economic impact of automation. Historical shifts in employment, such as the decline in the share of agricultural jobs in the United States from more than 80% to less than 2%, demonstrate that gradual transitions allow for adaptation to new types of work.

While concerns about rapid shifts in the labor market's valuation of skills persist, the impact of AI on employment is more complex than a simple replacement of jobs. Technology has consistently enhanced worker productivity throughout history, meaning that as technology evolved, fewer workers were needed to complete the same amount of work. However, the increased wealth generated by this productivity ended up creating new jobs.
For example, a substantial 63% of today's jobs did not exist in the 1940s, demonstrating this historical adaptability. Still, the accelerated pace of technological progress raises concerns that this shift in the labor market's valuation of skills could be sudden and disruptive rather than gradual. AI's impact on employment also extends beyond job displacement, transforming the skill set required for the positions that remain.

Furthermore, although public discourse once focused primarily on the fear that AI would eliminate simple and repetitive jobs (those easily reducible to a series of steps or procedures), the substantial technological progress to date has underscored AI's capacity to handle far more intricate and non-routine tasks, such as driving a vehicle, reading handwritten text, and translating between languages. This evolution has placed at risk an entirely different set of jobs that were once considered immune to AI's impact.
Applying AI to white-collar tasks is, in fact, less challenging, because such tasks involve the manipulation of information, which is comparatively easier to program than the manipulation of physical objects, which requires integration with the physical world. By this logic, individuals engaged in professional writing should be more concerned about potential replacement by AI than those whose work involves interacting with the physical world.
Satya Nadella, Microsoft’s CEO, contends that AI will generate more new jobs, and he posits that these emerging roles will be less tedious. According to him, the implementation of AI enables companies to automate repetitive, time-consuming, and less fulfilling tasks. This becomes particularly significant in an era where the ratio of retirees to workers is increasing, and the productivity boost facilitated by AI could play a vital role in addressing some of the gaps emerging in the workforce.
Nevertheless, numerous tasks that could be automated have not been, primarily because of their social rather than purely physical nature. For instance, a machine can already take your order and prepare your coffee, as McDonald's demonstrates today. But not everyone wants to go to a coffee shop with no human interaction; that is not why people go to coffee shops. Similarly, while a phone can play a flawless recording of your favorite music, many people are still willing to pay substantial amounts to attend a live performance and be part of that in-person experience.
I contend that the jobs most susceptible to AI replacement are not necessarily those involving repetitive and tedious tasks, which are relatively easy to program, nor those centered on writing, which generative AI can readily automate. Rather, the jobs most at risk are those that require the least social interaction.
Recent surveys suggest that the rise in loneliness has coincided with the evolution of technology. The very technology that once promised to bring us closer together has, in reality, driven us apart, and this fact might well be what leads to its demise in the end.
So what does this mean for the labor market and economy?
Contemporary attitudes toward work reflect greater flexibility and adaptability, with the modern workforce embracing a series of careers rather than a lifelong commitment to a single job. It is also worth acknowledging the potential benefits of automation, such as greater availability of fresh food and lower costs. For investors, the key is to adopt a calm and steady approach, recognizing that catching every market trend is neither necessary nor practical.
For companies utilizing AI, it is crucial to remain attentive to consumers. Despite AI's potential cost savings across various departments, it remains an evolving technology that can introduce as many challenges as it addresses. Within the customer success domain in particular, it is vital to recognize that not every customer segment is interested in, or capable of, interacting with this technology; factors such as age, technological proficiency, or the nature of the service being used can all shape customer preferences. For example, when an AI chatbot runs into difficulties, giving customers an easy way to report errors and connect with a human customer service representative who is empowered to rectify mistakes becomes essential to maintaining customer satisfaction.
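To make that idea concrete, here is a minimal sketch of what such a human-handoff rule might look like in code. Everything in it (the BotReply type, the confidence field, the 0.75 threshold) is an illustrative assumption rather than any real chatbot product's API; the point is simply that a low-confidence reply should open a door to a human instead of a dead end.

```python
# A minimal sketch of the human-handoff pattern described above.
# All names and thresholds are illustrative assumptions,
# not a real chatbot framework's API.
from dataclasses import dataclass

@dataclass
class BotReply:
    text: str          # the chatbot's candidate answer
    confidence: float  # the model's self-estimated confidence, 0.0 to 1.0

CONFIDENCE_THRESHOLD = 0.75  # assumed cutoff; tune per use case

def respond(reply: BotReply) -> str:
    """Answer directly when confident; otherwise offer a human agent."""
    if reply.confidence >= CONFIDENCE_THRESHOLD:
        return reply.text
    # Low confidence: never leave the customer at a dead end.
    return ("I'm not sure I got that right. Would you like me to "
            "connect you with a human representative?")

print(respond(BotReply("Your order ships tomorrow.", 0.92)))
print(respond(BotReply("Refunds take 3 days?", 0.40)))
```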
In the human resources department, AI systems have been employed to sift through the volume of CVs submitted for job openings, aiming to identify the best-qualified candidates for a position. It is crucial to recognize, however, that in this process AI models have tended to favor groups who have traditionally held privilege in the corporate world, primarily Caucasian men with Western-sounding names. Because AI systems, driven by machine learning models, are trained on human data, they often reflect our own biases in their decisions. Despite this, many prefer AI systems under the unfounded belief that they can make dispassionate, unbiased, and rational decisions more effectively than humans.
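The mechanism is easy to demonstrate. The toy sketch below uses entirely synthetic data and a deliberately simple "model" (it just learns approval rates from past decisions), so it stands in for no real hiring system; but by construction the two candidate groups are equally qualified, and the model still faithfully reproduces the historical disparity it was trained on.

```python
# A toy demonstration of bias inheritance: a model fit to biased
# historical decisions reproduces that bias on new candidates.
# All data here is synthetic and for illustration only.
import random

random.seed(0)

# "Historical" hiring records: candidates in groups A and B are equally
# qualified by construction, but past decisions favored group A.
history = [("A", random.random() < 0.70) for _ in range(1000)] + \
          [("B", random.random() < 0.30) for _ in range(1000)]

# A minimal "model": learn the approval rate per group from history.
rates = {}
for group in ("A", "B"):
    outcomes = [hired for g, hired in history if g == group]
    rates[group] = sum(outcomes) / len(outcomes)

# The learned rates mirror the historical disparity, even though
# nothing about the candidates themselves justifies it.
print(rates)  # roughly {'A': 0.70, 'B': 0.30}
```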
Internally, companies utilizing or offering AI systems must ensure that AI is viewed as a means to an end, not the end itself. This means emphasizing the authority of human decision-making at every stage of the process and guaranteeing that the AI systems in use are transparent and explainable. In simpler terms, every user of the system should know what information the system considers when making decisions, and the system should be designed to communicate effectively with users, enabling them to learn, improve, adjust, and request human intervention when necessary.
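As a rough illustration of those principles, the sketch below imagines a hypothetical CV-screening step (the fields and the screening rule are invented for the example) in which every automated decision carries the exact inputs it considered, a plain-language explanation, and an explicit flag for routing the case to a human.

```python
# A minimal sketch of "transparent and explainable by construction".
# The screening rule and field names are invented for illustration,
# not taken from any real system.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str                  # what the system decided
    inputs_considered: dict       # exactly the information it used
    explanation: str              # plain-language reasoning for the user
    human_review_requested: bool  # explicit escape hatch to a person

def screen_cv(years_experience: float, required_years: float) -> Decision:
    meets_bar = years_experience >= required_years
    return Decision(
        outcome="shortlist" if meets_bar else "needs review",
        inputs_considered={
            "years_experience": years_experience,
            "required_years": required_years,
        },
        explanation=(f"Candidate has {years_experience} years of experience; "
                     f"the role asks for {required_years}."),
        # Negative or borderline outcomes default to human judgment.
        human_review_requested=not meets_bar,
    )

print(screen_cv(6, 5))
print(screen_cv(3, 5))
```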
In the end, generative AI's ability to craft intricate paintings and compose poetry may create an illusion of thought, but it is, in essence, a simulation of thought rather than a genuine cognitive process. The analogy to a player piano, which produces music without the depth of interpretation and creativity of an expert musician, highlights the distinction. Witnessing generative AI swiftly create a painting from scratch challenges our intuitions about human uniqueness, sparking concerns about the essence of what makes us special, interesting, and valuable.
As it turns out, the answer to that question is a simple one, though it has long been disconnected from the concept of intelligence: emotion. What distinguishes a player piano from a pianist who is an expert musician is the emotion the musician infuses into their interpretation. This is no different from what happens in the corporate world. A good recruiter is one who can sift through a pile of CVs and find the most well-qualified applicants, but a great one? A great one is the person who can identify the most well-qualified applicants with the best fit for the company and the most potential. The best, across all areas of human endeavor, is always the individual who can read between the lines, connect with their fellow humans, and who cares enough to make a difference.