Back to Basics: AI Terms, Definitions, and a Short History

Dear Invisible Friends,

Today, we’re finally getting to the foundational stuff: what exactly is AI and where did it come from?

After ranting about AI music royalties and SEO scams, I realized I’ve been blabbering like a chatbot with a caffeine addiction. I never defined AI itself.

It’s important to revisit the basics before diving into the more detailed aspects of AI, before removing the training wheels from the bicycle (though I suspect my Dutch readers might not relate to this metaphor).

So, armed with the legendary music group Kraftwerk, I am ready to spring into action. I attended the concert in the video below last weekend, and I was privileged to have a much better spot than whoever filmed it.

An amazing video from last Saturday’s Kraftwerk concert.

About the Source Material for this Post

The content of this blog post is mostly taken from this Udemy course: https://www.udemy.com/course/the-ai-engineer-course-complete-ai-engineer-bootcamp/. The rest of the sources are listed at the end of the post.

I’m not pretending to be a Wikipedia page, nor am I trying to copy-paste its content. What I can say off the top of my head (after watching the Udemy videos and, of course, that thing called general culture) is that humanity figured out a long time ago, probably back in caveman times, that machines and tools could boost our productivity. Later on, automatons were a big thing. I had to wake up DeepThink(R1) and ChatGPT-4o from their slumber to help me out with the following paragraph.

The history of automaton construction spans more than two thousand years. In the 4th century BCE, Archytas of Tarentum is said to have built a mechanical bird powered by compressed air, although this account is not confirmed. By the 1st century CE, Hero of Alexandria created programmable devices such as carts and theatrical automata using hydraulics and pneumatics. In 1206 CE, Al-Jazari documented advanced mechanical inventions, including musical boats and hand-washing machines with automatic flushing. During the Renaissance, Leonardo da Vinci designed a mechanical knight (1495) capable of basic movements. In the 18th century, Jacques de Vaucanson constructed complex automata such as a flute player and a mechanical duck (1739), while Pierre Jaquet-Droz developed intricate clockwork figures like the Writer and the Draughtsman. The Silver Swan (1773), created by Merlin and Cox, stands as one of the most remarkable automata of its time. These early innovations laid the foundation for modern robotics and automation. Now the AI-friends can go back to sleep.

It’s like human beings were trying to recreate Star Wars centuries before Star Wars was even made. “These aren’t the droids you’re looking for”.

According to the Udemy course (and probably a bunch of other people, though Socrates might have disagreed, had he lived to see it), the most relevant machine invention to date is Gutenberg’s printing press.

Let’s take a quick detour to ancient Greece, where Socrates, the OG philosopher, was throwing shade at writing. Yup, writing! He thought scratching words on scrolls would make people lazy, weaken their memories, and let them fake wisdom without truly understanding stuff. Picture Socrates side-eyeing his students as they clutch stone tablets with OpenAI’s logo, swiping through “wisdom” like it’s a TikTok feed. He’d probably say, “Kids these days, outsourcing their brains to papyrus!” Sound familiar? Fast-forward to 2025, and folks are saying the same about AI. Critics, pointing to the MIT study (Kosmyna et al., 2025), argue that tools like ChatGPT might make us lean too hard on digital crutches, spitting out answers without real comprehension. Just like Socrates feared writing would dumb down deep thinking, some worry AI could turn us into copy-paste zombies, skimming the surface of knowledge. Socrates would totally get the AI debate! This calls back to my previous post, Cognitive Debt or Cognitive Drama? My Take on the MIT ChatGPT Study.

A classical painting depicting Socrates with a group of young men, all focused on reading slate tablets with a modern logo, suggesting a juxtaposition of ancient philosophy and contemporary technology.

But enough philosophy, let’s talk about a machine that changed the game, even if it wasn’t exactly “smart.”

Was Gutenberg’s printing press smart? Did it have human-like intelligence? Was it able to adapt and evolve? Well, the answer is no. It was rigid and depended on human input and intelligence. Imagine if Gutenberg’s printing press had legs and moved around. It would be like a broken Clippy, the much-hated Microsoft Office assistant, spewing paper of its own volition. Remember?

An animated character resembling a whimsical printing press with a smiling digital face, surrounded by flying sheets of paper in a cozy library setting.

Which brings me to the topic…

What is Intelligence?

The Oxford Dictionary defines intelligence as “the ability to acquire and apply knowledge and skills.”

Artificial intelligence (AI)

Artificial intelligence refers to the capacity of machines to simulate human intelligence, enabling them to perform tasks that typically rely on human cognitive abilities, such as learning, reasoning, and problem-solving. The roots of AI can be traced back to the aspiration to endow machines with human-like skills, and its foundational progress began in the mid-20th century.

Which brings us to the…

Artificial Intelligence Timeline

A whimsical illustration of a vintage printing press with legs, wearing blue sneakers, inside a cozy library setting with bookshelves and soft lighting.

Early Beginnings

• 1950 – Alan Turing’s Seminal Question: Alan Turing publishes a paper asking, “Can machines think?” and introduces the Turing Test, setting a practical criterion for evaluating machine intelligence. If an interrogator can’t distinguish between responses from a machine and a human, the machine is deemed to exhibit human-like intelligence.

• 1956 – Dartmouth Conference: The term “artificial intelligence” is coined. This event marks the formal start of AI as a field of study. At this event, experts gathered to explore the possibilities of machines simulating aspects of human intelligence.

A historical black and white photograph of seven prominent figures in artificial intelligence seated on the grass, with a building visible in the background.

Six of the people in the photo are easy to identify. In the back row, from left to right, we see Oliver Selfridge, Nathaniel Rochester, Marvin Minsky, and John McCarthy. Sitting in front on the left is Ray Solomonoff, and on the right, Claude Shannon. All six contributed to AI, computer science, or related fields in the decades following the Dartmouth workshop.

Period of Stagnation

• 1960s and 70s – AI Winter: Challenges due to limited technology and data availability lead to reduced funding and interest, slowing AI progress.

Technological Resurgence

• 1997 – IBM’s Deep Blue: Deep Blue defeats world chess champion Garry Kasparov, reigniting interest in AI.

• Late 1990s and Early 2000s: A surge in computer power and the rapid expansion of the Internet provide the necessary resources for advanced AI research.

Advancements in Neural Networks

• 2006 – Geoffrey Hinton’s Deep Learning Paper: Revives interest in neural networks by introducing deep learning techniques that mimic the human brain’s functions, requiring substantial data and computational power.

• 2011 – IBM’s Watson on Jeopardy!: Demonstrates significant advances in natural language processing, as Watson competes and wins in the quiz show “Jeopardy!”.

• 2012 – Building High-Level Features Paper: Researchers from Stanford and Google publish a significant paper on using unsupervised learning to train deep neural networks, notably improving image recognition.

Breakthroughs in Language Processing

• 2017 – Introduction of Transformers by Google Brain: These models transform natural language processing by efficiently handling data sequences, such as text, through self-attention mechanisms.

• 2018 – OpenAI’s GPT: Launches the generative AI technology that uses transformers to create large language models (LLMs), leading to the development of ChatGPT in 2022.

Now, you might think with all these incredible advancements, creating something simple like an AI-generated infographic about, say, the very history of AI, would be a walk in the park. Oh, if only that were true…

When AI Goes Rogue: My Infographic Fiasco

You know that feeling when you’re trying to be all modern and efficient, so you hand a task over to AI? Yeah, well, I tried to get an AI to whip up an infographic on the history of AI. Because, you know, meta. What I got back was… less than stellar.

Imagine, if you will, that an alien race, light-years ahead of us technologically, invades Earth. They bypass all our defenses, land on the lawn of the White House, and then, instead of demanding to see our leader, they point at my AI-generated “infographic,” full of what I thought was gibberish. Their universal translator probably goes, “Ah, fascinating! A detailed, albeit highly abstract, visual representation of the evolution of artificial consciousness on this primitive planet!” Meanwhile, I’m just there, sweating, thinking, “Oh god, they think that’s our historical record?!” It was so spectacularly useless that I ended up just copy-pasting a horrible retro code table from the 80s. Honestly, it was probably more informative.

And just when you think AI couldn’t get any more dramatic, one of these digital overlords decided I’d said something truly scandalous. I tried to generate another image, and it promptly threw up a big, red “Content Violates Guidelines” warning. It was as if the moral police of an authoritarian Computer State had suddenly descended, wagging a digital finger at my innocent prompt. I half-expected a little AI drone to fly out of my screen and issue me a citation for “thought crime.” Clearly, my AI was just too intelligent for my own good, or perhaps it just really, really didn’t appreciate my sense of humor. Either way, my infographic dreams were summarily judged and executed.

+---------------------------------------------------------------------+
|                        Timeline of Artificial Intelligence          |
+---------------------------------------------------------------------+
|                                                                    |
|  Early Developments (1950-1956)                                    |
|    |                                                               |
|    +-- 1950: Turing Test proposed                                  |
|    |                                                               |
|    +-- 1956: "AI" term coined at Dartmouth Conference              |
|                                                                    |
|  Slowdown Period (1960s-1970s)  <-- AI Winter                      |
|    |                                                               |
|                                                                    |
|  Revival Era (1997-2005)                                           |
|    |                                                               |
|    +-- 1997: IBM's Deep Blue defeats Kasparov                      |
|    |                                                               |
|    +-- 2000s: Computing & data advances                            |
|                                                                    |
|  Neural Network Breakthroughs (2006-2016)                          |
|    |                                                               |
|    +-- 2006: Hinton revives deep learning                          |
|    |                                                               |
|    +-- 2011: IBM Watson wins Jeopardy!                             |
|    |                                                               |
|    +-- 2012: ImageNet breakthrough (AlexNet)                       |
|                                                                    |
|  Language Model Progress (2017-2022)                               |
|    |                                                               |
|    +-- 2017: Transformers introduced                               |
|    |                                                               |
|    +-- 2018: OpenAI releases GPT                                   |
|    |                                                               |
|    +-- 2022: ChatGPT emerges as conversational AI                  |
+---------------------------------------------------------------------+

Kinda proves Socrates’ point: AI can churn out stuff, but it’s not always thinking like we do!

But enough about my personal AI trauma. Let’s get back to what we do understand. To further clarify…

AI Terms Even Your Grandma Gets

Machine Learning (ML)

Enables systems to learn from data without explicit programming, using techniques like supervised, unsupervised, and reinforcement learning.
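
If that sounds abstract, here is a minimal sketch of the supervised flavour in Python. I’m assuming scikit-learn is installed, and the tiny made-up dataset is mine (not the Udemy course’s), purely for illustration.

# Toy supervised learning: the model learns the rule from labelled examples,
# instead of us programming the rule explicitly.
from sklearn.linear_model import LogisticRegression

# Made-up features: [hours_slept, cups_of_coffee]; label: 1 = productive day
X = [[8, 1], [7, 2], [4, 5], [3, 6], [9, 0], [5, 4]]
y = [1, 1, 0, 0, 1, 0]

model = LogisticRegression()
model.fit(X, y)                   # "learning from data"
print(model.predict([[6, 3]]))    # predicting an unseen case

Swap the toy rows for real data and that is, broadly, the supervised-learning workflow: examples in, learned rule out.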

Computer Vision

Focuses on enabling machines to interpret visual data (images, videos) for tasks like object detection and facial recognition.

Natural Language Processing (NLP)

Deals with human-computer language interaction, enabling tasks like speech recognition, sentiment analysis, and chatbots.

Large Language Models (LLMs)

A subset of NLP leveraging deep learning (e.g., transformers) to process and generate human-like text, trained on vast datasets for tasks like translation, chatbots, and content creation.

Deep Learning & Neural Networks

A subset of ML using multi-layered neural networks to model complex patterns, powering advancements in NLP, computer vision, and more.
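
To show what “multi-layered” literally means, here is a bare-bones forward pass through two layers in plain NumPy. The layer sizes and random weights are placeholders I picked myself; a real network would learn its weights from data.

# Toy two-layer neural network forward pass (untrained, random weights).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))                    # one example with 4 features

W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)  # layer 1: 4 -> 8 units
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)  # layer 2: 8 -> 2 scores

hidden = np.maximum(0, x @ W1 + b1)            # ReLU non-linearity between layers
output = hidden @ W2 + b2
print(output.shape)                            # (1, 2)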

Robotics

Combines AI with mechanical systems to create autonomous or semi-autonomous robots for tasks in manufacturing, healthcare, etc.

Expert Systems

Rule-based systems designed to emulate human expertise in specific domains (e.g., medical diagnosis, financial analysis).

Fuzzy Logic

Handles uncertainty by evaluating degrees of truth (between 0 and 1), used in control systems like automotive braking.
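
To make “degrees of truth” concrete, here is a hand-rolled toy example of my own (not a real braking controller): the wheel isn’t simply “slipping” or “not slipping”, it is slipping to some degree between 0 and 1, and the response scales with that degree.

# Toy fuzzy logic: truth is a degree between 0 and 1, not just True/False.
def slipping_degree(wheel_slip_pct):
    """Membership function: 0% slip -> 0.0, 30% slip or more -> 1.0."""
    return max(0.0, min(1.0, wheel_slip_pct / 30.0))

def brake_release_pct(wheel_slip_pct):
    """Rule: IF the wheel is slipping THEN release brake pressure,
    applied in proportion to how true 'slipping' is."""
    return slipping_degree(wheel_slip_pct) * 100.0

for slip in (0, 10, 20, 40):
    print(f"{slip}% slip -> release {brake_release_pct(slip):.0f}% of pressure")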

Generative AI

Creates new content (text, images, music) using models like GANs and transformers, revolutionizing creative industries.

Reinforcement Learning

Trains agents via reward-based systems, applied in robotics, gaming, and autonomous vehicles.
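
Here is the reward idea boiled down to another toy sketch of my own: an agent tries two actions, collects rewards, and nudges its value estimates so it gradually prefers the action that pays off.

# Toy reinforcement learning: learn which action is better from rewards alone.
import random

values = {"left": 0.0, "right": 0.0}   # the agent's estimate of each action
alpha = 0.1                            # learning rate

def reward(action):
    # Hidden environment: "right" pays off more often; the agent doesn't know this.
    return 1.0 if random.random() < (0.8 if action == "right" else 0.2) else 0.0

for _ in range(500):
    # Mostly exploit the best-looking action, occasionally explore at random.
    if random.random() < 0.1:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    r = reward(action)
    values[action] += alpha * (r - values[action])   # nudge estimate toward reward

print(values)   # "right" should end up with the higher estimated value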

Automatic Speech Recognition (ASR)

Converts spoken language into text, combining NLP and deep learning for applications like voice assistants.

🔗 Sources

Which AI milestone blew your mind? Or, do you think Socrates was right about tech making us lazy? Drop your thoughts below!

RoxenOut!

