INFORMATION & COMMUNICATION TECHNOLOGIES (ICT)

Core Technologies of Artificial Intelligence

Artificial Intelligence is not a single technology but a combination of multiple disciplines working together to mimic human intelligence. The following are the core technologies that form the backbone of modern AI systems.

1. Machine Learning (ML)

Machine Learning is a core subset of AI. In traditional computer science, a human programmer must write explicit, line-by-line instructions (code) telling the computer exactly what to do. Machine learning completely changes this approach.

Instead of writing rules, developers feed the computer massive amounts of training data. The machine uses mathematical algorithms to analyze this data, identify hidden patterns, and learn from it. Once trained, the machine can make accurate predictions or decisions when it sees new data.

  • Key Concept: The machine learns from experience rather than being explicitly programmed.
  • Real-Life Example: Think about the spam filter in your email. Programmers did not write a rule saying, “If an email contains the word ‘lottery’, block it.” Instead, they fed the ML algorithm millions of normal emails and millions of spam emails. The algorithm learned the patterns on its own and can now identify a spam email instantly.
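The spam-filter idea can be sketched with a tiny Naive Bayes classifier, one of the classic ML approaches to text. The "emails" below are invented for illustration; a real filter would be trained on millions of messages:

```python
from collections import Counter
import math

# Toy training data (made-up emails). Note that no rule like
# "if it contains 'lottery', block it" is ever written by hand.
spam = ["win the lottery now", "claim your free prize now", "free lottery win"]
ham  = ["meeting moved to monday", "see the attached report", "lunch on monday"]

def word_counts(docs):
    c = Counter()
    for d in docs:
        c.update(d.split())
    return c

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(words, counts, total):
    # Laplace smoothing: unseen words do not zero out the probability.
    return sum(math.log((counts[w] + 1) / (total + len(vocab))) for w in words)

def classify(email):
    words = email.split()
    s = log_prob(words, spam_counts, sum(spam_counts.values()))
    h = log_prob(words, ham_counts, sum(ham_counts.values()))
    return "spam" if s > h else "ham"

print(classify("free lottery prize"))         # spam -- learned, not hand-coded
print(classify("report for monday meeting"))  # ham
```

The classifier never sees an explicit rule; it only counts which words appear in which kind of email, which is exactly the "learning from data" idea described above, at a miniature scale.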

2. Deep Learning (DL)

Deep Learning is an advanced, specialized subset of Machine Learning. Its design is loosely inspired by the structure and function of the human brain.

Deep learning relies on complex, multi-layered computer systems called Artificial Neural Networks (ANNs). The word “deep” refers to the many hidden layers of artificial neurons (nodes) within the network. Data passes through these layers, with each layer processing a different piece of the puzzle before arriving at a final conclusion.

  • Requirements: Deep learning requires very large datasets (known as Big Data) and enormous computing power (typically supplied by specialized chips called GPUs) to function properly.
  • Real-Life Example: Deep learning is the technology that makes self-driving cars possible. A neural network processes millions of images of roads, stop signs, pedestrians, and weather conditions until it can instantly recognize an obstacle and hit the brakes without human input.
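The "many layers" idea can be sketched in a few lines of NumPy. The layer sizes and weights here are random placeholders, just to show the shape of the computation; a real network learns millions of weights from data:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A simple non-linearity applied after each layer.
    return np.maximum(0.0, x)

# A "deep" network is several layers applied in sequence: each layer is a
# weight matrix followed by a non-linearity.
layer_sizes = [8, 16, 16, 4]          # input -> two hidden layers -> output
weights = [rng.normal(size=(a, b)) for a, b in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = relu(x @ w)               # each hidden layer transforms its input
    return x @ weights[-1]            # the final layer produces raw scores

scores = forward(rng.normal(size=8))
print(scores.shape)                   # (4,)
```

Data enters at the first layer and is transformed step by step, with each layer building on what the previous one computed, before the final layer produces the conclusion (here, four output scores).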

3. Generative AI

While traditional Machine Learning and Deep Learning are excellent at analyzing existing data to make predictions (like predicting the weather or identifying a face), Generative AI goes a step further. It is a type of artificial intelligence that can create completely new, original content.

By studying the patterns in massive amounts of existing data, a Generative AI system can generate new text, draw images, compose music, or write computer code that has never existed before.

  • Underlying Technology: For text, Generative AI is largely powered by Large Language Models (LLMs). These are massive deep learning models trained on millions of books, articles, and websites so they can understand and generate human-like language.
  • Real-Life Example: OpenAI’s ChatGPT is a famous example of a Large Language Model. You can ask it to write an essay or a poem, and it will generate a brand-new, original piece of text in seconds. Tools like Midjourney or DALL-E do the same thing, but for generating artificial images based on text descriptions.
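Real LLMs are transformer networks with billions of parameters, but the core idea of "predicting a statistically probable next word" can be illustrated with a toy bigram model. The corpus below is invented and tiny; the principle is the same:

```python
import random
from collections import defaultdict

# A tiny corpus standing in for "millions of books": the model only
# learns which word tends to follow which.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count bigrams: for each word, record the words observed after it.
following = defaultdict(list)
for w, nxt in zip(corpus, corpus[1:]):
    following[w].append(nxt)

def generate(start, n_words, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(n_words):
        nxt = random.choice(following[out[-1]])  # sample a probable next word
        out.append(nxt)
    return " ".join(out)

print(generate("the", 8))
```

The output is new text that never appears verbatim in the corpus, yet every word transition was learned from it: a miniature version of "statistically probable" generation.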

Here are some examples of generative AI applications:

1. Generative Adversarial Networks (GANs):

  • Description: GANs consist of two neural networks: a generator and a discriminator, trained simultaneously in an adversarial game in which each network tries to outsmart the other.
  • Examples:
    • Generating realistic images of non-existent faces.
    • Creating artwork or images based on specific styles or themes.

DALL·E 2, for example, is an AI system that can create realistic images and art from a description in natural language (although it is based on a diffusion model rather than a GAN, it tackles the same kind of image-generation task).
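The generator-versus-discriminator idea can be sketched as a minimal GAN on one-dimensional data. This is a toy setup, not a practical implementation: the generator g(z) = a·z + b (two parameters) tries to produce numbers that look like samples from a normal distribution centred at 4, while a logistic discriminator tries to tell real from fake:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda u: 1 / (1 + np.exp(-u))

real_batch = lambda n: rng.normal(4.0, 1.0, n)  # "real" data

a, b = 1.0, 0.0      # generator parameters: g(z) = a*z + b
w, c = 0.1, 0.0      # discriminator parameters: D(x) = sigmoid(w*x + c)
lr, n = 0.01, 64

for step in range(3000):
    # --- Train the discriminator: push D(real) up and D(fake) down ---
    z = rng.normal(0, 1, n)
    real, fake = real_batch(n), a * z + b
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * np.mean(-(1 - d_real) * real + d_fake * fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)
    # --- Train the generator: push D(fake) up (fool the discriminator) ---
    z = rng.normal(0, 1, n)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    a -= lr * np.mean(-(1 - d_fake) * w * z)
    b -= lr * np.mean(-(1 - d_fake) * w)

# Mean of generated samples: it should have drifted toward the real mean of 4.
print(float(np.mean(a * rng.normal(0, 1, 10000) + b)))
```

The two networks are reduced here to two parameters each, but the alternating updates show the adversarial structure: each step the discriminator gets better at spotting fakes, forcing the generator to produce more realistic samples.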

2. Text Generation Models:

  • Description: Models trained to generate human-like text based on patterns learned from large datasets.
  • Examples:
    • OpenAI’s GPT (Generative Pre-trained Transformer) models for generating coherent and contextually relevant text.
    • Chatbots capable of generating natural and context-aware responses.

3. Style Transfer Models:

  • Description: Models that can transfer artistic styles from one image to another.
  • Examples:
    • Converting a photograph into a painting in the style of a famous artist.
    • Applying the visual style of one image to another.
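A standard trick behind neural style transfer is to compare Gram matrices of feature maps, which capture which features fire together (a stand-in for "style") regardless of where they appear in the image. The sketch below uses random stand-in features, since running a real pretrained convolutional network is beyond a short example:

```python
import numpy as np

rng = np.random.default_rng(0)

def gram_matrix(features):
    # features: (channels, height*width) activations from some network layer.
    # The Gram matrix records how strongly feature channels co-occur.
    return features @ features.T / features.shape[1]

# Hypothetical feature maps (random here; in practice they would come from
# a pretrained convolutional network applied to the two images).
style_feats   = rng.normal(size=(16, 64))
content_feats = rng.normal(size=(16, 64))

def style_loss(generated_feats, style_feats):
    g, s = gram_matrix(generated_feats), gram_matrix(style_feats)
    return np.mean((g - s) ** 2)

print(style_loss(content_feats, style_feats) > 0)      # True: styles differ
print(style_loss(style_feats, style_feats))            # 0.0: identical style
```

Style transfer then iteratively adjusts the generated image so that its Gram matrices match the style image while its raw features stay close to the content image.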

4. Music Composition Models:

  • Description: AI models trained on musical patterns to compose original music.
  • Examples:
    • Creating new pieces of music in the style of classical composers.
    • Generating background music for videos or games.
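A very small-scale analogue of "training on musical patterns" is a Markov chain fitted to a single tune. The note sequence below is a toy training corpus; real music models learn from far richer representations:

```python
import random
from collections import defaultdict

# A toy "training corpus": one simple tune, written as note names.
melody = "C C G G A A G F F E E D D C".split()

# Learn transition patterns: which note tends to follow which.
transitions = defaultdict(list)
for cur, nxt in zip(melody, melody[1:]):
    transitions[cur].append(nxt)

def compose(start, length, seed=7):
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:                 # no observed continuation: stop early
            break
        out.append(random.choice(choices))
    return out

print(" ".join(compose("C", 12)))
```

The composed sequence follows the note-to-note habits of the training tune without copying it, which is the essence of pattern-based music generation (at a vastly smaller scale than real systems).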

5. Image-to-Image Translation Models:

  • Description: Models capable of converting images from one domain to another while preserving relevant content.
  • Examples:
    • Translating satellite images to maps.
    • Converting black-and-white photos to color.

6. Video Game Content Generation:

  • Description: AI models used in the gaming industry to generate various game content.
  • Examples:
    • Procedural generation of in-game landscapes and environments.
    • Creating non-player characters (NPCs) with diverse appearances and behaviors.
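Procedural landscape generation is often built on fractal noise; a classic, easily sketched variant is one-dimensional midpoint displacement, which produces a different plausible terrain profile for every random seed:

```python
import random

def generate_terrain(iterations, roughness=8.0, seed=3):
    # Midpoint displacement: start with a flat line, then repeatedly insert
    # a midpoint between each pair of neighbours and nudge it by a shrinking
    # random amount, adding finer detail at every pass.
    random.seed(seed)
    heights = [0.0, 0.0]
    spread = roughness
    for _ in range(iterations):
        new = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2 + random.uniform(-spread, spread)
            new += [a, mid]
        new.append(heights[-1])
        heights = new
        spread /= 2
    return heights

terrain = generate_terrain(5)
print(len(terrain))                                       # 33 height samples
print(" ".join("#" if h > 0 else "." for h in terrain))   # crude side view
```

Games use 2-D and 3-D versions of this idea (and richer noise functions) to create endless unique landscapes without an artist drawing each one.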

7. Drug Discovery:

  • Description: Generative models used in pharmaceutical research to propose new molecular structures for drug development.
  • Examples:
    • Generating potential drug candidates based on known chemical structures.
    • Designing molecules with desired properties.

In summary, Generative AI refers to deep-learning models that can take raw data (say, all of Wikipedia or the collected works of Rembrandt) and “learn” to generate statistically probable outputs when prompted. At a high level, generative models encode a simplified representation of their training data and draw from it to create a new work that is similar, but not identical, to the original data.
