Free AI Chatbot Tutorial for Beginners

This is a free, beginner‑friendly tutorial that walks you through building a real AI chatbot from scratch. No expensive tools and no previous AI experience needed: basic Python is enough.

What you will build

A chatbot that can understand different types of questions (weather, time, goodbye, etc.) and answer them. It learns from examples instead of hard‑coded rules. We will use a small neural network that you can train on your own computer.

Step 1 – Prepare your computer

Install Python (version 3.8 or newer; 3.7 has reached end of life) if you don’t have it. Then open a terminal (command prompt) and install one library:

pip install numpy

That is the only external library we need. Everything else we write ourselves.

Step 2 – Understand the idea

A human says: “What is the temperature?”
We want the bot to recognise that as the “temperature” intent and reply: “Let me check the temperature.”

We teach the bot by giving it example sentences for each intent. Then the bot learns to turn sentences into numbers, finds patterns, and classifies new sentences it has never seen.
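To see the difference between rules and learning, here is a hypothetical hard‑coded matcher (not part of the final bot, the function and keyword lists are made up for illustration). It only fires on exact keywords, so any rephrasing falls through:

```python
# A hypothetical rule-based matcher, for contrast only (not used later).
def rule_based_intent(sentence):
    rules = {
        "temperature": ["temperature"],
        "time": ["time", "clock"],
    }
    words = sentence.lower().split()
    for intent, keywords in rules.items():
        # Fire only if one of the exact keywords appears in the sentence.
        if any(kw in words for kw in keywords):
            return intent
    return "unknown"

print(rule_based_intent("what is the temperature"))  # temperature
print(rule_based_intent("how hot is it outside"))    # unknown - rules miss rephrasings
```

A learned classifier, by contrast, can pick up that “hot” and “temperature” tend to appear in the same kind of sentence, which is what the rest of this tutorial builds.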

Step 3 – Create the training data

Create a new Python file called chatbot.py. Copy this into it:

# Training data: intents, example sentences, and responses
intents = {
    "greeting": {
        "examples": ["hello", "hi", "good morning", "hey there"],
        "response": "Hello! How can I help you?"
    },
    "time": {
        "examples": ["what time is it", "tell me the time", "current time"],
        "response": "I cannot see a clock, but you can check your device."
    },
    "weather": {
        "examples": ["how is the weather", "will it rain", "weather forecast"],
        "response": "I am a demo bot – in a real app I would call a weather API."
    },
    "goodbye": {
        "examples": ["bye", "see you", "goodbye", "exit"],
        "response": "Goodbye! Come back anytime."
    },
    "unknown": {
        "examples": [],
        "response": "Sorry, I did not understand that. Try 'hello', 'time', 'weather' or 'bye'."
    }
}

Step 4 – Turn words into numbers

Neural networks only work with numbers. We will use a simple method:
– Break the sentence into lowercase words.
– Assign each word a random vector of 50 numbers.
– Average the word vectors to get one 50‑number vector for the whole sentence.

Add this code (it creates random “embeddings” for words – not perfect, but enough to learn):

import numpy as np
import re

# Create a random number vector for every word we might see
word_vectors = {}
def get_word_vector(word):
    if word not in word_vectors:
        word_vectors[word] = np.random.randn(50) * 0.1
    return word_vectors[word]

def sentence_to_vector(sentence):
    words = re.findall(r"[a-z]+", sentence.lower())
    if not words:
        return np.zeros(50)
    vectors = [get_word_vector(w) for w in words]
    return np.mean(vectors, axis=0)

Test it:

print(sentence_to_vector("hello there"))  # shows 50 numbers

Step 5 – Build a tiny neural network

We will make a network with one hidden layer. It takes 50 input numbers and outputs 4 scores (one for greeting, time, weather, goodbye). The highest score is the intent.

Add this class:

class TinyNN:
    def __init__(self, input_size, hidden_size, output_size):
        # Random weights and biases
        self.W1 = np.random.randn(input_size, hidden_size) * 0.01
        self.b1 = np.zeros((1, hidden_size))
        self.W2 = np.random.randn(hidden_size, output_size) * 0.01
        self.b2 = np.zeros((1, output_size))

    def forward(self, X):
        # X is a batch of sentence vectors
        self.z1 = np.dot(X, self.W1) + self.b1
        self.a1 = np.maximum(0, self.z1)  # ReLU activation
        self.z2 = np.dot(self.a1, self.W2) + self.b2
        # Softmax to get probabilities
        exp_z = np.exp(self.z2 - np.max(self.z2, axis=1, keepdims=True))
        self.output = exp_z / np.sum(exp_z, axis=1, keepdims=True)
        return self.output

    def backward(self, X, y, learning_rate=0.01):
        m = X.shape[0]
        # Output layer gradient
        dZ2 = self.output - y
        dW2 = np.dot(self.a1.T, dZ2) / m
        db2 = np.sum(dZ2, axis=0, keepdims=True) / m
        # Hidden layer gradient
        dA1 = np.dot(dZ2, self.W2.T)
        dZ1 = dA1 * (self.z1 > 0).astype(float)
        dW1 = np.dot(X.T, dZ1) / m
        db1 = np.sum(dZ1, axis=0, keepdims=True) / m
        # Update weights
        self.W2 -= learning_rate * dW2
        self.b2 -= learning_rate * db2
        self.W1 -= learning_rate * dW1
        self.b1 -= learning_rate * db1
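A note on the softmax line in forward: subtracting the row maximum before np.exp prevents overflow when scores get large. A quick self‑contained sketch of why (the numbers here are just an extreme example):

```python
import numpy as np

z = np.array([[1000.0, 999.0]])

# Naive softmax overflows: exp(1000) is inf, so inf/inf gives nan.
naive = np.exp(z) / np.sum(np.exp(z), axis=1, keepdims=True)
print(naive)  # [[nan nan]]

# Subtracting the row max first is mathematically identical (the shift
# cancels between numerator and denominator) but keeps exp() finite.
shifted = np.exp(z - np.max(z, axis=1, keepdims=True))
stable = shifted / np.sum(shifted, axis=1, keepdims=True)
print(stable)  # roughly [[0.731 0.269]]
```

The probabilities still sum to 1 and the ranking of scores is unchanged; only the arithmetic becomes safe.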

Step 6 – Prepare the training data

Map each intent to an index (0,1,2,3). Then turn every example sentence into a vector and create a one‑hot target (e.g. [1,0,0,0] for greeting).

Add:

intent_list = ["greeting", "time", "weather", "goodbye"]
intent_to_idx = {name: i for i, name in enumerate(intent_list)}

X_train = []
y_train = []

for intent in intent_list:
    for sentence in intents[intent]["examples"]:
        vec = sentence_to_vector(sentence)
        X_train.append(vec)
        target = np.zeros(4)
        target[intent_to_idx[intent]] = 1
        y_train.append(target)

X_train = np.array(X_train)
y_train = np.array(y_train)

Step 7 – Train the network

Create the network and run the training loop. The network will see the examples many times (epochs) and gradually adjust its weights to predict the correct intent.

nn = TinyNN(input_size=50, hidden_size=30, output_size=4)

epochs = 600
for epoch in range(epochs):
    out = nn.forward(X_train)
    nn.backward(X_train, y_train, learning_rate=0.01)
    if epoch % 100 == 0:
        # calculate accuracy
        preds = np.argmax(out, axis=1)
        actual = np.argmax(y_train, axis=1)
        acc = np.mean(preds == actual)
        print(f"Epoch {epoch}, accuracy: {acc:.2f}")

After training, the accuracy should approach 100% on the training examples. That is expected: with only 14 sentences the network is essentially memorizing them, and the real test is how it handles new phrasings.
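Accuracy can hit 100% while the network is still becoming more confident, so a smoother progress signal is the cross‑entropy loss, which is the quantity the softmax‑and‑gradient code above actually minimizes. A minimal, self‑contained sketch (the helper name cross_entropy is my own, not from the tutorial code):

```python
import numpy as np

def cross_entropy(probs, targets, eps=1e-9):
    # Mean negative log-probability assigned to the correct class.
    # eps guards against log(0) for a confidently wrong prediction.
    return -np.mean(np.sum(targets * np.log(probs + eps), axis=1))

# Toy check: a confident correct prediction has low loss,
# a 50/50 guess has higher loss.
targets = np.array([[1.0, 0.0]])
print(cross_entropy(np.array([[0.9, 0.1]]), targets))  # about 0.105
print(cross_entropy(np.array([[0.5, 0.5]]), targets))  # about 0.693
```

Inside the training loop you could print cross_entropy(out, y_train) next to the accuracy: it keeps falling even after accuracy saturates.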

Step 8 – Add a prediction function

This function takes a sentence from the user, turns it into a vector, asks the network for probabilities, and picks the intent with the highest score. If the highest score is too low (below 0.5), it uses the “unknown” intent.

def predict_intent(sentence, confidence_threshold=0.5):
    vec = sentence_to_vector(sentence).reshape(1, -1)
    probs = nn.forward(vec)[0]
    best_idx = np.argmax(probs)
    best_prob = probs[best_idx]
    if best_prob < confidence_threshold:
        return "unknown", best_prob
    return intent_list[best_idx], best_prob

Step 9 – The chat loop

Now write the loop that lets you talk to the bot:

print("🤖 Chatbot ready. Type 'quit' to exit.")
while True:
    user = input("You: ").strip().lower()
    if user == "quit":  # let the trained "goodbye" intent handle "bye" and "exit"
        print("Bot: Goodbye!")
        break
    intent, conf = predict_intent(user)
    response = intents[intent]["response"]
    print(f"Bot: {response} (I am {conf:.0%} sure it was '{intent}')")

Step 10 – Run and test

Save your file and run it:

python chatbot.py

Try these phrases:

  • “hello” → greeting
  • “what time is it” → time
  • “will it rain” → weather
  • “goodbye” → goodbye
  • “my cat is funny” → unknown (low confidence)

What just happened (explanation)

  1. Word vectors – Each word got a fixed random list of 50 numbers. In this toy setup those vectors are never updated during training; only the network weights (W1, W2) learn, and they learn to separate whatever random vectors the training words happened to receive. Real systems use trained or pretrained embeddings, where similar words like “hello” and “hi” genuinely end up with similar vectors.
  2. Sentence vector – By averaging the word vectors, we lost word order (“is it raining” = “it is raining”) but for simple intent detection, that is often enough.
  3. Neural network – The two weight matrices (W1, W2) learned to map 50 numbers to 4 output probabilities. The hidden layer (30 neurons) learned patterns like “if I see words weather, rain, forecast → increase the weather output”.
  4. Backpropagation – Every time the network made a wrong guess, we calculated how much each weight contributed to the error and adjusted it slightly. After many corrections, the weights became useful.
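Point 2 is easy to verify: because we average word vectors, any reordering of the same words produces an identical sentence vector. A self‑contained check, re‑creating the same embedding scheme used in the tutorial:

```python
import numpy as np
import re

word_vectors = {}
def get_word_vector(word):
    # Same scheme as the tutorial: a cached random 50-number vector per word.
    if word not in word_vectors:
        word_vectors[word] = np.random.randn(50) * 0.1
    return word_vectors[word]

def sentence_to_vector(sentence):
    words = re.findall(r"[a-z]+", sentence.lower())
    if not words:
        return np.zeros(50)
    return np.mean([get_word_vector(w) for w in words], axis=0)

a = sentence_to_vector("is it raining")
b = sentence_to_vector("it is raining")
print(np.allclose(a, b))  # True - averaging throws away word order
```

For intent detection this is usually harmless, but it is why a bag‑of‑words bot cannot tell “dog bites man” from “man bites dog”.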

How to make it better (free next steps)

  • Add more intents – calendar, news, jokes. Just add them to intents and retrain.
  • Use real word embeddings – Download the free GloVe vectors (glove.6B.50d.txt). Replace get_word_vector with a lookup of real pretrained vectors. That dramatically improves understanding.
  • Remember conversation – Store the last intent in a variable. If the user says “thank you” after weather, the bot can reply differently.
  • Call real APIs – Add a weather API or time API. The bot can then give real answers instead of placeholders.
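On the GloVe upgrade: each line of glove.6B.50d.txt is a word followed by its 50 numbers, separated by spaces, so a loader is only a few lines. This is a sketch assuming the standard glove.6B download; the function name load_glove and the two sample lines are my own:

```python
import numpy as np

def load_glove(lines):
    # Each line looks like: "word v1 v2 ... v50" (space-separated).
    vectors = {}
    for line in lines:
        parts = line.rstrip().split(" ")
        vectors[parts[0]] = np.array(parts[1:], dtype=float)
    return vectors

# Works the same on a list of strings or an open file handle:
sample = ["hi 0.1 0.2 0.3", "hello 0.1 0.2 0.4"]
vecs = load_glove(sample)
print(vecs["hi"])  # [0.1 0.2 0.3]

# With the real file you would replace the random lookup like this:
# with open("glove.6B.50d.txt", encoding="utf-8") as f:
#     word_vectors = load_glove(f)
```

After loading, get_word_vector becomes a dictionary lookup (with a zero vector or random fallback for unknown words), and the rest of the tutorial code runs unchanged.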

Free resources to learn more

  • Official Python tutorial – python.org (free)
  • Neural Networks from Scratch – book by Sentdex (free online chapters)
  • Google’s Machine Learning Crash Course – free, no paid certificate needed

You now have a working AI chatbot. It is small, it is free, and you built it yourself. Every big commercial chatbot started exactly this way.
