The homework this week was to implement a perceptron in Python.

Here is the code on GitHub, and I have copied it below. It was interesting to see (and remember) that the weighted sum of the inputs can be calculated as the dot product of the input vector with the weight vector.
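
For instance, a minimal sketch of that equivalence (the numbers here are just illustrative):

import numpy as np
#
x = np.array([1, 0, 1])         # two inputs plus the bias input
W = np.array([0.5, -0.3, 0.2])  # one weight per input
# The explicit weighted sum...
s_sum = x[0]*W[0] + x[1]*W[1] + x[2]*W[2]
# ...is the same as the dot product of the two vectors
s_dot = np.dot(x, W)
print(np.isclose(s_sum, s_dot))  # True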

I played a little with the learning rate and the number of training iterations; the values below worked well both for the OR gate and for the AND gate (the AND-gate data is sketched at the end of this post).

import numpy as np
#
N = 2 # Number of inputs
LearningRate = 0.1 # Learning rate
#
N += 1 # Add one extra input to account for the bias
#
# Start with an array of random weights, uniform in [-1, 1)
W = np.random.random(N)*2 - 1
print("Initial weights")
print(W)
#
# This function calculates the weighted sum of the inputs and returns its sign
def infer(x):
    # The weighted sum is the dot product between inputs and weights
    s = np.dot(x,W)
    # Apply the activation function, in this case sign
    # (np.sign(0) returns 0, which counts as a wrong guess for either
    # class, so training still corrects it)
    return np.sign(s)
#
# Receive an input/output pair and return the adjusted weights
# (the caller reassigns the global W with the result)
def train(x, y):
    print("Example")
    print(x)
    # Expected output for x
    expected = y
    print("Expected %d" % expected)
    # Get the output based on the current weights
    guessed = infer(x)
    print("Guessed: %d" % guessed)
    # Calculate the error
    error = expected - guessed
    print("Error: %d" % error)
    # The amount of change in the weights is proportional to the error and input
    deltaW = error*x
    print("DeltaW*LearningRate")
    # Multiply it by a constant (learning rate) and return the new weights
    print(deltaW*LearningRate)
    return W + deltaW*LearningRate
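# Worked example with made-up numbers: if x = [1, 0, 1], expected = 1
# and guessed = -1, then error = 2, deltaW = [2, 0, 2], and the weights
# move by [0.2, 0, 0.2] after multiplying by the 0.1 learning rate.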
#
# Training data, truth table for OR
# Input columns are first input, second input, and bias input (always 1)
trainingX = np.array( [
                        [0,0,1],
                        [0,1,1],
                        [1,0,1],
                        [1,1,1]
                        ])
# Outputs are either -1 (False) or 1 (True)
trainingY = np.array( [
                        -1,
                        1,
                        1,
                        1,
                        ])
#
# Train!
# Repeat the training with the same data several times
for i in range(10):
    print("Iteration number %d " % i)
    # For each example in the training data
    for index, x in enumerate(trainingX):
        print( "Training..." )
        # Get the expected output
        y = trainingY[index]
        # Update the weights with the new ones
        W = train(x,y)
#
# Results
print("Final weights")
print(W)
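# The learned weights define the line W[0]*x1 + W[1]*x2 + W[2] = 0,
# which is the decision boundary separating the True inputs from the
# False ones.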
#
# Infer!
# Put the input data in the model and get the inferred results
print("Starting inferences")
for x in trainingX:
    print("Input")
    print(x)
    print("Output")
    print(infer(x))
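
As mentioned at the top, the same hyperparameters also worked for the AND gate; only the expected outputs change relative to the OR table. A minimal sketch of the data swap:

# Training data, truth table for AND
# Input columns are first input, second input, and bias input (always 1)
trainingX = np.array( [
                        [0,0,1],
                        [0,1,1],
                        [1,0,1],
                        [1,1,1]
                        ])
# Only [1,1] is True (1); every other input pair is False (-1)
trainingY = np.array( [
                        -1,
                        -1,
                        -1,
                        1,
                        ])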