It is a cold January day in 1921 in the city of Hradec Králové, in what is now the Czech Republic. Snow partially covers the streets while dozens of robots storm a factory; it is the end of humanity and the end, as well, of a certain play by one Karel Čapek. With it, he introduced not only the word robot to the world but also some of the partially unfounded fears robots engender. Those robots, meat robots, were meant to be slaves, a point of view still held about today's synthetic robots. We fear rebellion, we fear freedom.
Braitenberg's dream: A tale of a brain is a tale of a body [Robotics Tutorial]
Robots possess the not uninteresting and perhaps defining feature of having a body. With it, they cohabit this physical world with us and ultimately own a part of that world themselves. In this series of articles, we will build some of those worlds from within: worlds where the body becomes part of a computation, a sort of life. We will make synthetic animals, or at least something bearing a resemblance to them at a level that will hopefully remain harmless.
Other things that possess a body are animals, humans, and ants. I do not want to get too philosophical yet, but there is something that differentiates them from things that ARE a body, like a rock, a bottle, or a table. Yet all of them are subject to the laws of physics. Let's start by diving into what a robot can do with all that physics. It won't be that intelligent, but perhaps it will help us reconsider what we mean by intelligence.
Following the Gradient
Difference drives a significant part of life. A bacterium senses the difference in some chemical signal of food or danger. An animal perceives the contrast between light and darkness, or between cold and warm temperatures. All of them orient towards and follow the most beneficial path. How do they do it?
A helpful way to start our investigation is by taking what we just said at face value. Imagine a signal P(x), a field with a value at each point in space in our tiny world. And suppose that the signal is maximal at some desired source. The smart thing to do, if we are hungry, is to follow the gradient of our signal: to pursue the difference so that we approach the source, climb the proverbial hill, and potentially survive.
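For concreteness, we can take a Gaussian signal centred at the source m, as in the code below, and let the agent's velocity equal the local slope:

$$P(x) = e^{-\sigma (x - m)^2}, \qquad \dot{x} = \frac{dP}{dx} = -2\sigma (x - m)\, e^{-\sigma (x - m)^2}$$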
So we change our position in a way proportional to the difference. If it is positive, we move right; if negative, we move left. Let's see what our agent does. To implement it, we need to place it at an initial position and integrate the equation to see where it goes:
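In Python, a simple Euler scheme does the integration (an excerpt from the complete listing below):

X[0] = -0.3                         # Initial position
for i in range(T-1):
    X[i + 1] = X[i] + h*f(X[i], m)  # Euler step along the gradient
    t[i + 1] = t[i] + h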
And we make some plots:
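Abbreviated from the complete listing, the left panel shows the signal with the agent's path over it, and the right panel shows the position over time:

ax[0].plot(xx, p(xx, m), linewidth = 3.0)  # The signal P(x)
ax[0].plot(X, p(X, m) + 0.04, 'k--')       # The agent's path over it
ax[1].plot(t, X, linewidth = 3.0)          # Position x(t) over time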
Figure 1. Gradient following agent
In the first panel, we see that, lo and behold, our agent is pursuing the gradient, climbing the hill. The second panel shows the position as it evolves over time. The agent clearly approaches the source and stays there; it must be cosy.
Here is the complete code:
import numpy as np
import matplotlib.pyplot as plt

# Parameters of the numerical scheme
T = 500
h = 0.01

# Parameters of the model
sigma = 1.0
m = 0.5
p = lambda x, m: np.exp(-sigma*(x - m)**2)
f = lambda x, m: -2.0*sigma*(x - m)*np.exp(-sigma*(x - m)**2)

X = np.zeros(T)
t = np.zeros(T)
X[0] = -0.3
for i in range(T-1):
    X[i + 1] = X[i] + h*f(X[i], m)
    t[i + 1] = t[i] + h

# Plotting the path
xx = np.linspace(-1.5, 2.0, 100)
fig, ax = plt.subplots(1, 2)
ax[0].plot(xx, p(xx, m), linewidth = 3.0)
ax[0].plot(X, p(X, m) + 0.04, 'k--')
ax[0].plot(X[-1], p(X[-1], m) + 0.04, 'k.', markersize = 16.0)
ax[0].plot([m, m], [0, 1], '--', color = [.7, .7, .7])
ax[0].set_title('Following the gradient')
ax[0].set_xlabel('x')
ax[0].set_ylabel('P(x)')
ax[1].plot(t, X, linewidth = 3.0)
ax[1].plot([t[0], t[-1]], [m, m], '--', color = [.7, .7, .7])
ax[1].set_title('Agent\'s position')
ax[1].set_xlabel('time')
ax[1].set_ylabel('x(t)')
plt.show()
An Actual Body
Of course, the model in the last section is not a robot, just an evolution equation, a description of our dynamical goal. In his book "Vehicles: Experiments in Synthetic Psychology", the neuroscientist Valentino Braitenberg imagined simple bodies that could carry out these mindless yet essential tasks for survival.
The idea is simple. Let's connect two sensors to the two wheels of a differential drive robot (DDR). The nature of this connection is such that a property of the world (e.g., light) generates a signal that excites or inhibits the actuators in proportion to the quantity measured. There are three ways of implementing this idea. The first one (a), which we will call ipsilateral, connects each sensor to the wheel on the same side of the body. The second one (b), contralateral, connects each sensor to the wheel on the opposite side, while the third one (c) connects each sensor to both.
Figure 2. Simple Braitenberg vehicles
Braitenberg's book describes the inspiration behind these curious vehicles in fascinating detail. In the meantime, let's see for ourselves what they can do. We will focus on connection topologies (a) and (b), as they are the ones able to detect differences. By implementing only the simplest vehicles, we will barely scratch the surface of what they can do. So, go ahead and read the book after you finish this article!
Creating a Braitenberg vehicle (BV)
In robotics, simulations are our friends and, many times, the best way of testing our models. We now proceed to create a model of a BV. As mentioned before, this is a DDR, and the model is a kinematic one, as we are not yet interested in dynamics but only in the robot's movement. Specifically, we want to create two sensors, position them at fixed points on the robot's body, and use them to change the velocity of the wheels differentially.
We start by identifying the configuration space of our robot. From our description, it is clear that its kinematics is fully determined by the position of a point in its rigid body plus an orientation angle; the configuration space is therefore ℝ² × S¹ (and since our little world wraps around at its edges, the positions live on a torus too). Note that other robots have different configuration spaces; can you think of some others? Additionally, in order to obtain our evolution equations, it is helpful to use two frames of reference: one fixed at the body (egocentric) and a world one (allocentric).
In the body frame of reference, the robot's velocity vector is given by:
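$$\dot{\mathbf{q}}_{\mathrm{body}} = \begin{pmatrix} v_l \\ 0 \\ \dot{\theta} \end{pmatrix}$$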
where vl is the velocity tangent to the robot's trajectory and θ̇ is the rate of change of the steering direction. The x-axis of the body frame is aligned with the steering direction. To find the velocity in world space, which is precisely what we need to simulate our vehicle, we must account for the rotation of the body's frame of reference, so the world-space velocity is given by:
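$$\begin{pmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} v_l \\ 0 \\ \dot{\theta} \end{pmatrix} = \begin{pmatrix} v_l \cos\theta \\ v_l \sin\theta \\ \dot{\theta} \end{pmatrix}$$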
Remember that we want the sensors to directly affect the wheels' velocities; from that, we need to derive the evolution equations for our robot's body in world space. This process is known as forward kinematics. In this case, the forward kinematics equations for our robot, with the constants used in the code below, are:
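$$v_l = \frac{r}{D}\,(\phi_R + \phi_L)$$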
and
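$$\omega = \dot{\theta} = \frac{r}{D}\,(\phi_R - \phi_L)$$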
If the sensor readings are SL(x) and SR(x), following Braitenberg's recipe, that is, making the wheel velocities proportional to them, we get:
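$$\begin{pmatrix} \phi_L \\ \phi_R \end{pmatrix} = k\,\pi \begin{pmatrix} S_L(\mathbf{x}) \\ S_R(\mathbf{x}) \end{pmatrix}$$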
Here the function π represents the fact that the order can change depending on the chosen topology (figure 2). Now we just need to attach the sensors to the body. For the sake of simplicity, we will assume a circular body with the wheels' axis as its diameter and the sensors positioned at angles ±π/4 from the heading direction along the circle, as in the code below. In order to keep their position updated, we compute it in world space from
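$$\mathbf{s}_{\mathrm{world}} = T\,\mathbf{s}_{\mathrm{body}}$$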
where T is the transformation that translates to the robot's current position and rotates towards its current orientation (see the code later).
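In homogeneous coordinates, this is exactly the matrix returned by getBodyTransform() in the code:

$$T = \begin{pmatrix} \cos\theta & -\sin\theta & x \\ \sin\theta & \cos\theta & y \\ 0 & 0 & 1 \end{pmatrix}$$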
Finally, the Implementation
OK, that was quite a ride. But hopefully, everything is ready to be implemented. First, we create the external scaffold of our simulation. This will provide helpful insight into what interface our vehicle should offer. With this in mind, the main loop looks like this:
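In essence (the full loop in the complete listing also draws the gradient and the path travelled so far):

for i in range(T):
    # 1. Update the position of the robot
    x, y, theta = v.updatePosition(G)
    # 2. Draw the result
    v.draw(ax)
    plt.pause(0.1)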
It consists of just two steps! We update the position of the robot and draw the result. For the gradient, we use a two-dimensional version of the one in our first example. As we want to implement different connection topologies for our vehicles, we create an abstract class containing the primary behaviour of the robot and leave the sensory-motor connection as an abstract method; this method should return a permutation matrix that maps sensors to wheels, as sketched below.
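The skeleton of the abstract class, from the complete listing:

class Vehicle(object):
    @abstractmethod
    def getSensorTransform(self):
        # Permutation matrix mapping sensors to wheels;
        # concrete vehicles must implement it
        return None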
Using forward kinematics, the method updatePosition() computes the wheel velocities from the sensor readings at the correct positions in world space, and from them the robot's linear and angular velocities. Then we perform an integration step to obtain the new position:
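def updatePosition(self, G):
    # First sense
    phi_L, phi_R = self.sense(G)
    # Then compute forward kinematics
    vl = (self.r/self.D)*(phi_R + phi_L)
    omega = (self.r/self.D)*(phi_R - phi_L)
    # Integrate to update the state
    self.theta += self.ostep*omega
    self.x += self.pstep*vl*np.cos(self.theta)
    self.y += self.pstep*vl*np.sin(self.theta)
    self.x, self.y = self.wrap(self.x, self.y)
    return self.x, self.y, self.theta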
The method sense() obtains the readings from the gradient at the right location in space:
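def sense(self, G):
    k1 = 10.0
    k2 = 10.0
    P = self.getSensorTransform()
    T = self.getBodyTransform()
    # Sensor positions in world space, with the permutation applied
    n_sensors = np.dot(T, np.dot(self.p_sensors, P))
    # Read the gradient value at those positions
    r1 = G(n_sensors[0, 0], n_sensors[1, 0])
    r2 = G(n_sensors[0, 1], n_sensors[1, 1])
    return k1*r1, k2*r2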
Finally, the method draw() draws a circle for the body, with smaller circles for the wheels and sensors.
Let's Test It
The getSensorTransform() method implements the different connection matrices for our excitatory connections; the inhibitory cases are left as an exercise for the reader. Vehicles 2a (ipsilateral) and 2b (contralateral) use excitatory connections in the topologies previously described. The concrete classes look like this:
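class Vehicle2a(Vehicle):
    # Ipsilateral: identity mapping
    def getSensorTransform(self):
        return np.array([[1, 0],
                         [0, 1]])

class Vehicle2b(Vehicle):
    # Contralateral: crossed mapping
    def getSensorTransform(self):
        return np.array([[0, 1],
                         [1, 0]])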
Vehicle 2a seems to dislike the source:
While 2b seems attracted to it:
But instead of approaching peacefully, it charges at the source, accelerating in its vicinity. Vehicles 3a and 3b have different behaviours; can you anticipate what they are?
Here is the complete code:
from abc import abstractmethod
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Circle

# Generic vehicle
class Vehicle(object):
    def __init__(self, x0, y0, theta0):
        self.pstep = 0.005
        self.ostep = 4.0
        self.x, self.y, self.theta = x0, y0, theta0
        self.scale = 0.05
        self.r = 1.0*self.scale  # Wheel radius
        self.D = 2.0*self.scale  # Diameter of the robot
        # Position of the sensors in the body frame of reference
        self.p_sensors = np.array([[self.r*np.cos(np.pi/4.0), self.r*np.sin(np.pi/4.0), 1],
                                   [self.r*np.cos(-np.pi/4.0), self.r*np.sin(-np.pi/4.0), 1]]).T

    @abstractmethod
    def getSensorTransform(self):
        # This is the matrix defining the sensor-to-motor mapping
        return None

    def getBodyTransform(self):
        # Affine transform from the body to the world frame of reference
        return np.array([[np.cos(self.theta), -np.sin(self.theta), self.x],
                         [np.sin(self.theta), np.cos(self.theta), self.y],
                         [0.0, 0.0, 1.0]])

    def sense(self, G):
        k1 = 10.0
        k2 = 10.0
        P = self.getSensorTransform()
        T = self.getBodyTransform()
        # The position of the sensors is obtained in world space
        # and the sensor permutation is also applied
        n_sensors = np.dot(T, np.dot(self.p_sensors, P))
        # Then we get the gradient value at those positions
        r1 = G(n_sensors[0, 0], n_sensors[1, 0])
        r2 = G(n_sensors[0, 1], n_sensors[1, 1])
        return k1*r1, k2*r2

    def wrap(self, x, y):
        # Wrap positions around the edges of the world
        if x < 0:
            x = 2.0
        elif x > 2.0:
            x = 0.0
        if y < 0:
            y = 2.0
        elif y > 2.0:
            y = 0
        return x, y

    def updatePosition(self, G):
        # Main step function
        # First sense
        phi_L, phi_R = self.sense(G)
        # Then compute forward kinematics
        vl = (self.r/self.D)*(phi_R + phi_L)
        omega = (self.r/self.D)*(phi_R - phi_L)
        # Update the next state from the previous one
        self.theta += self.ostep*omega
        self.x += self.pstep*vl*np.cos(self.theta)
        self.y += self.pstep*vl*np.sin(self.theta)
        self.x, self.y = self.wrap(self.x, self.y)
        return self.x, self.y, self.theta

    def draw(self, ax):
        T = self.getBodyTransform()
        n_sensors = np.dot(T, self.p_sensors)
        left_wheel = np.dot(T, np.array([0, self.D/2.0, 1]).T)
        right_wheel = np.dot(T, np.array([0, -self.D/2.0, 1]).T)
        # Drawing body
        body = Circle((self.x, self.y), self.D/2.0, fill = False, color = [0, 0, 0])
        # Drawing sensors and wheels
        s1 = Circle((n_sensors[0, 0], n_sensors[1, 0]), self.scale*0.1, color = 'red')
        s2 = Circle((n_sensors[0, 1], n_sensors[1, 1]), self.scale*0.1, color = 'red')
        w1 = Circle((left_wheel[0], left_wheel[1]), self.scale*0.2, color = 'black')
        w2 = Circle((right_wheel[0], right_wheel[1]), self.scale*0.2, color = 'black')
        ax.add_patch(body)
        ax.add_patch(s1)
        ax.add_patch(s2)
        ax.add_patch(w1)
        ax.add_patch(w2)

# Particular vehicles
class Vehicle2a(Vehicle):
    def getSensorTransform(self):
        return np.array([[1, 0],
                         [0, 1]])

class Vehicle2b(Vehicle):
    def getSensorTransform(self):
        return np.array([[0, 1],
                         [1, 0]])

# Numerical parameters
T = 100

# The signal
sigma = 0.1
mu = np.array([1.0, 1.0])
G = lambda x, y: np.exp(-sigma*((x - mu[0])**2 + (y - mu[1])**2))

fig, ax = plt.subplots()
plt.show(block = False)

x0, y0, theta0 = 0.5, 0.5, 0
# v = Vehicle2a(x0, y0, theta0)
v = Vehicle2b(0.5, 0.5, 0)

xx = np.arange(0, 2, 0.02)
yy = np.arange(0, 2, 0.02)
X, Y = np.meshgrid(xx, yy)
Path = np.zeros((2, T))
for i in range(T):
    # Drawing the gradient
    plt.cla()
    Z = G(X, Y)
    plt.contour(X, Y, Z, 40, alpha = 0.5)
    # Stepping the robot
    x, y, theta = v.updatePosition(G)
    for j in range(i):
        ax.plot(Path[0, j], Path[1, j], 'k.', markersize = 5)
    ax.plot(Path[0, 0], Path[1, 0], 'r*', markersize = 6)
    v.draw(ax)
    Path[:, i] = np.array([x, y]).T
    plt.axis([0, 2, 0, 2])
    plt.xlabel('x')
    plt.ylabel('y')
    plt.title('Braitenberg vehicle 2b')
    plt.pause(0.1)
plt.show()
Braitenberg's Dream
In this article, we have illustrated how simple bodies can yield interesting behaviours that we sometimes associate with specific functions and interpretations. Such is the nature of our mind: a blessing and a curse. A blessing in that, when developing autonomous robots intended for Human-Robot Interaction, simple designs can show emergent behaviours attractive to the human user. Yet a curse in that we can be fooled into assigning some function to actions that might as well be the product of the entangled history of the organism's own evolution.
References
- Čapek, K. (2004). R.U.R. (Rossum's Universal Robots). Penguin.
- Braitenberg, V. (1986). Vehicles: Experiments in Synthetic Psychology. MIT Press.
- Lynch, K. M., & Park, F. C. (2017). Modern Robotics. Cambridge University Press.