learning algorithm, linear regression. The term supervised
learning meant that you were given inputs X, which in that
example were pictures of what's in front of the car, and the
algorithm had to map each input to an output Y, which was the
steering direction. That was a regression problem, because the
output Y you want to predict is a continuous value, as opposed
to a classification problem, where Y takes on a discrete set of
values. We'll talk about classification next Monday; today is
supervised learning for regression. The job of the learning
algorithm is to output a function that makes predictions about
housing prices. By convention, I'm going to call the function
that it outputs a hypothesis, and we'll go through how to do that.
The hypothesis takes as input the size of a house and tries to
tell you what it thinks the price should be. Machine learning
sometimes just calls this a linear function, but technically
it's an affine function. In this example we have just one input
feature, x. More generally, you might have multiple input
features, so you have more data, more information about these
houses, such as the number of bedrooms.
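To make that concrete, here is a minimal Python sketch of the
hypothesis for the two-feature version; the function name and
the choice of features are just for illustration:

    # Hypothesis for linear regression with two input features:
    #   h_theta(x) = theta_0 + theta_1 * x_1 + theta_2 * x_2
    # x_1 = size of the house, x_2 = number of bedrooms.
    def hypothesis(theta, x):
        # theta = [theta_0, theta_1, theta_2], x = [x_1, x_2]
        return theta[0] + theta[1] * x[0] + theta[2] * x[1]

Note that the intercept term theta_0 is what makes this affine
rather than strictly linear.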
Theta is called the parameters of the learning algorithm. m is
the number of training examples, that is, the number of rows in
your training set, where each row is one house. x is called the
input features, or sometimes the input attributes, and y is the
output; sometimes we call it the target variable. We use n to
denote the number of features in the supervised learning
problem, so n is equal to 2 in this example. The learning
algorithm's job is to choose values for the parameters Theta so
that it can output a hypothesis. Sometimes, for notational
convenience, I just write this as h of x, and sometimes I
include the Theta; they mean the same thing.
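To illustrate the notation, here is a tiny hypothetical training
set; the specific numbers are made up:

    # Hypothetical training set: m = 2 houses, n = 2 features each.
    # Features: x_1 = size in square feet, x_2 = number of bedrooms.
    X = [[2104, 3],   # first training example, x^(1)
         [1416, 2]]   # second training example, x^(2)
    y = [400, 232]    # target variable: price of each house, in $1000s
    m = len(X)        # number of training examples (rows)
    n = len(X[0])     # number of input features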
In linear regression, we want to minimize the squared difference
between what the hypothesis outputs and the true price of the
house. We often put a one-half in front to make the math a
little simpler later; we'll talk more about that when we talk
about the generalization of linear regression.
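Written out, that cost function is J(Theta) = (1/2) * sum from
i = 1 to m of (h_theta(x^(i)) - y^(i))^2. Here is a minimal
sketch, reusing the hypothetical hypothesis() and training set
from above:

    # Cost function for linear regression:
    #   J(theta) = (1/2) * sum_{i=1}^{m} (h_theta(x^(i)) - y^(i))^2
    def cost(theta, X, y):
        return 0.5 * sum((hypothesis(theta, x_i) - y_i) ** 2
                         for x_i, y_i in zip(X, y))

A smaller value of cost(theta, X, y) means the hypothesis's
predicted prices are closer to the true prices in the training
set.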
Setting Theta equal to the vector of all zeros would be a
reasonable default. We can use an algorithm called gradient
descent to find the value of Theta that minimizes the cost
function J of the