CS50 2021 in HDR - Lecture 0 - Scratch
David Malan, a CS50 instructor at Harvard University, says that the key to success in this
course is to give it enough time and to take breaks when needed. He says that the ability to
create something, to bring a computer to life to solve a problem, is gratifying. Half of CS50
students have never taken a computer science course before, and there are benefits to this inexperience.
In the world of computers specifically, we need to decide in advance how we represent
inputs and outputs. We all need to agree on a common language, irrespective of our human
languages. And you may very well know this language: binary. Binary means that the world of
computers has just two digits at its disposal: 0 and 1. And yet, somehow, with those two
digits computers can do so much. They can crunch numbers in Excel, send text messages, and
create images and artwork and movies.
To count higher than 1, a computer has to use a different pattern of zeros and ones, and
2 happens to be 010. So this is not ten with a zero in front of it; it's indeed zero one zero in
the context of binary. And if we want 4 or 5 or 6 or 7, we're just toggling these zeros and
ones. By turning those switches on and off in patterns, a computer with three bits can count
from 0 on up to 7. How is it that these patterns came to be? They actually follow something
very familiar: the decimal system, "dec" meaning 10. In decimal, the columns are the 100s,
10s, and 1s places, so 123 is technically 100 times 1, plus 10 times 2, plus 1 times 3. In
binary, these columns mean a little something different: instead of powers of 10, they are
powers of 2, namely the 4s, 2s, and 1s places. If you wanted to count as high as 8 using only
zeros and ones, you would need a fourth bit for an 8s column. And if you keep going, there's
a 16s column, a 32s column, 64, and so forth. So how would a computer represent something
like a letter if all it has is switches?
There is a one-to-one correspondence between letters and numbers: in a text message
program, each letter is stored as a number, following a standard called ASCII. Computers and
mobile devices all agree on this correspondence, so the number 72, for example, represents
the letter H. Bits are tiny, and we don't tend to think or talk in terms of individual bits;
instead, bits are grouped into bytes of 8 bits each.
These emoji are just characters, like letters of an alphabet: patterns of zeros and ones
that you're receiving. Unicode is a superset of what we called ASCII. Unicode might even use
up to 32 bits to represent letters, numbers, and punctuation symbols, and that would give you
up to roughly 4 billion possibilities. The companies themselves have generally interpreted
emoji as they see fit, so the same character can look different on different devices, which can
lead to some human miscommunication. There are other ways to represent numbers. Binary
is one. Decimal is another. Unary, which uses a single digit, is yet another. Hexadecimal uses
16 total digits, 0 through 9 and A through F, so each hexadecimal digit corresponds to four
bits, and two hex digits give you eight bits, or one byte. That makes it a very convenient unit
of measure. How might a computer represent something like a color? What options do we
have if all we've got are zeros and ones and switches? What might we have besides emoji
and letters and numbers? Well, we of course have things like colors, and programs like
Photoshop, and pictures and photos.