Representing Numbers and Letters with Binary


In Boolean algebra, there are only two binary values: true and false.

Instead of true and false, we can call these two states 1 and 0, which turns out to be incredibly useful.

The numbers we use every day are decimal numbers, or base-ten notation: there are ten digits, 0 through 9, and each column in a number is worth ten times the column to its right.

Binary works exactly the same way; it’s just base-two, so there are only two digits and each column is worth two times the column to its right.

Take this number in binary: 10110111. We can convert it to decimal in the same way. We have 1 x 128, 0 x 64, 1 x 32, 1 x 16, 0 x 8, 1 x 4, 1 x 2, and 1 x 1, which all adds up to 183.
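
Here’s a minimal sketch of that conversion in Python (the variable names are mine, purely for illustration): it walks the bits from left to right, multiplying each one by its place value.

```python
# Convert the binary number 10110111 to decimal by summing each
# bit times its place value: 128, 64, 32, 16, 8, 4, 2, 1.
bits = "10110111"

total = 0
for i, bit in enumerate(bits):
    place_value = 2 ** (len(bits) - 1 - i)  # leftmost bit is worth 2^7 = 128
    total += int(bit) * place_value

print(total)         # 183
print(int(bits, 2))  # 183 -- Python's built-in base-2 parser agrees
```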

Math with binary numbers isn’t hard either.
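
It works just like decimal arithmetic: add each column from right to left and carry a 1 whenever a column overflows. Here’s an illustrative sketch in Python; the helper add_binary is hypothetical, not a standard function.

```python
# Add two binary numbers column by column, carrying a 1 whenever a
# column sums to 2 or 3 -- exactly like carrying a 10 in decimal.
def add_binary(a: str, b: str) -> str:
    result = []
    carry = 0
    for i in range(1, max(len(a), len(b)) + 1):
        bit_a = int(a[-i]) if i <= len(a) else 0
        bit_b = int(b[-i]) if i <= len(b) else 0
        total = bit_a + bit_b + carry
        result.append(str(total % 2))  # the bit that stays in this column
        carry = total // 2             # the bit carried to the next column
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(add_binary("10110111", "10011"))  # 11001010 (183 + 19 = 202)
```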

Each of these binary digits, 1 or 0, is called a “bit”.

With 8 bits, the lowest number you can represent is 0 and the highest is 255. That’s 256 different values, or 2 to the 8th power.

And 8-bits is such a common size in computing, it has a special word: a byte. A byte is 8 bits.

You’ve heard of kilobytes, megabytes, gigabytes and so on.

A kilobyte, you might assume, is exactly a thousand bytes. But hold on! That’s not always true.

In binary, a kilobyte is two to the power of 10 bytes, or 1024. But 1000 is also right when talking about kilobytes: that’s the metric definition, and the 1024-byte unit is formally called a kibibyte. Neither is the only correct definition.
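
All of these figures are just powers of two, which a quick Python session can confirm:

```python
# A byte is 8 bits, giving 2^8 = 256 possible values (0 through 255).
print(2 ** 8)      # 256
print(2 ** 8 - 1)  # 255, the largest value a single byte can hold

# "Kilobyte" has two accepted meanings:
print(10 ** 3)     # 1000 bytes under the metric (SI) definition
print(2 ** 10)     # 1024 bytes, formally called a kibibyte (KiB)
```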

You’ve probably also heard the term 32-bit or 64-bit computers – you’re almost certainly using one right now.

What this means is that they operate in chunks of 32 or 64 bits. That’s a lot of bits! The largest number you can represent with 32 bits is just under 4.3 billion, which is thirty-two 1’s in binary.

Of course, not every number is positive, so we also need a way to represent negative numbers. Most computers use the first bit for the sign: 1 for negative, 0 for positive, and then use the remaining 31 bits for the number itself.

That gives us a range of roughly plus or minus two billion.

A range of plus or minus two billion isn’t always big enough, which is why 64-bit numbers are useful. The largest value a 64-bit number can represent is around 9.2 quintillion!
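
The same arithmetic produces the 32-bit and 64-bit limits quoted above:

```python
# The largest unsigned 32-bit value: thirty-two 1's in binary.
print(2 ** 32 - 1)  # 4294967295, just under 4.3 billion

# With one bit reserved for the sign, 31 bits remain for the value,
# giving a range of roughly plus or minus two billion.
print(-(2 ** 31))   # -2147483648
print(2 ** 31 - 1)  # 2147483647

# The largest signed 64-bit value is around 9.2 quintillion.
print(2 ** 63 - 1)  # 9223372036854775807
```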

As computer memory has grown to gigabytes and terabytes – that’s trillions of bytes – it has become necessary to have 64-bit memory addresses as well.

In addition to negative and positive numbers, computers must deal with numbers that are not whole numbers, like 12.7 and 3.14, or maybe even stardate: 43989.1.

These are called “floating point” numbers, because the decimal point can float around in the middle of the number.

Several formats have been developed to represent them, the most common of which is the IEEE 754 standard.

In essence, this standard stores decimal values sort of like scientific notation. For example, 625.9 can be written as 0.6259 x 10^3.

There are two important numbers here: the .6259 is called the significand, and the 3 is the exponent.

In a 32-bit floating point number, the first bit is used for the sign of the number -- positive or negative. The next 8 bits are used to store the exponent and the remaining 23 bits are used to store the significand.
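
To make that layout concrete, here’s a small Python sketch that packs 625.9 into a 32-bit IEEE 754 float and slices out the three fields. One detail the scientific-notation analogy glosses over: the stored exponent is biased by 127, and the significand is kept in base two, not base ten.

```python
import struct

# Pack 625.9 into a 32-bit IEEE 754 float, then read the raw bits
# back out as an unsigned integer so the fields can be sliced apart.
value = 625.9
(raw,) = struct.unpack(">I", struct.pack(">f", value))
bits = f"{raw:032b}"

sign = bits[0]          # 1 bit:  0 means positive
exponent = bits[1:9]    # 8 bits, stored with a bias of 127
significand = bits[9:]  # 23 bits of the binary fraction

print(sign)                    # 0
print(int(exponent, 2) - 127)  # 9, since 625.9 is about 1.22 x 2^9
print(significand)             # the 23 significand bits
```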


Ok, we’ve talked a lot about numbers, but your name is probably composed of letters, so it’s really useful for computers to also have a way to represent text.

In fact, Francis Bacon, the famous English writer, used five-bit sequences to encode all 26 letters of the English alphabet to send secret messages back in the 1600s. And five bits can store 32 possible values – so that’s enough for the 26 letters, but not enough for punctuation, digits, and both upper and lower case letters.
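
For illustration only, here’s one way to build a Bacon-style five-bit alphabet in Python. The exact bit patterns are my own; Bacon actually wrote his codes as sequences of the letters ‘a’ and ‘b’.

```python
import string

# Assign each of the 26 capital letters its own 5-bit pattern.
codes = {letter: f"{i:05b}" for i, letter in enumerate(string.ascii_uppercase)}

print(codes["A"])  # 00000
print(codes["Z"])  # 11001
print(2 ** 5)      # 32 patterns in total -- room for 26 letters, little more
```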

Enter ASCII, the American Standard Code for Information Interchange. Invented in 1963, ASCII was a 7-bit code, enough to store 128 different values.

With 128 values, it could encode capital letters, lowercase letters, digits 0 through 9, punctuation, symbols like the @ sign, and special command codes, such as a newline character that tells the computer where to start a new line of text.
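
Because ASCII survives as the first 128 values of modern character sets, Python’s built-in ord() and chr() functions can show these codes directly:

```python
# ord() returns a character's numeric code; chr() goes the other way.
print(ord("A"), ord("a"), ord("@"))  # 65 97 64
print(chr(65), chr(97))              # A a
print(ord("\n"))                     # 10 -- the "new line" control code
```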

In older computer systems, the line of text would literally continue off the edge of the screen if you didn’t include a new line character!

Because ASCII was such an early standard, it was widely adopted, which allowed computers built by different companies to exchange data. This ability to universally exchange information is called “interoperability”.

But ASCII was designed for English. Languages like Chinese and Japanese use many thousands of characters, and there was no way to encode all those characters in 8 bits!

In response, different countries invented their own multi-byte encoding schemes, all of which were mutually incompatible.

And so it was born – Unicode – one format to rule them all.
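
Unicode itself assigns every character a number, called a code point; encodings such as UTF-8 then turn those numbers into bytes. A quick Python sketch:

```python
# Unicode gives every character a code point; UTF-8 encodes each
# code point as one to four bytes.
for ch in "A¢€":
    print(ch, ord(ch), ch.encode("utf-8"))
# A 65 b'A'               (1 byte, identical to ASCII)
# ¢ 162 b'\xc2\xa2'       (2 bytes)
# € 8364 b'\xe2\x82\xac'  (3 bytes)
```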
