This post is the 2nd in a series about basic concepts in programming and Computer Science. You can find all the posts in the series below.

In the last post in this series, we covered converting binary, quaternary, and octal numbers. Now we’ll go through hexadecimal and base-32 numbers. If you haven’t read the first post, go ahead and read it now, then come back.

Converting hex and base-32 numbers works much the same as converting smaller bases, but there’s one important difference: we have to be able to account for digit values larger than 9.

We already know that in the decimal system, each digit can go from 0 - 9. The trouble with something like hexadecimal, or Base16, numbers (and higher bases) is that there’s no way to represent a digit value larger than 9 with a single numeral.

So what do we do?

Well, we use letter characters.

For hex and base-32 numbers, 0 - 9 are the same as decimal. When a digit’s value gets larger than 9, we use letters.

**Hex**: 0 - 9, A - F

**Base-32**: 0 - 9, A - V

That’s it!
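You can check these digit alphabets in any JavaScript console, because `Number.prototype.toString(radix)` uses exactly this 0 - 9-then-letters scheme (in lowercase):

```javascript
// toString(radix) renders a number in the given base,
// using 0-9 followed by lowercase letters for digit values above 9.
console.log((10).toString(16)); // "a"
console.log((15).toString(16)); // "f"
console.log((10).toString(32)); // "a"
console.log((31).toString(32)); // "v" — the largest single base-32 digit
```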

### Hexadecimal (Base16)

| Decimal | 65536 | 4096 | 256 | 16 | 1 |
|---|---|---|---|---|---|
| 0 |  |  |  |  | 0 |
| 1 |  |  |  |  | 1 |
| 10 |  |  |  |  | A |
| 11 |  |  |  |  | B |
| 12 |  |  |  |  | C |
| 13 |  |  |  |  | D |
| 14 |  |  |  |  | E |
| 15 |  |  |  |  | F |
| 16 |  |  |  | 1 | 0 |
| 17 |  |  |  | 1 | 1 |
| 26 |  |  |  | 1 | A |

As you can see, each hexadecimal digit can represent 16 unique values.
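To see why the table’s last row reads 1A, here’s a minimal sketch of the divide-and-take-the-remainder method. The `toHex` helper and its digit string are mine, written for illustration, not a built-in:

```javascript
// Digit characters for base 16, indexed by digit value.
const HEX_DIGITS = "0123456789ABCDEF";

// Convert a non-negative integer to a hex string by repeatedly
// dividing by 16 and prepending each remainder's digit character.
function toHex(n) {
  if (n === 0) return "0";
  let result = "";
  while (n > 0) {
    result = HEX_DIGITS[n % 16] + result;
    n = Math.floor(n / 16);
  }
  return result;
}

console.log(toHex(26)); // "1A" — one 16 plus a remainder of 10
```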

Two digits can represent 256 unique values. If you’ve used hex colors, you’re using hexadecimal numbers with six digits. That’s 256³, or 16,777,216, possible color values.

Honestly, this table could go on forever, so I’m going to whip up a hex multiplication table with client-side JavaScript and you can use that for reference if you need to.

JavaScript provides a nice convenience method for converting a number to a different base’s representation. To get the values for the table below, all I had to do was multiply two values and then write `cellValue = product.toString(16)`, which converts an integer to a string in the given base.

So, that leaves base-32 conversions. Those behave exactly the same as hexadecimal conversions, but since a single digit can now hold values up to 31, we need additional substitution characters: A through V.

Two base-32 digits can account for 32², or 1,024, values (0 through 1,023).
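You can confirm that with the same `toString` trick, since JavaScript’s radix argument goes up to 32 (and beyond, to 36):

```javascript
// The largest two-digit base-32 value is VV = 31 × 32 + 31 = 1023.
console.log((1023).toString(32)); // "vv"

// One more rolls over to three digits.
console.log((1024).toString(32)); // "100"

// And parseInt converts back.
console.log(parseInt("vv", 32));  // 1023
```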

Here’s a base-32 table (hopefully the table works on smaller screens, because it’s going to be pretty big).

Next time we’ll go over converting from one non-decimal base, like binary, to another base, like hex.