A Developer’s Guide to Number Base Conversion: Binary, Hex, Octal & Decimal
One afternoon you’re staring at 0x7FFF5FBFF8C0 in a debugger, then switching to a CSS file to tweak #FF5733, then running chmod 755 in a terminal. Three different number representations, same underlying arithmetic. If you’ve ever paused to mentally convert between hex and binary, or wondered why Unix permissions use octal of all things, this guide will make those connections click.
We’ll walk through the four number systems that show up in everyday programming, the three conversion methods worth memorizing, and real code in JavaScript, Python, Go, and C. If you want to skip the theory and just convert a number, open our Base Converter — it handles any base from 2 to 36 with arbitrary precision.
The Four Number Systems Every Developer Uses
Each base exists in programming for a practical reason, not historical accident.
Binary (Base 2) — The Machine’s Language
Two digits: 0 and 1. A transistor is either on or off — that physical constraint gives us binary. You’ll encounter it directly when working with bitmasks, feature flags, bitwise operations, and IP subnet calculations.
Binary gets unwieldy fast. The decimal number 255 is 11111111 in binary — eight digits for a value that fits in three decimal digits. That’s exactly why programmers rarely write raw binary except when individual bit positions matter.
Octal (Base 8) — The Unix Shorthand
Eight digits: 0 through 7. Each octal digit maps to exactly three binary bits, which is why Unix file permissions use octal — chmod 755 packs three groups of three permission bits into three readable digits.
Octal had a larger role in the PDP-11 era, when machine words divided cleanly into 3-bit groups. Today it’s mostly a Unix permissions thing, plus the occasional C literal prefixed with 0 — a classic trap, since someone who writes 0177 expecting decimal 177 silently gets 127.
Decimal (Base 10) — The Human Default
Ten digits: 0 through 9. This is the system your brain defaults to — port numbers, array indices, HTTP status codes, pixel dimensions. Computers don’t think in decimal, but humans do, so every user-facing number comes out in base 10.
Hexadecimal (Base 16) — The Developer’s Swiss Knife
Sixteen symbols: 0-9 and A-F. Each hex digit represents exactly four binary bits (a nibble), which makes hex the ideal compact representation of binary data. Two hex digits = one byte. Always.
You’ll see hex in memory addresses (0x7FFF5FBFF8C0), CSS colors (#FF5733), MAC addresses (00:1A:2B:3C:4D:5E), UUID formatting, and hash digests. It’s the lingua franca of byte-level programming.
Try converting between all four systems in our Base Converter — type a value in any base and see the others update instantly.
How Base Conversion Works: Three Core Methods
You don’t need a tool to convert between bases, though one certainly speeds things up. Three methods cover every case.
Method 1 — Positional Expansion (Any Base → Decimal)
Every positional number system works the same way: each digit is multiplied by the base raised to the power of its position, counting from the right starting at zero.
Binary 1011:

```
1×2³ + 0×2² + 1×2¹ + 1×2⁰
= 8 + 0 + 2 + 1
= 11
```

Hex FF:

```
15×16¹ + 15×16⁰
= 240 + 15
= 255
```
In code, most languages handle this with a parse function:
```javascript
parseInt('1011', 2) // 11
parseInt('FF', 16)  // 255
parseInt('755', 8)  // 493
```
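If you ever need the expansion without a built-in parser, it’s only a few lines. A minimal Python sketch (the helper name `to_decimal` is ours):

```python
def to_decimal(digits: str, base: int) -> int:
    """Sum each digit times base**position, counting from the right at zero."""
    return sum(int(ch, base) * base ** pos
               for pos, ch in enumerate(reversed(digits)))

print(to_decimal('1011', 2))  # 11
print(to_decimal('FF', 16))   # 255
print(to_decimal('755', 8))   # 493
```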
Method 2 — Repeated Division (Decimal → Any Base)
To go the other direction, divide the decimal number by the target base repeatedly and collect the remainders. Read the remainders from bottom to top.
Decimal 255 → hexadecimal:

```
255 ÷ 16 = 15 remainder 15 (F)
 15 ÷ 16 =  0 remainder 15 (F)
→ Read upward: FF
```

Decimal 42 → binary:

```
42 ÷ 2 = 21 remainder 0
21 ÷ 2 = 10 remainder 1
10 ÷ 2 =  5 remainder 0
 5 ÷ 2 =  2 remainder 1
 2 ÷ 2 =  1 remainder 0
 1 ÷ 2 =  0 remainder 1
→ Read upward: 101010
```
Python’s built-ins produce the prefixed strings directly:

```python
bin(42)   # '0b101010'
hex(255)  # '0xff'
oct(493)  # '0o755'
```
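The division loop itself is worth seeing once. A hand-rolled Python sketch (the name `to_base` is ours, and it assumes a non-negative integer):

```python
DIGITS = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'

def to_base(n: int, base: int) -> str:
    """Divide by the base, collect remainders, read them bottom-up."""
    if n == 0:
        return '0'
    out = []
    while n > 0:
        n, rem = divmod(n, base)
        out.append(DIGITS[rem])
    # Remainders come out least-significant first, so reverse them.
    return ''.join(reversed(out))

print(to_base(255, 16))  # FF
print(to_base(42, 2))    # 101010
```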
Method 3 — Bit Grouping (Binary ↔ Hex/Octal Direct)
This is the method experienced developers use most. Since 16 = 2⁴ and 8 = 2³, you can convert between binary and hex (or octal) by grouping bits directly — no arithmetic needed.
Binary → Hex: Group into nibbles (4 bits) from the right. Pad the leftmost group with zeros if needed.
```
Binary: 1010 1111
Hex:       A    F
→ AF
```

Binary → Octal: Group into triplets (3 bits) from the right.

```
Binary: 111 101 101
Octal:    7   5   5
→ 755
```
The nibble lookup table is worth committing to memory:
| Binary | Hex | Binary | Hex |
|---|---|---|---|
| 0000 | 0 | 1000 | 8 |
| 0001 | 1 | 1001 | 9 |
| 0010 | 2 | 1010 | A |
| 0011 | 3 | 1011 | B |
| 0100 | 4 | 1100 | C |
| 0101 | 5 | 1101 | D |
| 0110 | 6 | 1110 | E |
| 0111 | 7 | 1111 | F |
Once you’ve internalized this table, binary-to-hex conversion becomes a glance-and-read operation.
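The grouping trick mechanizes into a lookup loop. A Python sketch assuming an unsigned bit string (the helper name `bits_to_hex` is ours):

```python
# Build the nibble table from the previous section programmatically.
NIBBLE = {format(i, '04b'): format(i, 'X') for i in range(16)}

def bits_to_hex(bits: str) -> str:
    """Pad to a multiple of 4 bits, then look up each nibble."""
    bits = bits.zfill((len(bits) + 3) // 4 * 4)  # pad the leftmost group
    return ''.join(NIBBLE[bits[i:i + 4]] for i in range(0, len(bits), 4))

print(bits_to_hex('10101111'))  # AF
print(bits_to_hex('1011'))      # B
```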
Base Conversion in Every Language
Here’s how to handle number base conversion in the four languages where it comes up most.
JavaScript / TypeScript
```javascript
// Parsing: string in any base → number
parseInt('FF', 16)     // 255
parseInt('101010', 2)  // 42
parseInt('755', 8)     // 493

// Formatting: number → string in any base
(255).toString(16)  // 'ff'
(42).toString(2)    // '101010'
(493).toString(8)   // '755'

// Literals
const bin = 0b11111111; // 255
const oct = 0o377;      // 255
const hex = 0xff;       // 255

// BigInt for values beyond 2⁵³
const big = BigInt('0xFFFFFFFFFFFFFFFF');
big.toString(2)   // 64 ones
big.toString(10)  // '18446744073709551615'
```
Python
```python
# Decimal → other bases (returns prefixed strings)
bin(255)  # '0b11111111'
oct(493)  # '0o755'
hex(255)  # '0xff'

# Other bases → decimal
int('11111111', 2)  # 255
int('FF', 16)       # 255
int('755', 8)       # 493

# Formatted output with padding
f'{255:08b}'  # '11111111' (8-digit binary, zero-padded)
f'{255:02x}'  # 'ff' (2-digit hex, lowercase)
f'{255:02X}'  # 'FF' (2-digit hex, uppercase)

# Python integers have arbitrary precision by default
big = int('F' * 64, 16)  # 256-bit number, no overflow
```
Go
```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// Formatting: int → string in any base
	fmt.Println(strconv.FormatInt(255, 16)) // "ff"
	fmt.Println(strconv.FormatInt(255, 2))  // "11111111"
	fmt.Println(strconv.FormatInt(493, 8))  // "755"

	// Parsing: string in any base → int
	n, _ := strconv.ParseInt("FF", 16, 64) // 255
	fmt.Println(n)

	// Printf verbs
	fmt.Printf("%b %o %x %d\n", 255, 255, 255, 255)
	// 11111111 377 ff 255
}
```
C
```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void) {
    // Output in decimal, octal, hex
    printf("%d %o %x\n", 255, 255, 255);
    // 255 377 ff

    // Parse from any base
    long val = strtol("FF", NULL, 16);    // 255
    long bin = strtol("101010", NULL, 2); // 42

    // No built-in binary printf — manual extraction:
    uint8_t byte = 0xAF;
    for (int i = 7; i >= 0; i--)
        putchar(((byte >> i) & 1) ? '1' : '0');
    // 10101111
    return 0;
}
```
Real-World Base Conversion Scenarios
Five situations where base conversion stops being academic and starts being part of the job.
1. Debugging Memory Addresses
Your debugger shows a pointer at 0x7FFF5FBFF8C0. Converting the last three hex digits to binary — 1000 1100 0000 — reveals the address is aligned to a 64-byte boundary (six trailing zeros). Alignment matters for cache performance, SIMD operations, and memory-mapped I/O. The hex representation makes these patterns visible at a glance.
Pointer arithmetic is easier to reason about in hex too. Offset 0x100 from a base address is exactly 256 bytes — a common flash-page size on embedded systems. In decimal that’s less obvious.
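Checking that alignment in code is a one-line mask test. A Python sketch using the address from the example:

```python
addr = 0x7FFF5FBFF8C0

# N-byte aligned means the low log2(N) bits are all zero; 64 = 2**6,
# so mask off the bottom six bits and check for zero.
print((addr & 0x3F) == 0)       # True — 64-byte aligned
print(format(addr, 'b')[-12:])  # '100011000000' — six trailing zeros
```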
2. CSS Hex Colors ↔ RGB
The color #FF5733 is three bytes packed as hex pairs:
| Hex pair | Decimal | Channel |
|---|---|---|
| FF | 255 | Red (max) |
| 57 | 87 | Green |
| 33 | 51 | Blue |
Shorthand notation like #F00 expands to #FF0000 — pure red. The 8-digit variant #FF573380 adds an alpha channel where 80 (decimal 128) is approximately 50% opacity.
Knowing the hex-to-decimal mapping means fewer trips to a color picker when you want to bump a channel up or down by a specific amount.
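Splitting a hex color into channels is just two-character slices fed to `int(..., 16)`. A minimal Python sketch (the helper name `hex_to_rgb` is ours):

```python
def hex_to_rgb(color: str) -> tuple:
    """'#FF5733' -> (255, 87, 51): one byte per pair of hex digits."""
    color = color.lstrip('#')
    return tuple(int(color[i:i + 2], 16) for i in range(0, 6, 2))

print(hex_to_rgb('#FF5733'))  # (255, 87, 51)
```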
3. Unix File Permissions (Octal)
chmod 755 breaks down as:

```
7 → 111 → rwx (owner: read + write + execute)
5 → 101 → r-x (group: read + execute)
5 → 101 → r-x (others: read + execute)
```
Each octal digit encodes exactly one permission group because three permission bits (read=4, write=2, execute=1) map to one octal digit (0-7). Common patterns:
| Octal | Binary | Permissions | Typical use |
|---|---|---|---|
| 755 | 111 101 101 | rwxr-xr-x | Executables, directories |
| 644 | 110 100 100 | rw-r--r-- | Regular files |
| 700 | 111 000 000 | rwx------ | Private scripts |
| 600 | 110 000 000 | rw------- | SSH keys, secrets |
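If you’d rather decode a mode in code than in your head, the standard library’s `stat` module does the bit tests for you. A Python sketch (`S_IFREG` marks the mode as a regular file, which `filemode` expects):

```python
import stat

mode = 0o755
# filemode wants a full st_mode, so tag it as a regular file first.
print(stat.filemode(stat.S_IFREG | mode))  # -rwxr-xr-x
print(oct(mode & stat.S_IRWXU))            # owner bits: 0o700
```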
4. Network Subnet Calculations
A /24 subnet mask means 24 leading ones in binary:

```
11111111.11111111.11111111.00000000
→ 255.255.255.0
```

To find the network address, AND the IP and mask in binary:

```
192.168.1.37  → 11000000.10101000.00000001.00100101
255.255.255.0 → 11111111.11111111.11111111.00000000
AND result    → 11000000.10101000.00000001.00000000
→ 192.168.1.0 (network address)
```
Network engineers routinely convert between decimal (IP notation), binary (subnet math), and sometimes hex (packet captures). Our Base Converter handles each octet individually if you prefer a visual check.
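The AND above is exactly what Python’s `ipaddress` module does under the hood. A sketch showing both the library call and the manual mask:

```python
import ipaddress

# strict=False lets us pass a host address and get its network back.
net = ipaddress.ip_network('192.168.1.37/24', strict=False)
print(net.network_address)  # 192.168.1.0
print(net.netmask)          # 255.255.255.0

# Same result by hand: AND the integer forms of the IP and the mask.
ip = int(ipaddress.ip_address('192.168.1.37'))
mask = int(ipaddress.ip_address('255.255.255.0'))
print(ipaddress.ip_address(ip & mask))  # 192.168.1.0
```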
5. Reading Hash Digests and UUIDs
An MD5 hash like d41d8cd98f00b204e9800998ecf8427e is 32 hex characters, representing 16 bytes (128 bits). Each pair of hex digits is one byte.
A UUID follows the 8-4-4-4-12 hex pattern:
```
550e8400-e29b-41d4-a716-446655440000
```
That’s 32 hex digits separated by hyphens — the same 128 bits, just formatted differently. The version nibble sits at position 13 (the 4 in 41d4 means UUID v4).
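You can confirm the version nibble with the standard library’s `uuid` module:

```python
import uuid

u = uuid.UUID('550e8400-e29b-41d4-a716-446655440000')
print(u.version)  # 4 — read from the version nibble
print(u.hex[12])  # '4' — the 13th digit of the raw 32-hex-digit form
```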
Generate hashes with our MD5 & SHA Hash Generator or create UUIDs with our UUID Generator — both produce hex output that maps directly to the binary representations discussed here.
Beyond Base 16: Base 36, Base 64, and Custom Bases
The standard four bases cover most work, but a few others pop up in specialized contexts.
Base 36 — Compact Alphanumeric Encoding
Base 36 uses all 10 digits plus 26 letters (A-Z), giving you the most compact case-insensitive alphanumeric representation possible. URL shorteners love it — YouTube video IDs, short links, and compact database keys often use base 36.
```javascript
(1000000).toString(36) // 'lfls'
parseInt('lfls', 36)   // 1000000
```
Decimal 1,000,000 compresses to just four characters. For URL-safe short identifiers, that’s hard to beat.
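Python has no base-36 formatter, though `int()` parses it — encoding is just the repeated-division loop again. A sketch (the name `to_base36` is ours):

```python
import string

ALPHABET = string.digits + string.ascii_lowercase  # 36 symbols

def to_base36(n: int) -> str:
    """Repeated division by 36, remainders read bottom-up."""
    out = []
    while n:
        n, rem = divmod(n, 36)
        out.append(ALPHABET[rem])
    return ''.join(reversed(out)) or '0'

print(to_base36(1_000_000))  # lfls
print(int('lfls', 36))       # 1000000
```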
Base 64 — Data Encoding (Not a Number System)
Base64 looks like another base, but it serves a different purpose. Rather than representing a numeric value in a positional system, Base64 encodes arbitrary binary data (images, files, JWT tokens) as ASCII text. It uses A-Z, a-z, 0-9, +, and / — 64 symbols total.
The math is different from base conversion. Base64 processes input in 3-byte (24-bit) blocks and outputs four 6-bit characters. It’s an encoding scheme, not a numeral system.
For encoding and decoding Base64 data, use our Base64 Encoder & Decoder.
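The 3-bytes-in, 4-characters-out ratio is easy to verify with the standard library:

```python
import base64

data = b'Man'                  # 3 bytes = 24 bits
print(base64.b64encode(data))  # b'TWFu' — four 6-bit characters

# 'M' = 0x4D = 01001101; its first 6 bits, 010011 = 19, map to 'T'.
print(len(base64.b64encode(b'x' * 30)))  # 40 — every 3 bytes -> 4 chars
```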
Arbitrary Bases (2-36) and When They Appear
A few other bases show up in the wild:
- Base 12 (dozenal): Time (12 hours), quantities (a dozen). Some mathematicians argue base 12 would be better than base 10 for everyday use because 12 has more divisors.
- Base 60 (sexagesimal): Time (60 seconds, 60 minutes) and angles (360°). Inherited from Babylonian mathematics.
- Base 32: Crockford’s Base32 encoding for human-readable identifiers (excludes ambiguous characters like I, L, O). Also used in geohashing.
Our Base Converter supports any integer base from 2 to 36.
Bitwise Operations and Base Conversion
Understanding binary isn’t just about converting numbers — it’s the foundation for bitwise operations that show up in systems programming, game development, and permission systems.
```javascript
const READ  = 0b100; // 4
const WRITE = 0b010; // 2
const EXEC  = 0b001; // 1

// Combine permissions with OR
const perms = READ | WRITE; // 0b110 = 6

// Check a specific permission with AND
(perms & READ) !== 0 // true — has read
(perms & EXEC) !== 0 // false — no exec

// Toggle a permission with XOR
perms ^ WRITE // 0b100 = 4 — write removed

// Shift operations
1 << 3    // 0b1000 = 8 (1 shifted left by 3)
0xFF >> 4 // 0b00001111 = 15 (right shift by 4 = divide by 16)
```
Feature flags, hardware registers, network protocols, and graphics programming all lean on bitwise operations. Once binary clicks, bit manipulation reads like plain logic — no magic involved.
FAQ
What are the four main number systems used in programming?
Binary (base 2), octal (base 8), decimal (base 10), and hexadecimal (base 16). Binary is how data physically exists in hardware. Octal maps to Unix file permissions. Decimal is the human-facing default. Hex compresses binary into readable form — each hex digit is exactly 4 bits.
How do I convert binary to hexadecimal?
Group binary digits into sets of 4 from right to left, padding the leftmost group with zeros if needed. Map each group: 0000=0, 0001=1, …, 1010=A, …, 1111=F. Example: binary 10101111 → groups 1010 1111 → hex AF. This works because 16 = 2⁴.
How do I convert hexadecimal to decimal?
Multiply each hex digit by 16 raised to the power of its position (rightmost = 0), then sum. A=10, B=11, C=12, D=13, E=14, F=15. Example: hex FF = 15×16¹ + 15×16⁰ = 240 + 15 = 255. In code: JavaScript parseInt('FF', 16), Python int('FF', 16).
Why do programmers use hexadecimal instead of binary?
Hex is compact. Each hex digit maps to exactly 4 bits, so 11111111 00001010 in binary becomes FF0A in hex — far shorter and easier to scan. Hex is the standard for memory addresses, CSS colors (#FF5733), MAC addresses, hash outputs, and UUID formatting.
What is a nibble and why does it matter for hex conversion?
A nibble is 4 binary bits — half a byte. One nibble maps to exactly one hex digit, which is why binary-to-hex conversion is just a nibble-by-nibble lookup. One byte = two nibbles = two hex digits. This clean 4-bit mapping is the reason hex became the standard for representing byte-level data.
Why are Unix file permissions written in octal?
Three permission bits (read=4, write=2, execute=1) per group, three groups (owner, group, others). Since 2³ = 8, each group of three bits maps to one octal digit. 755 means owner=7 (rwx), group=5 (r-x), others=5 (r-x). Octal is the natural base for 3-bit groupings.
How do I handle numbers larger than 2⁵³ in JavaScript?
Use BigInt. Append n to a literal or use the BigInt() constructor: BigInt('0xFFFFFFFFFFFFFFFF').toString(2) produces the full 64-bit binary string. Standard Number loses precision beyond 9,007,199,254,740,991 (2⁵³ - 1). Our Base Converter uses BigInt internally, so it handles numbers of any size without precision loss.
How do I convert hexadecimal to binary?
Replace each hex digit with its 4-bit binary equivalent, using the same nibble table in reverse: 0=0000, 1=0001, …, A=1010, …, F=1111. Example: hex AF → A (1010) + F (1111) → binary 10101111. In code: JavaScript parseInt('AF', 16).toString(2) or Python bin(int('AF', 16))[2:].
Why do CSS hex colors sometimes have 3, 6, or 8 digits?
All three forms map to the same RGB model but encode different precision and alpha. 3-digit #F0A is shorthand that expands to #FF00AA (each digit doubled). 6-digit #FF00AA is the standard RGB form — two hex digits per channel. 8-digit #FF00AA80 adds an alpha channel as a fourth byte, where 80 (decimal 128) is ~50% opacity. Modern browsers accept all three; designers pick the shortest form that preserves intent.