💾 Binary Data Storage Converter 2026

Instantly convert between bits, bytes, KB, MB, GB, TB, PB, EB in both Binary (1024) and Decimal (1000) standards — with IEC kibibyte/mebibyte notation, download time calculator, hard drive gap tool, and a complete guide to data units, binary math, and the IEC 80000-13 standard.

⇄ Data Storage Converter
1 GB = 1,024 MB
Formula: value × 1,024
Binary Standard (IEC): 1 KiB = 2¹⁰ = 1,024 bytes

📊 All Unit Conversions (same input value)

🛠 Bonus Tools

⬇️ Download Time Calculator
Enter speed in megabits per second (Mbps) — multiply your "MB/s" by 8 to get Mbps
≈ 320 seconds
4 GB ÷ (100 Mbps ÷ 8) = 320 s
Formula: t = (size in MB × 8) ÷ speed in Mbps
💿 Hard Drive Gap Calculator
≈ 931.32 GiB
1 TB (decimal) = 10¹² ÷ 2³⁰ = 931.32 GiB
What your OS shows (GiB binary) vs. what's on the box (GB decimal).

📖 How to Use This Data Storage Converter

  1. Choose Your Standard: Binary (1024) or Decimal (1000)

    Click the toggle bar to select your standard. Use Binary (IEC) for RAM, CPU cache, and operating system displays (Windows, macOS, Linux all use binary). Use Decimal (SI/metric) for hard drive and SSD labels from manufacturers, network speed calculations, and internet bandwidth. The distinction matters significantly at large scales — a 1 TB hard drive shows only ~931 GiB in Windows.

  2. Enter Your Value and Select Units

    Type your number in the "Value" field. Select the source unit in the "Convert From" dropdown (bits → bytes → KB → MB → GB → TB → PB → EB → ZB → YB). Select the target unit in "Convert To." Results update instantly in real time. Hit the ⇄ swap button to reverse the conversion direction. The full "All Unit Conversions" panel shows the input value expressed in every unit simultaneously.

  3. Use the Download Time Calculator

    Enter the file size (in MB, GB, or KB) and your internet connection speed in Mbps (megabits per second, not megabytes). Your ISP quotes speed in Mbps — your download manager shows MB/s. To convert: divide listed Mbps by 8 = MB/s. Example: 100 Mbps = 12.5 MB/s. A 4 GB movie file at 100 Mbps takes ≈ 320 seconds (5.3 minutes). The formula: \(t(s) = \frac{\text{file size (MB)} \times 8}{\text{speed (Mbps)}}\).

  4. Use the Hard Drive Gap Calculator

    Manufacturers label drives using decimal (1 TB = 10¹² bytes). Your operating system reports in binary (1 TiB = 2⁴⁰ bytes ≈ 1.0995 × 10¹² bytes). So a "1 TB" drive shows ~931.32 GiB in Windows/macOS. Enter the labeled capacity and unit — the calculator shows exactly how much usable space your OS reports. This is why users often feel "cheated" by storage devices: there is no deception, just two different measurement standards.
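The two bonus tools boil down to one-line formulas. A minimal Python sketch (function names are illustrative, not part of the converter's actual code):

```python
def download_time_s(size_mb: float, speed_mbps: float) -> float:
    """t(s) = size (MB) × 8 bits/byte ÷ speed (Mbps)."""
    return size_mb * 8 / speed_mbps

def os_reported_gib(labeled_tb: float) -> float:
    """Decimal-labeled terabytes -> binary gibibytes the OS reports."""
    return labeled_tb * 10**12 / 2**30

print(download_time_s(4000, 100))    # 320.0 seconds for a 4 GB file at 100 Mbps
print(round(os_reported_gib(1), 2))  # 931.32 GiB for a "1 TB" drive
```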

📐 Data Storage Conversion Formulas — MathJax Rendered

Fundamental Units: Bit, Nibble, Byte, and Powers of 2

\( 1 \text{ nibble} = 4 \text{ bits} \qquad 1 \text{ byte} = 8 \text{ bits} = 2 \text{ nibbles} \)

\( 1 \text{ KiB} = 2^{10} = 1{,}024 \text{ bytes} \qquad 1 \text{ MiB} = 2^{20} = 1{,}048{,}576 \text{ bytes} \)

\( 1 \text{ GiB} = 2^{30} = 1{,}073{,}741{,}824 \text{ bytes} \qquad 1 \text{ TiB} = 2^{40} \approx 1.0995 \times 10^{12} \text{ bytes} \)

\( \text{General binary }(n\text{-th unit): } 1 \text{ unit} = 2^{10n} \text{ bytes,} \quad n = 0,1,2,3,\ldots \)

The base-2 exponential structure of computer memory arises because transistors in memory chips are binary switches — on (1) or off (0). The closest power of 2 to 1,000 is \(2^{10} = 1{,}024\), which is why early engineers rounded and used "kilo" to mean 1,024 before the IEC established kibibyte notation in 1998. Each step up the scale multiplies by 1,024 in binary: bit (×8) → byte (×1,024) → KiB (×1,024) → MiB → GiB → TiB → PiB → EiB → ZiB → YiB.
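The general formula above generates every binary unit factor from a single expression; a short sketch (the dictionary name is illustrative):

```python
# The n-th binary unit factor is 2**(10*n) bytes (n = 0 for bytes).
BINARY_UNITS = ["B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB"]
FACTORS = {unit: 2 ** (10 * n) for n, unit in enumerate(BINARY_UNITS)}

print(FACTORS["KiB"], FACTORS["MiB"], FACTORS["GiB"])
```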
Universal Conversion Formula and Binary vs Decimal Divergence

\[ V_{target} = V_{source} \times \frac{f_{source}}{f_{target}} \]

\( \text{where } f = \text{unit factor in bytes (e.g., } f_{GB,\,binary} = 2^{30},\; f_{GB,\,decimal} = 10^9 \text{)} \)

\( \text{Binary–Decimal divergence at scale } n: \quad \Delta\% = \left(\frac{2^{10n}}{10^{3n}} - 1\right) \times 100\% \)

\( n=1: 2.4\% \quad n=2: 4.9\% \quad n=3: 7.4\% \quad n=4: 9.95\% \quad n=5: 12.6\% \)

To convert any quantity: express both units in bytes (the common denominator), then divide. Example: \(500 \text{ MB} \to \text{GB (binary)} = 500 \times 2^{20} / 2^{30} = 500/1024 \approx 0.4883 \text{ GiB}\). The divergence formula shows that using different standards diverges by 10% at terabyte scale and nearly 13% at petabyte scale — significant for enterprise storage budgeting. Binary is always larger than decimal for the same unit name because \(2^{10} = 1{,}024 > 10^3 = 1{,}000\).
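The express-in-bytes-then-divide method is easy to mechanize; a minimal sketch, with `convert` as an illustrative name:

```python
# Universal conversion: express both units in bytes, then divide.
# base=1024 gives binary (IEC) factors, base=1000 gives decimal (SI).
UNITS = ["B", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"]

def convert(value: float, src: str, dst: str, base: int = 1024) -> float:
    factor = {u: base ** n for n, u in enumerate(UNITS)}
    return value * factor[src] / factor[dst]

print(round(convert(500, "MB", "GB"), 4))    # 0.4883 (binary, i.e. GiB)
print(convert(500, "MB", "GB", base=1000))   # 0.5 (decimal)
```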
Download Time, Hard Drive Gap, and Mbps vs MB/s

\( t_{download}(s) = \frac{S_{MB} \times 8_{\text{ bits/byte}}}{R_{Mbps}} \qquad \text{e.g., } \frac{4{,}000\text{ MB} \times 8}{100 \text{ Mbps}} = 320\text{ s} \)

\( V_{OS\text{-reported GiB}} = \frac{V_{labeled\,(TB)} \times 10^{12}}{2^{30}} \qquad \text{e.g., 1 TB}: \frac{10^{12}}{1{,}099{,}511{,}627{,}776} \approx 0.9095 \text{ TiB} = 931.32\text{ GiB} \)

\( 1 \text{ MB/s} = 8 \text{ Mbps} \qquad \Leftrightarrow \qquad R_{MB/s} = \frac{R_{Mbps}}{8} \)

The download time formula multiplies file size (in MB) by 8 to convert to megabits, then divides by connection speed in Mbps. The hard drive gap formula is the root of consumer confusion worldwide: manufacturers apply decimal prefixes (\(10^{12}\) for TB) while operating systems report binary gibibytes (\(2^{40}\)). The ratio \(2^{40}/10^{12} = 1.0995\) means the gap is exactly 9.95% at 1 TB. A "2 TB" drive shows ≈ 1,862.65 GiB in Windows Explorer. The Mbps / MBps conversion (divide by 8) is critical for downloading: a "100 Mbps" connection delivers at most 12.5 MB/s (actual speed will be lower due to protocol overhead, typically 85–90% of theoretical max).
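To see the overhead effect numerically, here is an illustrative helper applying the ~85% protocol-efficiency figure mentioned above (the efficiency value is an assumption, not a fixed constant):

```python
def effective_mb_per_s(rate_mbps: float, efficiency: float = 0.85) -> float:
    """Mbps -> real-world MB/s: divide by 8, then apply protocol efficiency."""
    return rate_mbps / 8 * efficiency

print(effective_mb_per_s(100))    # ~10.6 MB/s vs the 12.5 MB/s theoretical max
print(effective_mb_per_s(1000))   # ~106 MB/s on gigabit fiber
```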

📋 Binary vs Decimal vs IEC Notation — Complete Reference Table

Decimal Name | IEC Binary Name | Symbol (SI/IEC) | Binary (bytes) | Decimal (bytes) | Gap %
Kilobyte | Kibibyte | KB / KiB | 2¹⁰ = 1,024 | 10³ = 1,000 | +2.4%
Megabyte | Mebibyte | MB / MiB | 2²⁰ = 1,048,576 | 10⁶ = 1,000,000 | +4.9%
Gigabyte | Gibibyte | GB / GiB | 2³⁰ = 1,073,741,824 | 10⁹ = 1,000,000,000 | +7.4%
Terabyte | Tebibyte | TB / TiB | 2⁴⁰ ≈ 1.0995 × 10¹² | 10¹² = 1,000,000,000,000 | +10.0%
Petabyte | Pebibyte | PB / PiB | 2⁵⁰ ≈ 1.1259 × 10¹⁵ | 10¹⁵ | +12.6%
Exabyte | Exbibyte | EB / EiB | 2⁶⁰ ≈ 1.1529 × 10¹⁸ | 10¹⁸ | +15.3%
Zettabyte | Zebibyte | ZB / ZiB | 2⁷⁰ ≈ 1.1806 × 10²¹ | 10²¹ | +18.1%
Yottabyte | Yobibyte | YB / YiB | 2⁸⁰ ≈ 1.2089 × 10²⁴ | 10²⁴ | +20.9%
💡 Quick Memory Aid: "Bi" in KiB/MiB/GiB stands for binary, and the doubled letter (Ki, Mi, Gi) hints at ×2. IEC 80000-13 (formerly IEC 60027-2) formally defined these prefixes in 1998 to end the ambiguity between decimal SI prefixes (kilo = 1,000) and the binary usage (kilo ≈ 1,024) that computing had borrowed for decades. Linux kernel, Android, and most open-source software now use GiB correctly; Windows still says "GB" but means GiB internally.
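The "Gap %" column in the table is just the divergence formula evaluated for n = 1…8; a quick sketch to verify it:

```python
# Gap % at scale n: Δ% = (2**(10n) / 10**(3n) − 1) × 100
NAMES = ["KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB"]
gaps = {name: (2 ** (10 * n) / 10 ** (3 * n) - 1) * 100
        for n, name in enumerate(NAMES, start=1)}

for name, gap in gaps.items():
    print(f"{name}: +{gap:.1f}%")
```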

💡 Complete Guide to Data Storage Units, Binary Math, and the IEC Standard

Digital storage is measured in bits and bytes — but the units that build on those foundations have been a source of confusion for 50+ years. When your "1 TB" hard drive shows 931 GB in Windows, when your ISP advertises "100 Mbps" but you download at "12.5 MB/s," or when your RAM specs say 16 GB but your BIOS shows 16,384 MB — you are experiencing the consequence of two competing measurement standards that grew from a single historic accident in computing history.

Everything digital starts with the bit. A bit (binary digit) is the smallest unit of information in computing: it can hold exactly one of two values, 0 or 1. This binary encoding maps directly to the physical behavior of transistors — the billions of microscopic switches in every CPU, RAM chip, and NAND flash cell. When a transistor gate voltage exceeds a threshold, the stored value is 1; below it, 0. This fundamental binary nature propagates through every layer of computing: machine code, CPU instructions, memory addressing, file systems, and data transmission protocols.

The byte evolved as the natural grouping unit. While a bit holds 0 or 1, early computing needed to represent characters, numbers, and control codes. The earliest machines used 6-bit bytes (enough for 64 characters). IBM's System/360 architecture (1964) standardized the 8-bit byte because it offered 256 unique states (2⁸), sufficient to encode all English characters, digits, punctuation, and control codes — while remaining efficient for hardware implementation. This decision cascaded through the industry and became universal. Today, 1 byte = 8 bits is as fundamental as 12 inches = 1 foot.

💡

Why Computers Use Powers of 2

Memory chips are designed with 2ⁿ address lines — because binary addressing naturally doubles with each added wire. A chip with 10 address lines has exactly 2¹⁰ = 1,024 addressable locations. This makes 1,024 a fundamental architectural constant. When computer scientists in the 1960s needed a unit "close to 1,000" for memory, 1,024 was already baked into the hardware design. They borrowed the SI prefix "kilo" (meaning 1,000) and applied it to 1,024 — a convenient approximation that created a 60-year naming controversy.
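The address-line relationship is a one-liner; a trivial sketch (function name is illustrative):

```python
# n address lines -> exactly 2**n addressable memory locations.
def addressable_locations(address_lines: int) -> int:
    return 2 ** address_lines

print(addressable_locations(10))   # 1,024 — the origin of the binary "kilo"
print(addressable_locations(20))   # 1,048,576 — one mebibyte of addresses
```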

⚖️

The IEC 80000-13 Standard (1998): Ending the Ambiguity

The International Electrotechnical Commission formally solved the binary/decimal naming conflict in 1998 with IEC 60027-2 (later IEC 80000-13). They created new binary prefixes: kibi- (Ki), mebi- (Mi), gibi- (Gi), tebi- (Ti) — derived from "kilo binary," "mega binary," etc. 1 KiB = 2¹⁰ = 1,024 bytes; 1 MiB = 2²⁰; 1 GiB = 2³⁰. ISO and JEDEC adopted these standards. Linux, GNU, Android, and most technical specifications now use GiB/MiB correctly. Windows still misuses "GB" to mean GiB — creating ongoing user confusion.

💿

The Hard Drive Lawsuit That Changed Nothing

In 2006, Western Digital settled a class-action lawsuit (Orin v. WD) for $30 million over its use of decimal GB in drive labeling (1 GB = 10⁹ bytes) while Windows reports binary GiB. Courts found that manufacturers were within their rights using SI decimal prefixes — the confusion was a standard mismatch, not fraud. The FTC declined to require binary labeling. Hard drive makers continue using decimal today; SSDs, USB drives, and memory cards follow the same decimal convention. Only RAM universally uses binary (DDR5-32GB module = exactly 32 × 2³⁰ bytes).

🌐

Bits vs Bytes in Networking — The Mbps Confusion

Network speeds are measured in megabits per second (Mbps) — NOT megabytes per second (MB/s). Divide by 8 to get file transfer speed: 1,000 Mbps (gigabit fiber) = 125 MB/s theoretical max (real-world: ~110 MB/s due to protocol overhead). Internet plans: "100 Mbps" → ~12.5 MB/s downloads. "1 Gbps" → ~125 MB/s. 5G theoretical: 10 Gbps → 1,250 MB/s. The confusion is exacerbated because capital "B" = byte and lowercase "b" = bit — a capitalization distinction many interfaces ignore. Netflix 4K streams at ~25 Mbps = ~3.1 MB/s. A 2-hour 4K movie is roughly 22.5 GB.

⚠️ The Modern Data Hierarchy — Memory vs Storage: Computers have multiple storage tiers with vastly different speeds and sizes:
  Registers (CPU-internal): ~32–512 bytes per core, sub-nanosecond access
  L1 Cache: 32–512 KiB per core, ~1 ns access
  L2 Cache: 256 KiB–16 MiB per core, ~4 ns
  L3 Cache: 8–128 MiB shared, ~10–40 ns
  RAM (DRAM): 8–256 GiB typical, ~60–100 ns
  NVMe SSD: 500 GB–8 TB, ~100 µs access latency
  SATA SSD: 250 GB–4 TB, ~200 µs
  HDD: 500 GB–22 TB, ~10,000 µs (10 ms)
  Cloud/Tape Archive: petabytes, minutes–hours retrieval
Each tier uses binary (IEC) or decimal units depending on whether it is a memory chip or a storage device.

📊 Common File Sizes and Storage Reference

File / Content Type | Typical Size | Storage Scale
Plain text page (ASCII) | ~3 KB | Kilobytes
Email with attachments | 50 KB – 10 MB | Kilobytes–Megabytes
JPEG photo (smartphone) | 2–12 MB | Megabytes
RAW photo (DSLR/mirrorless) | 20–80 MB | Megabytes
MP3 song (128 kbps, 4 min) | ≈ 3.8 MB | Megabytes
FLAC lossless song | 20–40 MB | Megabytes
1080p movie (H.264, 2 hrs) | 4–8 GB | Gigabytes
4K HDR movie (H.265, 2 hrs) | 20–80 GB | Gigabytes
Modern AAA video game | 50–200 GB | Gigabytes
Windows 11 installation | ~64 GB | Gigabytes
macOS Sequoia | ~15 GB | Gigabytes
1 million photos (12 MP) | ≈ 4–6 TB | Terabytes
Netflix's global video library (est.) | ~60 PB | Petabytes
Google's total storage (est.) | > 15 EB | Exabytes
Global internet traffic per year (2025) | ~5.5 ZB | Zettabytes
Data Storage in Context — The Zettabyte Era: IDC's Global DataSphere report estimates that the world generated and replicated approximately 147 zettabytes (ZB) of data in 2024. By 2025 this is projected to reach ~175 ZB. To put this in perspective: 1 ZB = 10²¹ bytes = 1,000,000,000 terabytes. If you could print all this data as text on A4 paper, the stack would extend 200 light-years into space. Cloud data centers worldwide store approximately 10–15 ZB of unique (non-replicated) data. AWS, Microsoft Azure, Google Cloud, and Alibaba Cloud collectively operated over 2.5 million servers as of 2024. The shift to AI training datasets, autonomous vehicle telemetry (1 self-driving car generates ~4 TB per day), and IoT sensors is doubling the Global DataSphere every two years.
Written & Reviewed by Num8ers Editorial Team — Computer Science, Digital Storage Systems & Data Infrastructure Researchers Last updated: April 2026 · Sources: IEC 80000-13:2008 (International Standard for quantities and units — information science and technology) — defines kibibyte, mebibyte, gibibyte, tebibyte et seq. · IEC 60027-2 (1999, first edition IEC binary prefixes: kibi/mebi/gibi/tebi/pebi/exbi/zebi/yobi) · IEEE 1541-2002 standard — prefixes for binary multiples · JEDEC Solid State Technology Association Standard 21C (JESD21C) — RAM module sizing in binary gigabytes · IBM System/360 Architecture Reference (1964) — 8-bit byte standardization · Orin v. Western Digital (U.S. District Court, N.D. Cal., 2006) — class action on binary vs decimal storage labeling, $30M settlement · IDC Global DataSphere 2025 — annual world data generation forecast · Cisco Annual Internet Report 2022 (projecting global IP traffic volumes in zettabytes) · RFC 1340 / IEEE 802.3 — Ethernet/network speed Mbps definitions · Wikipedia "Binary prefix" citing NIST SP 330 (2008) and ISO 80000-13 · NIST Guide to the SI (Special Publication 811, 2008): use of power-of-2 prefixes in computing context. The converter uses IEEE-754 double-precision floating point — sufficient for all practical storage calculations up to yottabyte scale. For extremely large or small values, scientific notation (e.g., 1.2089 × 10²⁴) is used to preserve readability.

❓ Frequently Asked Questions — Data Storage Conversion

How many bytes are in a kilobyte?
It depends on the standard: Binary (IEC): 1 kibibyte (KiB) = \(2^{10}\) = 1,024 bytes. Used by RAM, CPU cache, and operating systems (Windows, macOS, Linux). Decimal (SI): 1 kilobyte (KB) = \(10^3\) = 1,000 bytes. Used by hard drive manufacturers, SSD labels, and network bandwidth. In common casual usage, "KB" often means 1,024 bytes (the legacy computing convention), but technically the IEC-correct term for 1,024 bytes is KiB (kibibyte). The difference (+2.4%) seems small but compounds at scale.
How many bytes are in a megabyte?
Binary: 1 mebibyte (MiB) = \(2^{20}\) = 1,048,576 bytes (exactly 1,024 KiB). Decimal: 1 megabyte (MB) = \(10^6\) = 1,000,000 bytes. The binary MB is 4.86% larger than the decimal MB. A 1,000 MB file is 1,000,000,000 bytes (decimal) = 953.67 MiB (binary). File managers typically measure in binary even when they say "MB." When your OS says a video is "700 MB," it means ~734 decimal MB worth of data.
How many bytes are in a gigabyte?
Binary: 1 gibibyte (GiB) = \(2^{30}\) = 1,073,741,824 bytes. Decimal: 1 gigabyte (GB) = \(10^9\) = 1,000,000,000 bytes. The divergence is 7.37%. This is why a smartphone showing "12 GB used" in the system storage screen (which uses binary GiB) represents more data than 12 × 10⁹ bytes. In practice: binary 1 GiB = decimal 1.073741824 GB = 1,073.741824 MB (decimal).
Why does my hard drive show less space than advertised?
This is a measurement standards mismatch, not missing storage. Hard drive manufacturers use the SI/decimal standard: 1 TB = \(10^{12}\) bytes = 1,000,000,000,000 bytes. Your operating system (Windows, macOS, Linux) reports in binary gibibytes: 1 GiB = \(2^{30}\) = 1,073,741,824 bytes. So a "1 TB" drive contains exactly \(10^{12}\) bytes = \(10^{12} / 2^{40}\) TiB = 0.9095 TiB = 931.32 GiB. Windows shows "931 GB" for what the box calls "1 TB." No storage is lost — the formula is: \(V_{GiB} = \frac{V_{labeled\,(TB)} \times 10^{12}}{2^{30}}\). A "2 TB" drive shows ≈ 1,862.65 GiB; a "4 TB" shows ≈ 3,725.29 GiB; an "8 TB" shows ≈ 7,450.58 GiB.
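A two-line loop reproduces all of the labeled-TB → OS-reported-GiB figures quoted in this answer:

```python
# Decimal-labeled capacity (TB) -> binary GiB the OS reports.
for tb in (1, 2, 4, 8):
    gib = tb * 10**12 / 2**30
    print(f'"{tb} TB" drive -> {gib:.2f} GiB')
```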
What is the difference between Mbps and MB/s?
Mbps = megabits per second; MB/s = megabytes per second. Since 1 byte = 8 bits: MB/s = Mbps ÷ 8. Examples: 100 Mbps = 12.5 MB/s · 500 Mbps = 62.5 MB/s · 1,000 Mbps (1 Gbps) = 125 MB/s · 10 Gbps = 1,250 MB/s. ISPs advertise in Mbps (larger sounding number). Download managers show MB/s (what you relate to file sizes). The lowercase 'b' (bit) vs uppercase 'B' (byte) distinction is critical. If your ISP says "1 Gbps" and you're only downloading at 125 MB/s, that is actually the correct maximum theoretical speed — not a problem.
What are KiB, MiB, GiB — and are they different from KB, MB, GB?
Yes — KiB/MiB/GiB are unambiguous binary units defined by IEC 80000-13 (1998): 1 KiB (kibibyte) = 1,024 bytes · 1 MiB (mebibyte) = 1,048,576 bytes · 1 GiB (gibibyte) = 1,073,741,824 bytes. 1 KB (kilobyte, SI) = 1,000 bytes · 1 MB (megabyte, SI) = 1,000,000 bytes · 1 GB (gigabyte, SI) = 1,000,000,000 bytes. Unfortunately, common usage conflates them: Windows says "GB" but means "GiB"; Linux/macOS increasingly use GiB correctly. When reading technical documentation: KiB/MiB/GiB always mean binary powers-of-1024; KB/MB/GB should mean decimal powers-of-1000 per IEC, but may mean binary in legacy contexts.
What comes after terabyte? And what comes after petabyte?
The full scale from smallest to largest: Bit (b) → Nibble (4 bits) → Byte (B) → Kilobyte (KB/KiB) → Megabyte (MB/MiB) → Gigabyte (GB/GiB) → Terabyte (TB/TiB) → Petabyte (PB/PiB) → Exabyte (EB/EiB) → Zettabyte (ZB/ZiB) → Yottabyte (YB/YiB). After yottabyte: Ronnabyte (RB, \(10^{27}\)) and Quettabyte (QB, \(10^{30}\)) — approved by BIPM in 2022. The binary counterparts (robibyte, RiB, \(2^{90}\); quebibyte, QiB, \(2^{100}\)) have been proposed but are not yet formally standardized by the IEC. The global datasphere was ~147 ZB in 2024; a single yottabyte would hold roughly 7 years of the entire internet's traffic.
How do I convert MB to GB manually?
Binary method (for RAM, OS storage): Divide MB by 1,024. 500 MB ÷ 1,024 = 0.4883 GiB. Decimal method (for hard drive labels): Divide MB by 1,000. 500 MB ÷ 1,000 = 0.500 GB. Formula: \(V_{GB} = V_{MB} / 1{,}024\) (binary) or \(V_{GB} = V_{MB} / 1{,}000\) (decimal). Common conversions: 1,024 MB = 1 GiB · 2,048 MB = 2 GiB · 512 MB = 0.5 GiB · 256 MB = 0.25 GiB · 128 MB = 0.125 GiB. For the opposite (GB to MB): multiply by 1,024 (binary) or 1,000 (decimal).
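Both manual methods fit in one helper; a minimal sketch (`mb_to_gb` is an illustrative name):

```python
def mb_to_gb(mb: float, binary: bool = True) -> float:
    """Divide by 1,024 (binary/GiB) or 1,000 (decimal/GB)."""
    return mb / (1024 if binary else 1000)

print(round(mb_to_gb(500), 4))       # 0.4883 GiB
print(mb_to_gb(500, binary=False))   # 0.5 GB
print(mb_to_gb(2048))                # 2.0 GiB
```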
How do I convert GB to TB?
Binary: Divide GB (GiB) by 1,024. 500 GB ÷ 1,024 = 0.4883 TiB. Decimal: Divide GB by 1,000. 500 GB ÷ 1,000 = 0.500 TB. Practical examples: 1,024 GiB = 1 TiB · 512 GiB = 0.5 TiB · 2,048 GiB = 2 TiB · 4,096 GiB = 4 TiB. To summarize: each step up in the scale requires dividing by 1,024 (binary) or 1,000 (decimal). To go down a step, multiply. The general formula: \(V_{n+1} = V_n / \text{base}\) where base = 1,024 (binary) or 1,000 (decimal).
How many MB is 1 GB? How many GB is 1 TB?
1 GB (binary/GiB): = 1,024 MB (binary/MiB) = 1,073.741824 MB (decimal). 1 TB (binary/TiB): = 1,024 GiB = 1,048,576 MiB ≈ 1,099.51 GB (decimal). Common misconception: "1 TB = 1,000 GB." This is true only in the decimal system. In binary (what your OS uses), 1 TiB = 1,024 GiB. The labeled "1 TB" hard drive contains 1,000 GB (decimal) = 931.32 GiB (binary), so Windows shows 931 GB.
Why is 1024 used in computing memory instead of 1000?
Because memory chips use binary addressing: a chip with \(n\) address lines has exactly \(2^n\) addressable locations. For efficient chip design, capacities are powers of 2: 256 = \(2^8\), 512 = \(2^9\), 1,024 = \(2^{10}\), 2,048 = \(2^{11}\). You cannot build a memory chip with exactly 1,000 cells in a binary-addressed architecture — the next power of 2 above 1,000 is 1,024 (\(2^{10}\)). This is why RAM always comes in 4 GB, 8 GB, 16 GB, 32 GB, 64 GB increments (powers of 2 × 4) rather than 5 GB or 10 GB. Hard drives, SSDs, and USB flash memory do NOT require power-of-2 capacities (they use block-mapping and controllers), which is why manufacturers freely chose the cleaner decimal system.
What is a bit compared to a byte?
1 byte = 8 bits. A bit (binary digit) is either 0 or 1 — the most primitive unit of digital information. It maps to a transistor state (on/off), a magnetic domain (north/south), or a pit/land on optical media (reflective/non-reflective). A byte groups 8 bits, providing \(2^8 = 256\) possible states (0 through 255). This is enough to encode all ASCII characters (128 values), a single RGB color channel (0–255), a single audio sample byte (PCM), or a single pixel in an 8-bit grayscale image. The capitalization convention: lowercase "b" = bit; uppercase "B" = byte. This distinction matters: "100 Mb" = 100 megabits; "100 MB" = 100 megabytes = 800 megabits.
How much data does a 1080p vs 4K video use per hour?
Typical streaming bitrates and storage sizes per hour: SD (480p) — streaming: ~0.7 GB/hr; Blu-ray quality: ~3 GB/hr. HD 1080p — streaming (Netflix): ~3 GB/hr; Blu-ray: ~15 GB/hr; H.265/HEVC encode: ~4 GB/hr. 4K UHD — streaming (Netflix): ~7 GB/hr; Ultra HD Blu-ray: 50–100 GB/hr; H.265 4K encode: ~10–25 GB/hr. 8K H.265 encode: ~80–150 GB/hr; 8K RAW video: ~1–5 TB/hr. For download time: at 100 Mbps (12.5 MB/s), downloading 7 GB takes \(7{,}000 \text{ MB} / 12.5 \text{ MB/s} = 560 \text{ seconds} \approx 9.3 \text{ minutes}\).
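Storage per hour follows directly from the bitrate: GB/hr = Mbps × 3,600 s ÷ 8 bits/byte ÷ 1,000. A quick sketch (the 25 Mbps input matches the 4K figure discussed earlier; other bitrates shown are illustrative):

```python
def gb_per_hour(bitrate_mbps: float) -> float:
    """Streaming bitrate (Mbps) -> decimal GB stored per hour."""
    return bitrate_mbps * 3600 / 8 / 1000

print(gb_per_hour(25))   # 11.25 GB/hr -> 22.5 GB for a 2-hour 4K stream
print(gb_per_hour(5))    # 2.25 GB/hr at an assumed 5 Mbps 1080p bitrate
```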

🔗 Related Calculators on Num8ers