
Compare bcrypt, Argon2id, and scrypt against OWASP 2026 parameters, with a decision guide and code samples for picking a password hash.


bcrypt vs Argon2 vs scrypt: Password Hashing in 2026

Short answer: for any new project in 2026, use Argon2id with m=19456, t=2, p=1. That matches the OWASP Password Storage Cheat Sheet baseline, and it gives you the best GPU and side-channel resistance you can ship today.

If Argon2 isn’t in your stack (rare, but it happens on some embedded or older runtimes), pick scrypt with N=2^17, r=8, p=1. Use bcrypt with cost=12 only when you’re stuck with a legacy system that already speaks bcrypt and you can’t add a new dependency. Stick to PBKDF2-HMAC-SHA-256 with 600,000 iterations when FIPS-140 compliance is mandatory.

Algorithm    OWASP 2026 parameters           When to pick
Argon2id     m=19456 KiB, t=2, p=1           Default for new projects
scrypt       N=2^17, r=8, p=1                Argon2 not available
bcrypt       cost=12 (min 10)                Legacy systems only
PBKDF2       HMAC-SHA-256, 600k iterations   FIPS-140 required

The rest of this article explains why these numbers, how to tune them for your hardware, and how to migrate without forcing a password reset. If you need strong test passwords for benchmarking, use the random password generator. For the broader picture, see the web security best practices guide.

Why password hashing is different from general hashing

Hash functions look the same from the outside: data goes in, a fixed-length digest comes out, and you can’t reverse it. But the design goals for “hash this 4 GB ISO” and “hash this 12-character password” pull in opposite directions. One should run as fast as silicon allows. The other should run as slow as your login latency budget tolerates.

Mixing them up is how breaches turn into account takeovers.

Why MD5 and SHA-256 fall short for passwords

General-purpose hashes like MD5, SHA-1, and SHA-256 were built for throughput. They process gigabytes per second on commodity CPUs and tens of gigabytes per second on GPUs. That makes them excellent for file checksums and content addressing, and disastrous for passwords.

Hashcat benchmarks on a single RTX 4090 show roughly 164 GH/s for MD5 and 22 GH/s for SHA-256 in 2024. An eight-character lowercase-alphanumeric password (36^8 ≈ 2.8 × 10^12 candidates) falls to a single GPU in under a minute against MD5 and under a couple of minutes against SHA-256. A breached database storing sha256(password) is basically plaintext.

Salt won’t save you either. It blocks pre-computed rainbow tables, but it does nothing to slow down a per-account attack: the attacker just hashes each candidate concatenated with the leaked salt.

For non-security checksums, MD5 and SHA-256 still pull their weight; that’s what tools like the general-purpose hash generator are built for. For a deeper comparison of when each algorithm is appropriate, read MD5 vs SHA-256 hash algorithm comparison. But for passwords, you need a hash that runs slow on purpose.

What a modern password hash needs to do

A password hash worth shipping in 2026 has three properties:

  1. Slow on purpose, with a tunable work factor. Login should take 100–500 ms: fast enough that users don’t notice, slow enough that an offline attacker burns days per million guesses. The work factor needs to be a parameter so you can crank it up as hardware improves.
  2. Per-record salt. A unique random salt per password defeats rainbow tables and forces the attacker to attack each account on its own. Modern algorithms generate and embed the salt in the output string for you.
  3. Memory-hard. GPUs and ASICs are fast at compute but expensive at high-bandwidth memory. An algorithm that requires tens of MiB per hash forces an attacker to provision RAM proportional to their parallelism, killing the cost-effectiveness of GPU farms.

bcrypt nails (1) and (2) but not (3). scrypt was the first algorithm to hit all three. Argon2 refined the design and won the Password Hashing Competition. The next section walks through each one.

The three algorithms: architecture and tradeoffs

bcrypt: Blowfish-based, time-hard

bcrypt was designed in 1999 by Niels Provos and David Mazières for OpenBSD. It’s built on the Blowfish cipher, with an expensive key-setup phase (“EksBlowfish”) repeated 2^cost times. The single tunable parameter is the cost factor (also called the “log rounds”): each increment doubles the work. A cost=10 hash does 1,024 key schedules; cost=14 does 16,384.

A bcrypt hash looks like this:

$2b$12$R9h/cIPz0gi.URNNX3kh2OPST9/PgBkqquzi.Ss7KIUgO2t0jWMUW
 │  │  │                      │
 │  │  │                      └─ 31-char base64 hash
 │  │  └─ 22-char base64 salt
 │  └─ cost factor (12)
 └─ algorithm identifier ($2b$ = bcrypt v2)

The format is self-describing: verify() reads the cost and salt from the stored string, no separate columns required.

The downsides are real. bcrypt’s memory footprint is about 4 KiB, small enough that a high-end GPU can run thousands of bcrypt cores in parallel. And bcrypt silently truncates input at 72 bytes. A 100-character passphrase has the same security as its first 72 bytes. The maximum cost is 31, but anything above ~16 starts hurting login latency on commodity hardware.
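Note that the limit is 72 bytes of UTF-8, not 72 characters, so non-ASCII passphrases hit it much sooner. A stdlib illustration (no bcrypt library needed, since the truncation depends only on byte length):

```javascript
// bcrypt truncates at 72 *bytes*. Multi-byte UTF-8 characters reach
// the limit well before 72 characters.
const ascii = 'a'.repeat(80);   // 80 chars → 80 bytes; last 8 bytes ignored
const emoji = '🔑'.repeat(20);  // 20 chars → 80 bytes: 4 bytes per emoji

console.log(Buffer.byteLength(ascii, 'utf8')); // 80
console.log(Buffer.byteLength(emoji, 'utf8')); // 80
```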

scrypt: the memory-hard pioneer

scrypt was published in 2009 by Colin Percival for the Tarsnap backup service and standardized as RFC 7914 in 2016. It introduced the idea of memory-hardness: the algorithm fills a large buffer with pseudo-random data, then reads from random positions, forcing any implementation to actually allocate the memory.

scrypt takes three parameters:

  • N — CPU/memory cost (must be a power of 2)
  • r — block size factor (each block is 128 × r bytes, so r multiplies both memory and mixing work)
  • p — parallelism (independent computations, mostly used to scale CPU time without scaling memory)

Memory usage is roughly 128 × N × r bytes. With OWASP’s recommended N=2^17, r=8, that’s 128 × 131072 × 8 = 134,217,728 bytes, or 128 MiB per hash.

scrypt also doubles as a key derivation function, not just a password hash. You’ll find it in cryptocurrency wallets, full-disk encryption, and the original Litecoin proof-of-work. That dual role is convenient when you need both password storage and key derivation in one library.

Argon2 (id/i/d): Password Hashing Competition winner

The Password Hashing Competition ran from 2013 to 2015, evaluating 24 candidate algorithms against memory-hardness, side-channel resistance, and implementation simplicity. Argon2 won. It was standardized as RFC 9106 in 2021.

Argon2 has three variants. The differences come down to how the memory gets addressed during mixing:

  • Argon2d uses data-dependent memory addresses. That gives the best resistance to GPU and ASIC attacks but leaks information through cache-timing side channels. Suitable for cryptocurrency proof-of-work, not authentication.
  • Argon2i uses data-independent addresses. Side-channel safe, but slightly weaker against GPU tradeoff attacks.
  • Argon2id is a hybrid: the first half of the first pass uses Argon2i indexing (side-channel safe), and the rest uses Argon2d indexing (GPU-resistant). RFC 9106 explicitly recommends Argon2id for password hashing, and so does OWASP.

Argon2 takes three parameters:

  • m — memory in KiB
  • t — time cost (number of passes over the memory buffer)
  • p — parallelism (number of lanes processed concurrently)

An Argon2id hash uses the PHC string format and looks like this:

$argon2id$v=19$m=19456,t=2,p=1$c29tZXNhbHQ$RdescudvJCsgt3ub+b+dWRWJTmaaJObG

Like bcrypt, all parameters live inside the string, so verify() doesn’t need a parameter table.

Recommended parameters (OWASP 2026)

The OWASP Password Storage Cheat Sheet is the canonical reference. The numbers below match its current guidance. They’re conservative, sized for a typical web server with a 100–500 ms login latency budget, and you should still benchmark on your own hardware before shipping.

Argon2id parameters: first choice

OWASP’s baseline recommendation: m=19456 (19 MiB), t=2, p=1.

If your server has more RAM headroom, you can shift the work between memory and time. RFC 9106 publishes equivalent profiles; OWASP recommends any of these:

memoryCost (m)   timeCost (t)   parallelism (p)   RAM per hash
47104            1              1                 46 MiB
19456            2              1                 19 MiB (baseline)
12288            3              1                 12 MiB
9216             4              1                 9 MiB
7168             5              1                 7 MiB

Tuning rule of thumb. Pick m first based on your peak concurrent-login RAM budget. If you expect 100 simultaneous logins and have 4 GiB to spare, that’s 40 MiB per hash. Then increase t until a single verify takes 100–500 ms on your production CPU. Leave p=1 unless you have a specific multi-core reason to change it (most web frameworks already give each request its own thread).

scrypt parameters: when Argon2 isn’t available

OWASP’s recommendation: N=2^17 (131072), r=8, p=1, which uses 128 MiB per hash.

If 128 MiB per concurrent login is too much for your server, OWASP allows weaker profiles:

N      r   p   RAM per hash
2^17   8   1   128 MiB (preferred)
2^16   8   1   64 MiB
2^15   8   1   32 MiB

N must be a power of two. Increasing r raises both memory and CPU work proportionally; increasing p raises CPU work without raising per-instance memory. For password hashing, leave r and p at the defaults and only tune N.

bcrypt: cost factor 10+ for legacy only

OWASP no longer recommends bcrypt for new projects, but it’s still everywhere: Devise, Spring Security, PHP’s password_hash(), and countless homegrown auth systems default to it.

If you’re stuck with bcrypt, the rules are:

  • Minimum bcrypt cost factor: 10. Below 10, a single GPU finishes a leaked database in days.
  • Recommended: 12 to 14, depending on hardware. On a modern x86 server, cost=12 takes around 250 ms per hash; cost=13 takes 500 ms.
  • Target 100–300 ms per verify on your production hardware. Benchmark, don’t guess.
  • Remember the 72-byte input limit. If users can choose passphrases, pre-hash with SHA-256 (see the FAQ).

bcrypt’s GPU resistance is bounded by its 4 KiB memory footprint. No bcrypt cost factor will ever match Argon2id’s memory-hardness, so pick Argon2id when you can.

For a practical reference, on a 2024 EPYC server, bcrypt(cost=12) runs in roughly 250 ms; on a high-end laptop, closer to 350 ms. If your numbers fall outside 100–500 ms by an order of magnitude, recheck whether your library is actually doing native bcrypt or falling back to a slow JavaScript polyfill (some bundlers strip native dependencies in serverless builds).

PBKDF2: FIPS-140 compliance path

PBKDF2 (RFC 8018) is the algorithm of last resort in security guidance. It’s older than bcrypt, it isn’t memory-hard, and it falls to GPU attacks faster than any of the three above. But it’s the only password-hashing primitive that’s FIPS-140 validated, which matters for federal government, healthcare HIPAA, and certain financial deployments.

When you need PBKDF2, use:

  • HMAC-SHA-256 as the PRF (don’t use SHA-1; don’t use plain SHA-256 without HMAC)
  • 600,000 iterations minimum (OWASP 2026 baseline)
  • At least a 16-byte random salt per password

If FIPS doesn’t apply to you, prefer Argon2id. PBKDF2’s fixed-output, fixed-memory design means every dollar of GPU silicon an attacker buys translates directly into more password guesses per second.

NIST’s SP 800-63B calls PBKDF2-HMAC “approved” for password hashing but stops short of recommending it over memory-hard alternatives. Read that as: NIST permits PBKDF2 because retiring it would invalidate every legacy government deployment, not because it’s the best choice for a greenfield project.

Decision framework: which algorithm should you pick?

Comparison table

Dimension                 bcrypt        scrypt        Argon2id       PBKDF2
Memory-hard               No            Yes           Yes            No
GPU resistance            Medium        High          Very high      Low
Side-channel resistance   Medium        Medium        High (id)      Medium
Parameter complexity      1 (cost)      3 (N, r, p)   3 (m, t, p)    1 (iterations)
Library maturity          Excellent     Good          Good           Excellent
Input length limit        72 bytes      None          None           None
Standardization           de facto      RFC 7914      RFC 9106       RFC 8018
OWASP 2026 status         Legacy only   Alternative   First choice   FIPS only

Use Argon2id by default

For a new project (typical web app, modern Node/Python/Go/Rust/JVM stack, no FIPS constraint), use Argon2id with m=19456, t=2, p=1. You get the best GPU and side-channel resistance available today, an embedded-parameter format that survives library upgrades, and no 72-byte input cap. The library ecosystem is mature: argon2 on npm, argon2-cffi on PyPI, golang.org/x/crypto/argon2, the argon2 crate on crates.io, all maintained and benchmarked.

When to pick scrypt or bcrypt instead

Pick scrypt when Argon2 isn’t available in your runtime (genuinely rare in 2026; even Cloudflare Workers and Deno have it now), or when you already have a scrypt-based system in production and the migration cost outweighs the security delta. scrypt is still a solid algorithm; it just lacks the side-channel polish of Argon2id.

Pick bcrypt when you’re maintaining a legacy system, you have a hard dependency-minimization requirement (no native code, no extra packages), and the 72-byte input limit is acceptable for your user base. bcrypt has run at internet scale for two decades; its failure modes are documented.

Pick PBKDF2 when the regulator says so. That’s the only reason. If your auditor accepts Argon2id (which a growing number now do for non-FIPS workloads), use Argon2id.

Common mistakes to avoid

Most password-storage breaches in the last decade trace back to a handful of recurring engineering mistakes. None of them are exotic, and all of them get caught by reviewing your auth code with the list below in front of you.

  • Hashing passwords with raw SHA-256 or MD5. This is the single biggest password-storage failure. See MD5 vs SHA-256 for why these are wrong for passwords.
  • Reusing a single global salt across all users. A salt has to be unique per record. Argon2 and bcrypt generate one for you; don’t override that.
  • Setting hash time below 50 ms. You traded security for a speed gain no user can perceive. Aim for 100–500 ms.
  • Setting hash time above 1 second. You created a denial-of-service vector against your own login endpoint. Cap at ~500 ms.
  • Hashing passwords client-side and sending the digest to the server. The hash is now the password. Anyone who steals the database can authenticate without ever inverting it. Always hash on the server.
  • Storing the algorithm parameters in a separate column. The PHC string format puts them in the hash for you. Use it.
  • Logging passwords or hashes during error handling. Both belong to the user, not your log aggregator. Scrub them at the request-parsing layer before they reach any logger.
  • Treating verify() exceptions as authentication failures. A library that throws on a malformed stored hash should surface the error, not silently fall through to “wrong password.” Distinguish between “wrong password” (return 401) and “stored hash is corrupt” (return 500 and page on-call).

Real-world implementation

Argon2id in Node.js

The argon2 package (native bindings to the reference implementation) is the canonical choice on Node:

import argon2 from 'argon2';

// Hashing on signup or password change
const hash = await argon2.hash(password, {
  type: argon2.argon2id,
  memoryCost: 19456,  // 19 MiB
  timeCost: 2,
  parallelism: 1,
});
// → '$argon2id$v=19$m=19456,t=2,p=1$<salt>$<hash>'

// Verifying on login
const ok = await argon2.verify(hash, candidate);
if (!ok) throw new Error('Invalid credentials');

// Detect outdated parameters and re-hash on successful login
if (argon2.needsRehash(hash, { type: argon2.argon2id, memoryCost: 19456, timeCost: 2, parallelism: 1 })) {
  const upgraded = await argon2.hash(candidate, {
    type: argon2.argon2id, memoryCost: 19456, timeCost: 2, parallelism: 1,
  });
  await db.users.update({ id: user.id }, { password_hash: upgraded });
}

The needsRehash step is what makes long-term migration painless: every successful login becomes an opportunity to upgrade the stored hash to current parameters, without bothering the user.

The same pattern in Python with argon2-cffi:

from argon2 import PasswordHasher
from argon2.exceptions import VerifyMismatchError

ph = PasswordHasher(memory_cost=19456, time_cost=2, parallelism=1)

# Hash
stored = ph.hash(password)

# Verify
try:
    ph.verify(stored, candidate)
except VerifyMismatchError:
    raise ValueError('Invalid credentials')

# Re-hash on parameter upgrade
if ph.check_needs_rehash(stored):
    stored = ph.hash(candidate)

In Go with golang.org/x/crypto/argon2:

import (
    "crypto/rand"
    "golang.org/x/crypto/argon2"
)

func hashPassword(password string) (hash, salt []byte, err error) {
    salt = make([]byte, 16)
    if _, err = rand.Read(salt); err != nil {
        return nil, nil, err // a crypto/rand failure must abort, not proceed
    }
    // time=2 passes, memory=19456 KiB, 1 thread, 32-byte output
    hash = argon2.IDKey([]byte(password), salt, 2, 19456, 1, 32)
    return hash, salt, nil
}

The Go standard library doesn’t ship a PHC-format encoder; if you use the argon2.IDKey primitive directly, you have to encode the parameters and salt alongside the hash yourself. Most Go projects use a wrapper like github.com/alexedwards/argon2id for that.

Rust with the argon2 crate is similarly idiomatic:

use argon2::{Argon2, PasswordHasher, PasswordVerifier, password_hash::{SaltString, rand_core::OsRng}};

let salt = SaltString::generate(&mut OsRng);
let argon2 = Argon2::default();  // Argon2id, m=19456, t=2, p=1 by default
let hash = argon2.hash_password(password.as_bytes(), &salt)?.to_string();

// On verify
let parsed = argon2::password_hash::PasswordHash::new(&hash)?;
argon2.verify_password(candidate.as_bytes(), &parsed)?;

In all three runtimes, the produced string is interchangeable: a hash created in Node verifies cleanly in Python or Rust. That cross-runtime compatibility makes Argon2 a safer bet for polyglot architectures than algorithm-specific wrappers.

bcrypt-to-Argon2id migration pattern

You almost never get to wipe the user table and start over. The pattern that actually works is the one used in the MD5-to-bcrypt section of our hash generator FAQ: a soft, login-driven upgrade.

Add a column to track the algorithm:

ALTER TABLE users ADD COLUMN password_algo VARCHAR(16) NOT NULL DEFAULT 'bcrypt';

On login, dispatch to the right verifier:

async function verifyAndMaybeRehash(user, candidate) {
  let ok;
  if (user.password_algo === 'argon2id') {
    ok = await argon2.verify(user.password_hash, candidate);
  } else if (user.password_algo === 'bcrypt') {
    ok = await bcrypt.compare(candidate, user.password_hash);
    if (ok) {
      // Successful legacy verify → re-hash with Argon2id
      const newHash = await argon2.hash(candidate, {
        type: argon2.argon2id, memoryCost: 19456, timeCost: 2, parallelism: 1,
      });
      await db.users.update({ id: user.id }, {
        password_hash: newHash,
        password_algo: 'argon2id',
      });
    }
  }
  return ok;
}

Set a sunset window of 6–12 months. Send a “your password is stored using an outdated method, please log in to upgrade” email at the 9-month mark. After 12 months, accounts still on bcrypt require a forced password reset on next login. Active users migrate transparently; inactive accounts get a one-time friction event.

The same pattern works for migrating off scrypt or PBKDF2. The only state you need is the password_algo column.

Pepper, length limits, and encoding pitfalls

A few sharp edges that bite real deployments:

Pepper. A pepper is an application-level secret added to every password before hashing, stored separately from the database (in a KMS, env var, or HashiCorp Vault). If your database leaks but your app secret doesn’t, the leaked hashes are unattackable without the pepper. Apply it as an HMAC, not concatenation:

import { createHmac } from 'crypto';
const peppered = createHmac('sha256', process.env.PEPPER).update(password).digest();
const hash = await argon2.hash(peppered, { type: argon2.argon2id, /* ... */ });

Rotate the pepper rarely (it requires re-hashing) but do support rotation by versioning it: PEPPER_V2, with a fallback to PEPPER_V1 on verify.

bcrypt 72-byte limit. If you must use bcrypt and want to support arbitrary-length passwords, pre-hash with SHA-256 and base64-encode (avoiding embedded NUL bytes that bcrypt also handles inconsistently):

import { createHash } from 'crypto';
const prepped = createHash('sha256').update(password, 'utf8').digest('base64');
const hash = await bcrypt.hash(prepped, 12);

The same prepped transformation must run on verify. Document this in your auth code with a giant comment so the next person to touch it knows what’s happening.

UTF-8 normalization. The string "café" can be encoded as either c-a-f-é (4 codepoints, NFC) or c-a-f-e + combining acute (5 codepoints, NFD). They look identical but produce different hashes. Always normalize to NFC before hashing:

const normalized = password.normalize('NFC');

This bites mobile keyboards and copy-paste from PDFs more often than you’d expect.

Never pre-hash on the client. A client-computed hash sent to the server is the new password. Anyone who reads your database can authenticate. Hash on the server, period. JWTs don’t change this; see how to decode JWT tokens for what JWTs do and don’t authenticate.

Benchmark on production hardware, not your laptop. A 13th-gen Intel laptop running Argon2id at m=19456, t=2, p=1 finishes in roughly 35 ms. The same parameters on a t3.small EC2 instance take closer to 180 ms; on a Raspberry Pi 4, over 600 ms. Pick the hardware that will actually run production, time 1,000 verifies, and tune from the median. Login latency variance from cold-start serverless containers is also worth measuring; Lambda cold starts can add 200–800 ms unrelated to hashing.

FAQ

What’s the difference between password hashing and encryption?

Hashing is one-way: you compute a fixed-length fingerprint that can’t be reversed to recover the input. Encryption is two-way: with the right key, you can decrypt back to the original. Passwords must be hashed, not encrypted. A server shouldn’t be able to recover any user’s password, so that a database leak doesn’t turn into a credential leak.

Why can’t I just use SHA-256 for passwords?

SHA-256 is built for speed. A modern GPU computes 22 billion SHA-256 hashes per second, so an 8-character lowercase password from a leaked database falls in minutes. Password hashes need three properties SHA-256 lacks: slow execution on purpose, per-record salt, and memory-hardness. The tradeoff principle is the same one explained in our hash generator’s “Don’t Use MD5 for Security” guidance, and you can read more about how attackers turn weak hashes into plaintext in password entropy explained.

Is bcrypt still secure in 2026?

bcrypt itself hasn’t been broken. The Blowfish-based key schedule remains cryptographically sound. What has changed is the threat model: GPUs and ASICs make bcrypt’s lack of memory-hardness a meaningful weakness compared to Argon2id. OWASP’s 2026 stance is that bcrypt is acceptable for legacy systems with cost ≥ 10, but new projects should pick Argon2id.

Argon2i vs Argon2d vs Argon2id: which should I use?

Use Argon2id. RFC 9106 specifies it as the recommended variant for password hashing. Argon2i is data-independent (side-channel safe but weaker against GPU tradeoff attacks). Argon2d is data-dependent (GPU-strong but vulnerable to cache-timing side channels). Argon2id is a hybrid that gets both properties for the price of one.

How do I choose Argon2id parameters for my app?

Start with the OWASP baseline: m=19456, t=2, p=1. Then benchmark on your production CPU and adjust:

  1. Decide your per-login RAM budget (say, 50 MiB at peak concurrency).
  2. Set m to that value or below.
  3. Run argon2.hash() in a loop and measure wall time.
  4. Raise t until the median sits between 100 and 500 ms.

Leave p=1 unless you’ve profiled and know multi-lane parallelism helps your runtime. For high-traffic auth servers, biasing toward higher t and lower m often gives better RAM headroom.

What’s bcrypt’s 72-byte limit and how do I handle long passphrases?

bcrypt feeds its input into the Blowfish key schedule, which truncates at 72 bytes. A 150-character passphrase has the same security as its first 72 bytes; the rest is ignored. The fix is to pre-hash with SHA-256, base64-encode the 32-byte digest (44 characters, comfortably under the limit and free of NUL bytes), and feed that to bcrypt. Don’t base64-encode a SHA-512 digest, though: the result is 88 characters and gets truncated right back. Argon2id and scrypt have no such limit; they accept arbitrarily long input directly.

Can I migrate bcrypt to Argon2 without forcing password resets?

Yes. The pattern is: store both algorithms behind a password_algo column, dispatch verification to the right library, and on every successful bcrypt verify, immediately re-hash with Argon2id and update the row. Active users migrate silently within their normal login cadence. Set a 6–12 month sunset window for inactive accounts, then force a password reset for any record still on bcrypt. The same pattern works for any algorithm-to-algorithm migration.

Is PBKDF2 still a good choice in 2026?

Only when FIPS-140 compliance forces your hand: typical in federal government, regulated healthcare (HIPAA), and certain financial systems. Use HMAC-SHA-256 as the PRF with at least 600,000 iterations. PBKDF2 isn’t memory-hard, so it falls to GPU attacks faster than Argon2id at equivalent latency budgets. If FIPS doesn’t apply, pick Argon2id and skip the extra compliance work.


The 2026 password hashing answer is short: default to Argon2id with OWASP’s baseline parameters, fall back to scrypt if Argon2 isn’t available, keep bcrypt only where legacy demands it, and reserve PBKDF2 for FIPS-bound systems. Pair the hash with a per-record salt (every modern library handles this automatically), an application-level pepper stored outside the database, and a login-driven re-hash loop that lets you raise work factors as hardware improves.

Generate a representative password set with the random password generator, benchmark your verify path against your production CPU, and write the parameters into a constants file so the next engineer knows exactly what to bump in 2028. The full security context (TLS, session management, rate limiting, MFA) lives in our web security best practices guide.
