CSV to JSON Converter
Convert CSV to JSON in your browser. RFC 4180, type inference, header row, big-int safe. 100% private, no upload.
What is JSON and Why Convert from CSV?
JSON (JavaScript Object Notation) is the universal format for API responses, configuration files, and structured data exchange — every modern programming language, every database, and every web framework has first-class JSON support. CSV (Comma-Separated Values), by contrast, is the oldest and most widely supported tabular format — every spreadsheet app, every database export, and every analytics tool can produce it. Converting between them is one of the most common chores in data engineering: you receive a CSV from a spreadsheet, a database dump, or a third-party export, and you need JSON to feed an API, hydrate a frontend, or load into a NoSQL store. This tool is built for that conversion path and handles four scenarios that most online converters botch.
Four differentiators set this tool apart from typical online CSV-to-JSON converters:
**1. RFC 4180 State-Machine Parser.** CSV looks simple but the quoting rules are subtle: a field wrapped in double quotes can contain commas, embedded newlines, and escaped double quotes (doubled, like ""). Naive split-by-comma parsers break on real-world data — addresses with commas, multiline text fields, and quoted values containing quotes. This tool implements a proper state-machine parser following RFC 4180 (the IETF spec for CSV), correctly handling quoted fields, embedded delimiters, embedded line endings, and escaped quotes in every direction. The output is round-trippable through Python's csv module, PostgreSQL COPY, AWS S3 SELECT, and any compliant parser.
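The state-machine approach can be sketched in a few lines. This is an illustrative minimal parser, not the tool's actual source — it shows why quoted commas, doubled quotes, and embedded newlines never split a field:

```javascript
// Minimal RFC 4180-style state-machine CSV parser (illustrative sketch).
// States: FIELD (unquoted), QUOTED (inside quotes), QUOTE_SEEN (just saw
// a quote inside a quoted field — either an escape or the closing quote).
function parseCsv(text, delimiter = ",") {
  const rows = [];
  let row = [], field = "", state = "FIELD";
  for (let i = 0; i < text.length; i++) {
    const c = text[i];
    if (state === "QUOTED") {
      if (c === '"') state = "QUOTE_SEEN";
      else field += c;                       // delimiters and newlines are literal here
    } else if (state === "QUOTE_SEEN") {
      if (c === '"') { field += '"'; state = "QUOTED"; } // "" -> one literal quote
      else { state = "FIELD"; i--; }         // closing quote; reprocess c as unquoted
    } else { // FIELD
      if (c === '"' && field === "") state = "QUOTED";
      else if (c === delimiter) { row.push(field); field = ""; }
      else if (c === "\n") { row.push(field); rows.push(row); row = []; field = ""; }
      else if (c !== "\r") field += c;       // tolerate CRLF line endings
    }
  }
  if (field !== "" || row.length > 0) { row.push(field); rows.push(row); }
  return rows;
}
```

Feeding it `'a,"b,c"\n'` yields one row of two fields, `a` and `b,c` — a naive `split(',')` would produce three.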
**2. Type Inference with Big-Integer Safety.** With Infer types on, numeric strings become numbers, true/false become booleans, empty cells become null. But the inference pipeline has two important guards: leading-zero strings (007, 0123) are kept as strings because leading zeros indicate identifiers — converting to a number would silently strip them. And integers above 2^53 - 1 (9007199254740991) are also kept as strings to avoid IEEE 754 precision loss. Twitter snowflake IDs, Discord IDs, MongoDB Long fields, and K8s resourceVersion all stay exact instead of being silently rounded. ISO date strings are intentionally kept as strings — JSON has no native date type.
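The inference guards above can be compressed into one function. This is a hypothetical sketch of the described behavior, not the tool's actual source; the ISO-date case simply falls through to the string default here:

```javascript
// Illustrative sketch of the type-inference guards described above.
function inferType(cell) {
  if (cell === "") return null;                 // empty cell -> JSON null
  if (cell === "true") return true;             // boolean literals
  if (cell === "false") return false;
  if (/^0\d+$/.test(cell)) return cell;         // leading zeros: keep identifier as string
  if (/^-?\d+$/.test(cell)) {
    const n = Number(cell);
    // integers outside +/-(2^53 - 1) stay strings to avoid IEEE 754 rounding
    return Number.isSafeInteger(n) ? n : cell;
  }
  if (/^-?\d+\.\d+$/.test(cell)) return Number(cell); // simple decimal
  return cell;                                  // dates and everything else stay strings
}
```

So `inferType("42")` gives the number `42`, but `inferType("007")` and `inferType("9007199254740993")` both return the original string untouched.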
**3. Header Autonames or Use First Row.** With Header on (the default), the first row is treated as column names and each subsequent row becomes a JSON object keyed by those names. With Header off, the parser auto-names columns col1, col2, col3 — useful for raw data dumps without a header line. The Delimiter chip row covers the four most common separators: comma (RFC 4180 default), semicolon (Excel-EU locales), tab (TSV from Unix tools and data warehouses), and pipe (high-comma fields). Pick the chip and parse — no manual configuration needed for typical real-world CSVs.
**4. 100% Browser-Based Privacy.** Your CSV data — which often contains user PII, internal database exports, customer records, and production exports — never leaves your browser. No data is sent to any server, no logging, no analytics that capture input. You can verify this in your browser's Network tab. This is the only safe way to handle sensitive data in an online tool. See the reverse direction by clicking Swap or use our companion JSON to CSV Converter when CSV is your target. Need to validate the JSON output before consuming it? Try our JSON Formatter.
JSON's strengths are precise types, native nesting, and a strict spec that parses identically everywhere — the right format whenever a machine consumes the data. CSV's strengths are universality and human-readability — the right format whenever a human opens a spreadsheet. The right tool depends on the consumer: human reading a spreadsheet → CSV, machine consuming an API → JSON. This converter handles the bridge in both directions.
// Input CSV (comma + LF, header on, infer types on)
id,name,active,score
1,Alice,true,98.5
2,Bob,false,87
3,Carol,true,
// Output JSON
[
{ "id": 1, "name": "Alice", "active": true, "score": 98.5 },
{ "id": 2, "name": "Bob", "active": false, "score": 87 },
{ "id": 3, "name": "Carol", "active": true, "score": null }
]
// Same input with Header off (no first-row keys)
1,Alice,true,98.5
2,Bob,false,87
// Becomes
[
{ "col1": 1, "col2": "Alice", "col3": true, "col4": 98.5 },
{ "col1": 2, "col2": "Bob", "col3": false, "col4": 87 }
]
Key Features
RFC 4180 State-Machine Parser
Strict state-machine parser following the IETF CSV specification: correct handling of quoted fields, embedded delimiters, embedded CR/LF, and escaped double quotes (doubled). Output round-trips cleanly through Python csv, PostgreSQL COPY, and AWS S3 SELECT.
Type Inference with Big-Integer Safety
Infer types on converts numeric strings to numbers, true/false to booleans, empty cells to null. Integers above 2^53 - 1 stay as strings to avoid IEEE 754 precision loss; leading-zero strings (007, 0123) stay as strings to preserve identifier semantics.
Header On/Off with Autonames
Header on (default) uses the first row as JSON keys. Header off auto-names columns col1, col2, col3 in order — useful for raw data dumps and machine-generated CSVs without a header line. The autonames are deterministic and pipeline-friendly.
Comma, Semicolon, Tab, Pipe Delimiters
One-click Delimiter chips for the four most common separators: `,` (RFC 4180 default), `;` (Excel-EU locales), `\t` (TSV from Unix tools, BigQuery, Snowflake), and `|` (high-comma free-form text fields). The parser switches modes immediately — no need to convert files first.
Big-Integer Detection
Integers above 2^53 are detected during parsing and preserved as strings in the JSON — Twitter snowflake IDs, Discord IDs, MongoDB Long fields, and K8s resourceVersion stay exact instead of being silently rounded by JavaScript's IEEE 754 number type.
Bidirectional with Swap
A single Swap direction button flips the conversion in place: the input side becomes JSON, the output side becomes CSV, and the current text is preserved. Round-trip your data through both directions to verify lossless conversion before shipping it to a pipeline.
Examples
Spreadsheet Export with Header
id,name,email,role
1,Alice,alice@example.com,admin
2,Bob,bob@example.com,editor
3,Carol,carol@example.com,viewer
4,Dan,dan@example.com,viewer
Standard CSV from a spreadsheet. With Header on and Infer types on, you get clean typed JSON: integers stay integers, booleans/null are detected.
Tab-Delimited Log Export (TSV)
ts	event	user	duration
2026-05-09T10:00:00Z	signup	alice	142
2026-05-09T10:01:00Z	login	alice	87
2026-05-09T10:02:00Z	checkout	alice	312
2026-05-09T10:03:00Z	logout	alice	44
Choose `\t` (Tab) as delimiter. The default Header on auto-uses the first row as keys.
Excel-EU CSV (semicolon delimiter, CRLF)
id;name;price
1;Alice;1234,56
2;Bob;9876,54
3;Carol;42,00
Excel in DE/FR/IT/ES locales emits `;` separators because comma is the decimal mark. Pick `;` from the Delimiter chip — the parser handles the rest.
Embedded Commas and Escaped Quotes
name,role,note
"Smith, Jr.",admin,"He said ""hi"""
"Doe, Jane",editor,"Two lines"
Standard RFC 4180 quoting: quoted fields can contain delimiters and escaped quotes (doubled). The parser is a state machine — it never splits inside quotes.
CSV with Big-Integer IDs
id,event,user
9007199254740993,signup,alice
9007199254740994,login,bob
9007199254740995,checkout,carol
Big integers exceed JavaScript's safe range (2^53 - 1). With Infer types on, the parser detects this and keeps the value as a string to preserve precision — no truncation.
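The rounding is easy to reproduce in plain JavaScript, which is what the detection guards against:

```javascript
// IEEE 754 rounding at the 2^53 boundary: 9007199254740993 (2^53 + 1)
// is not representable as a double, so Number() silently rounds it.
const asNumber = Number("9007199254740993");
console.log(asNumber);            // 9007199254740992 — off by one

// BigInt preserves every digit:
const asBigInt = BigInt("9007199254740993");
console.log(asBigInt.toString()); // "9007199254740993"
```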
No-Header CSV
1,Alice,admin
2,Bob,editor
3,Carol,viewer
4,Dan,viewer
Toggle Header off; columns auto-name to `col1`, `col2`, `col3`. Use this for raw data dumps without a header line.
How to Use
1. Paste your CSV
Enter or paste your CSV into the input field above. The tool accepts comma, semicolon, tab, and pipe-delimited data. You can also click 'Load example' to try a sample like a spreadsheet export, TSV log, or Excel-EU CSV with semicolons.
2. Pick the delimiter (or Tab)
Click `,` (default), `;` (Excel-EU semicolon), `\t` (TSV), or `|` (Pipe) to switch the delimiter in one click. Open the Options panel for fine control: Header on/off and Infer types on/off. Header off auto-names columns col1, col2, col3.
3. Copy or Download the JSON
Click Copy to grab the JSON to your clipboard, or Download to save it as a .json file ready for your code, API, or pipeline. For round-trips, click Swap direction to convert JSON back to CSV in place.
Common Conversion Pitfalls
Embedded Comma Not Quoted in Source
If your CSV was built by hand with a naive join(','), any field containing a comma (Smith, Jr. or 1,234.56) breaks the column boundaries — the parser sees extra columns where there should be one. The fix is to wrap the offending field in double quotes per RFC 4180. This tool correctly handles quoted fields, but the source CSV must use proper quoting.
name,role
Smith, Jr.,admin
// Parser reads 3 columns: "Smith", " Jr.", "admin"
name,role
"Smith, Jr.",admin
// Parser reads 2 columns: "Smith, Jr.", "admin"
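If you control the upstream emitter, the fix is a few lines. This is a hypothetical helper showing the RFC 4180 quoting rule, not part of this tool:

```javascript
// Quote a field per RFC 4180: wrap in double quotes when it contains
// the delimiter, a quote, or a newline, and double any embedded quotes.
function quoteField(value, delimiter = ",") {
  const s = String(value);
  if (s.includes(delimiter) || s.includes('"') || /[\r\n]/.test(s)) {
    return '"' + s.replace(/"/g, '""') + '"';
  }
  return s; // plain fields pass through unquoted
}
```

`quoteField("Smith, Jr.")` produces `"Smith, Jr."`, and `quoteField('He said "hi"')` produces `"He said ""hi"""` — exactly the forms shown above.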
Excel-EU Semicolons Parsed as Comma
European Excel locales (Germany, France, Spain, Italy, etc.) emit semicolon-delimited CSV because the comma is reserved for the decimal separator. If you leave the delimiter on `,` (default), every row collapses into a single column with embedded semicolons. Pick the `;` Delimiter chip — the parser switches to semicolon mode and produces correct columns.
// Wrong delimiter (default comma) on Excel-EU file
id;name;price
1;Alice;1234,56
// Each row becomes one column: { col1: "1;Alice;1234,56" }
// Correct: pick `;` Delimiter chip
id;name;price
1;Alice;1234,56
// Output: { id: 1, name: "Alice", price: "1234,56" }
Big-Integer IDs Lose Precision after JSON.parse
Twitter snowflake IDs, Discord IDs, and other 64-bit integers exceed JavaScript's safe range (2^53 - 1) and lose precision when JSON.parse() reads them as numbers. With Infer types on, this tool detects values above the safe boundary and keeps them as strings instead, preserving the exact digits. Use BigInt("9007199254740993") in your code to convert back to a numeric type.
// Without big-int detection
{"id": 9007199254740993}
// JavaScript reads as 9007199254740992 (precision lost)
// With Infer types on, big integers stay as strings
{"id": "9007199254740993"}
// Use BigInt(value) in code to preserve precision
Header Row Contains Spaces
If your CSV header is `id, name, email` (with spaces after commas), the JSON keys become "id", " name", " email" — including the leading space. The parser preserves the header exactly as given, per RFC 4180. The fix is to either clean the source CSV before pasting, or rename keys downstream (jq 'with_entries(.key |= ltrimstr(" "))' or JavaScript Object.fromEntries(Object.entries(o).map(([k,v]) => [k.trim(), v]))).
id, name, email
1, Alice, alice@example.com
// Output keys: "id", " name", " email" (with leading spaces)
id,name,email
1,Alice,alice@example.com
// Output keys: "id", "name", "email" (clean)
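The JavaScript key-trimming recipe mentioned above, as a runnable snippet (`rows` stands in for this tool's JSON output):

```javascript
// Strip stray whitespace from every key of every row object.
const rows = [{ "id": 1, " name": "Alice", " email": "alice@example.com" }];
const cleaned = rows.map(o =>
  Object.fromEntries(Object.entries(o).map(([k, v]) => [k.trim(), v]))
);
console.log(Object.keys(cleaned[0])); // [ 'id', 'name', 'email' ]
```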
Inconsistent Row Length
When rows in the CSV have different column counts (some with trailing commas, some without), the parser fills missing cells with empty strings (or null when Infer types is on) and drops extras beyond the header length. A Schema notes warning appears so you know the rows were normalized. This is usually fine, but verify the output if downstream consumers expect a strict row shape.
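The fill-and-drop normalization amounts to something like this hypothetical helper (a sketch of the described behavior, not the tool's actual source):

```javascript
// Pad short rows to the header length; drop cells beyond it.
function normalizeRow(header, row, inferTypes = true) {
  const fill = inferTypes ? null : "";  // missing cells become null (or "" with inference off)
  return header.map((_, i) => (i < row.length ? row[i] : fill));
}
```

For a three-column header, `normalizeRow(header, ["Alice", "admin"])` pads to `["Alice", "admin", null]`, and a four-cell row is trimmed back to three.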
name,role,note
Alice,admin
Bob,editor,first day
// Row 1 is short by one cell
// Output (note empty/null cell in row 1)
[
{ "name": "Alice", "role": "admin", "note": null },
{ "name": "Bob", "role": "editor", "note": "first day" }
]
Date Strings Coerced Unexpectedly
ISO 8601 date strings (2026-05-09T10:00:00Z) are intentionally kept as strings in the JSON output — JSON has no native date type, so coercion would either produce a JavaScript Date object that doesn't survive serialization or a numeric epoch that loses timezone information. This is by design. Parse dates at the point of use with new Date(value) or your date library of choice. Do not toggle Infer types off solely to preserve dates — that would also keep numbers as strings.
// Expecting a Date object in the output
ts,event
2026-05-09T10:00:00Z,signup
// Output ts is the string "2026-05-09T10:00:00Z", NOT a Date
// Correct: parse at the point of use in your code
const rows = JSON.parse(output);
const when = new Date(rows[0].ts); // when is now a Date object
Common Use Cases
- Spreadsheet Export to API Import
- Paste a CSV exported from Excel, Google Sheets, or Numbers and get a JSON array of objects ready for POST to a REST API, GraphQL mutation, or bulk-import endpoint. The most common use case — analysts produce spreadsheet data, engineers need typed JSON to feed the backend.
- Excel Export to Tooling
- Convert Excel CSV exports (including Excel-EU semicolon-delimited files with the `;` chip) into JSON for processing with JavaScript tooling, jq scripts, or any system that reads JSON. The parser handles BOM stripping and CRLF line endings correctly so Excel exports don't break on the first row.
- TSV Log to Analytics
- Tab-separated logs from BigQuery exports, Snowflake unloads, Vector pipelines, or Unix tools (cut, awk) often arrive as .tsv. Pick the Tab Delimiter chip and get a typed JSON array ready for ad-hoc analysis, dashboard ingest, or pipeline-stage transformation.
- Database CSV Dump to ETL
- Convert PostgreSQL COPY TO CSV output, MySQL SELECT INTO OUTFILE, or any database CSV dump to JSON for loading into a NoSQL store, feeding into a JavaScript ETL pipeline, or shipping to BigQuery as line-delimited JSON. Big-integer detection preserves numeric IDs that exceed JavaScript's safe range.
- Postman/Newman CSV Test Result Consumption
- Postman test runs export CSV reports of pass/fail per request. Convert to JSON for programmatic consumption — feed into a status dashboard, alert pipeline, or test-result aggregator. Mixed-shape rows (failed tests have an extra error column) are handled with empty/null fills.
- Small CSV to Quick JSON Config
- Have a small CSV of constants — currency codes, country names, product SKUs — and need a JSON array for a config file or a JavaScript constant? Paste, copy, paste. With Infer types on, numbers and booleans are typed correctly; with Header on, you get an array of named-field objects ready to drop into a .json file.
Technical Details
- RFC 4180 State-Machine Parser Internals
- The parser is a proper finite-state-machine implementation following RFC 4180. States include UnquotedField, QuotedField, AfterQuote, RowEnd, and EndOfInput. The parser correctly handles quoted fields containing the delimiter, embedded CR/LF inside quoted fields, escaped double quotes (doubled, like ""), and trailing newlines. This produces output that round-trips losslessly through Python's csv module, PostgreSQL COPY, AWS S3 SELECT, and any compliant parser. The state machine is delimiter-aware, so switching from `,` to `;` or `\t` does not change the quoting semantics — only the field separator.
- Type Inference Algorithm
- With Infer types on, each cell runs through an ordered detection pipeline. First, an empty cell becomes JSON null. Second, the literal strings true and false become JSON booleans. Third, leading-zero strings (^0[0-9]+$) are kept as strings to preserve identifier semantics — converting to numbers would silently strip the leading zeros. Fourth, integer literals are tested against the safe-integer boundary (-2^53+1 to 2^53-1); values outside this range are kept as strings to avoid IEEE 754 precision loss. Fifth, ISO 8601 date strings are detected by regex and intentionally kept as strings — JSON has no native date type. Anything that survives all five guards is converted via Number() (numeric) or kept as a string (everything else).
- BOM Stripping and Encoding Handling
- All input is treated as UTF-8. The optional UTF-8 BOM (0xEF 0xBB 0xBF) is silently stripped from the first cell of the first row when present — this prevents BOM bytes from being included as a stray character at the start of the first column name (Excel on Windows commonly emits the BOM, breaking naive parsers). Other encodings (Windows-1252, ISO-8859-1) are not auto-detected; the browser File API would have already decoded the bytes as UTF-8 by the time the text reaches this tool. If you have non-UTF-8 input, convert it first with iconv or your editor's encoding-export option before pasting.
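Since the BOM bytes decode to a single U+FEFF character in a JavaScript string, the strip is a one-character check — a minimal sketch of the behavior described above:

```javascript
// Remove a leading UTF-8 BOM (decoded as U+FEFF) if present.
function stripBom(text) {
  return text.charCodeAt(0) === 0xFEFF ? text.slice(1) : text;
}
```

Without this, a Windows Excel export would yield a first column key of `"\uFEFFid"` instead of `"id"`.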
Best Practices
- Pick the Delimiter Explicitly for Non-Comma Data
- Don't rely on auto-detection. If your CSV uses semicolons (Excel-EU), tabs (TSV from BigQuery, Snowflake, or Unix tools), or pipes (high-comma fields), click the matching Delimiter chip before pasting. The parser is delimiter-aware: switching the chip immediately re-parses the input. This avoids the most common CSV-to-JSON failure mode where every row collapses into one cell because the parser used the wrong separator.
- Keep Infer Types On for Typed JSON
- With Infer types on (the default), you get typed JSON: numbers as numbers, booleans as booleans, null where empty cells appear. This is what most consumers want — APIs, frontends, JavaScript code. Toggle Infer types off only when you specifically need every cell as a string (downstream type-strict consumers, validation pipelines that compare exact source bytes). The detection pipeline has guards for leading-zero strings, big integers, and ISO dates, so identifiers and dates stay safe even with inference on.
- Quote IDs as Strings in Upstream CSV
- If your CSV is generated by a database or pipeline you control, emit large numeric IDs (Twitter snowflakes, Discord IDs, K8s resourceVersion) as quoted CSV strings ("9007199254740993") so they pass through Type Inference cleanly. The parser will keep them as strings either way (big-int detection catches values above 2^53 - 1), but explicit quoting is the most robust upstream contract and avoids any ambiguity about precision.
- Header Row Should Be the First Line
- Header on (the default) auto-detects the first row as column names. If your CSV has comments, blank lines, or metadata before the header, strip them before pasting — the parser does not skip leading non-data lines. For headerless CSVs (raw exports, machine-generated dumps), toggle Header off and the columns will be auto-named col1, col2, col3 in order. Don't try to fake a header by prepending one to a headerless file; either toggle Header off or fix the source.
- Use Stringify Mode for CSV → JSON → CSV Round-Trips
- If you plan to round-trip data through both directions (CSV → JSON → CSV), the reverse direction (JSON → CSV) needs Stringify mode for any nested arrays or objects to survive losslessly. Flatten mode in the reverse direction emits dotted keys (customer.address.city) that can't be perfectly reconstructed by the CSV parser. See our JSON to CSV converter for the full reverse-direction reference and round-trip testing notes.
Frequently Asked Questions
What does this tool do?
Is my data uploaded anywhere?
How does Type Inference work?
Why are big integers kept as strings?
My CSV uses semicolons — how do I parse it?
Does it handle TSV (tab-delimited)?
What if my CSV has no header row?
Can it handle quoted fields with embedded commas?
Why are my dates being kept as strings?
What happens if rows have different lengths?
How big a file can I paste?
Can I round-trip JSON → CSV → JSON?
Related Tools
Base64 Decoder & Encoder
Encoding & Formatting
Decode and encode Base64 online for free. Real-time conversion with full UTF-8 and emoji support. 100% private — runs in your browser. No signup needed.
JSON Diff & Compare
Encoding & Formatting
Compare two JSON files instantly in your browser. Side-by-side highlighting, RFC 6902 JSON Patch output, ignore noisy fields like timestamps and IDs. 100% private, no upload.
JSON Formatter & Validator
Encoding & Formatting
Format, validate and beautify JSON instantly in your browser. Free online tool with syntax validation, error detection, minify and one-click copy. 100% private.
JSON Schema Validator
Encoding & Formatting
Validate JSON against any JSON Schema instantly in your browser. Supports Draft 2020-12, 2019-09, and Draft-07 with path-precise error messages. 100% private — no upload, no account, free.
JSON to CSV Converter
Encoding & Formatting
Convert JSON to CSV in your browser. RFC 4180, Excel-EU, TSV, Pipe presets. Flatten nested or stringify. 100% private, no upload.
JSON to YAML Converter
Encoding & Formatting
Paste JSON, get YAML instantly. Live conversion in your browser. K8s/Compose-ready, 2/4-space indent, smart quoting. 100% private, no upload.