JSON to CSV Converter
Convert JSON to CSV in your browser. RFC 4180, Excel-EU, TSV, Pipe presets. Flatten nested or stringify. 100% private, no upload.
Options · , · auto · LF · header · no BOM · flatten
What is CSV and Why Convert from JSON?
CSV (Comma-Separated Values) is the oldest and most widely supported tabular data format in computing — every spreadsheet app, every database, every analytics tool, and most programming languages have first-class CSV support. JSON, by contrast, is the universal format for API responses, configuration, and structured data exchange. Converting between them is one of the most common chores in data engineering: you receive JSON from an API or a NoSQL database, and you need a CSV to load into Excel for analysis, into a Postgres table via COPY, or into a BigQuery / Snowflake warehouse. This tool is built for that conversion path and handles four scenarios that most online converters botch.
This tool has four important differentiators compared to typical online converters:
**1. RFC 4180 State-Machine Parser.** CSV looks simple but the quoting rules are subtle: a field wrapped in double quotes can contain commas, embedded newlines, and escaped double quotes (doubled, like ""). Naive split-by-comma parsers break on real-world data — addresses with commas, multiline text fields, and quoted values containing quotes. This tool implements a proper state-machine parser following RFC 4180 (the IETF spec for CSV), correctly handling quoted fields, embedded delimiters, embedded line endings, and escaped quotes in both the read and write directions. The output is round-trippable through Python's csv module, PostgreSQL COPY, AWS S3 SELECT, and any compliant parser.
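The write-direction quoting rule is compact enough to sketch. This is an illustrative outline, not the tool's actual source; the function names are hypothetical:

```javascript
// RFC 4180 writing rule: quote a field only when it contains the
// delimiter, a double quote, or a CR/LF; escape embedded double
// quotes by doubling them.
function escapeField(value, delimiter = ",") {
  const s = String(value);
  if (s.includes(delimiter) || s.includes('"') || /[\r\n]/.test(s)) {
    return '"' + s.replace(/"/g, '""') + '"';
  }
  return s; // plain fields stay unquoted (the "Auto" quote mode)
}

function toCsvRow(fields, delimiter = ",") {
  return fields.map((f) => escapeField(f, delimiter)).join(delimiter);
}
```

For example, `toCsvRow(["Smith, Jr.", "admin"])` yields `"Smith, Jr.",admin`, which a compliant parser reads back as exactly two columns.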
**2. Flatten One-Way / Stringify Reversible.** Nested JSON is fundamentally incompatible with CSV's flat tabular shape, and most converters silently corrupt data when they hit a nested object or array. This tool gives you an explicit choice: Flatten mode emits dotted keys (customer.address.city) and indexed keys (items.0.sku) for the cleanest spreadsheet layout — readable in Excel but lossy for round-trips. Stringify mode keeps arrays and objects as JSON inside a single cell — uglier but fully round-trippable: CSV → JSON → CSV produces identical data when paired with Infer types on the reverse. Choose based on your goal: analysis in Excel (Flatten) or pipeline round-trips (Stringify).
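Flatten mode's dotted-key scheme can be sketched as a short recursive walk. This is a minimal illustration under the assumptions stated in the comments; the `flatten` helper name is hypothetical:

```javascript
// Flatten-mode sketch: nested objects become dotted keys
// (customer.address.city), arrays become indexed keys (items.0.sku).
// Assumes dot-joined paths and numeric array indices, as described above.
function flatten(value, prefix = "", out = {}) {
  if (value !== null && typeof value === "object") {
    const entries = Array.isArray(value)
      ? value.map((v, i) => [String(i), v]) // arrays -> indexed keys
      : Object.entries(value);              // objects -> dotted keys
    for (const [k, v] of entries) {
      flatten(v, prefix ? `${prefix}.${k}` : k, out);
    }
  } else {
    out[prefix] = value; // leaf: primitive lands in one flat column
  }
  return out;
}
```

Note the lossiness: once flattened, `items.0.sku` no longer records whether `items` was an array or an object with a key `"0"`, which is why round-trip pipelines should prefer Stringify.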
**3. Big-Integer Detection.** JavaScript's Number type uses IEEE 754 double precision and silently rounds integers above 2^53 - 1 (9007199254740991). This bites real-world JSON: Twitter snowflake IDs, Discord IDs, MongoDB Long fields, and Kubernetes resourceVersion are all 64-bit integers that exceed the safe range. Most browser-based JSON tools silently produce wrong numbers without warning. This tool detects big-integer values during parsing, shows a warning banner listing affected fields, and preserves the original digits as strings in the CSV output so Excel and Google Sheets won't truncate them to scientific notation.
**4. 100% Browser-Based Privacy.** Your JSON data — which often contains user PII, internal database exports, API keys embedded in payloads, and production secrets — never leaves your browser. No data is sent to any server, no logging, no analytics that capture input. You can verify this in your browser's Network tab. This is the only safe way to handle sensitive data in an online tool. See the reverse direction by clicking Swap or use our companion JSON to YAML Converter when YAML is your target. Need to validate JSON before converting? Try our JSON Formatter.
CSV's strengths are universality and simplicity: every tool reads it, parsers are tiny, and the file format is human-readable in any text editor. Its weaknesses are the lack of type information (everything is a string until you tell the parser otherwise), no native nested-structure support, and locale-specific quirks (Excel-EU semicolons, Windows CRLF vs Unix LF). JSON's strengths are exactly the opposite: precise types, native nesting, and a strict spec that parses identically everywhere. The right tool depends on the consumer: human reading a spreadsheet → CSV, machine consuming an API → JSON. This converter handles the bridge in both directions.
// Input JSON
[
{ "id": 1, "name": "Alice", "role": "admin" },
{ "id": 2, "name": "Bob", "role": "editor" }
]
// Output CSV (RFC 4180 preset: comma + CRLF + no BOM)
id,name,role
1,Alice,admin
2,Bob,editor
// Same input with Stringify mode + nested data
[
{ "id": 1, "tags": ["a", "b"] }
]
// Becomes
id,tags
1,"[""a"",""b""]"
Key Features
RFC 4180 Compliant
Strict state-machine parser following the IETF CSV specification: correct handling of quoted fields, embedded delimiters, embedded CR/LF, and escaped double quotes (doubled). Output round-trips cleanly through Python csv, PostgreSQL COPY, and AWS S3 SELECT.
Excel, TSV, and Pipe Presets
One-click presets for the four most common targets: RFC 4180 (comma + CRLF), Excel (semicolon + CRLF + UTF-8 BOM for EU locales), TSV (Tab + LF), and Pipe (| + LF). Switch between formats without manually tweaking five separate options.
Flatten or Stringify Nested Data
Explicit nested handling: Flatten emits dotted keys (customer.address.city) for clean spreadsheet analysis, Stringify keeps arrays and objects as JSON inside one cell for lossless round-trips. Choose based on whether you need analysis in Excel or a pipeline round-trip.
Big-Integer Detection
Integers above 2^53 - 1 are detected during parsing and preserved as strings in the CSV — Twitter IDs, Discord snowflakes, MongoDB Long fields, and K8s resourceVersion stay exact instead of being silently rounded by JavaScript's IEEE 754 number type.
NDJSON Auto-Detection
Line-delimited JSON (.ndjson, .jsonl) is auto-detected — paste streaming logs, API event exports, or data lake outputs directly without manually wrapping into an array. Each line becomes a row in the CSV.
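A detection heuristic along these lines is enough to tell NDJSON apart from a regular JSON document. This is an assumed sketch, not the tool's actual detection logic, and the `looksLikeNdjson` name is hypothetical:

```javascript
// NDJSON heuristic sketch: the input qualifies when it has two or
// more non-empty lines and every line parses as a standalone JSON
// value. A wrapping [ ... ] fails this test, since "[" alone is not
// valid JSON, so arrays fall through to the regular JSON path.
function looksLikeNdjson(text) {
  const lines = text.split(/\r?\n/).filter((l) => l.trim() !== "");
  if (lines.length < 2) return false;
  return lines.every((line) => {
    try {
      JSON.parse(line);
      return true;
    } catch {
      return false;
    }
  });
}
```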
Bidirectional with Swap
One Swap direction button flips the conversion in place: input becomes CSV, output becomes JSON, current text is preserved. Round-trip your data through both directions to verify lossless conversion before shipping it to a pipeline.
Examples
REST API Response
[{"id":1,"name":"Alice Chen","email":"alice@example.com","role":"admin"},{"id":2,"name":"Bob Garcia","email":"bob@example.com","role":"editor"},{"id":3,"name":"Carol Singh","email":"carol@example.com","role":"viewer"},{"id":4,"name":"Dan Müller","email":"dan@example.com","role":"viewer"}]
Common API output. With Header on, you get a clean tabular CSV ready for Excel.
GitHub Issues API
[{"id":1001,"title":"Bug: login redirects to 404","state":"open","labels":["bug","priority:high"],"user":{"login":"alice"}},{"id":1002,"title":"Feature: dark mode toggle","state":"open","labels":["enhancement"],"user":{"login":"bob"}},{"id":1003,"title":"Docs: update README","state":"closed","labels":["docs","good-first-issue"],"user":{"login":"carol"}}]
Pulled from /repos/{owner}/{repo}/issues. Flatten on, header on — exposes user.login as a flattened column.
MongoDB mongoexport
[{"_id":{"$oid":"6634a1b2c3d4e5f600000001"},"email":"alice@example.com","metadata":{"signupDate":"2026-01-15T10:30:00Z","preferences":{"newsletter":true,"theme":"dark"}}},{"_id":{"$oid":"6634a1b2c3d4e5f600000002"},"email":"bob@example.com","metadata":{"signupDate":"2026-02-20T14:15:00Z","preferences":{"newsletter":false,"theme":"light"}}}]
Mongo export shape. Choose Stringify mode if you plan to round-trip CSV→JSON.
NDJSON Event Log
{"event":"signup","user":"alice","ts":1715000000}
{"event":"login","user":"alice","ts":1715000060}
{"event":"checkout","user":"alice","ts":1715000300}
Line-delimited JSON (.ndjson/.jsonl) is auto-detected — paste the file contents directly.
Nested E-commerce Orders
[{"id":"ord-001","customer":{"name":"Alice Chen","email":"alice@example.com","address":{"city":"Seattle","country":"US"}},"items":[{"sku":"SKU-100","qty":2},{"sku":"SKU-205","qty":1}]},{"id":"ord-002","customer":{"name":"Bob Garcia","email":"bob@example.com","address":{"city":"Madrid","country":"ES"}},"items":[{"sku":"SKU-100","qty":1}]}]
Flatten mode emits dotted keys (customer.address.city, items.0.sku). Switch to Stringify to keep arrays/objects as JSON in one cell.
Postman Test Results
[{"name":"GET /users returns 200","status":"pass","duration":142},{"name":"POST /users creates record","status":"pass","duration":287},{"name":"GET /users/999 returns 404","status":"fail","duration":98,"error":"Expected 404, got 500"},{"name":"DELETE /users/1 returns 204","status":"pass","duration":156}]
Postman/Newman test export. Mixed-shape rows (some with error) get filled with empty cells — you'll see a Schema notes warning.
How to Use
- 1
Paste your JSON
Enter or paste your JSON into the input field above. The tool accepts arrays of objects, single objects, and NDJSON (line-delimited JSON) — auto-detected. You can also click 'Load example' to try a sample like a REST API response, MongoDB export, or NDJSON event log.
- 2
Pick a preset (or tweak Options)
Click RFC 4180 (default), Excel (EU semicolon + BOM), TSV (Tab), or Pipe to dial in a target format in one click. Open the Options panel for fine control: Delimiter, Header, Quote (Auto/Always), Line ending (LF/CRLF), BOM, and Nested (Flatten/Stringify).
- 3
Copy or Download the CSV
Click Copy to grab the CSV to your clipboard, or Download to save it as a .csv (.tsv with TSV preset) file ready for Excel, Google Sheets, PostgreSQL COPY, or any data pipeline. For round-trips, click Swap direction to convert CSV back to JSON in place.
Common Conversion Pitfalls
Top-level Value is a Primitive
CSV is fundamentally tabular — rows of fields. A single number, string, or boolean has no row/column structure to project onto. The top-level JSON value must be an object (one-row table) or an array (multi-row table). Wrap a primitive in an object or array first.
42
[{"value": 42}]
Mixed-Shape Array (rows with different keys)
When rows in a JSON array have different keys (some have an error field, some don't), the tool unions all keys across all rows and fills missing cells with empty values. A Schema notes warning appears so you know the columns were merged. This is usually fine, but verify the output if downstream tools expect a strict schema.
[
{"name": "GET /users", "status": "pass"},
{"name": "GET /users/999", "status": "fail", "error": "500"}
]
// Output CSV (note empty cell in row 1)
name,status,error
GET /users,pass,
GET /users/999,fail,500
Big Integer Truncated by Excel
Twitter snowflake IDs, Discord IDs, and other 64-bit integers exceed JavaScript's safe range (2^53 - 1) and lose precision when JSON.parse() reads them. Even if the CSV preserves the digits, Excel will display them in scientific notation by default. The fix is two-fold: store IDs as strings in your JSON before converting, and enable BOM (or use the Excel preset) so Excel preserves the cell as text.
{"id": 9007199254740993}
// JavaScript reads as 9007199254740992 (precision lost)
{"id": "9007199254740993"}
// CSV preserves the string; Excel displays exactly
Embedded Comma Not Quoted
If you build CSV by hand with a naive join(','), any field containing a comma (Smith, Jr. or 1,234.56) will break the column boundaries — the parser sees extra columns where there should be one. The Auto Quote mode on this tool follows RFC 4180 and automatically wraps any field containing the delimiter, double quote, CR, or LF in double quotes.
name,role
Smith, Jr.,admin
// Parser reads 3 columns: "Smith", " Jr.", "admin"
name,role
"Smith, Jr.",admin
// Parser reads 2 columns: "Smith, Jr.", "admin"
Excel-EU File Unreadable as Comma-CSV
European Excel locales (Germany, France, Spain, Italy, etc.) reserve the comma for the decimal separator and use semicolons as the field delimiter. A standard comma-CSV opens with every row collapsed into column A. The fix is the Excel preset: it switches to ; + CRLF + UTF-8 BOM so Excel correctly parses the file in any locale.
id,name,price
1,Alice,1,234.56
// Excel-EU mis-parses 1,234.56 as two columns
// Excel preset output: ; + CRLF + BOM
id;name;price
1;Alice;1234,56
// Excel-EU reads cleanly with comma decimal
NDJSON Not Detected (input shape)
NDJSON auto-detection requires one valid JSON value per line, with no leading or trailing array brackets. If you paste a JSON array of arrays, or your file has a leading [ and trailing ], the tool treats it as a single JSON value, not NDJSON. Strip wrapping brackets and ensure each line is a self-contained JSON object.
[
{"event": "signup"},
{"event": "login"}
]
// Detected as a regular JSON array (works, but not NDJSON path)
{"event": "signup"}
{"event": "login"}
// Each line is one JSON value — auto-detected as NDJSON
Common Use Cases
- API Response to Spreadsheet
- Paste a REST API response (array of user/order/event objects) and get a clean tabular CSV ready for Excel, Google Sheets, or Numbers. The most common use case — analysts and PMs need spreadsheet-shaped data, engineers ship JSON from the backend.
- MongoDB Export to Data Warehouse
- Convert mongoexport JSON output (with $oid wrappers and nested metadata documents) into CSV for loading into BigQuery, Snowflake, or Redshift. Stringify mode preserves the nested shape losslessly when downstream tools support JSON-in-cell.
- GitHub Issues Triage
- Pull issues from /repos/{owner}/{repo}/issues, paste the JSON, and get a flat CSV with id, title, state, labels, and user.login as columns. Drop into Sheets to filter, sort, and assign during sprint planning.
- NDJSON Event Log Review
- Streaming logs from Cloud Logging, Datadog, Vector, or your own pipeline often arrive as NDJSON (.ndjson, .jsonl). Paste the file contents directly — auto-detected — and get a CSV for ad-hoc analysis without spinning up a real ETL pipeline.
- E-commerce Order Extraction
- Convert nested order JSON (customer.address.city, items array) to a flat CSV for finance, fulfillment, or fraud review. Flatten mode produces one column per leaf field, ready to load into Excel for ad-hoc reporting.
- Postman/Newman Test Report
- Postman test exports include mixed-shape rows (some with optional error fields). Paste the JSON, get a CSV with all keys unioned and missing cells filled — perfect for sharing failed-test reports with non-engineers in Sheets.
Technical Details
- RFC 4180 State-Machine Parser
- Both directions use a proper finite-state-machine parser following RFC 4180. States include UnquotedField, QuotedField, AfterQuote, RowEnd, and EndOfInput. The parser correctly handles quoted fields containing the delimiter, embedded CR/LF inside quoted fields, escaped double quotes (doubled, like ""), and trailing newlines. This produces output that round-trips losslessly through Python's csv module, PostgreSQL COPY, AWS S3 SELECT, and any compliant parser.
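The state machine described above can be condensed into a short sketch. This illustration is not the tool's source: it collapses the RowEnd and EndOfInput states into the main loop and uses three explicit states (the equivalents of UnquotedField, QuotedField, and AfterQuote), which is enough to cover the RFC 4180 cases listed:

```javascript
// Condensed RFC 4180 parser sketch. AFTER_QUOTE means "just saw a
// closing quote": a following '"' is an escaped quote, anything else
// ends the quoted field.
function parseCsv(text, delimiter = ",") {
  const rows = [];
  let row = [], field = "", state = "FIELD", quoted = false, i = 0;
  const endField = () => { row.push(field); field = ""; quoted = false; };
  const endRow = () => { endField(); rows.push(row); row = []; };
  while (i < text.length) {
    const c = text[i];
    if (state === "QUOTED") {
      if (c === '"') state = "AFTER_QUOTE";
      else field += c;                    // delimiters and CR/LF are literal here
    } else if (state === "AFTER_QUOTE" && c === '"') {
      field += '"'; state = "QUOTED";     // doubled quote -> one literal "
    } else if (c === '"' && state === "FIELD" && field === "") {
      state = "QUOTED"; quoted = true;    // opening quote
    } else if (c === delimiter) {
      endField(); state = "FIELD";
    } else if (c === "\n" || c === "\r") {
      if (c === "\r" && text[i + 1] === "\n") i++; // swallow CRLF pair
      endRow(); state = "FIELD";
    } else {
      field += c; state = "FIELD";
    }
    i++;
  }
  if (field !== "" || quoted || row.length > 0) endRow(); // trailing row without newline
  return rows;
}
```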
- Big-Integer Detection Algorithm
- Before passing JSON through JSON.parse() (which would silently round large integers via IEEE 754), the tool runs a regex scan over the raw JSON text looking for integer literals outside the safe range (-2^53+1 to 2^53-1). When detected, the affected field paths are recorded and a Big-integer warning banner appears below the output. The CSV writer then preserves these values as strings, ensuring exact precision through Excel, Google Sheets, and any text-aware downstream consumer.
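The pre-parse scan amounts to a pattern along these lines. This is an assumed sketch, not the actual implementation: a real version would also track string context and record field paths, and the `findUnsafeIntegers` name is hypothetical:

```javascript
// Pre-parse scan sketch: look for integer literals in the raw JSON
// text that fall outside the IEEE 754 safe range, before JSON.parse
// has a chance to round them. 2^53 - 1 has 16 digits, so shorter
// literals can never overflow.
function findUnsafeIntegers(rawJson) {
  const intLiteral = /-?\d{16,}/g; // candidate literals worth checking
  const unsafe = [];
  for (const m of rawJson.matchAll(intLiteral)) {
    const digits = m[0];
    // Number() rounds anything past the safe range, so isSafeInteger
    // on the rounded value reliably flags the original literal.
    if (!Number.isSafeInteger(Number(digits))) unsafe.push(digits);
  }
  return unsafe;
}
```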
- BOM and UTF-8 Encoding for Excel
- All input and output is UTF-8. The optional UTF-8 BOM (0xEF 0xBB 0xBF) is prepended to the output when the BOM toggle is on or when the Excel preset is selected. Excel on Windows uses the BOM to detect UTF-8 encoding — without it, Excel falls back to the system locale (Windows-1252 in the US, Windows-1251 in Russia, etc.) and mangles non-ASCII characters. Modern parsers (Python csv, Pandas, jq, PostgreSQL) generally do not need the BOM and may include it as a stray character at the start of the first cell, so leave BOM off for non-Excel pipelines.
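In code, the BOM step is a one-character prefix; when the string is encoded to UTF-8 for download, that character becomes the three bytes 0xEF 0xBB 0xBF. A minimal sketch (the `withBom` helper is hypothetical):

```javascript
// BOM sketch: U+FEFF prepended to the CSV text encodes to
// 0xEF 0xBB 0xBF in UTF-8, which Excel on Windows uses to detect
// the encoding. Leave it off for non-Excel pipelines.
function withBom(csvText, addBom) {
  return addBom ? "\uFEFF" + csvText : csvText;
}

// In a browser the download blob would carry the BOM bytes intact:
// new Blob([withBom(csv, true)], { type: "text/csv;charset=utf-8" })
```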
Best Practices
- Choose a Preset Before Tweaking Options
- RFC 4180, Excel, TSV, and Pipe presets dial in five options at once (delimiter, line ending, quote mode, BOM, header). Pick the closest preset first, then tweak individual options only if needed — this avoids the common mistake of flipping one option and forgetting another (e.g., switching to semicolon but leaving LF, which Excel-EU on Windows still mis-parses).
- Flatten for Analysis, Stringify for Round-Trips
- Use Flatten when the destination is Excel, Sheets, or a one-time analysis — dotted keys produce the cleanest spreadsheet layout. Use Stringify when you need a CSV → JSON → CSV round-trip without data loss — arrays and objects survive as JSON inside a single cell. Switching mid-export and re-running is cheap; pick based on the consumer.
- Use BOM Only for Excel
- The UTF-8 BOM is required for Excel on Windows to detect encoding correctly. Every other parser (Python csv, Pandas, jq, PostgreSQL COPY, BigQuery) either ignores the BOM or includes it as a stray character at the start of the first cell, breaking column names. Leave BOM off for pipelines and toggle on (or use the Excel preset) only when the destination is Excel.
- Keep Large IDs as Strings in JSON
- Twitter snowflake IDs, Discord IDs, MongoDB Long fields, and K8s resourceVersion are 64-bit integers that exceed JavaScript's safe range (2^53 - 1). Store them as JSON strings ("id": "9007199254740993") before converting — the CSV will preserve the digits exactly, while a numeric literal would be silently rounded by JSON.parse().
- Validate Row Shape Before Loading
- Mixed-shape rows (some objects with keys others don't) are merged with empty cells in the output, and the tool shows a Schema notes warning. For strict-schema consumers (BigQuery, Redshift COPY), validate that all rows share the same keys before exporting — or handle the missing values explicitly in your pipeline. Use our JSON Formatter to inspect the input shape first.
Frequently Asked Questions
What does this tool do?
Is my data uploaded anywhere?
What's the difference between Flatten and Stringify mode?
How does it handle big integers like Twitter IDs or Snowflake keys?
Why is Excel showing my CSV in one column?
Does this support NDJSON or JSON Lines?
What is RFC 4180?
Why are some cells wrapped in quotes and others not?
Can I round-trip CSV → JSON → CSV without data loss?
How do I get a TSV file?
What happens with very large input?
What encodings are supported?
Related Tools
Base64 Decoder & Encoder
Encoding & Formatting
Decode and encode Base64 online for free. Real-time conversion with full UTF-8 and emoji support. 100% private — runs in your browser. No signup needed.
CSV to JSON Converter
Encoding & Formatting
Convert CSV to JSON in your browser. RFC 4180, type inference, header row, big-int safe. 100% private, no upload.
JSON Diff & Compare
Encoding & Formatting
Compare two JSON files instantly in your browser. Side-by-side highlighting, RFC 6902 JSON Patch output, ignore noisy fields like timestamps and IDs. 100% private, no upload.
JSON Formatter & Validator
Encoding & Formatting
Format, validate and beautify JSON instantly in your browser. Free online tool with syntax validation, error detection, minify and one-click copy. 100% private.
JSON Schema Validator
Encoding & Formatting
Validate JSON against any JSON Schema instantly in your browser. Supports Draft 2020-12, 2019-09, and Draft-07 with path-precise error messages. 100% private — no upload, no account, free.
JSON to YAML Converter
Encoding & Formatting
Paste JSON, get YAML instantly. Live conversion in your browser. K8s/Compose-ready, 2/4-space indent, smart quoting. 100% private, no upload.