
JSON to CSV Converter

Convert JSON to CSV in your browser. RFC 4180, Excel-EU, TSV, Pipe presets. Flatten nested or stringify. 100% private, no upload.

Reviewed for RFC 4180 compliance, Excel-EU locale handling, NDJSON auto-detection, and big-integer precision preservation — Go Tools Engineering Team · May 8, 2026

What is CSV and Why Convert from JSON?

CSV (Comma-Separated Values) is the oldest and most widely supported tabular data format in computing — every spreadsheet app, every database, every analytics tool, and most programming languages have first-class CSV support. JSON, by contrast, is the universal format for API responses, configuration, and structured data exchange. Converting between them is one of the most common chores in data engineering: you receive JSON from an API or a NoSQL database, and you need a CSV to load into Excel for analysis, into a Postgres table via COPY, or into a BigQuery / Snowflake warehouse. This tool is built for that conversion path.

This tool has four important differentiators compared to typical online converters:

**1. RFC 4180 State-Machine Parser.** CSV looks simple but the quoting rules are subtle: a field wrapped in double quotes can contain commas, embedded newlines, and escaped double quotes (doubled, like ""). Naive split-by-comma parsers break on real-world data — addresses with commas, multiline text fields, and quoted values containing quotes. This tool implements a proper state-machine parser following RFC 4180 (the IETF spec for CSV), correctly handling quoted fields, embedded delimiters, embedded line endings, and escaped quotes in every direction. The output is round-trippable through Python's csv module, PostgreSQL COPY, AWS S3 SELECT, and any compliant parser.

**2. Flatten One-Way / Stringify Reversible.** Nested JSON is fundamentally incompatible with CSV's flat tabular shape, and most converters silently corrupt data when they hit a nested object or array. This tool gives you an explicit choice: Flatten mode emits dotted keys (customer.address.city) and indexed keys (items.0.sku) for the cleanest spreadsheet layout — readable in Excel but lossy for round-trips. Stringify mode keeps arrays and objects as JSON inside a single cell — uglier but fully round-trippable: CSV → JSON → CSV produces identical data when paired with Infer types on the reverse. Choose based on your goal: analysis in Excel (Flatten) or pipeline round-trips (Stringify).
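The two modes can be sketched in a few lines of Python — an illustrative model of the behavior described above, not the tool's actual implementation:

```python
import json

def flatten(value, prefix=""):
    """Flatten nested dicts/lists into dotted and indexed keys (lossy)."""
    if isinstance(value, dict):
        items = value.items()
    elif isinstance(value, list):
        items = ((str(i), v) for i, v in enumerate(value))
    else:
        return {prefix: value}
    out = {}
    for key, v in items:
        path = f"{prefix}.{key}" if prefix else key
        out.update(flatten(v, path))
    return out

def stringify(row):
    """Keep nested values as JSON strings inside one cell (round-trippable)."""
    return {k: json.dumps(v) if isinstance(v, (dict, list)) else v
            for k, v in row.items()}

row = {"id": 1, "customer": {"address": {"city": "Seattle"}}}
flatten(row)    # {'id': 1, 'customer.address.city': 'Seattle'}
stringify(row)  # {'id': 1, 'customer': '{"address": {"city": "Seattle"}}'}
```

Note the asymmetry: `flatten` discards the boundary between a literal dotted key and a nested path, which is exactly why the mode is one-way.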

**3. Big-Integer Detection.** JavaScript's Number type uses IEEE 754 double precision and silently rounds integers above 2^53 - 1 (9007199254740991). This bites real-world JSON: Twitter snowflake IDs, Discord IDs, MongoDB Long fields, and Kubernetes resourceVersion are all 64-bit integers that exceed the safe range. Most browser-based JSON tools silently produce wrong numbers without warning. This tool detects big-integer values during parsing, shows a warning banner listing affected fields, and preserves the original digits as strings in the CSV output so Excel and Google Sheets won't truncate them to scientific notation.

**4. 100% Browser-Based Privacy.** Your JSON data — which often contains user PII, internal database exports, API keys embedded in payloads, and production secrets — never leaves your browser. No data is sent to any server, no logging, no analytics that capture input. You can verify this in your browser's Network tab. This is the only safe way to handle sensitive data in an online tool. See the reverse direction by clicking Swap or use our companion JSON to YAML Converter when YAML is your target. Need to validate JSON before converting? Try our JSON Formatter.

CSV's strengths are universality and simplicity: every tool reads it, parsers are tiny, and the file format is human-readable in any text editor. Its weaknesses are the lack of type information (everything is a string until you tell the parser otherwise), no native nested-structure support, and locale-specific quirks (Excel-EU semicolons, Windows CRLF vs Unix LF). JSON's strengths are exactly the opposite: precise types, native nesting, and a strict spec that parses identically everywhere. The right tool depends on the consumer: human reading a spreadsheet → CSV, machine consuming an API → JSON. This converter handles the bridge in both directions.

// Input JSON
[
  { "id": 1, "name": "Alice", "role": "admin" },
  { "id": 2, "name": "Bob", "role": "editor" }
]

// Output CSV (RFC 4180 preset: comma + CRLF + no BOM)
id,name,role
1,Alice,admin
2,Bob,editor

// Same input with Stringify mode + nested data
[
  { "id": 1, "tags": ["a", "b"] }
]

// Becomes
id,tags
1,"[""a"",""b""]"

Key Features

RFC 4180 Compliant

Strict state-machine parser following the IETF CSV specification: correct handling of quoted fields, embedded delimiters, embedded CR/LF, and escaped double quotes (doubled). Output round-trips cleanly through Python csv, PostgreSQL COPY, and AWS S3 SELECT.

Excel, TSV, and Pipe Presets

One-click presets for the four most common targets: RFC 4180 (comma + CRLF), Excel (semicolon + CRLF + UTF-8 BOM for EU locales), TSV (Tab + LF), and Pipe (| + LF). Switch between formats without manually tweaking five separate options.
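A hypothetical preset table makes the settings concrete — the names and fields here are illustrative, not the tool's internal configuration:

```python
# Each preset bundles delimiter, line ending, and BOM in one switch,
# mirroring the four one-click targets described above.
PRESETS = {
    "rfc4180": {"delimiter": ",",  "line_ending": "\r\n", "bom": False},
    "excel":   {"delimiter": ";",  "line_ending": "\r\n", "bom": True},
    "tsv":     {"delimiter": "\t", "line_ending": "\n",   "bom": False},
    "pipe":    {"delimiter": "|",  "line_ending": "\n",   "bom": False},
}
```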

Flatten or Stringify Nested Data

Explicit nested handling: Flatten emits dotted keys (customer.address.city) for clean spreadsheet analysis, Stringify keeps arrays and objects as JSON inside one cell for lossless round-trips. Choose based on whether you need analysis in Excel or a pipeline round-trip.

Big-Integer Detection

Integers above 2^53 are detected during parsing and preserved as strings in the CSV — Twitter IDs, Discord snowflakes, MongoDB Long fields, and K8s resourceVersion stay exact instead of being silently rounded by JavaScript's IEEE 754 number type.

NDJSON Auto-Detection

Line-delimited JSON (.ndjson, .jsonl) is auto-detected — paste streaming logs, API event exports, or data lake outputs directly without manually wrapping into an array. Each line becomes a row in the CSV.
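The detection heuristic can be modeled roughly like this in Python — a sketch; the tool's exact detector may differ:

```python
import json

def parse_maybe_ndjson(text):
    """Try whole-document JSON first; on failure, fall back to
    one-JSON-value-per-line (NDJSON/JSONL). Always returns a list of rows."""
    try:
        doc = json.loads(text)
        return doc if isinstance(doc, list) else [doc]
    except json.JSONDecodeError:
        return [json.loads(line) for line in text.splitlines() if line.strip()]

log = '{"event": "signup"}\n{"event": "login"}\n'
parse_maybe_ndjson(log)  # [{'event': 'signup'}, {'event': 'login'}]
```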

Bidirectional with Swap

A single Swap direction button flips the conversion in place: the current output becomes the input and converts back the other way, with your text preserved. Round-trip your data through both directions to verify lossless conversion before shipping it to a pipeline.

Examples

REST API Response

[{"id":1,"name":"Alice Chen","email":"alice@example.com","role":"admin"},{"id":2,"name":"Bob Garcia","email":"bob@example.com","role":"editor"},{"id":3,"name":"Carol Singh","email":"carol@example.com","role":"viewer"},{"id":4,"name":"Dan Müller","email":"dan@example.com","role":"viewer"}]

Common API output. With Header on, you get a clean tabular CSV ready for Excel.

GitHub Issues API

[{"id":1001,"title":"Bug: login redirects to 404","state":"open","labels":["bug","priority:high"],"user":{"login":"alice"}},{"id":1002,"title":"Feature: dark mode toggle","state":"open","labels":["enhancement"],"user":{"login":"bob"}},{"id":1003,"title":"Docs: update README","state":"closed","labels":["docs","good-first-issue"],"user":{"login":"carol"}}]

Pulled from /repos/{owner}/{repo}/issues. Flatten on, header on — exposes user.login as a flattened column.

MongoDB mongoexport

[{"_id":{"$oid":"6634a1b2c3d4e5f600000001"},"email":"alice@example.com","metadata":{"signupDate":"2026-01-15T10:30:00Z","preferences":{"newsletter":true,"theme":"dark"}}},{"_id":{"$oid":"6634a1b2c3d4e5f600000002"},"email":"bob@example.com","metadata":{"signupDate":"2026-02-20T14:15:00Z","preferences":{"newsletter":false,"theme":"light"}}}]

Mongo export shape. Choose Stringify mode if you plan to round-trip CSV→JSON.

NDJSON Event Log

{"event":"signup","user":"alice","ts":1715000000}
{"event":"login","user":"alice","ts":1715000060}
{"event":"checkout","user":"alice","ts":1715000300}

Line-delimited JSON (.ndjson/.jsonl) is auto-detected — paste the file contents directly.

Nested E-commerce Orders

[{"id":"ord-001","customer":{"name":"Alice Chen","email":"alice@example.com","address":{"city":"Seattle","country":"US"}},"items":[{"sku":"SKU-100","qty":2},{"sku":"SKU-205","qty":1}]},{"id":"ord-002","customer":{"name":"Bob Garcia","email":"bob@example.com","address":{"city":"Madrid","country":"ES"}},"items":[{"sku":"SKU-100","qty":1}]}]

Flatten mode emits dotted keys (customer.address.city, items.0.sku). Switch to Stringify to keep arrays/objects as JSON in one cell.

Postman Test Results

[{"name":"GET /users returns 200","status":"pass","duration":142},{"name":"POST /users creates record","status":"pass","duration":287},{"name":"GET /users/999 returns 404","status":"fail","duration":98,"error":"Expected 404, got 500"},{"name":"DELETE /users/1 returns 204","status":"pass","duration":156}]

Postman/Newman test export. Mixed-shape rows (some with error) get filled with empty cells — you'll see a Schema notes warning.

How to Use

  1. Paste your JSON

    Enter or paste your JSON into the input field above. The tool accepts arrays of objects, single objects, and NDJSON (line-delimited JSON) — auto-detected. You can also click 'Load example' to try a sample like a REST API response, MongoDB export, or NDJSON event log.

  2. Pick a preset (or tweak Options)

    Click RFC 4180 (default), Excel (EU semicolon + BOM), TSV (Tab), or Pipe to dial in a target format in one click. Open the Options panel for fine control: Delimiter, Header, Quote (Auto/Always), Line ending (LF/CRLF), BOM, and Nested (Flatten/Stringify).

  3. Copy or Download the CSV

    Click Copy to grab the CSV to your clipboard, or Download to save it as a .csv (.tsv with TSV preset) file ready for Excel, Google Sheets, PostgreSQL COPY, or any data pipeline. For round-trips, click Swap direction to convert CSV back to JSON in place.

Common Conversion Pitfalls

Top-level Value is a Primitive

CSV is fundamentally tabular — rows of fields. A single number, string, or boolean has no row/column structure to project onto. The top-level JSON value must be an object (one-row table) or an array (multi-row table). Wrap a primitive in an object or array first.

✗ Wrong
42
✓ Correct
[{"value": 42}]

Mixed-Shape Array (rows with different keys)

When rows in a JSON array have different keys (some have an error field, some don't), the tool unions all keys across all rows and fills missing cells with empty values. A Schema notes warning appears so you know the columns were merged. This is usually fine, but verify the output if downstream tools expect a strict schema.

✗ Wrong
[
  {"name": "GET /users", "status": "pass"},
  {"name": "GET /users/999", "status": "fail", "error": "500"}
]
✓ Correct
// Output CSV (note empty cell in row 1)
name,status,error
GET /users,pass,
GET /users/999,fail,500
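The union-and-fill behavior can be sketched as follows — illustrative, not the tool's code:

```python
def union_columns(rows):
    """Union keys across all rows in first-seen order; missing cells -> ""."""
    columns = []
    for row in rows:
        for key in row:
            if key not in columns:
                columns.append(key)
    return columns, [[str(row.get(c, "")) for c in columns] for row in rows]

rows = [
    {"name": "GET /users", "status": "pass"},
    {"name": "GET /users/999", "status": "fail", "error": "500"},
]
union_columns(rows)
# (['name', 'status', 'error'],
#  [['GET /users', 'pass', ''], ['GET /users/999', 'fail', '500']])
```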

Big Integer Truncated by Excel

Twitter snowflake IDs, Discord IDs, and other 64-bit integers exceed JavaScript's safe range (2^53 - 1) and lose precision when JSON.parse() reads them. Even if the CSV preserves the digits, Excel will display them in scientific notation by default. The fix is two-fold: store IDs as strings in your JSON before converting, and enable BOM (or use the Excel preset) so Excel preserves the cell as text.

✗ Wrong
{"id": 9007199254740993}
// JavaScript reads as 9007199254740992 (precision lost)
✓ Correct
{"id": "9007199254740993"}
// CSV preserves the string; Excel displays exactly

Embedded Comma Not Quoted

If you build CSV by hand with a naive join(','), any field containing a comma (Smith, Jr. or 1,234.56) will break the column boundaries — the parser sees extra columns where there should be one. The Auto Quote mode on this tool follows RFC 4180 and automatically wraps any field containing the delimiter, double quote, CR, or LF in double quotes.

✗ Wrong
name,role
Smith, Jr.,admin
// Parser reads 3 columns: "Smith", " Jr.", "admin"
✓ Correct
name,role
"Smith, Jr.",admin
// Parser reads 2 columns: "Smith, Jr.", "admin"
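The Auto quoting rule is small enough to show in full — a Python sketch of the RFC 4180 rule, not the tool's source:

```python
def quote_field(value, delimiter=","):
    """RFC 4180 auto-quoting: wrap only when the field contains the
    delimiter, a double quote, CR, or LF; double any embedded quotes."""
    if any(ch in value for ch in (delimiter, '"', "\r", "\n")):
        return '"' + value.replace('"', '""') + '"'
    return value

quote_field("Smith, Jr.")  # '"Smith, Jr."'
quote_field("admin")       # 'admin'
```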

Excel-EU File Unreadable as Comma-CSV

European Excel locales (Germany, France, Spain, Italy, etc.) reserve the comma for the decimal separator and use semicolons as the field delimiter. A standard comma-CSV opens with every row collapsed into column A. The fix is the Excel preset: it switches to ; + CRLF + UTF-8 BOM so Excel correctly parses the file in any locale.

✗ Wrong
id,name,price
1,Alice,1,234.56
// unquoted 1,234.56 splits into phantom columns — and Excel-EU collapses comma-CSV into column A regardless
✓ Correct
// Excel preset output: ; + CRLF + BOM
id;name;price
1;Alice;1234,56
// Excel-EU reads cleanly with comma decimal
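The same preset can be reproduced with Python's csv module when generating files in a pipeline — standard-library calls only:

```python
import csv
import io

# Excel-EU preset: semicolon delimiter, CRLF line endings, UTF-8 BOM.
buf = io.StringIO()
writer = csv.writer(buf, delimiter=";", lineterminator="\r\n",
                    quoting=csv.QUOTE_MINIMAL)
writer.writerow(["id", "name", "price"])
writer.writerow(["1", "Alice", "1234,56"])  # EU decimal comma is safe with ;
excel_csv = "\ufeff" + buf.getvalue()       # prepend the UTF-8 BOM for Excel
```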

NDJSON Not Detected (input shape)

NDJSON auto-detection requires one valid JSON value per line, with no leading or trailing array brackets. If you paste a JSON array of arrays, or your file has a leading [ and trailing ], the tool treats it as a single JSON value, not NDJSON. Strip wrapping brackets and ensure each line is a self-contained JSON object.

✗ Wrong
[
  {"event": "signup"},
  {"event": "login"}
]
// Detected as a regular JSON array (works, but not NDJSON path)
✓ Correct
{"event": "signup"}
{"event": "login"}
// Each line is one JSON value — auto-detected as NDJSON

Common Use Cases

API Response to Spreadsheet
Paste a REST API response (array of user/order/event objects) and get a clean tabular CSV ready for Excel, Google Sheets, or Numbers. The most common use case — analysts and PMs need spreadsheet-shaped data, engineers ship JSON from the backend.
MongoDB Export to Data Warehouse
Convert mongoexport JSON output (with $oid wrappers and nested metadata documents) into CSV for loading into BigQuery, Snowflake, or Redshift. Stringify mode preserves the nested shape losslessly when downstream tools support JSON-in-cell.
GitHub Issues Triage
Pull issues from /repos/{owner}/{repo}/issues, paste the JSON, and get a flat CSV with id, title, state, labels, and user.login as columns. Drop into Sheets to filter, sort, and assign during sprint planning.
NDJSON Event Log Review
Streaming logs from Cloud Logging, Datadog, Vector, or your own pipeline often arrive as NDJSON (.ndjson, .jsonl). Paste the file contents directly — auto-detected — and get a CSV for ad-hoc analysis without spinning up a real ETL pipeline.
E-commerce Order Extraction
Convert nested order JSON (customer.address.city, items array) to a flat CSV for finance, fulfillment, or fraud review. Flatten mode produces one column per leaf field, ready to load into Excel for ad-hoc reporting.
Postman/Newman Test Report
Postman test exports include mixed-shape rows (some with optional error fields). Paste the JSON, get a CSV with all keys unioned and missing cells filled — perfect for sharing failed-test reports with non-engineers in Sheets.

Technical Details

RFC 4180 State-Machine Parser
Both directions use a proper finite-state-machine parser following RFC 4180. States include UnquotedField, QuotedField, AfterQuote, RowEnd, and EndOfInput. The parser correctly handles quoted fields containing the delimiter, embedded CR/LF inside quoted fields, escaped double quotes (doubled, like ""), and trailing newlines. This produces output that round-trips losslessly through Python's csv module, PostgreSQL COPY, AWS S3 SELECT, and any compliant parser.
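
A condensed version of such a state machine looks like this — an illustrative Python sketch, not the tool's source:

```python
def parse_csv(text, delimiter=","):
    """Minimal RFC 4180 state machine (states: FIELD, QUOTED, AFTER_QUOTE).
    Handles embedded delimiters, CR/LF inside quotes, and doubled quotes."""
    rows, row, field = [], [], []
    state = "FIELD"
    i = 0
    while i < len(text):
        ch = text[i]
        if state == "FIELD":
            if ch == '"' and not field:
                state = "QUOTED"
            elif ch == delimiter:
                row.append("".join(field)); field = []
            elif ch in "\r\n":
                if text[i:i + 2] == "\r\n":
                    i += 1
                row.append("".join(field)); field = []
                rows.append(row); row = []
            else:
                field.append(ch)
        elif state == "QUOTED":
            if ch == '"':
                state = "AFTER_QUOTE"
            else:
                field.append(ch)  # delimiter and CR/LF are literal here
        else:  # AFTER_QUOTE
            if ch == '"':         # doubled quote -> literal double quote
                field.append('"'); state = "QUOTED"
            elif ch == delimiter:
                row.append("".join(field)); field = []; state = "FIELD"
            elif ch in "\r\n":
                if text[i:i + 2] == "\r\n":
                    i += 1
                row.append("".join(field)); field = []
                rows.append(row); row = []; state = "FIELD"
            # else: stray char after a closing quote — a real parser errors
        i += 1
    if field or row:              # flush a final row with no trailing newline
        row.append("".join(field)); rows.append(row)
    return rows
```
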
Big-Integer Detection Algorithm
Before passing JSON through JSON.parse() (which would silently round large integers via IEEE 754), the tool runs a regex scan over the raw JSON text looking for integer literals outside the safe range (-2^53+1 to 2^53-1). When detected, the affected field paths are recorded and a Big-integer warning banner appears below the output. The CSV writer then preserves these values as strings, ensuring exact precision through Excel, Google Sheets, and any text-aware downstream consumer.
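
The scan can be approximated like this — the regex and threshold here are illustrative; the tool's exact pattern may differ:

```python
import re

SAFE_MAX = 2**53 - 1  # 9007199254740991

# Prefilter: any integer literal of 16+ digits might be unsafe (all 15-digit
# integers fit in a double). The lookbehind skips digits inside strings or
# after a decimal point; the lookahead skips floats and exponents.
INT_RE = re.compile(r'(?<![\w."])-?\d{16,}(?![.\deE])')

def find_unsafe_ints(raw_json):
    return [m.group() for m in INT_RE.finditer(raw_json)
            if abs(int(m.group())) > SAFE_MAX]

find_unsafe_ints('{"id": 9007199254740993, "n": 42}')  # ['9007199254740993']
```
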
BOM and UTF-8 Encoding for Excel
All input and output is UTF-8. The optional UTF-8 BOM (0xEF 0xBB 0xBF) is prepended to the output when the BOM toggle is on or when the Excel preset is selected. Excel on Windows uses the BOM to detect UTF-8 encoding — without it, Excel falls back to the system locale (Windows-1252 in the US, Windows-1251 in Russia, etc.) and mangles non-ASCII characters. Modern parsers (Python csv, Pandas, jq, PostgreSQL) generally do not need the BOM and may include it as a stray character at the start of the first cell, so leave BOM off for non-Excel pipelines.
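Prepending the BOM is a three-byte operation, shown here as a Python sketch:

```python
BOM = b"\xef\xbb\xbf"  # UTF-8 byte order mark

def encode_csv(text, excel_bom=False):
    """Encode CSV text as UTF-8, optionally prepending the BOM for Excel."""
    data = text.encode("utf-8")
    return BOM + data if excel_bom else data

encode_csv("id;name\r\n", excel_bom=True)[:3]  # b'\xef\xbb\xbf'
```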

Best Practices

Choose a Preset Before Tweaking Options
RFC 4180, Excel, TSV, and Pipe presets dial in five options at once (delimiter, line ending, quote mode, BOM, header). Pick the closest preset first, then tweak individual options only if needed — this avoids the common mistake of flipping one option and forgetting another (e.g., switching to semicolon but leaving LF, which Excel-EU on Windows still mis-parses).
Flatten for Analysis, Stringify for Round-Trips
Use Flatten when the destination is Excel, Sheets, or a one-time analysis — dotted keys produce the cleanest spreadsheet layout. Use Stringify when you need a CSV → JSON → CSV round-trip without data loss — arrays and objects survive as JSON inside a single cell. Switching mid-export and re-running is cheap; pick based on the consumer.
Use BOM Only for Excel
The UTF-8 BOM is required for Excel on Windows to detect encoding correctly. Every other parser (Python csv, Pandas, jq, PostgreSQL COPY, BigQuery) either ignores the BOM or includes it as a stray character at the start of the first cell, breaking column names. Leave BOM off for pipelines and toggle on (or use the Excel preset) only when the destination is Excel.
Keep Large IDs as Strings in JSON
Twitter snowflake IDs, Discord IDs, MongoDB Long fields, and K8s resourceVersion are 64-bit integers that exceed JavaScript's safe range (2^53 - 1). Store them as JSON strings ("id": "9007199254740993") before converting — the CSV will preserve the digits exactly, while a numeric literal would be silently rounded by JSON.parse().
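
Python makes the rounding easy to demonstrate, since forcing a value through an IEEE 754 double mimics what JavaScript's JSON.parse always does:

```python
import json

# Python ints are arbitrary precision; routing the value through a double
# (float) reproduces JavaScript's silent rounding above 2^53:
int(float(9007199254740993))  # -> 9007199254740992, precision lost

# Parsing the same JSON with the value stored as a string keeps every digit:
json.loads('{"id": "9007199254740993"}')["id"]  # '9007199254740993'

# Python's json module can also rescue numeric literals via parse_int:
json.loads('{"id": 9007199254740993}', parse_int=str)["id"]  # '9007199254740993'
```
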
Validate Row Shape Before Loading
Mixed-shape rows (some objects with keys others don't) are merged with empty cells in the output, and the tool shows a Schema notes warning. For strict-schema consumers (BigQuery, Redshift COPY), validate that all rows share the same keys before exporting — or handle the missing values explicitly in your pipeline. Use our JSON Formatter to inspect the input shape first.

Frequently Asked Questions

What does this tool do?
It converts JSON to CSV directly in your browser, with bidirectional support: click Swap direction to convert CSV back to JSON in the same panel. Paste JSON in the input area and the tool produces CSV output instantly — no upload, no signup, nothing leaves your machine. The output respects your chosen preset (RFC 4180, Excel, TSV, or Pipe) so you can paste straight into Excel, Google Sheets, a database COPY command, or any data pipeline. The tool handles flat arrays of objects, nested structures (via Flatten or Stringify mode), NDJSON line-delimited input, and big-integer values that would otherwise lose precision in spreadsheet apps.
Is my data uploaded anywhere?
No. All conversion runs 100% client-side in your browser using JavaScript. Your JSON data is never transmitted, never stored on any server, never logged, and never analyzed. This makes the tool safe for API responses containing PII, internal database exports, MongoDB dumps, and any sensitive data. You can verify this in your browser's Network tab — pasting JSON triggers zero network requests. The tool uses no cookies for input data and no third-party analytics that would capture what you paste.
What's the difference between Flatten and Stringify mode?
Flatten mode emits dotted keys for nested objects and indexed keys for nested arrays (customer.address.city, items.0.sku) so each leaf value lives in its own column. This is the most readable layout for analysis in Excel or BigQuery, but it is lossy for round-trips because the dotted-key structure cannot be perfectly reconstructed. Stringify mode keeps arrays and objects as JSON inside a single cell ({"name":"Alice","city":"Seattle"}) — uglier in a spreadsheet, but fully round-trippable: CSV → JSON → CSV produces identical data. Choose Flatten for analysis, Stringify for round-trip safety. Pick before you convert; switching mid-session re-runs the conversion on the current input.
How does it handle big integers like Twitter IDs or Snowflake keys?
Big integers (above 2^53 - 1, or 9007199254740991) are detected during JSON parsing and a warning banner appears below the output. The tool preserves the original digits as strings in the CSV so Excel and Google Sheets won't truncate them to scientific notation. This matters because JavaScript's IEEE 754 double-precision float silently rounds integers above 2^53 — for example, 9007199254740993 becomes 9007199254740992. To preserve precision when generating the JSON upstream, store these IDs as strings ("id": "9007199254740993"). The tool will keep them as strings in the CSV without any precision loss.
Why is Excel showing my CSV in one column?
European Excel locales (Germany, France, Spain, Italy, etc.) expect a semicolon delimiter because the comma is reserved for decimal separators. When you open a comma-delimited CSV in Excel-EU, every row collapses into column A. Use the Excel preset on this tool — it switches the delimiter to ;, the line ending to CRLF, and adds a UTF-8 BOM so Excel correctly detects encoding and column boundaries. If you are sharing CSVs across regions, the safer option is TSV (Tab delimiter) which Excel handles consistently in every locale.
Does this support NDJSON or JSON Lines?
Yes. NDJSON (.ndjson) and JSONL (.jsonl) are line-delimited formats where each line is one valid JSON value. Paste the file contents directly into the input area — the tool auto-detects the format by looking for multiple top-level JSON values separated by newlines, and treats each line as a row in the output CSV. This is the natural shape for streaming logs, API event exports, and many data lake pipelines. NDJSON does not require a wrapping array, so you do not need to manually merge lines into one JSON document.
What is RFC 4180?
RFC 4180 is the IETF specification that codified the de facto CSV format in 2005. It defines the rules for delimiters (typically comma), line endings (CRLF), the optional header row, and most importantly the quoting rules: fields containing the delimiter, double quote, CR, or LF must be wrapped in double quotes, and embedded double quotes are escaped by doubling them (""). The RFC 4180 preset on this tool produces output strictly compliant with the spec: comma delimiter, CRLF line endings, no BOM, double-quote auto-escaping. This is the safest choice for interoperability with parsers in Python (csv module), PostgreSQL COPY, AWS S3 SELECT, and most data pipelines.
Why are some cells wrapped in quotes and others not?
The default Quote mode is Auto, which follows RFC 4180: a cell is wrapped in double quotes only when it contains the delimiter, a double quote, a carriage return, or a newline. This produces the cleanest, most human-readable CSV — values like Alice or 42 stay unquoted, while values like Smith, Jr. or Line 1\nLine 2 get wrapped. Switch to Always quote mode to wrap every cell, even simple ones — useful when downstream tools have buggy CSV parsers that misinterpret unquoted values, or when your team's pipeline expects every field to be quoted for consistency.
Can I round-trip CSV → JSON → CSV without data loss?
Yes, when the input is flat (no nested objects or arrays). For nested data, you must use Stringify mode — it keeps arrays and objects as JSON inside a single cell, which round-trips losslessly back to the original structure when you reverse with Swap direction and Infer types. Flatten mode is one-way: it emits dotted keys (customer.address.city) that cannot be perfectly reconstructed because the parser cannot distinguish a dotted key from a nested path. The tool detects nested structures and shows a Schema notes warning when round-trip safety is at risk so you can switch modes before exporting.
How do I get a TSV file?
Click the TSV preset chip. This switches the delimiter to Tab, the line ending to LF, and disables the BOM — the standard format for tab-separated values used by Unix tools (cut, awk), data warehouses (BigQuery, Snowflake), and most Excel locales without ambiguity. TSV is generally safer than comma-CSV for cross-locale sharing because Tab is unlikely to appear inside text fields, eliminating most quoting edge cases. Save the output with a .tsv or .tab extension and most tools will recognize it automatically.
What happens with very large input?
Above 100,000 characters or 2,000 rows, live conversion automatically switches to manual mode: a Convert button appears in an info banner and conversion only runs when you click it. This prevents the browser's main thread from blocking on every keystroke during heavy serialization. For output above 5 MB or 50,000 rows, the tool truncates the on-screen preview to the first 500 rows and shows a Showing the first 500 of N rows banner — but the Download button still produces the full file with every row included. Hard upper limit is 10 MB of input; above that the tool shows an error and asks you to reduce the input.
What encodings are supported?
Input and output are both UTF-8. UTF-8 covers every modern character set including emoji, CJK ideographs, Arabic, Hebrew, and combining marks. The only encoding nuance is the optional UTF-8 BOM (Byte Order Mark): Excel on Windows traditionally needs the BOM to detect UTF-8 correctly, otherwise it falls back to the system locale and mangles non-ASCII characters. Toggle BOM on (or use the Excel preset, which enables BOM by default) when you plan to open the CSV in Excel. Leave BOM off for everything else — most modern parsers (PostgreSQL, Pandas, jq, Python csv) either do not need it or will surface it as a stray character at the start of the first column name.
