
CSV to JSON Converter

Convert CSV to JSON in your browser. RFC 4180, type inference, header row, big-int safe. 100% private, no upload.

No Tracking · Runs in Browser · Free
Reviewed for RFC 4180 compliance, type-inference correctness, big-integer precision preservation, and headerless autoname behavior — Go Tools Engineering Team · May 9, 2026

What is JSON and Why Convert from CSV?

JSON (JavaScript Object Notation) is the universal format for API responses, configuration files, and structured data exchange — every modern programming language, every database, and every web framework has first-class JSON support. CSV (Comma-Separated Values), by contrast, is the oldest and most widely supported tabular format — every spreadsheet app, every database export, and every analytics tool can produce it. Converting between them is one of the most common chores in data engineering: you receive a CSV from a spreadsheet, a database dump, or a third-party export, and you need JSON to feed an API, hydrate a frontend, or load into a NoSQL store. This tool is built for that conversion path and handles four scenarios that most online converters botch.

This tool has four important differentiators compared to typical online CSV-to-JSON converters:

**1. RFC 4180 State-Machine Parser.** CSV looks simple, but the quoting rules are subtle: a field wrapped in double quotes can contain commas, embedded newlines, and escaped double quotes (doubled, like ""). Naive split-by-comma parsers break on real-world data — addresses with commas, multiline text fields, and quoted values containing quotes. This tool implements a proper state-machine parser following RFC 4180 (the IETF spec for CSV), correctly handling quoted fields, embedded delimiters, embedded line endings, and escaped quotes in every position. The output is round-trippable through Python's csv module, PostgreSQL COPY, AWS S3 SELECT, and any compliant parser.

**2. Type Inference with Big-Integer Safety.** With Infer types on, numeric strings become numbers, true/false become booleans, empty cells become null. But the inference pipeline has two important guards: leading-zero strings (007, 0123) are kept as strings because leading zeros indicate identifiers — converting to a number would silently strip them. And integers above 2^53 - 1 (9007199254740991) are also kept as strings to avoid IEEE 754 precision loss. Twitter snowflake IDs, Discord IDs, MongoDB Long fields, and K8s resourceVersion all stay exact instead of being silently rounded. ISO date strings are intentionally kept as strings — JSON has no native date type.

**3. Header Autonames or Use First Row.** With Header on (the default), the first row is treated as column names and each subsequent row becomes a JSON object keyed by those names. With Header off, the parser auto-names columns col1, col2, col3 — useful for raw data dumps without a header line. The Delimiter chip row covers the four most common separators: comma (RFC 4180 default), semicolon (Excel-EU locales), tab (TSV from Unix tools and data warehouses), and pipe (high-comma fields). Pick the chip and parse — no manual configuration needed for typical real-world CSVs.

**4. 100% Browser-Based Privacy.** Your CSV data — which often contains user PII, internal database exports, customer records, and production exports — never leaves your browser. No data is sent to any server, no logging, no analytics that capture input. You can verify this in your browser's Network tab. This is the only safe way to handle sensitive data in an online tool. See the reverse direction by clicking Swap or use our companion JSON to CSV Converter when CSV is your target. Need to validate the JSON output before consuming it? Try our JSON Formatter.

JSON's strengths are precise types, native nesting, and a strict spec that parses identically everywhere — the right format whenever a machine consumes the data. CSV's strengths are universality and human-readability — the right format whenever a human opens a spreadsheet. The right tool depends on the consumer: human reading a spreadsheet → CSV, machine consuming an API → JSON. This converter handles the bridge in both directions.

// Input CSV (comma + LF, header on, infer types on)
id,name,active,score
1,Alice,true,98.5
2,Bob,false,87
3,Carol,true,

// Output JSON
[
  { "id": 1, "name": "Alice", "active": true, "score": 98.5 },
  { "id": 2, "name": "Bob", "active": false, "score": 87 },
  { "id": 3, "name": "Carol", "active": true, "score": null }
]

// Headerless variant of the same data with Header off (no first-row keys)
1,Alice,true,98.5
2,Bob,false,87

// Becomes
[
  { "col1": 1, "col2": "Alice", "col3": true, "col4": 98.5 },
  { "col1": 2, "col2": "Bob", "col3": false, "col4": 87 }
]

Key Features

RFC 4180 State-Machine Parser

Strict state-machine parser following the IETF CSV specification: correct handling of quoted fields, embedded delimiters, embedded CR/LF, and escaped double quotes (doubled). Output round-trips cleanly through Python csv, PostgreSQL COPY, and AWS S3 SELECT.

Type Inference with Big-Integer Safety

Infer types on converts numeric strings to numbers, true/false to booleans, empty cells to null. Integers above 2^53 - 1 stay as strings to avoid IEEE 754 precision loss; leading-zero strings (007, 0123) stay as strings to preserve identifier semantics.

Header On/Off with Autonames

Header on (default) uses the first row as JSON keys. Header off auto-names columns col1, col2, col3 in order — useful for raw data dumps and machine-generated CSVs without a header line. The autonames are deterministic and pipeline-friendly.

Comma, Semicolon, Tab, Pipe Delimiters

One-click Delimiter chips for the four most common separators: `,` (RFC 4180 default), `;` (Excel-EU locales), `\t` (TSV from Unix tools, BigQuery, Snowflake), and `|` (high-comma free-form text fields). The parser switches modes immediately — no need to convert files first.

Big-Integer Detection

Integers above 2^53 - 1 are detected during parsing and preserved as strings in the JSON — Twitter snowflake IDs, Discord IDs, MongoDB Long fields, and K8s resourceVersion stay exact instead of being silently rounded by JavaScript's IEEE 754 number type.

Bidirectional with Swap

One Swap direction button flips the conversion in place: input becomes JSON, output becomes CSV, current text is preserved. Round-trip your data through both directions to verify lossless conversion before shipping it to a pipeline.

Examples

Spreadsheet Export with Header

id,name,email,role
1,Alice,alice@example.com,admin
2,Bob,bob@example.com,editor
3,Carol,carol@example.com,viewer
4,Dan,dan@example.com,viewer

Standard CSV from a spreadsheet. With Header on and Infer types on, you get clean typed JSON: the id column becomes numbers while names, emails, and roles stay strings.

Tab-Delimited Log Export (TSV)

ts	event	user	duration
2026-05-09T10:00:00Z	signup	alice	142
2026-05-09T10:01:00Z	login	alice	87
2026-05-09T10:02:00Z	checkout	alice	312
2026-05-09T10:03:00Z	logout	alice	44

Choose `\t` (Tab) as delimiter. With the default Header on, the first row becomes the JSON keys.

Excel-EU CSV (semicolon delimiter, CRLF)

id;name;price
1;Alice;1234,56
2;Bob;9876,54
3;Carol;42,00

Excel in DE/FR/IT/ES locales emits `;` separators because comma is the decimal mark. Pick `;` from the Delimiter chip — the parser handles the rest.

Embedded Commas and Escaped Quotes

name,role,note
"Smith, Jr.",admin,"He said ""hi"""
"Doe, Jane",editor,"Two
lines"

Standard RFC 4180 quoting: quoted fields can contain delimiters and escaped quotes (doubled). The parser is a state machine — it never splits inside quotes.

CSV with Big-Integer IDs

id,event,user
9007199254740993,signup,alice
9007199254740994,login,bob
9007199254740995,checkout,carol

Big integers exceed JavaScript's safe range (2^53 - 1). With Infer types on, the parser detects this and keeps the value as a string to preserve precision — no truncation.

No-Header CSV

1,Alice,admin
2,Bob,editor
3,Carol,viewer
4,Dan,viewer

Toggle Header off; columns auto-name to `col1`, `col2`, `col3`. Use this for raw data dumps without a header line.

How to Use

  1. Paste your CSV

    Enter or paste your CSV into the input field above. The tool accepts comma, semicolon, tab, and pipe-delimited data. You can also click 'Load example' to try a sample like a spreadsheet export, TSV log, or Excel-EU CSV with semicolons.

  2. Pick the delimiter and options

    Click `,` (default), `;` (Excel-EU semicolon), `\t` (TSV), or `|` (Pipe) to switch the delimiter in one click. Open the Options panel for fine control: Header on/off and Infer types on/off. Header off auto-names columns col1, col2, col3.

  3. Copy or Download the JSON

    Click Copy to grab the JSON to your clipboard, or Download to save it as a .json file ready for your code, API, or pipeline. For round-trips, click Swap direction to convert JSON back to CSV in place.

Common Conversion Pitfalls

Embedded Comma Not Quoted in Source

If your CSV was built by hand with a naive join(','), any field containing a comma (Smith, Jr. or 1,234.56) breaks the column boundaries — the parser sees extra columns where there should be one. The fix is to wrap the offending field in double quotes per RFC 4180. This tool correctly handles quoted fields, but the source CSV must use proper quoting.

✗ Wrong
name,role
Smith, Jr.,admin
// Parser reads 3 columns: "Smith", " Jr.", "admin"
✓ Correct
name,role
"Smith, Jr.",admin
// Parser reads 2 columns: "Smith, Jr.", "admin"

Excel-EU Semicolons Parsed as Comma

European Excel locales (Germany, France, Spain, Italy, etc.) emit semicolon-delimited CSV because the comma is reserved for the decimal separator. If you leave the delimiter on `,` (default), the header collapses into a single column and the decimal commas in the data split rows in the wrong places. Pick the `;` Delimiter chip — the parser switches to semicolon mode and produces correct columns.

✗ Wrong
// Wrong delimiter (default comma) on Excel-EU file
id;name;price
1;Alice;1234,56
// Wrong split: the header becomes one column, and the decimal comma breaks the row into "1;Alice;1234" and "56"
✓ Correct
// Correct: pick `;` Delimiter chip
id;name;price
1;Alice;1234,56
// Output: { id: 1, name: "Alice", price: "1234,56" }

Big-Integer IDs Lose Precision after JSON.parse

Twitter snowflake IDs, Discord IDs, and other 64-bit integers exceed JavaScript's safe range (2^53 - 1) and lose precision when JSON.parse() reads them as numbers. With Infer types on, this tool detects values above the safe boundary and keeps them as strings instead, preserving the exact digits. Use BigInt("9007199254740993") in your code to convert back to a numeric type.

✗ Wrong
// Without big-int detection
{"id": 9007199254740993}
// JavaScript reads as 9007199254740992 (precision lost)
✓ Correct
// With Infer types on, big integers stay as strings
{"id": "9007199254740993"}
// Use BigInt(value) in code to preserve precision

Header Row Contains Spaces

If your CSV header is `id, name, email` (with spaces after commas), the JSON keys become "id", " name", " email" — including the leading space. The parser preserves the header exactly as given, per RFC 4180. The fix is to either clean the source CSV before pasting, or rename keys downstream (jq 'with_entries(.key |= ltrimstr(" "))' or JavaScript Object.fromEntries(Object.entries(o).map(([k,v]) => [k.trim(), v]))).

✗ Wrong
id, name, email
1, Alice, alice@example.com
// Output keys: "id", " name", " email" (with leading spaces)
✓ Correct
id,name,email
1,Alice,alice@example.com
// Output keys: "id", "name", "email" (clean)
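
If you cannot fix the source, the downstream rename mentioned above is short in JavaScript. A sketch, assuming the tool's JSON output has been copied into a string:

// Trim whitespace from every key in each parsed row
const output = '[{"id": 1, " name": "Alice", " email": "a@example.com"}]';
const cleaned = JSON.parse(output).map((row) =>
  Object.fromEntries(
    Object.entries(row).map(([key, value]) => [key.trim(), value])
  )
);
// cleaned[0] -> { id: 1, name: "Alice", email: "a@example.com" }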

Inconsistent Row Length

When rows in the CSV have different column counts (some with trailing commas, some without), the parser fills missing cells with empty strings (or null when Infer types is on) and drops extras beyond the header length. A Schema notes warning appears so you know the rows were normalized. This is usually fine, but verify the output if downstream consumers expect a strict row shape.

✗ Wrong
name,role,note
Alice,admin
Bob,editor,first day
// Row 1 is short by one cell
✓ Correct
// Output (note empty/null cell in row 1)
[
  { "name": "Alice", "role": "admin", "note": null },
  { "name": "Bob", "role": "editor", "note": "first day" }
]

Date Strings Coerced Unexpectedly

ISO 8601 date strings (2026-05-09T10:00:00Z) are intentionally kept as strings in the JSON output — JSON has no native date type, so coercion would either produce a JavaScript Date object that doesn't survive serialization or a numeric epoch that loses timezone information. This is by design. Parse dates at the point of use with new Date(value) or your date library of choice. There is no need to toggle Infer types off to preserve dates; they stay strings either way, and turning inference off would also keep numbers as strings.

✗ Wrong
// Expecting a Date object in the output
ts,event
2026-05-09T10:00:00Z,signup
// Output ts is the string "2026-05-09T10:00:00Z", NOT a Date
✓ Correct
// Correct: parse at the point of use in your code
const rows = JSON.parse(output);
const when = new Date(rows[0].ts);
// when is now a Date object

Common Use Cases

Spreadsheet Export to API Import
Paste a CSV exported from Excel, Google Sheets, or Numbers and get a JSON array of objects ready for POST to a REST API, GraphQL mutation, or bulk-import endpoint. The most common use case — analysts produce spreadsheet data, engineers need typed JSON to feed the backend.
Excel Export to Tooling
Convert Excel CSV exports (including Excel-EU semicolon-delimited files with the `;` chip) into JSON for processing with JavaScript tooling, jq scripts, or any system that reads JSON. The parser handles BOM stripping and CRLF line endings correctly so Excel exports don't break on the first row.
TSV Log to Analytics
Tab-separated logs from BigQuery exports, Snowflake unloads, Vector pipelines, or Unix tools (cut, awk) often arrive as .tsv. Pick the Tab Delimiter chip and get a typed JSON array ready for ad-hoc analysis, dashboard ingest, or pipeline-stage transformation.
Database CSV Dump to ETL
Convert PostgreSQL COPY TO CSV output, MySQL SELECT INTO OUTFILE, or any database CSV dump to JSON for loading into a NoSQL store, feeding into a JavaScript ETL pipeline, or shipping to BigQuery as line-delimited JSON. Big-integer detection preserves numeric IDs that exceed JavaScript's safe range.
Postman/Newman CSV Test Result Consumption
Postman test runs export CSV reports of pass/fail per request. Convert to JSON for programmatic consumption — feed into a status dashboard, alert pipeline, or test-result aggregator. Mixed-shape rows (failed tests have an extra error column) are handled with empty/null fills.
Small CSV to Quick JSON Config
Have a small CSV of constants — currency codes, country names, product SKUs — and need a JSON array for a config file or a JavaScript constant? Paste, copy, paste. With Infer types on, numbers and booleans are typed correctly; with Header on, you get an array of named-field objects ready to drop into a .json file.

Technical Details

RFC 4180 State-Machine Parser Internals
The parser is a proper finite-state-machine implementation following RFC 4180. States include UnquotedField, QuotedField, AfterQuote, RowEnd, and EndOfInput. The parser correctly handles quoted fields containing the delimiter, embedded CR/LF inside quoted fields, escaped double quotes (doubled, like ""), and trailing newlines. This produces output that round-trips losslessly through Python's csv module, PostgreSQL COPY, AWS S3 SELECT, and any compliant parser. The state machine is delimiter-aware, so switching from `,` to `;` or `\t` does not change the quoting semantics — only the field separator.
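
A minimal sketch of the same loop in plain JavaScript; the function name and the two-state simplification are illustrative assumptions, not the tool's actual source (the real parser also distinguishes the AfterQuote and RowEnd states):

// Minimal RFC 4180 state-machine sketch (illustrative, not the tool's source)
function parseCsv(text, delimiter = ",") {
  const rows = [];
  let row = [];
  let field = "";
  let inQuotes = false; // QuotedField state
  for (let i = 0; i < text.length; i++) {
    const ch = text[i];
    if (inQuotes) {
      if (ch === '"') {
        if (text[i + 1] === '"') { field += '"'; i++; } // escaped quote ("")
        else inQuotes = false;                          // closing quote
      } else {
        field += ch; // delimiters and CR/LF are literal inside quotes
      }
    } else if (ch === '"') {
      inQuotes = true;
    } else if (ch === delimiter) {
      row.push(field); field = "";
    } else if (ch === "\n" || ch === "\r") {
      if (ch === "\r" && text[i + 1] === "\n") i++; // swallow CRLF pair
      row.push(field); rows.push(row);
      field = ""; row = [];
    } else {
      field += ch;
    }
  }
  if (field !== "" || row.length > 0) { row.push(field); rows.push(row); }
  return rows;
}
// parseCsv('a,"b,c"\n"say ""hi""",d') -> [["a","b,c"], ['say "hi"', "d"]]
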
Type Inference Algorithm
With Infer types on, each cell runs through an ordered detection pipeline. First, an empty cell becomes JSON null. Second, the literal strings true and false become JSON booleans. Third, leading-zero strings (^0[0-9]+$) are kept as strings to preserve identifier semantics — converting to numbers would silently strip the leading zeros. Fourth, integer literals are tested against the safe-integer boundary (-2^53+1 to 2^53-1); values outside this range are kept as strings to avoid IEEE 754 precision loss. Fifth, ISO 8601 date strings are detected by regex and intentionally kept as strings — JSON has no native date type. Cells that survive all five guards are converted with Number() if they are numeric and kept as strings otherwise.
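
The same ordering, sketched as one function; the regexes here are illustrative approximations of the rules described above, not the tool's exact patterns:

// Type-inference sketch following the five ordered guards (illustrative)
const MAX_SAFE = BigInt(Number.MAX_SAFE_INTEGER); // 2^53 - 1

function inferCell(raw) {
  if (raw === "") return null;                        // 1. empty -> null
  if (raw === "true") return true;                    // 2. booleans
  if (raw === "false") return false;
  if (/^0[0-9]+$/.test(raw)) return raw;              // 3. leading zeros stay strings
  if (/^-?[0-9]+$/.test(raw)) {                       // 4. integers: safe-range check
    const n = BigInt(raw);
    return n > MAX_SAFE || n < -MAX_SAFE ? raw : Number(raw);
  }
  if (/^\d{4}-\d{2}-\d{2}T/.test(raw)) return raw;    // 5. ISO dates stay strings
  if (/^-?[0-9]*\.[0-9]+([eE][+-]?[0-9]+)?$/.test(raw)) return Number(raw);
  return raw;                                         // everything else: string
}
// inferCell("42") -> 42 · inferCell("007") -> "007"
// inferCell("9007199254740993") -> "9007199254740993" (exact)
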
BOM Stripping and Encoding Handling
All input is treated as UTF-8. The optional UTF-8 BOM (0xEF 0xBB 0xBF) is silently stripped from the first cell of the first row when present — this prevents BOM bytes from being included as a stray character at the start of the first column name (Excel on Windows commonly emits the BOM, breaking naive parsers). Other encodings (Windows-1252, ISO-8859-1) are not auto-detected; the browser File API would have already decoded the bytes as UTF-8 by the time the text reaches this tool. If you have non-UTF-8 input, convert it first with iconv or your editor's encoding-export option before pasting.
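
In JavaScript the check is one line, because the decoded BOM shows up as a single U+FEFF code point at the start of the string. A sketch:

// Strip a leading UTF-8 BOM (decoded as U+FEFF) before parsing
function stripBom(text) {
  return text.charCodeAt(0) === 0xfeff ? text.slice(1) : text;
}
// stripBom("\uFEFFid,name") -> "id,name"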

Best Practices

Pick the Delimiter Explicitly for Non-Comma Data
Don't rely on auto-detection. If your CSV uses semicolons (Excel-EU), tabs (TSV from BigQuery, Snowflake, or Unix tools), or pipes (high-comma fields), click the matching Delimiter chip before pasting. The parser is delimiter-aware: switching the chip immediately re-parses the input. This avoids the most common CSV-to-JSON failure mode where every row collapses into one cell because the parser used the wrong separator.
Keep Infer Types On for Typed JSON
With Infer types on (the default), you get typed JSON: numbers as numbers, booleans as booleans, null where empty cells appear. This is what most consumers want — APIs, frontends, JavaScript code. Toggle Infer types off only when you specifically need every cell as a string (downstream type-strict consumers, validation pipelines that compare exact source bytes). The detection pipeline has guards for leading-zero strings, big integers, and ISO dates, so identifiers and dates stay safe even with inference on.
Quote IDs as Strings in Upstream CSV
If your CSV is generated by a database or pipeline you control, emit large numeric IDs (Twitter snowflakes, Discord IDs, K8s resourceVersion) as quoted CSV strings ("9007199254740993") so they pass through Type Inference cleanly. The parser will keep them as strings either way (big-int detection catches values above 2^53 - 1), but explicit quoting is the most robust upstream contract and avoids any ambiguity about precision.
Header Row Should Be the First Line
Header on (the default) auto-detects the first row as column names. If your CSV has comments, blank lines, or metadata before the header, strip them before pasting — the parser does not skip leading non-data lines. For headerless CSVs (raw exports, machine-generated dumps), toggle Header off and the columns will be auto-named col1, col2, col3 in order. Don't try to fake a header by prepending one to a headerless file; either toggle Header off or fix the source.
Use Stringify Mode for CSV → JSON → CSV Round-Trips
If you plan to round-trip data through both directions (CSV → JSON → CSV), the reverse direction (JSON → CSV) needs Stringify mode for any nested arrays or objects to survive losslessly. Flatten mode in the reverse direction emits dotted keys (customer.address.city) that can't be perfectly reconstructed by the CSV parser. See our JSON to CSV converter for the full reverse-direction reference and round-trip testing notes.

Frequently Asked Questions

What does this tool do?
It converts CSV to JSON directly in your browser, with bidirectional support: click Swap direction to convert JSON back to CSV in the same panel. Paste CSV in the input area and the tool produces JSON output instantly — no upload, no signup, nothing leaves your machine. The parser is RFC 4180 compliant, handles delimiter chips for comma, semicolon (Excel-EU), tab (TSV), and pipe, and the Infer types option converts numeric strings to numbers, true/false to booleans, and empty cells to null. The tool also handles big-integer IDs that would otherwise lose precision through JSON.parse, embedded commas inside quoted fields, escaped double quotes (doubled), and headerless data with autonamed columns (col1, col2, col3).
Is my data uploaded anywhere?
No. All conversion runs 100% client-side in your browser using JavaScript. Your CSV data is never transmitted, never stored on any server, never logged, and never analyzed. This makes the tool safe for spreadsheet exports containing PII, internal database CSV dumps, customer records, and any sensitive data. You can verify this in your browser's Network tab — pasting CSV triggers zero network requests. The tool uses no cookies for input data and no third-party analytics that would capture what you paste.
How does Type Inference work?
With Infer types on, each parsed cell is run through a small detection pipeline before being placed in the JSON: numeric strings (1, 42, -3.14) become numbers, true/false become booleans, empty strings and the literal null become JSON null, and everything else stays as a string. There are two important guards. First, leading-zero strings like 007 or 0123 are kept as strings even though they look numeric — leading zeros indicate the value is an identifier (zip codes, phone codes, sequence IDs) and converting to a number would silently strip the zeros. Second, integers above 2^53 - 1 (9007199254740991) are also kept as strings to avoid IEEE 754 precision loss. ISO date strings (2026-05-09T10:00:00Z) are intentionally left as strings — JSON has no native date type, so coercing them would produce a JavaScript Date object that doesn't survive serialization.
Why are big integers kept as strings?
JavaScript's Number type uses IEEE 754 double-precision and can only represent integers exactly up to 2^53 - 1 (9007199254740991). Real-world identifiers — Twitter snowflake IDs, Discord IDs, MongoDB Long fields, K8s resourceVersion — are 64-bit integers that exceed this safe range. If the parser called Number() on these, the result would silently round (9007199254740993 becomes 9007199254740992). The Infer types pipeline detects values above the safe-integer boundary and keeps them as strings instead, so the digits survive intact. A warning banner appears below the output listing the affected fields. To convert back precisely in code, use BigInt("9007199254740993") on the JSON string value.
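
The failure and the fix are easy to demonstrate in a browser console; the values below mirror the example in this answer:

// Raw number: JSON.parse rounds silently
JSON.parse('{"id": 9007199254740993}').id;  // 9007199254740992 (wrong)

// String-preserved ID, as this tool emits it with Infer types on
const row = JSON.parse('{"id": "9007199254740993"}');
const id = BigInt(row.id);                  // 9007199254740993n (exact)
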
My CSV uses semicolons — how do I parse it?
European Excel locales (Germany, France, Spain, Italy, etc.) emit semicolon-delimited CSVs because the comma is reserved for the decimal separator. Click the `;` chip on the Delimiter row (or open the full Options panel and pick `;`) and the parser switches to semicolon mode immediately. Numeric values with comma decimals (1234,56) inside such files are kept as strings by Type Inference because European decimal notation is locale-specific — convert them in code if you need numeric values. The parser still applies RFC 4180 quoting rules with the new delimiter, so quoted fields containing semicolons are handled correctly.
Does it handle TSV (tab-delimited)?
Yes. Click the Tab chip on the Delimiter row and the parser splits on tab characters instead of commas. TSV is the cleanest format for cross-locale CSV sharing because tab is unlikely to appear inside text fields, eliminating most quoting edge cases. It is the default output of Unix tools (cut, awk), data warehouses (BigQuery, Snowflake), and is well-supported by Excel in any locale. Paste your .tsv or .tab file content directly — the rest of the parser (header autonames, type inference, big-integer detection) works identically.
What if my CSV has no header row?
Toggle Header off in the Options panel. The parser will treat the first line as data instead of column names and auto-generate keys: col1, col2, col3, … one per column. The output JSON is an array of objects with these synthetic keys. This is useful for raw exports from databases that omit the header, fixed-format flat files, and machine-generated CSVs. If you want different key names, convert with autonames first then rename keys in your downstream pipeline (jq, JavaScript map, etc.). The tool does not infer keys from data heuristics — Header off always produces col1, col2, col3.
Can it handle quoted fields with embedded commas?
Yes. The parser is a proper RFC 4180 state machine: when it sees an opening double quote, it switches to QuotedField state and treats everything until the next unescaped double quote as a single field, including delimiters and embedded line endings (CR/LF). Escaped double quotes (doubled, like "") are correctly collapsed to a single quote. This means `"Smith, Jr."` parses as one field containing `Smith, Jr.`, and `"He said ""hi"""` parses as `He said "hi"`. Naive split-by-comma parsers break on this real-world data; this tool does not.
Why are my dates being kept as strings?
By design. JSON has no native date type — only strings, numbers, booleans, null, arrays, and objects. ISO 8601 date strings (2026-05-09T10:00:00Z) are kept verbatim as strings in the JSON output, which is the correct, lossless representation. If the parser coerced them to JavaScript Date objects, serializing the resulting JSON would produce different output (an object with no useful round-trip representation, or a numeric timestamp). Keep dates as strings in JSON and parse them at the point of use with new Date(value) or your date library of choice. This matches the behavior of every major JSON-from-CSV pipeline: Pandas, jq, and the Python csv + json modules.
What happens if rows have different lengths?
Mixed-shape rows (some with more or fewer columns than the header) are filled to match the header length. Extra cells beyond the header count are dropped, and missing cells are set to empty string (or null when Infer types is on and the parser sees an empty value). A Schema notes warning appears below the output so you know the rows were normalized. This is usually fine for downstream tools that union keys, but verify the output if your consumer expects strict row-shape consistency. The most common cause is trailing commas in some rows or quoted fields with embedded line endings being mis-counted by upstream exporters.
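
The normalization rule fits in a few lines. A sketch of the behavior described above (pad with null, drop extras):

// Normalize one data row to the header length
function normalizeRow(cells, headerLength) {
  const row = cells.slice(0, headerLength);          // drop extra cells
  while (row.length < headerLength) row.push(null);  // fill missing cells
  return row;
}
// normalizeRow(["Alice", "admin"], 3)          -> ["Alice", "admin", null]
// normalizeRow(["Bob", "editor", "x", "y"], 3) -> ["Bob", "editor", "x"]
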
How big a file can I paste?
Above 100,000 characters or 2,000 rows, live conversion automatically switches to manual mode: a Convert button appears in an info banner and conversion only runs when you click it. This prevents the browser's main thread from blocking on every keystroke during heavy parsing. For output above 5 MB or 50,000 rows, the tool truncates the on-screen JSON preview to the first 500 rows and shows a Showing the first 500 of N rows banner — but the Download button still produces the full file with every row included. Hard upper limit is 10 MB of input; above that the tool shows an error and asks you to reduce the input.
Can I round-trip JSON → CSV → JSON?
Yes, when the JSON is flat (no nested objects or arrays). For nested data, the reverse direction (JSON → CSV) needs Stringify mode to keep arrays and objects as JSON inside a single cell — which then round-trips losslessly through this CSV → JSON converter when Infer types is on. Click Swap direction at the top of the panel to flip into JSON-to-CSV mode and verify the round-trip. Flatten mode in the reverse direction is one-way: it emits dotted keys (customer.address.city) that cannot be perfectly reconstructed from CSV. See our JSON to CSV converter for the reverse direction with full Stringify support.
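
To verify a round-trip programmatically, compare the two JSON outputs structurally rather than as raw text, since key order may differ. A sketch with hypothetical placeholder strings standing in for the two outputs:

// Canonicalize key order, then compare the two outputs
function canonical(value) {
  if (Array.isArray(value)) return value.map(canonical);
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.keys(value).sort().map((k) => [k, canonical(value[k])])
    );
  }
  return value;
}

const firstJson  = '[{"name":"Alice","tags":["a","b"]}]'; // original conversion
const secondJson = '[{"tags":["a","b"],"name":"Alice"}]'; // after round-trip
const lossless =
  JSON.stringify(canonical(JSON.parse(firstJson))) ===
  JSON.stringify(canonical(JSON.parse(secondJson)));      // true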
