# jsx (v1.2.1)

an erlang application for consuming, producing and manipulating json. inspired by yajl

copyright 2011, 2012 alisdair sullivan

jsx is released under the terms of the MIT license

jsx uses sinan or rebar for its build chain
## quickstart

to build the library

    tanga:jsx alisdair$ sinan build

or

    tanga:jsx alisdair$ rebar compile

to run tests

    tanga:jsx alisdair$ sinan -r tests eunit

or

    tanga:jsx alisdair$ rebar eunit
to convert a utf8 binary containing a json string into an erlang term

    1> jsx:to_term(<<"{\"library\": \"jsx\", \"awesome\": true}">>).
    [{<<"library">>,<<"jsx">>},{<<"awesome">>,true}]
    2> jsx:to_term(<<"[\"a\",\"list\",\"of\",\"words\"]">>).
    [<<"a">>, <<"list">>, <<"of">>, <<"words">>]
to convert an erlang term into a utf8 binary containing a json string

    1> jsx:to_json([{<<"library">>,<<"jsx">>},{<<"awesome">>,true}]).
    <<"{\"library\": \"jsx\", \"awesome\": true}">>
    2> jsx:to_json([<<"a">>, <<"list">>, <<"of">>, <<"words">>]).
    <<"[\"a\",\"list\",\"of\",\"words\"]">>
to check if a binary or a term is valid json

    1> jsx:is_json(<<"[\"this is json\"]">>).
    true
    2> jsx:is_json("[\"this is not\"]").
    false
    3> jsx:is_term([<<"this is a term">>]).
    true
    4> jsx:is_term(["this is not"]).
    false
to minify some json

    1> jsx:minify(<<"{
      \"a list\": [
        1,
        2,
        3
      ]
    }">>).
    <<"{\"a list\":[1,2,3]}">>
to prettify some json

    1> jsx:prettify(<<"{\"a list\":[1,2,3]}">>).
    <<"{
      \"a list\": [
        1,
        2,
        3
      ]
    }">>
## description

jsx is an erlang application for consuming, producing and manipulating json

jsx strives to be quick but complete, correct but pragmatic, and approachable but powerful. it handles json as encountered in common use, with extensions to handle even less common usage. comments, strings quoted with `'` instead of `"`, json fragments and json streams, and invalid utf8 are all supported

jsx is a collection of functions useful when dealing with json in erlang. jsx is also a json compiler with separate parsing and semantic analysis stages. new, custom semantic analysis steps are relatively simple to add. the syntactic analysis stage is also exposed separately for use with user defined tokenizers
## json <-> erlang mapping

| json                   | erlang                                  |
|------------------------|-----------------------------------------|
| number                 | `integer()` and `float()`               |
| string                 | `binary()`                              |
| true, false and null   | `true`, `false` and `null`              |
| array                  | `[]` and `[JSON]`                       |
| object                 | `[{}]` and `[{binary() OR atom(), JSON}]` |
- json

  json must be a binary encoded in utf8. if it's invalid utf8 or invalid json, it probably won't parse without errors. there are a few non-standard extensions to the parser available that may change that; they are detailed in the options section below

  jsx also supports json fragments: valid json values that are not complete json texts. that means jsx will parse things like `<<"1">>`, `<<"true">>` and `<<"\"hello world\"">>` without complaint

- erlang

  only the erlang terms in the table above are supported. unsupported terms result in badarg errors. jsx is never going to support erlang lists instead of binaries, mostly because you can't discriminate between lists of integers and strings without hinting, and hinting is silly
- numbers

  javascript, and thus json, represents all numeric values with floats. as this is woefully insufficient for many uses, jsx, just like erlang, supports bigints. whenever possible, this library will interpret json numbers that look like integers as integers. other numbers will be converted to erlang's floating point type, which is nearly but not quite ieee 754. negative zero is not representable in erlang (zero is unsigned in erlang and `0` is equivalent to `-0`) and will be interpreted as regular zero. numbers not representable are beyond the concern of this implementation and will result in parsing errors

  when converting from erlang to json, numbers are represented with their shortest representation that will round trip without loss of precision. this means that some floats may be superficially dissimilar (although functionally equivalent). for example, `1.0000000000000001` will be represented by `1.0`
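the integer/float split described above can be checked directly in the shell; this is a sketch assuming jsx is built and on the code path (exact float printing may vary by erlang release):

```erlang
%% integers (including bignums) stay integers; 2.0 becomes an erlang float
1> jsx:to_term(<<"[1, 2.0, 123456789012345678901234567890]">>).
[1,2.0,123456789012345678901234567890]
```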
- strings

  the json spec is frustratingly vague on the exact details of json strings. json must be unicode, but no encoding is specified. javascript explicitly allows strings containing codepoints explicitly disallowed by unicode. json allows implementations to set limits on the content of strings, and other implementations attempt to resolve this in various ways. this implementation, in default operation, only accepts strings that meet the constraints set out in the json spec (strings are sequences of unicode codepoints delimited by `"` (`u+0022`) that may not contain control codes unless properly escaped with `\` (`u+005c`)) and that are encoded in utf8

  the utf8 restriction means improperly paired surrogates are explicitly disallowed. `u+d800` to `u+dfff` are allowed, but only when they form valid surrogate pairs. surrogates that appear otherwise are an error

  json string escapes of the form `\uXXXX` will be converted to their equivalent codepoint during parsing. this means control characters and other codepoints disallowed by the json spec may be encountered in resulting strings, but codepoints disallowed by the unicode spec (like the two cases above) will not be. in the interests of pragmatism, there is an option for looser parsing; see options below

  all erlang strings are represented by valid utf8 encoded binaries. the encoder will check strings for conformance. noncharacters (like `u+ffff`) are allowed in erlang utf8 encoded binaries, but not in strings passed to the encoder (although see options below)

  this implementation performs no normalization on strings beyond that detailed here. be careful when comparing strings, as equivalent strings may have different utf8 encodings

- true, false and null
  the json primitives `true`, `false` and `null` are represented by the erlang atoms `true`, `false` and `null`. surprise

- arrays

  json arrays are represented with erlang lists of json values as described in this section
- objects

  json objects are represented by erlang proplists. the empty object has the special representation `[{}]` to differentiate it from the empty list. ambiguities like `[true, false]` prevent using the shorthand form of property lists using atoms as properties, so all properties must be tuples. all keys must be encoded as in `strings`, above, or as atoms (which will be escaped and converted to binaries for presentation to handlers). values should be valid json values
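the special-case representations above can be seen in the shell; a sketch assuming jsx is built and on the code path:

```erlang
%% the empty object decodes to [{}], distinct from the empty array []
1> jsx:to_term(<<"{}">>).
[{}]
2> jsx:to_term(<<"[]">>).
[]
```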
## incomplete input

jsx handles incomplete json texts. if a partial json text is parsed, rather than returning a term from your callback handler, jsx returns `{incomplete, F}` where `F` is a function with an identical API to the anonymous fun returned from `decoder/3`, `encoder/3` or `parser/3`. it retains the internal state of the parser at the point where input was exhausted. this allows you to parse as you stream json over a socket or file descriptor, or to parse large json texts without needing to keep them entirely in memory

however, it is important to recognize that jsx is greedy by default. if input is exhausted and the json text is not unambiguously incomplete, jsx will consider the parsing complete. this is mostly relevant when parsing bare numbers like `<<"1234">>`. this could be a complete json integer or just the beginning of a json integer that is being parsed incrementally. jsx will treat it as a whole integer. the option `explicit_end` can be used to modify this behaviour, see below
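the resumable parsing described above can be sketched as follows; `my_handler` is a hypothetical callback module implementing `init/1` and `handle_event/2` (see callback exports below):

```erlang
%% my_handler is a placeholder callback module, not part of jsx
1> F0 = jsx:decoder(my_handler, [], []).
2> {incomplete, F1} = F0(<<"[1, 2">>).   % input exhausted mid-array
3> F1(<<", 3]">>).                       % resume with the remaining bytes
```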
## data types

    json_term() = [json_term()]
        | [{binary() | atom(), json_term()}]
        | true
        | false
        | null
        | integer()
        | float()
        | binary()

the erlang representation of json. binaries should be utf8 encoded (but see below in options)

    json_text() = binary()

a utf8 encoded binary containing a json string
    tokens() = token() | [token()]

    token() = start_object
        | end_object
        | start_array
        | end_array
        | {key, binary()}
        | {string, binary()}
        | binary()
        | {number, integer() | float()}
        | {integer, integer()}
        | {float, float()}
        | integer()
        | float()
        | {literal, true}
        | {literal, false}
        | {literal, null}
        | true
        | false
        | null
        | end_json

the internal representation used during syntactic analysis
    event() = start_object
        | end_object
        | start_array
        | end_array
        | {key, binary()}
        | {string, binary()}
        | {integer, integer()}
        | {float, float()}
        | {literal, true}
        | {literal, false}
        | {literal, null}
        | end_json

the internal representation used during semantic analysis
    options() = [option()]

    option() = replaced_bad_utf8
        | escaped_forward_slashes
        | single_quoted_strings
        | unescaped_jsonp
        | comments
        | escaped_strings
        | dirty_strings
        | ignored_bad_escapes
        | relax
        | explicit_end

jsx functions all take a common set of options. not all flags have meaning in all contexts, but they are always valid options. functions may have additional options beyond these; see individual function documentation for details
- `replaced_bad_utf8`

  json text input and json strings SHOULD be utf8 encoded binaries, appropriately escaped as per the json spec. if this option is present, attempts are made to replace invalid codepoints with `u+FFFD` as per the unicode spec. this applies both to malformed unicode and disallowed codepoints

- `escaped_forward_slashes`

  json strings are escaped according to the json spec. this means forward slashes (solidus) are optionally escaped. this option is only relevant for encoding; you may want to use it if you are embedding json directly into a html or xml document

- `single_quoted_strings`

  some parsers allow double quotes (`u+0022`) to be replaced by single quotes (`u+0027`) to delimit keys and strings. this option allows json containing single quotes as structural (delimiter) characters to be parsed without errors. note that the parser expects strings to be terminated by the same quote type that opened them, and that single quotes must, obviously, be escaped within strings delimited by single quotes. double quotes must ALWAYS be escaped, regardless of what kind of quotes delimit the string they are found in. the parser will never emit json with keys or strings delimited by single quotes

- `unescaped_jsonp`

  javascript interpreters treat the codepoints `u+2028` and `u+2029` as significant whitespace. json strings that contain either of these codepoints will be parsed incorrectly by some javascript interpreters. by default, these codepoints are escaped (to `\u2028` and `\u2029`, respectively) to retain compatibility. this option simply removes that escaping

- `comments`

  json has no official comments, but some parsers allow c style comments. this flag allows comments (both `// ...` and `/* ... */` style) anywhere whitespace is allowed

- `escaped_strings`

  by default, both the encoder and decoder return strings as utf8 binaries appropriate for use in erlang. escape sequences that were present in decoded terms are converted into the appropriate codepoint, and encoded terms are unaltered. this flag escapes strings as if for output in json, removing control codes and problematic codepoints and replacing them with the appropriate escapes

- `dirty_strings`

  json escaping is lossy; it mutates the json string, and repeated application can result in unwanted behaviour. if your strings are already escaped (or you'd like to force invalid strings into "json"), use this flag to bypass escaping

- `ignored_bad_escapes`

  during decoding, ignore unrecognized escape sequences and leave them as is in the stream. note that if you combine this option with `escaped_strings`, the escape character itself will be escaped

- `explicit_end`

  this option treats all exhausted inputs as incomplete, as explained above. the parser will not attempt to return a final state until the function is called with the value `end_stream`

- `relax`

  relax is a synonym for `[replaced_bad_utf8, single_quoted_strings, comments, ignored_bad_escapes]`, for when you don't care how janky and awful your json input is, you just want the parser to do the best it can
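as a sketch of how these flags change what the parser accepts (assuming jsx is built and on the code path):

```erlang
%% single quotes are rejected by default but allowed with the flag
1> jsx:is_json(<<"{'a': 1}">>).
false
2> jsx:is_json(<<"{'a': 1}">>, [single_quoted_strings]).
true
```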
## exports

### encoder/3, decoder/3 and parser/3

    decoder(Module, Args, Opts) -> Fun((JSONText) -> any())
    encoder(Module, Args, Opts) -> Fun((JSONTerm) -> any())
    parser(Module, Args, Opts) -> Fun((Tokens) -> any())

    Module = atom()
    Args = any()
    Opts = options()
    JSONText = json_text()
    JSONTerm = json_term()
    Tokens = tokens()

jsx is a json compiler with distinct tokenizing, syntactic analysis and semantic analysis stages (actually, semantic analysis takes place during syntactic analysis, for efficiency). included are two tokenizers: one that handles json texts (`decoder/3`) and one that handles erlang terms (`encoder/3`). there is also an entry point to the syntactic analysis stage for use with user defined tokenizers (`parser/3`)

all three functions return an anonymous function that takes the appropriate type of input and returns the result of performing semantic analysis, the tuple `{incomplete, F}` where `F` is a new anonymous function (see incomplete input above), or a `badarg` error exception if syntactic analysis fails

`Module` is the name of the callback module

`Args` is any term that will be passed to `Module:init/1` prior to syntactic analysis to produce an initial state

`Opts` are detailed above

see below for details on the callback module
### decode/1,2

    decode(JSON) -> Term
    decode(JSON, Opts) -> Term

    JSON = json_text()
    Term = json_term()
    Opts = [option() | labels | {labels, Label} | {post_decode, F}]
    Label = binary | atom | existing_atom
    F = fun((any()) -> any())

`decode` parses a json text (a utf8 encoded binary) and produces an erlang term (see json <-> erlang mapping above)

the option `labels` controls how keys are converted from json to erlang terms. `binary` does no conversion beyond normal escaping. `atom` converts keys to erlang atoms and results in a badarg error if keys fall outside the range of erlang atoms. `existing_atom` is identical to `atom`, except it will not add new atoms to the atom table

`{post_decode, F}` is a user defined function of arity 1 that is called on each output value (objects, arrays, strings, numbers and literals). it may return any value to be substituted in the returned term. for example:

    1> F = fun(V) when is_list(V) -> V; (V) -> false end.
    2> jsx:decode(<<"{\"a list\": [true, \"a string\", 1]}">>, [{post_decode, F}]).
    [{<<"a list">>, [false, false, false]}]

if more than one `post_decode` function is declared, a `badarg` error exception will result

raises a `badarg` error exception if input is not valid json
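a sketch of the `labels` option in use, assuming jsx is built and on the code path:

```erlang
%% binary keys by default; atoms with {labels, atom}
1> jsx:decode(<<"{\"library\": \"jsx\"}">>).
[{<<"library">>,<<"jsx">>}]
2> jsx:decode(<<"{\"library\": \"jsx\"}">>, [{labels, atom}]).
[{library,<<"jsx">>}]
```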
### encode/1,2

    encode(Term) -> JSON
    encode(Term, Opts) -> JSON

    Term = json_term()
    JSON = json_text()
    Opts = [option() | {pre_encode, F} | space | {space, N} | indent | {indent, N}]
    F = fun((any()) -> any())
    N = pos_integer()

`encode` converts an erlang term (see json <-> erlang mapping above) into a json text (a utf8 encoded binary)

`{pre_encode, F}` is a user defined function of arity 1 that is called on each input value. it may return any valid json value to be substituted in the returned json. for example:

    1> F = fun(V) when is_list(V) -> V; (V) -> false end.
    2> jsx:encode([{<<"a list">>, [true, <<"a string">>, 1]}], [{pre_encode, F}]).
    <<"{\"a list\": [false, false, false]}">>

if more than one `pre_encode` function is declared, a `badarg` error exception will result

the option `{space, N}` inserts `N` spaces after every comma and colon in your json output. `space` is an alias for `{space, 1}`. the default is `{space, 0}`

the option `{indent, N}` inserts a newline and `N` spaces for each level of indentation in your json output. note that this overrides spaces inserted after a comma. `indent` is an alias for `{indent, 1}`. the default is `{indent, 0}`

raises a `badarg` error exception if input is not a valid erlang representation of json
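a sketch combining the formatting options, assuming jsx is built and on the code path (the result is a multiline binary shaped like the prettify example in the quickstart):

```erlang
%% one space after colons and commas, two-space indentation per level
1> jsx:encode([{<<"a list">>, [1, 2, 3]}], [{space, 1}, {indent, 2}]).
```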
### format/1,2

    format(JSON) -> JSON
    format(JSON, Opts) -> JSON

    JSON = json_text()
    Opts = [option() | space | {space, N} | indent | {indent, N}]
    N = pos_integer()

`format` parses a json text (a utf8 encoded binary) and produces a new json text according to the format rules specified by `Opts`

the option `{space, N}` inserts `N` spaces after every comma and colon in your json output. `space` is an alias for `{space, 1}`. the default is `{space, 0}`

the option `{indent, N}` inserts a newline and `N` spaces for each level of indentation in your json output. note that this overrides spaces inserted after a comma. `indent` is an alias for `{indent, 1}`. the default is `{indent, 0}`

raises a `badarg` error exception if input is not valid json
### minify/1

    minify(JSON) -> JSON

    JSON = json_text()

`minify` parses a json text (a utf8 encoded binary) and produces a new json text stripped of whitespace

raises a `badarg` error exception if input is not valid json
### prettify/1

    prettify(JSON) -> JSON

    JSON = json_text()

`prettify` parses a json text (a utf8 encoded binary) and produces a new json text equivalent to `format(JSON, [{space, 1}, {indent, 2}])`

raises a `badarg` error exception if input is not valid json
### is_json/1,2

    is_json(MaybeJSON) -> true | false
    is_json(MaybeJSON, Opts) -> true | false

    MaybeJSON = any()
    Opts = options()

returns true if input is a valid json text, false if not. what exactly constitutes valid json may be altered per the options above

### is_term/1,2

    is_term(MaybeJSON) -> true | false
    is_term(MaybeJSON, Opts) -> true | false

    MaybeJSON = any()
    Opts = options()

returns true if input is a valid erlang representation of json, false if not. what exactly constitutes valid json may be altered per the options above
## callback exports

the following functions should be exported from a jsx callback module

### Module:init/1

    Module:init(Args) -> InitialState

    Args = any()
    InitialState = any()

whenever `encoder/3`, `decoder/3` or `parser/3` is called, this function is called with the `Args` argument provided in the calling function to obtain `InitialState`
### Module:handle_event/2

    Module:handle_event(Event, State) -> NewState

    Event = event()
    State = any()
    NewState = any()

semantic analysis is performed by repeatedly calling `handle_event/2` with a stream of events emitted by the tokenizer and the current state. the new state returned is used as the input to the next call to `handle_event/2`. the following events must be handled:

- `start_object`

  the start of a json object

- `end_object`

  the end of a json object

- `start_array`

  the start of a json array

- `end_array`

  the end of a json array

- `{key, binary()}`

  a key in a json object. this is guaranteed to follow either `start_object` or a json value. it will usually be a utf8 encoded binary, see options for possible exceptions

- `{string, binary()}`

  a json string. it will usually be a utf8 encoded binary, see options for possible exceptions

- `{integer, integer()}`

  an erlang integer (bignum)

- `{float, float()}`

  an erlang float

- `{literal, true}`

  the atom `true`

- `{literal, false}`

  the atom `false`

- `{literal, null}`

  the atom `null`

- `end_json`

  this event is emitted when syntactic analysis is completed. you should do any cleanup and return the result of your semantic analysis
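as a minimal sketch of a callback module (illustrative only, not part of jsx): it accumulates every event and returns them in order once `end_json` is seen:

```erlang
-module(event_collector).
-export([init/1, handle_event/2]).

%% Args from decoder/3 etc. becomes the initial state; here, an empty list
init(_Args) -> [].

%% on end_json, return the accumulated events as the final result
handle_event(end_json, State) -> lists:reverse(State);
%% otherwise, prepend the event to the accumulator
handle_event(Event, State) -> [Event | State].
```

used as `(jsx:decoder(event_collector, [], []))(<<"[true]">>)`, this should yield `[start_array, {literal, true}, end_array]`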
## acknowledgements

jsx wouldn't be what it is without the contributions of paul davis, lloyd hilaiel, john engelhart, bob ippolito, fernando benavides, alex kropivny, steve strong, michael truog and dmitry kolesnikov