--- JSON-XS/README 2008/03/19 22:31:00 1.23 +++ JSON-XS/README 2009/08/08 10:06:02 1.31 @@ -22,8 +22,8 @@ # Note that JSON version 2.0 and above will automatically use JSON::XS # if available, at virtually no speed overhead either, so you should # be able to just: - - use JSON; + + use JSON; # and do the same things, except that you have a pure-perl fallback now. @@ -34,7 +34,7 @@ Beginning with version 2.0 of the JSON module, when both JSON and JSON::XS are installed, then JSON will fall back on JSON::XS (this can - be overriden) with no overhead due to emulation (by inheritign + be overridden) with no overhead due to emulation (by inheriting constructor and methods). If JSON::XS is not available, it will fall back to the compatible JSON::PP module as backend, so using JSON instead of JSON::XS gives you a portable JSON API that can be fast when you need @@ -46,8 +46,6 @@ cases their maintainers are unresponsive, gone missing, or not listening to bug reports for other reasons. - See COMPARISON, below, for a comparison to some other JSON modules. - See MAPPING, below, on how JSON::XS maps perl values to JSON values and vice versa. @@ -59,7 +57,7 @@ * round-trip integrity - When you serialise a perl data structure using only datatypes + When you serialise a perl data structure using only data types supported by JSON, the deserialised data structure is identical on the Perl level. (e.g. the string "2.0" doesn't suddenly become "2" just because it looks like a number). There *are* minor exceptions @@ -80,12 +78,12 @@ * simple to use This module has both a simple functional interface as well as an - objetc oriented interface interface. + object oriented interface. 
* reasonably versatile output formats You can choose between the most compact guaranteed-single-line - format possible (nice for simple line-based protocols), a pure-ascii + format possible (nice for simple line-based protocols), a pure-ASCII format (for when your transport is not 8-bit clean, still supports the whole Unicode range), or a pretty-printed format (for when you want to read that stuff). Or you can combine those features in @@ -103,7 +101,7 @@ $json_text = JSON::XS->new->utf8->encode ($perl_scalar) - except being faster. + Except being faster. $perl_scalar = decode_json $json_text The opposite of "encode_json": expects a UTF-8 (binary) string and @@ -114,7 +112,7 @@ $perl_scalar = JSON::XS->new->utf8->decode ($json_text) - except being faster. + Except being faster. $is_boolean = JSON::XS::is_bool $scalar Returns true if the passed scalar represents either JSON::XS::true @@ -154,7 +152,7 @@ doesn't exist. 4. A "Unicode String" is simply a string where each character can be - validly interpreted as a Unicode codepoint. + validly interpreted as a Unicode code point. If you have UTF-8 encoded data, it is no longer a Unicode string, but a Unicode string encoded in UTF-8, giving you a binary string. @@ -382,6 +380,8 @@ This setting has no effect when decoding JSON texts. + This setting currently has no effect on tied hashes. + $json = $json->allow_nonref ([$enable]) $enabled = $json->get_allow_nonref If $enable is true (or missing), then the "encode" method can @@ -400,6 +400,21 @@ JSON::XS->new->allow_nonref->encode ("Hello, World!") => "Hello, World!" + $json = $json->allow_unknown ([$enable]) + $enabled = $json->get_allow_unknown + If $enable is true (or missing), then "encode" will *not* throw an + exception when it encounters values it cannot represent in JSON (for + example, filehandles) but instead will encode a JSON "null" value. + Note that blessed objects are not included here and are handled + separately by "allow_blessed". 
+ + If $enable is false (the default), then "encode" will throw an + exception when it encounters anything it cannot encode as JSON. + + This option does not affect "decode" in any way, and it is + recommended to leave it off unless you know your communications + partner. + $json = $json->allow_blessed ([$enable]) $enabled = $json->get_allow_blessed If $enable is true (or missing), then the "encode" method will not @@ -543,9 +558,9 @@ $json = $json->max_depth ([$maximum_nesting_depth]) $max_depth = $json->get_max_depth Sets the maximum nesting level (default 512) accepted while encoding - or decoding. If the JSON text or Perl data structure has an equal or - higher nesting level then this limit, then the encoder and decoder - will stop and croak at that point. + or decoding. If a higher nesting level is detected in JSON text or a + Perl data structure, then the encoder and decoder will stop and + croak at that point. Nesting level is defined by number of hash- or arrayrefs that the encoder needs to traverse to reach a given point or the number of @@ -555,9 +570,12 @@ Setting the maximum depth to one disallows any nesting, so that ensures that the object is only a single hash/object or array. - The argument to "max_depth" will be rounded up to the next highest - power of two. If no argument is given, the highest possible setting - will be used, which is rarely useful. + If no argument is given, the highest possible setting will be used, + which is rarely useful. + + Note that nesting is implemented by recursion in C. The default + value has been chosen to be as large as typical operating systems + allow without crashing. See SECURITY CONSIDERATIONS, below, for more info on why this is useful. @@ -566,14 +584,12 @@ $max_size = $json->get_max_size Set the maximum length a JSON text may have (in bytes) where decoding is being attempted. The default is 0, meaning no limit. 
- When "decode" is called on a string longer then this number of - characters it will not attempt to decode the string but throw an + When "decode" is called on a string that is longer than this many + bytes, it will not attempt to decode the string but throw an exception. This setting has no effect on "encode" (yet). - The argument to "max_size" will be rounded up to the next highest - power of two (so may be more than requested). If no argument is - given, the limit check will be deactivated (same as when 0 is - specified). + If no argument is given, the limit check will be deactivated (same + as when 0 is specified). See SECURITY CONSIDERATIONS, below, for more info on why this is useful. @@ -608,6 +624,233 @@ JSON::XS->new->decode_prefix ("[1] the tail") => ([], 3) +INCREMENTAL PARSING + In some cases, there is the need for incremental parsing of JSON texts. + While this module always has to keep both JSON text and resulting Perl + data structure in memory at one time, it does allow you to parse a JSON + stream incrementally. It does so by accumulating text until it has a + full JSON object, which it then can decode. This process is similar to + using "decode_prefix" to see if a full JSON object is available, but is + much more efficient (and can be implemented with a minimum of method + calls). + + JSON::XS will only attempt to parse the JSON text once it is sure it has + enough text to get a decisive result, using a very simple but truly + incremental parser. This means that it sometimes won't stop as early as + the full parser, for example, it doesn't detect mismatched parentheses. + The only thing it guarantees is that it starts decoding as soon as a + syntactically valid JSON text has been seen. This means you need to set + resource limits (e.g. "max_size") to ensure the parser will stop parsing + in the presence of syntax errors. + + The following methods implement this incremental parser. 
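Before the individual methods, here is a minimal self-contained sketch of the accumulate-then-extract cycle described above; the chunk boundaries below are arbitrary and chosen only for illustration (any split of the input behaves the same way):

```perl
use strict;
use warnings;
use JSON::XS;

my $json = JSON::XS->new;

# feed the stream in arbitrary chunks; void context only accumulates
for my $chunk ('[1,2', ',3]{"a"', ':true}') {
   $json->incr_parse ($chunk);

   # scalar context extracts at most one complete JSON text per call,
   # returning undef while the buffered fragment is still incomplete
   while (defined (my $obj = $json->incr_parse)) {
      print encode_json ($obj), "\n";
   }
}
```

With the chunks above, nothing is extracted after the first chunk, "[1,2,3]" becomes available after the second, and '{"a":true}' after the third.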
+ + [void, scalar or list context] = $json->incr_parse ([$string]) + This is the central parsing function. It can both append new text + and extract objects from the stream accumulated so far (both of + these functions are optional). + + If $string is given, then this string is appended to the already + existing JSON fragment stored in the $json object. + + After that, if the function is called in void context, it will + simply return without doing anything further. This can be used to + add more text in as many chunks as you want. + + If the method is called in scalar context, then it will try to + extract exactly *one* JSON object. If that is successful, it will + return this object, otherwise it will return "undef". If there is a + parse error, this method will croak just as "decode" would do (one + can then use "incr_skip" to skip the erroneous part). This is the + most common way of using the method. + + And finally, in list context, it will try to extract as many objects + from the stream as it can find and return them, or the empty list + otherwise. For this to work, there must be no separators between the + JSON objects or arrays; instead, they must be concatenated + back-to-back. If an error occurs, an exception will be raised as in + the scalar context case. Note that in this case, any + previously-parsed JSON texts will be lost. + + $lvalue_string = $json->incr_text + This method returns the currently stored JSON fragment as an lvalue, + that is, you can manipulate it. This *only* works when a preceding + call to "incr_parse" in *scalar context* successfully returned an + object. Under all other circumstances you must not call this + function (I mean it. Although in simple tests it might actually + work, it *will* fail under real world conditions). As a special + exception, you can also call this method before having parsed + anything. 
+ + This function is useful in two cases: a) finding the trailing text + after a JSON object or b) parsing multiple JSON objects separated by + non-JSON text (such as commas). + + $json->incr_skip + This will reset the state of the incremental parser and will remove + the parsed text from the input buffer so far. This is useful after + "incr_parse" has died, in which case the input buffer and incremental + parser state are left unchanged, to skip the text parsed so far and + to reset the parse state. + + The difference from "incr_reset" is that only the text up to the + parse error is removed. + + $json->incr_reset + This completely resets the incremental parser, that is, after this + call, it will be as if the parser had never parsed anything. + + This is useful if you want to repeatedly parse JSON objects and want + to ignore any trailing data, which means you have to reset the + parser after each successful decode. + + LIMITATIONS + All options that affect decoding are supported, except "allow_nonref". + The reason for this is that it cannot be made to work sensibly: JSON + objects and arrays are self-delimited, i.e. you can concatenate them + back to back and still decode them perfectly. This does not hold true + for JSON numbers, however. + + For example, is the string 1 a single JSON number, or is it simply the + start of 12? Or is 12 a single JSON number, or the concatenation of 1 + and 2? In neither case can you tell, and this is why JSON::XS takes the + conservative route and disallows this case. + + EXAMPLES + Some examples will make all this clearer. 
First, a simple example that + works similarly to "decode_prefix": We want to decode the JSON object at + the start of a string and identify the portion after the JSON object: + + my $text = "[1,2,3] hello"; + + my $json = new JSON::XS; + + my $obj = $json->incr_parse ($text) + or die "expected JSON object or array at beginning of string"; + + my $tail = $json->incr_text; + # $tail now contains " hello" + + Easy, isn't it? + + Now for a more complicated example: Imagine a hypothetical protocol + where you read some requests from a TCP stream, and each request is a + JSON array, without any separation between them (in fact, it is often + useful to use newlines as "separators", as these get interpreted as + whitespace at the start of the JSON text, which makes it possible to + test said protocol with "telnet"...). + + Here is how you'd do it (it is trivial to write this in an event-based + manner): + + my $json = new JSON::XS; + + # read some data from the socket + while (sysread $socket, my $buf, 4096) { + + # split and decode as many requests as possible + for my $request ($json->incr_parse ($buf)) { + # act on the $request + } + } + + Another complicated example: Assume you have a string with JSON objects + or arrays, all separated by (optional) comma characters (e.g. "[1],[2], + [3]"). To parse them, we have to skip the commas between the JSON texts, + and here is where the lvalue-ness of "incr_text" comes in useful: + + my $text = "[1],[2], [3]"; + my $json = new JSON::XS; + + # void context, so no parsing done + $json->incr_parse ($text); + + # now extract as many objects as possible. note the + # use of scalar context so incr_text can be called. 
+ while (my $obj = $json->incr_parse) { + # do something with $obj + + # now skip the optional comma + $json->incr_text =~ s/^ \s* , //x; + } + + Now let's go for a very complex example: Assume that you have a gigantic + JSON array-of-objects, many gigabytes in size, and you want to parse it, + but you cannot load it into memory fully (this has actually happened in + the real world :). + + Well, you lost, you have to implement your own JSON parser. But JSON::XS + can still help you: You implement a (very simple) array parser and let + JSON decode the array elements, which are all full JSON objects on their + own (this wouldn't work if the array elements could be JSON numbers, for + example): + + my $json = new JSON::XS; + + # open the monster + open my $fh, "<monster.json" + or die "monster.json: $!"; + + # in this loop we read data until we got the initial "[" + for (;;) { + sysread $fh, my $buf, 65536 + or die "read error: $!"; + $json->incr_parse ($buf); # void context, so no parsing + + # Exit the loop once we found and removed(!) the initial "[". + # In essence, we are (ab-)using the $json object as a simple scalar + # we append data to. + last if $json->incr_text =~ s/^ \s* \[ //x; + } + + # now we have skipped the initial "[", so continue + # parsing all the elements. + for (;;) { + # in this loop we read data until we got a single JSON object + for (;;) { + if (my $obj = $json->incr_parse) { + # do something with $obj + last; + } + + # add more data + sysread $fh, my $buf, 65536 + or die "read error: $!"; + $json->incr_parse ($buf); # void context, so no parsing + } + + # in this loop we read data until we either found and parsed the + # separating "," between elements, or the final "]" + for (;;) { + # first skip whitespace + $json->incr_text =~ s/^\s*//; + + # if we find "]", we are done + if ($json->incr_text =~ s/^\]//) { + print "finished.\n"; + exit; + } + + # if we find ",", we can continue with the next element + if ($json->incr_text =~ s/^,//) { + last; + } + + # if we find anything else, we have a parse error! 
+ if (length $json->incr_text) { + die "parse error near ", $json->incr_text; + } + + # else add more data + sysread $fh, my $buf, 65536 + or die "read error: $!"; + $json->incr_parse ($buf); # void context, so no parsing + } + + This is a complex example, but most of the complexity comes from the + fact that we are trying to be correct (bear with me if I am wrong, I + never ran the above example :). + MAPPING This section describes how JSON::XS maps Perl values to JSON values and vice versa. These mappings are designed to "do the right thing" in most @@ -689,7 +932,7 @@ can also use "JSON::XS::false" and "JSON::XS::true" to improve readability. - encode_json [\0,JSON::XS::true] # yields [false,true] + encode_json [\0, JSON::XS::true] # yields [false,true] JSON::XS::true, JSON::XS::false These special values become JSON true and JSON false values, @@ -736,16 +979,16 @@ You can not currently force the type in other, less obscure, ways. Tell me if you need this capability (but don't forget to explain why - its needed :). + it's needed :). ENCODING/CODESET FLAG NOTES The interested reader might have seen a number of flags that signify encodings or codesets - "utf8", "latin1" and "ascii". There seems to be some confusion on what these do, so here is a short comparison: - "utf8" controls wether the JSON text created by "encode" (and expected + "utf8" controls whether the JSON text created by "encode" (and expected by "decode") is UTF-8 encoded or not, while "latin1" and "ascii" only - control wether "encode" escapes character values outside their + control whether "encode" escapes character values outside their respective codeset range. Neither of these flags conflict with each other, although some combinations make less sense than others. @@ -833,95 +1076,68 @@ in mail), and works because ASCII is a proper subset of most 8-bit and multibyte encodings in use in the world. 
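To make the codeset comparison above concrete, here is a short sketch encoding a single non-ASCII character (U+00E9, "é") under each of the three flags; the byte counts follow from the rules described above:

```perl
use strict;
use warnings;
use JSON::XS;

my $data = ["\x{e9}"];   # a one-character Unicode string ("é")

# utf8: the whole output is UTF-8 encoded; "é" becomes 0xC3 0xA9
my $utf8   = JSON::XS->new->utf8->encode ($data);     # 6 octets

# latin1: output stays latin1-encoded; "é" is the single octet 0xE9
my $latin1 = JSON::XS->new->latin1->encode ($data);   # 5 octets

# ascii: everything outside ASCII is \u-escaped, 7-bit clean
my $ascii  = JSON::XS->new->ascii->encode ($data);    # ["\u00e9"]

print length ($utf8), " ", length ($latin1), " ", $ascii, "\n";
```

Note that only the "utf8" and "latin1" outputs differ in their byte representation of the same character, while "ascii" avoids non-ASCII bytes entirely at the cost of a longer text.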
-COMPARISON - As already mentioned, this module was created because none of the - existing JSON modules could be made to work correctly. First I will - describe the problems (or pleasures) I encountered with various existing - JSON modules, followed by some benchmark values. JSON::XS was designed - not to suffer from any of these problems or limitations. + JSON and ECMAscript + JSON syntax is based on how literals are represented in javascript (the + not-standardised predecessor of ECMAscript) which is presumably why it + is called "JavaScript Object Notation". - JSON 2.xx - A marvellous piece of engineering, this module either uses JSON::XS - directly when available (so will be 100% compatible with it, - including speed), or it uses JSON::PP, which is basically JSON::XS - translated to Pure Perl, which should be 100% compatible with - JSON::XS, just a bit slower. + However, JSON is not a subset (and also not a superset of course) of + ECMAscript (the standard) or javascript (whatever browsers actually + implement). - You cannot really lose by using this module, especially as it tries - very hard to work even with ancient Perl versions, while JSON::XS - does not. + If you want to use javascript's "eval" function to "parse" JSON, you + might run into parse errors for valid JSON texts, or the resulting data + structure might not be queryable: - JSON 1.07 - Slow (but very portable, as it is written in pure Perl). + One of the problems is that U+2028 and U+2029 are valid characters + inside JSON strings, but are not allowed in ECMAscript string literals, + so the following Perl fragment will not output something that can be + guaranteed to be parsable by javascript's "eval": - Undocumented/buggy Unicode handling (how JSON handles Unicode values - is undocumented. One can get far by feeding it Unicode strings and - doing en-/decoding oneself, but Unicode escapes are not working - properly). 
+ use JSON::XS; - No round-tripping (strings get clobbered if they look like numbers, - e.g. the string 2.0 will encode to 2.0 instead of "2.0", and that - will decode into the number 2. + print encode_json [chr 0x2028]; - JSON::PC 0.01 - Very fast. + The right fix for this is to use a proper JSON parser in your javascript + programs, and not rely on "eval" (see for example Douglas Crockford's + json2.js parser). - Undocumented/buggy Unicode handling. + If this is not an option, you can, as a stop-gap measure, simply encode + to ASCII-only JSON: - No round-tripping. + use JSON::XS; - Has problems handling many Perl values (e.g. regex results and other - magic values will make it croak). + print JSON::XS->new->ascii->encode ([chr 0x2028]); - Does not even generate valid JSON ("{1,2}" gets converted to "{1:2}" - which is not a valid JSON text. + Note that this will enlarge the resulting JSON text quite a bit if you + have many non-ASCII characters. You might be tempted to run some regexes + to only escape U+2028 and U+2029, e.g.: - Unmaintained (maintainer unresponsive for many months, bugs are not - getting fixed). + # DO NOT USE THIS! + my $json = JSON::XS->new->utf8->encode ([chr 0x2028]); + $json =~ s/\xe2\x80\xa8/\\u2028/g; # escape U+2028 + $json =~ s/\xe2\x80\xa9/\\u2029/g; # escape U+2029 + print $json; - JSON::Syck 0.21 - Very buggy (often crashes). + Note that *this is a bad idea*: the above only works for U+2028 and + U+2029 and thus only for fully ECMAscript-compliant parsers. Many + existing javascript implementations, however, have issues with other + characters as well - using "eval" naively simply *will* cause problems. - Very inflexible (no human-readable format supported, format pretty - much undocumented. I need at least a format for easy reading by - humans and a single-line compact format for use in a protocol, and - preferably a way to generate ASCII-only JSON texts). 
+ Another problem is that some javascript implementations reserve some + property names for their own purposes (which probably makes them + non-ECMAscript-compliant). For example, Iceweasel reserves the + "__proto__" property name for its own purposes. - Completely broken (and confusingly documented) Unicode handling - (Unicode escapes are not working properly, you need to set - ImplicitUnicode to *different* values on en- and decoding to get - symmetric behaviour). + If that is a problem, you could try to filter the resulting JSON + output for these property strings, e.g.: - No round-tripping (simple cases work, but this depends on whether - the scalar value was used in a numeric context or not). + $json =~ s/"__proto__"\s*:/"__proto__renamed":/g; - Dumping hashes may skip hash values depending on iterator state. + This works because "__proto__" is not valid outside of strings, so every + occurrence of ""__proto__"\s*:" must be a string used as a property name. - Unmaintained (maintainer unresponsive for many months, bugs are not - getting fixed). - - Does not check input for validity (i.e. will accept non-JSON input - and return "something" instead of raising an exception. This is a - security issue: imagine two banks transferring money between each - other using JSON. One bank might parse a given non-JSON request and - deduct money, while the other might reject the transaction with a - syntax error. While a good protocol will at least recover, that is - extra unnecessary work and the transaction will still not succeed). - - JSON::DWIW 0.04 - Very fast. Very natural. Very nice. - - Undocumented Unicode handling (but the best of the pack. Unicode - escapes still don't get parsed properly). - - Very inflexible. - - No round-tripping. - - Does not generate valid JSON texts (key strings are often unquoted, - empty keys result in nothing being output) - - Does not check input for validity. + If you know of other incompatibilities, please let me know. 
JSON and YAML You often hear that JSON is a subset of YAML. This is, however, a mass @@ -979,8 +1195,9 @@ single-line JSON string (also available at ). - {"method": "handleMessage", "params": ["user1", "we were just talking"], \ - "id": null, "array":[1,11,234,-5,1e5,1e7, true, false]} + {"method": "handleMessage", "params": ["user1", + "we were just talking"], "id": null, "array":[1,11,234,-5,1e5,1e7, + true, false]} It shows the number of encodes/decodes per second (JSON::XS uses the functional interface, while JSON::XS/2 uses the OO interface with @@ -1077,19 +1294,21 @@ This module is *not* guaranteed to be thread safe and there are no plans to change this until Perl gets thread support (as opposed to the horribly slow so-called "threads" which are simply slow and bloated - process simulations - use fork, its *much* faster, cheaper, better). + process simulations - use fork, it's *much* faster, cheaper, better). (It might actually work, but you have been warned). BUGS While the goal of this module is to be correct, that unfortunately does - not mean its bug-free, only that I think its design is bug-free. It is - still relatively early in its development. If you keep reporting bugs - they will be fixed swiftly, though. + not mean it's bug-free, only that I think its design is bug-free. If you + keep reporting bugs they will be fixed swiftly, though. Please refrain from using rt.cpan.org or any other bug reporting service. I put the contact address into my modules for a reason. +SEE ALSO + The json_xs command line utility for quick experiments. + AUTHOR Marc Lehmann http://home.schmorp.de/