--- JSON-XS/README	2008/04/16 18:38:38	1.25
+++ JSON-XS/README	2013/05/23 09:32:02	1.37
@@ -22,8 +22,8 @@
       # Note that JSON version 2.0 and above will automatically use JSON::XS
       # if available, at virtually no speed overhead either, so you should
       # be able to just:
-
-      use JSON;
+
+      use JSON;

       # and do the same things, except that you have a pure-perl fallback now.
@@ -34,7 +34,7 @@
    Beginning with version 2.0 of the JSON module, when both JSON and
    JSON::XS are installed, then JSON will fall back on JSON::XS (this can
-   be overriden) with no overhead due to emulation (by inheritign
+   be overridden) with no overhead due to emulation (by inheriting
    constructor and methods). If JSON::XS is not available, it will fall
    back to the compatible JSON::PP module as backend, so using JSON
    instead of JSON::XS gives you a portable JSON API that can be fast when
    you need
@@ -46,8 +46,6 @@
    cases their maintainers are unresponsive, gone missing, or not
    listening to bug reports for other reasons.

-   See COMPARISON, below, for a comparison to some other JSON modules.
-
    See MAPPING, below, on how JSON::XS maps perl values to JSON values and
    vice versa.
@@ -59,11 +57,12 @@
  * round-trip integrity

-     When you serialise a perl data structure using only datatypes
-     supported by JSON, the deserialised data structure is identical on
-     the Perl level. (e.g. the string "2.0" doesn't suddenly become "2"
-     just because it looks like a number). There minor *are* exceptions
-     to this, read the MAPPING section below to learn about those.
+     When you serialise a perl data structure using only data types
+     supported by JSON and Perl, the deserialised data structure is
+     identical on the Perl level. (e.g. the string "2.0" doesn't suddenly
+     become "2" just because it looks like a number). There *are* minor
+     exceptions to this, read the MAPPING section below to learn about
+     those.

  * strict checking of JSON correctness
@@ -80,12 +79,12 @@
  * simple to use

      This module has both a simple functional interface as well as an
-     objetc oriented interface interface.
+     object oriented interface.

  * reasonably versatile output formats

      You can choose between the most compact guaranteed-single-line
-     format possible (nice for simple line-based protocols), a pure-ascii
+     format possible (nice for simple line-based protocols), a pure-ASCII
      format (for when your transport is not 8-bit clean, still supports
      the whole Unicode range), or a pretty-printed format (for when you
      want to read that stuff). Or you can combine those features in
@@ -103,7 +102,7 @@
         $json_text = JSON::XS->new->utf8->encode ($perl_scalar)

-     except being faster.
+     Except being faster.

  $perl_scalar = decode_json $json_text
      The opposite of "encode_json": expects an UTF-8 (binary) string and
@@ -114,7 +113,7 @@
         $perl_scalar = JSON::XS->new->utf8->decode ($json_text)

-     except being faster.
+     Except being faster.

  $is_boolean = JSON::XS::is_bool $scalar
      Returns true if the passed scalar represents either JSON::XS::true
@@ -154,7 +153,7 @@
      doesn't exist.

  4. A "Unicode String" is simply a string where each character can be
-     validly interpreted as a Unicode codepoint.
+     validly interpreted as a Unicode code point.

      If you have UTF-8 encoded data, it is no longer a Unicode string,
      but a Unicode string encoded in UTF-8, giving you a binary string.
@@ -372,7 +371,8 @@
      If $enable is false, then the "encode" method will output key-value
      pairs in the order Perl stores them (which will likely change
-     between runs of the same script).
+     between runs of the same script, and can change even within the same
+     run from 5.18 onwards).

      This option is useful if you want the same data structure to be
      encoded as the same JSON text (given the same overall settings). If
@@ -382,6 +382,8 @@
      This setting has no effect when decoding JSON texts.

+     This setting currently has no effect on tied hashes.
+
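    To illustrate the effect of the option described above (this example is
    an editorial addition and not part of the README diff itself), enabling
    "canonical" makes the key order, and therefore the generated JSON text,
    deterministic:

       use JSON::XS;

       my $coder = JSON::XS->new->canonical (1);
       my %data  = (b => 2, a => 1, c => 3);

       # with canonical enabled the keys are always emitted sorted, so
       # repeated encodes (and separate runs) produce identical JSON text
       print $coder->encode (\%data), "\n";   # {"a":1,"b":2,"c":3}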
  $json = $json->allow_nonref ([$enable])
  $enabled = $json->get_allow_nonref
      If $enable is true (or missing), then the "encode" method can
@@ -625,19 +627,25 @@
         => ([], 3)

 INCREMENTAL PARSING
-   [This section and the API it details is still EXPERIMENTAL]
-
    In some cases, there is the need for incremental parsing of JSON texts.
    While this module always has to keep both JSON text and resulting Perl
    data structure in memory at one time, it does allow you to parse a JSON
    stream incrementally. It does so by accumulating text until it has a
    full JSON object, which it then can decode. This process is similar to
    using "decode_prefix" to see if a full JSON object is available, but is
-   much more efficient (JSON::XS will only attempt to parse the JSON text
-   once it is sure it has enough text to get a decisive result, using a
-   very simple but truly incremental parser).
+   much more efficient (and can be implemented with a minimum of method
+   calls).
+
+   JSON::XS will only attempt to parse the JSON text once it is sure it has
+   enough text to get a decisive result, using a very simple but truly
+   incremental parser. This means that it sometimes won't stop as early as
+   the full parser, for example, it doesn't detect mismatched parentheses.
+   The only thing it guarantees is that it starts decoding as soon as a
+   syntactically valid JSON text has been seen. This means you need to set
+   resource limits (e.g. "max_size") to ensure the parser will stop parsing
+   in the presence of syntax errors.

-   The following two methods deal with this.
+   The following methods implement this incremental parser.

  [void, scalar or list context] = $json->incr_parse ([$string])
      This is the central parsing function. It can both append new text
@@ -666,6 +674,11 @@
      the scalar context case. Note that in this case, any
      previously-parsed JSON texts will be lost.

+     Example: Parse some JSON arrays/objects in a given string and return
+     them.
+
+        my @objs = JSON::XS->new->incr_parse ("[5][7][1,2]");
+
  $lvalue_string = $json->incr_text
      This method returns the currently stored JSON fragment as an lvalue,
      that is, you can manipulate it. This *only* works when a preceding
@@ -682,11 +695,22 @@
  $json->incr_skip
      This will reset the state of the incremental parser and will remove
-     the parsed text from the input buffer. This is useful after
+     the text parsed so far from the input buffer. This is useful after
      "incr_parse" died, in which case the input buffer and incremental
      parser state is left unchanged, to skip the text parsed so far and
      to reset the parse state.
+
+     The difference from "incr_reset" is that only text until the parse
+     error occurred is removed.
+
+ $json->incr_reset
+     This completely resets the incremental parser, that is, after this
+     call, it will be as if the parser had never parsed anything.
+
+     This is useful if you want to repeatedly parse JSON objects and want
+     to ignore any trailing data, which means you have to reset the
+     parser after each successful decode.
+
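    As an illustration of how the methods above work together (this example
    is an editorial addition, not part of the README diff; the message
    fragments are made up), the following sketch feeds chunks to the
    incremental parser as they "arrive" and processes every object that
    becomes complete:

       use JSON::XS;

       my $json = JSON::XS->new;

       # pretend these fragments arrive one at a time from a socket
       for my $chunk ('{"method":"ping","i', 'd":1}', '{"method":"pong","id":2}') {
          # in list context, incr_parse returns all objects completed so far
          for my $obj ($json->incr_parse ($chunk)) {
             printf "got %s (id %d)\n", $obj->{method}, $obj->{id};
          }
       }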
 LIMITATIONS
    All options that affect decoding are supported, except "allow_nonref".
    The reason for this is that it cannot be made to work sensibly: JSON
@@ -879,6 +903,11 @@
      ability, but the JSON number will still be re-encoded as a JSON
      number).

+     Note that precision is not accuracy - binary floating point values
+     cannot represent most decimal fractions exactly, and when converting
+     from and to floating point, JSON::XS only guarantees precision up to
+     but not including the least significant bit.
+
  true, false
      These JSON atoms become "JSON::XS::true" and "JSON::XS::false",
      respectively. They are overloaded to act almost exactly like the
@@ -915,7 +944,7 @@
      can also use "JSON::XS::false" and "JSON::XS::true" to improve
      readability.

-        encode_json [\0,JSON::XS::true]      # yields [false,true]
+        encode_json [\0, JSON::XS::true]      # yields [false,true]

  JSON::XS::true, JSON::XS::false
      These special values become JSON true and JSON false values,
@@ -964,6 +993,13 @@
      Tell me if you need this capability (but don't forget to explain why
      it's needed :).

+     Note that numerical precision has the same meaning as under Perl (so
+     binary to decimal conversion follows the same rules as in Perl, which
+     can differ from other languages). Also, your perl interpreter might
+     expose extensions to the floating point numbers of your platform,
+     such as infinities or NaN's - these cannot be represented in JSON,
+     and it is an error to pass those in.
+
 ENCODING/CODESET FLAG NOTES
    The interested reader might have seen a number of flags that signify
    encodings or codesets - "utf8", "latin1" and "ascii". There seems to be
@@ -1059,6 +1095,69 @@
    in mail), and works because ASCII is a proper subset of most 8-bit and
    multibyte encodings in use in the world.

+  JSON and ECMAscript
+   JSON syntax is based on how literals are represented in javascript (the
+   not-standardised predecessor of ECMAscript) which is presumably why it
+   is called "JavaScript Object Notation".
+
+   However, JSON is not a subset (and also not a superset of course) of
+   ECMAscript (the standard) or javascript (whatever browsers actually
+   implement).
+
+   If you want to use javascript's "eval" function to "parse" JSON, you
+   might run into parse errors for valid JSON texts, or the resulting data
+   structure might not be queryable:
+
+   One of the problems is that U+2028 and U+2029 are valid characters
+   inside JSON strings, but are not allowed in ECMAscript string literals,
+   so the following Perl fragment will not output something that can be
+   guaranteed to be parsable by javascript's "eval":
+
+      use JSON::XS;
+
+      print encode_json [chr 0x2028];
+
+   The right fix for this is to use a proper JSON parser in your javascript
+   programs, and not rely on "eval" (see for example Douglas Crockford's
+   json2.js parser).
+
+   If this is not an option, you can, as a stop-gap measure, simply encode
+   to ASCII-only JSON:
+
+      use JSON::XS;
+
+      print JSON::XS->new->ascii->encode ([chr 0x2028]);
+
+   Note that this will enlarge the resulting JSON text quite a bit if you
+   have many non-ASCII characters. You might be tempted to run some regexes
+   to only escape U+2028 and U+2029, e.g.:
+
+      # DO NOT USE THIS!
+      my $json = JSON::XS->new->utf8->encode ([chr 0x2028]);
+      $json =~ s/\xe2\x80\xa8/\\u2028/g; # escape U+2028
+      $json =~ s/\xe2\x80\xa9/\\u2029/g; # escape U+2029
+      print $json;
+
+   Note that *this is a bad idea*: the above only works for U+2028 and
+   U+2029 and thus only for fully ECMAscript-compliant parsers.
+   Many existing javascript implementations, however, have issues with
+   other characters as well - using "eval" naively simply *will* cause
+   problems.
+
+   Another problem is that some javascript implementations reserve some
+   property names for their own purposes (which probably makes them
+   non-ECMAscript-compliant). For example, Iceweasel reserves the
+   "__proto__" property name for its own purposes.
+
+   If that is a problem, you could try to filter the resulting JSON
+   output for these property strings, e.g.:
+
+      $json =~ s/"__proto__"\s*:/"__proto__renamed":/g;
+
+   This works because "__proto__" is not valid outside of strings, so every
+   occurrence of ""__proto__"\s*:" must be a string used as property name.
+
+   If you know of other incompatibilities, please let me know.
+
  JSON and YAML
    You often hear that JSON is a subset of YAML. This is, however, a mass
    hysteria(*) and very far from the truth (as of the time of this
@@ -1075,10 +1174,10 @@
    This will *usually* generate JSON texts that also parse as valid YAML.
    Please note that YAML has hardcoded limits on (simple) object key
    lengths that JSON doesn't have and also has different and incompatible
-   unicode handling, so you should make sure that your hash keys are
-   noticeably shorter than the 1024 "stream characters" YAML allows and
-   that you do not have characters with codepoint values outside the
-   Unicode BMP (basic multilingual page). YAML also does not allow "\/"
+   unicode character escape syntax, so you should make sure that your hash
+   keys are noticeably shorter than the 1024 "stream characters" YAML
+   allows and that you do not have characters with codepoint values outside
+   the Unicode BMP (basic multilingual plane). YAML also does not allow "\/"
    sequences in strings (which JSON::XS does not *currently* generate, but
    other JSON generators might).
@@ -1105,6 +1204,13 @@
    spreading lies about the real compatibility for many *years* and trying
    to silence people who point out that it isn't true.

+     Addendum/2009: the YAML 1.2 spec is still incompatible with JSON,
+     even though the incompatibilities have been documented (and are
+     known to Brian) for many years and the spec makes explicit claims
+     that YAML is a superset of JSON. It would be so easy to fix, but
+     apparently, bullying people and corrupting userdata is so much
+     easier.
+
 SPEED
    It seems that JSON::XS is surprisingly fast, as shown in the following
    tables. They have been generated with the help of the "eg/bench" program
@@ -1117,49 +1223,48 @@
       {"method": "handleMessage", "params": ["user1",
       "we were just talking"], "id": null, "array":[1,11,234,-5,1e5,1e7,
-      true, false]}
+      1, 0]}

    It shows the number of encodes/decodes per second (JSON::XS uses the
    functional interface, while JSON::XS/2 uses the OO interface with
-   pretty-printing and hashkey sorting enabled, JSON::XS/3 enables shrink).
-   Higher is better:
-
-   module     |     encode |     decode |
-   -----------|------------|------------|
-   JSON 1.x   |   4990.842 |   4088.813 |
-   JSON::DWIW |  51653.990 |  71575.154 |
-   JSON::PC   |  65948.176 |  74631.744 |
-   JSON::PP   |   8931.652 |   3817.168 |
-   JSON::Syck |  24877.248 |  27776.848 |
-   JSON::XS   | 388361.481 | 227951.304 |
-   JSON::XS/2 | 227951.304 | 218453.333 |
-   JSON::XS/3 | 338250.323 | 218453.333 |
-   Storable   |  16500.016 | 135300.129 |
-   -----------+------------+------------+
-
-   That is, JSON::XS is about five times faster than JSON::DWIW on
-   encoding, about three times faster on decoding, and over forty times
-   faster than JSON, even with pretty-printing and key sorting. It also
+   pretty-printing and hashkey sorting enabled, JSON::XS/3 enables shrink.
+   JSON::DWIW/DS uses the deserialise function, while JSON::DWIW::FJ uses
+   the from_json method). Higher is better:
+
+   module        |     encode |     decode |
+   --------------|------------|------------|
+   JSON::DWIW/DS |  86302.551 | 102300.098 |
+   JSON::DWIW/FJ |  86302.551 |  75983.768 |
+   JSON::PP      |  15827.562 |   6638.658 |
+   JSON::Syck    |  63358.066 |  47662.545 |
+   JSON::XS      | 511500.488 | 511500.488 |
+   JSON::XS/2    | 291271.111 | 388361.481 |
+   JSON::XS/3    | 361577.931 | 361577.931 |
+   Storable      |  66788.280 | 265462.278 |
+   --------------+------------+------------+
+
+   That is, JSON::XS is almost six times faster than JSON::DWIW on
+   encoding, about five times faster on decoding, and over thirty to
+   seventy times faster than JSON's pure perl implementation. It also
    compares favourably to Storable for small amounts of data.

    Using a longer test string (roughly 18KB, generated from Yahoo! Locals
    search API ().

-   module     |     encode |     decode |
-   -----------|------------|------------|
-   JSON 1.x   |     55.260 |     34.971 |
-   JSON::DWIW |    825.228 |   1082.513 |
-   JSON::PC   |   3571.444 |   2394.829 |
-   JSON::PP   |    210.987 |     32.574 |
-   JSON::Syck |    552.551 |    787.544 |
-   JSON::XS   |   5780.463 |   4854.519 |
-   JSON::XS/2 |   3869.998 |   4798.975 |
-   JSON::XS/3 |   5862.880 |   4798.975 |
-   Storable   |   4445.002 |   5235.027 |
-   -----------+------------+------------+
+   module        |     encode |     decode |
+   --------------|------------|------------|
+   JSON::DWIW/DS |   1647.927 |   2673.916 |
+   JSON::DWIW/FJ |   1630.249 |   2596.128 |
+   JSON::PP      |    400.640 |     62.311 |
+   JSON::Syck    |   1481.040 |   1524.869 |
+   JSON::XS      |  20661.596 |   9541.183 |
+   JSON::XS/2    |  10683.403 |   9416.938 |
+   JSON::XS/3    |  20661.596 |   9400.054 |
+   Storable      |  19765.806 |  10000.725 |
+   --------------+------------+------------+

    Again, JSON::XS leads by far (except for Storable which non-surprisingly
-   decodes faster).
+   decodes a bit faster).

    On large strings containing lots of high Unicode characters, some
    modules (such as JSON::PC) seem to decode faster than JSON::XS, but the
@@ -1204,11 +1309,11 @@
    If you are using JSON::XS to return packets to consumption by JavaScript
    scripts in a browser you should have a look at
-    to see whether
-   you are vulnerable to some common attack vectors (which really are
-   browser design bugs, but it is still you who will have to deal with it,
-   as major browser developers care only for features, not about getting
-   security right).
+   
+   to see whether you are vulnerable to some common attack vectors (which
+   really are browser design bugs, but it is still you who will have to
+   deal with it, as major browser developers care only for features, not
+   about getting security right).

 THREADS
    This module is *not* guaranteed to be thread safe and there are no plans
@@ -1218,11 +1323,26 @@
    (It might actually work, but you have been warned).

+THE PERILS OF SETLOCALE
+   Sometimes people avoid the Perl locale support and directly call the
+   system's setlocale function with "LC_ALL".
+
+   This breaks both perl and modules such as JSON::XS, as stringification
+   of numbers no longer works correctly (e.g. "$x = 0.1; print "$x"+1"
+   might print 1, and JSON::XS might output illegal JSON as JSON::XS
+   relies on perl to stringify numbers).
+
+   The solution is simple: don't call "setlocale", or use it for only those
+   categories you need, such as "LC_MESSAGES" or "LC_CTYPE".
+
+   If you need "LC_NUMERIC", you should enable it only around the code that
+   actually needs it (avoiding stringification of numbers), and restore it
+   afterwards.
+
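    As a sketch of the advice above (an editorial addition, not part of the
    README diff; the helper name "with_lc_numeric" and the example locale
    are made up, and the locale must be installed for the output to change),
    "LC_NUMERIC" can be switched only around the code that needs it and
    restored afterwards:

       use POSIX qw(setlocale LC_NUMERIC);

       # hypothetical helper: run a block under a different LC_NUMERIC and
       # always restore the previous locale, even if the block dies
       sub with_lc_numeric {
          my ($locale, $code) = @_;

          my $saved = setlocale (LC_NUMERIC);
          setlocale (LC_NUMERIC, $locale);

          my @result = eval { $code->() };
          my $error  = $@;

          setlocale (LC_NUMERIC, $saved);

          die $error if $error;
          @result
       }

       # locale-specific number formatting stays confined to the block, so
       # perl's (and therefore JSON::XS's) number stringification elsewhere
       # is left alone
       with_lc_numeric "de_DE.UTF-8", sub {
          # may print "3,14"; how perl honours LC_NUMERIC here also varies
          # by perl version
          printf "%.2f\n", 3.14;
       };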
 BUGS
    While the goal of this module is to be correct, that unfortunately does
-   not mean it's bug-free, only that I think its design is bug-free. It is
-   still relatively early in its development. If you keep reporting bugs
-   they will be fixed swiftly, though.
+   not mean it's bug-free, only that I think its design is bug-free. If you
+   keep reporting bugs they will be fixed swiftly, though.

    Please refrain from using rt.cpan.org or any other bug reporting
    service. I put the contact address into my modules for a reason.